I was using KeepChatGPT[1] for a while back in 2023-2024, in the pre-Gemini-in-Google era, and I was fascinated by how it was able to mask being a user without needing any API or help from the end user. I stopped using it after 2024 because 1) Gemini and 2) it breaks quite a lot. I did, however, like how you had the option to push the AI panel to the right; if only Google would consider doing the same.
I have a little helper app I run sometimes that has a button to push a query into ChatGPT and get a JSON response. You wouldn't even know OpenAI had any anti-bot tools, because it doesn't get flagged at all. It just uses a WebView inside WinForms.
Because the output you get can have hallucinations, which don’t happen with a deterministic tool. Furthermore, by getting the `jq` command you get something which is reusable, fast, offline, local, doesn’t send your data to a third-party, doesn’t waste a bunch of tokens, … Using an LLM to filter the data is worse in every metric.
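To make that concrete, here's a minimal sketch (with made-up sample data) of the kind of deterministic filter I mean; the same input always yields the same output, entirely offline, and no field values can get invented along the way:

```python
import json

# Made-up sample data for illustration.
raw = '{"users": [{"name": "alice", "active": true}, {"name": "bob", "active": false}]}'

# Deterministic filter, equivalent in spirit to:
#   jq '.users[] | select(.active) | .name'
active_names = [u["name"] for u in json.loads(raw)["users"] if u["active"]]
print(active_names)  # ['alice']
```

Run it a thousand times and you get `['alice']` a thousand times, which is exactly the property an LLM transforming the data directly cannot give you.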
I get that AI isn’t deterministic by definition, but IMHO that’s become the go-to reason not to use AI, regardless of the use case.
I’ve never seen AI “hallucinate” on basic data transformation tasks. If you tell it to convert JSON to YAML, that’s what you’re going to get. Most LLMs are probably using something like jq to do the conversion in the background anyway.
AI experts say AI models don’t hallucinate, they confabulate.
Just because you haven't seen it hallucinate on these tasks doesn't mean it can't.
When I'm deciding what tool to use, my question is "does this need AI?", not "could AI solve this?" There are plenty of cases where it's hard to write a deterministic script to do something, but if there is a deterministic option, why would you choose something that might give you the wrong answer? It's also more expensive.
The jq script (or other script) that an LLM generates is way easier to spot-check than the output you get from asking it to transform the data directly, and you can reuse it.
> but if there is a deterministic option, why would you choose something that might give you the wrong answer?
Claude Code can use jq if it's installed on your system. Also, the data transformation is usually part of a larger workflow where an LLM is being used. And honestly, Claude is going to know jq better than 95% of developers who use it. jq can do a lot of things but it’s not the most intuitive tool to learn.
An obvious best practice is to have the LLM use existing tools to confirm the correctness of its output.
LLMs will often helpfully predict made up tokens for the content of the data fields.
For 100% of the jq use cases I have, the data wouldn’t fit into context. But even for the smaller things, I have never, not even once, had an LLM not mangle data that is fed into it.
Take a feed of blog posts (and select the first 50 or so just to give the model a fighting chance). I’ll give you 80% likelihood of the output being invalid JSON. And if you manage to get valid JSON out of it, the actual dates, times and text content will have changed.
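And this failure mode is trivial to check mechanically. Here's a short sketch (the feed fields are hypothetical) that validates an LLM's "transformed" output against the original instead of trusting it:

```python
import json

# Hypothetical original feed, and an LLM-"transformed" copy of it.
original = [
    {"title": "Post 1", "date": "2024-03-01"},
    {"title": "Post 2", "date": "2024-03-02"},
]
llm_output = '[{"title": "Post 1", "date": "2024-03-01"}, {"title": "Post 2", "date": "2024-03-20"}]'

try:
    roundtripped = json.loads(llm_output)  # catches invalid JSON
except json.JSONDecodeError:
    roundtripped = None

# Catches silently mangled dates, times, and text.
mangled = roundtripped is None or roundtripped != original
print(mangled)  # True: the second date was quietly changed
```

In my experience a check like this fails far more often than people expect, which is the whole point.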
I already know that. That's why we have deterministic algorithms, to simplify that complexity.
You have much to learn. Witty answers mean nothing here, particularly empty witty answers, which are no better than jokes. Maybe stand-up comedy is your calling.
That is the issue. It's why Xcode development is really bad with AI models[0]: there are barely any text-based tutorials for it, so the models have to make a lot of assumptions. By the same logic, they are really good at Python, JavaScript, and, increasingly, Rust.
[1]: https://github.com/xcanwin/keepchatgpt