Not even remotely
> LLM output is no different
It is different
A search result might take me to the wrong answer, but an LLM might just invent nonsense answers.
This is a fundamentally different thing and more difficult to detect, imo.
> This is a fundamentally different thing and is more difficult to detect imo
99% of the time it's not. You validate it, then correct or accept it, like you would any other suggestion.