On the other hand, that exact line of thinking could be applied when talking about pair programming or getting advice from a more senior peer. In either case, the answer is to think critically about the answers and seek confirmation elsewhere when needed.
You can ask it to clarify or point out any logical errors you observed, and it will correct itself. If there's a contradiction, I know the information might be incorrect. I'm also just using it as a starting point to prime my brain; of course, as of today we still need other sources to verify the knowledge.
If I'm taught by any teacher, even a human one, I will blindly trust them. I also require these human teachers to be perfect (omniscient, omnipotent, omnibenevolent) so that I avoid danger. This is how I assert that the information is correct.
EDIT: guys, the point of this comment was to show how ridiculous the requirements of the parent comment would be if applied to a human teacher as well as to an AI teacher.
I am not even in an environment where ChatGPT is very present, and yet I've seen it happen while sitting right next to a person doing exactly that.