I feel that discussion of papers like these so often distills into arguments about how it's "impossible for a bot to know what's true" that we should just bite the bullet and define what we mean by "truth".
Some arguments seem to tacitly hold LLMs to a standard of full-on brain-in-a-vat solipsism, asking them to prove their way out, a test they'll obviously fail. The more interesting and practical questions, just as with humans, seem to sit a bit removed from that, though.
I understood this purely as a pragmatic notion. LLMs produce some valid output and some invalid output, and it would be useful to know which is which. If there's information inside the model that we could extract but that isn't currently showing up in the output, surfacing it would be helpful.
It's not really necessary to settle abstract questions about truth and knowledge. Just being able to reject a known-false answer would be of value.
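To make "information inside the model that isn't showing up in the output" concrete, here's a minimal sketch of the usual probing setup: fit a simple linear classifier on hidden activations of statements labeled true or false. Everything here is an assumption for illustration, not the paper's method: the choice of gpt2, the last-token/last-layer readout, and the toy statements are all placeholders.

```python
# Hedged sketch: probe a model's hidden states for a true/false signal.
# Assumes Hugging Face transformers + scikit-learn; "gpt2" and the toy
# statements below are illustrative placeholders, not the paper's setup.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

def last_token_state(text: str, layer: int = -1) -> np.ndarray:
    """Hidden state of the final token at the chosen layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer][0, -1].numpy()

# Tiny labeled set of statements: 1 = true, 0 = false.
statements = [
    ("The Eiffel Tower is in Paris.", 1),
    ("The Eiffel Tower is in Rome.", 0),
    ("Water boils at 100 degrees Celsius at sea level.", 1),
    ("Water boils at 10 degrees Celsius at sea level.", 0),
]
X = np.stack([last_token_state(s) for s, _ in statements])
y = np.array([label for _, label in statements])

# A linear probe: if it separates true from false better than chance on
# held-out statements, some truth-relevant signal is present in the
# activations even when the sampled text doesn't reflect it.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(X))
```

The point isn't that this settles anything philosophical; it's that "can we reject a known-false answer?" becomes an ordinary measurable classification question.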