
The most blatant whoppers in Google's AI previews seem to stem from mistaking satirical sites for sites that are attempting to state facts. Possibly an LLM could be trained to distinguish sites that intend to be satirical or propagandistic from news sites that intend to report accurately, based on the structure of the language alone. After all, satire is usually written so that most readers grasp that it is satire, and good detectives can often spot "tells" that someone is lying. The structure of the language is all the LLM has; it has no oracle to tell it what is true and what is false. But at least this kind of approach might make LLM-enhanced search engines less embarrassing.
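To make the idea concrete: a toy sketch of what "classifying by the structure of the language" might look like, using a naive Bayes bag-of-words model over hand-labeled snippets. Everything here (the labels, the training snippets, the class names) is made up for illustration; a real detector would need far richer features and data. Note that the sketch shares the exact limitation the comment describes: it only sees surface features of the text, never whether any claim is true.

```python
# Toy sketch, not a real satire detector: a naive Bayes classifier that
# separates "satire" from "news" prose using only word frequencies.
# Like any purely linguistic model, it judges style, not truth.
import math
from collections import Counter

def tokens(text):
    return [w.strip(".,!?\"'").lower() for w in text.split()]

class NaiveBayes:
    def __init__(self):
        self.counts = {}       # label -> Counter of word frequencies
        self.totals = {}       # label -> total word count for that label
        self.docs = Counter()  # label -> number of training documents

    def train(self, text, label):
        self.counts.setdefault(label, Counter()).update(tokens(text))
        self.totals[label] = sum(self.counts[label].values())
        self.docs[label] += 1

    def predict(self, text):
        vocab = set().union(*self.counts.values())
        n_docs = sum(self.docs.values())
        best, best_score = None, float("-inf")
        for label, counter in self.counts.items():
            # Log prior plus log likelihood with Laplace smoothing
            score = math.log(self.docs[label] / n_docs)
            for w in tokens(text):
                score += math.log((counter[w] + 1) / (self.totals[label] + len(vocab)))
            if score > best_score:
                best, best_score = label, score
        return best

nb = NaiveBayes()
nb.train("area man hilariously declares victory over nothing", "satire")
nb.train("local dog elected mayor in stunning upset nobody expected", "satire")
nb.train("the senate passed the appropriations bill on tuesday", "news")
nb.train("officials confirmed the quarterly figures in a statement", "news")
print(nb.predict("area man declares victory"))        # satire
print(nb.predict("the senate confirmed the bill"))    # news
```

The point of the toy: both predictions come purely from which words co-occur with which label. Nothing in the model could notice if the "news" snippets were fabrications, which is exactly why such a classifier could at best reduce embarrassment, not guarantee accuracy.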

