Hacker News

I understand this sentiment now, but I've already made GPT work in ways that boggle the mind, in my own time, and I'm a nobody. So this concern might be rephrased as (something like) "Kids will need to learn how to use GPT to validate secondary sources."


> ...but it’s also important to demonstrate that ChatGPT is both useful and senile, and so you need to validate its responses with secondary sources.

> So this concern might be rephrased as (something like) "Kids will need to learn how to use GPT to validate secondary sources."

That's not a rephrasing; it's an inversion. It's like saying kids will need to learn to validate medical advice from their doctor with information from mercola.com and other random alternative-health blogs.


LLMs and prompt interfaces aren't a source of truth in the first place. They could be used to compare sources, though.



