Hacker News

LLMs lack mechanisms for persistent memory, causal world modeling, and self-referential planning. Their static transformer architecture fundamentally constrains dynamic reasoning and adaptive learning. All of these are core requirements for AGI.
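To make the "no persistent memory" point concrete, here's a toy Python sketch (not any real model's API, just a stand-in): at inference time the parameters are frozen, so each call is a pure function of its own prompt and nothing "told" to the model in one call survives into the next.

```python
class FrozenToyModel:
    """Toy stand-in for an LLM at inference: frozen weights, stateless calls."""

    def __init__(self):
        # Fixed "weights" -- set once at training time, never updated after.
        self.weights = {"hello": "world"}

    def generate(self, prompt: str) -> str:
        # Output depends only on the frozen weights plus the current prompt.
        # There is no write-back step, so no persistent memory ever forms.
        return self.weights.get(prompt, "unknown")

m = FrozenToyModel()
print(m.generate("hello"))             # -> world
print(m.generate("remember: x = 42"))  # -> unknown (nothing is stored)
print(m.generate("what is x?"))        # -> unknown (the previous call left no trace)
```

Context windows and retrieval bolt some memory on from the outside, but the model itself stays exactly like this: same weights in, same function out, every call.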

So yeah, AGI is impossible with today's LLMs. But at least we got to watch Sam Altman and Mira Murati drop their voices an octave onstage and announce "a new dawn of intelligence" every quarter. Remember Sam Altman's $7 trillion?

Now that the AGI party is over, it's time to sell those NVDA shares and prepare for the crash. What a ride it was. I am grabbing the popcorn.



