> Some observers have pointed out cases where CCC [sic] appears to regenerate artifacts strongly resembling existing implementations, including standard headers

It's not just that. We just had another thread here on HN discussing how LLMs can reproduce the entire text of Harry Potter with ~99% accuracy when prompted with a jailbreak. That seems to contradict the "it's remarkable progress" claim at the top of the article.




Both can be true, right? A model can be a savant memorizer _and_ a good reasoner.

LLMs can't reason because they fundamentally don't understand anything they generate.

"Fundamental understanding" is a vague notion. I'd bet half the people here couldn't define what we're talking about, and 90% don't understand how they themselves think. Moreover, most people most of the time probably don't reason logically at all; they run on routine in a rudimentary thinking mode.

How does that contradict the claim at all?


