> LLMs do not constitute "AI" let alone the more rigorous AGI.
I have a textbook, "Artificial Intelligence: A Modern Approach," which covers Language Models in Chapter 23 (page 824) and the Transformer architecture in the following chapter. In any field, technical terms emerge to avoid ambiguity; laymen often adopt less precise definitions from popular culture instead. LLMs do qualify as AI, even if not under the oversimplified notion of "AI" some laymen have in mind.
For the last several decades, every advance that counted as AI progress according to AI researchers and AI textbooks has been dismissed as "not really AI." That isn't because the field has made no progress; it's because the popular definition of AI is incoherent, derived from fiction rather than from the field, so people outside the field struggle to make coherent statements about it.
> They are a GREAT statistical parlor trick for people that don't understand statistics though.
The people who believe that LLMs constitute AI in a formal sense of the word aren't statistically illiterate. AIMA covers statistics extensively: chapter 12 is on Quantifying Uncertainty, 13 on Probabilistic Reasoning, 14 on Probabilistic Reasoning Over Time, 15 on Probabilistic Programming, and 20 on Learning Probabilistic Models.
Notably, some of these chapters show with mathematical rigor that probability theory is optimal and sensible; far from being a parlor trick, it can be proven that failing to abide by its strictures is suboptimal. The ontological commitments of probability theory are quite reasonable; they're the same commitments logic makes. That we model accordingly isn't a parlor trick but a rational choice, backed by Dutch book arguments proving that an agent whose beliefs violate the axioms of probability can be made to accept a set of bets that guarantees a loss.
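The Dutch book argument can be made concrete with a small sketch. The numbers below are hypothetical, and `agent_net` is an illustrative helper, not anything from AIMA: an agent who assigns P(A) = 0.4 and P(not-A) = 0.4 (summing to 0.8 rather than 1) regards selling a $1 bet on each event at those prices as fair, yet loses money whichever way the world turns out.

```python
def agent_net(credence_a: float, credence_not_a: float, a_occurs: bool) -> float:
    """Net payoff to an agent who sells a $1 bet on A at price credence_a
    and a $1 bet on not-A at price credence_not_a, each of which the
    agent considers a fair price given its credences."""
    income = credence_a + credence_not_a   # prices collected up front
    payout = 1.0 if a_occurs else 0.0      # the bet on A pays out
    payout += 0.0 if a_occurs else 1.0     # the bet on not-A pays out
    return income - payout

# Incoherent credences: P(A) + P(not-A) = 0.8 < 1.
# Exactly one bet pays out in every outcome, so the agent is down
# roughly $0.20 no matter what happens.
for outcome in (True, False):
    assert agent_net(0.4, 0.4, outcome) < 0
```

Coherent credences (summing to exactly 1) make the net zero in every outcome, which is the sense in which probability's axioms are the unique way to avoid guaranteed regret.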