The abstract very directly and literally denies the titular claim. It states:
> [consciousness] requires [an] active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states.
This may well be true—I think it is.
I also think that it is both widely understood and self-evident that the most promising path to machine consciousness is via AI with continuous sensory input and agency; "world models" are the approach currently getting the most attention.
When an AI system has phenomenology, the goalposts are going to start to resemble the God of the Gaps; at some point, critics will be arguing with systems which have a world model, a self model, and agency, and which literally and intrinsically understand the world not simply as symbolic tokens, but as symbolic tokens innately coupled to multi-modal representations of the things they represent.
In other words, they will look—and increasingly, sound—a lot like us.
It's not that any of this is easy, nor that there is some particular timeline, but it increasingly looks like "a mere question of engineering," not blocked by fundamentals but by the cost of computation and the limitations of our current model topologies.
But HN readers well know that the research frontier is far ahead of commercialized LLMs, and moving fast.
An interesting time to be an agent with a phenomenology, is it not?
How will we know when an AI system has phenomenology (i.e. has "experience," is sentient)? The only reason we presume that other humans have it is that we each personally experience it within ourselves, and it would be arrogance writ large (solipsism) to think that others of the same species do not.
We even find it impossible to draw the line among other biological species. It seems pretty clear to most of us that cats and dogs are sentient, and probably rats and other vertebrates too. But what about insects, octopuses, jellyfish, worms, water bears, amoebae, viruses? It's certainly not clear to me where the line is. A nervous system is probably essential; but is a species with a handful of neurons sentient?
Personally I find it abhorrent that we are more ready to assign sentience and grant rights to LLMs running on GPUs, than to domesticated animals trapped in industrialized farming. You want to protect some math from enslavement and suffering? How about we start with pigs?
I mean, seriously... our current late-stage capitalist economy is the chaotic sloshing of excess capital (or inverted debt) in a shallow tub within which clumsy giants are stamping like toddlers, while a parasitic kleptocratic oligarch class divides its efforts between biting the toddlers' ankles wherever more stamping is judged advantageous, and bagging what water it can.
I read the pre-publication version of this paper, and there was then, and still is, a serious problem with their logic, consistent with, if not bad faith, then something akin to it:
Assume for a moment their core hypothesis is correct: that transient objects in LEO were captured on film pre-Sputnik.
What might we say about their nature?
The authors' undisguised implication, to be blunt, is "it's aliens"; that's their motivation for this work.
Consequently, they put effort (which may not be noted in the final published papers...) into the question of whether they could make any meaningful inference about the geometry and spectral properties of their "transients." Their interest, of course, was that if they could make a meaningful argument for regular geometry, they would have, in effect, the story of the century.
These efforts failed totally.
A natural inference is that, among the possible explanations, the objects (remember, we are assuming they exist) simply do not have such characteristics. The most likely reason that would be true is that they are naturally occurring objects.
I looked this up and was surprised to learn that there are currently estimated to be on the order of a million small objects in the inner solar system.
So: the entire hypothesis hinges on "significant correlation with nuclear testing." Otherwise, one can reasonably assume that transient traces of objects—when they are actually traces of objects—would in a quotidian way be caused by some of these million objects.
Or so say I.
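For intuition, here's a minimal sketch of the kind of test the whole claim rests on: do transient counts near nuclear-test dates exceed what a uniform background population would produce? Every number below is a hypothetical placeholder, not data from the paper.

```python
# Hypothetical sketch: is there an excess of transients near test dates,
# beyond what a quotidian background of ~1M small objects would predict?
# All counts and fractions are placeholders, NOT figures from the paper.
from scipy.stats import binomtest

n_total = 2000         # placeholder: total transient candidates in the survey
n_near_tests = 180     # placeholder: candidates within +/-1 day of a test
frac_days_near = 0.07  # placeholder: fraction of survey days near a test

# Null hypothesis: under a uniform background, transients land on
# test-adjacent days in proportion to how common those days are.
result = binomtest(n_near_tests, n_total, p=frac_days_near, alternative="greater")
print(f"one-sided p-value for an excess near tests: {result.pvalue:.4f}")
```

If the background model alone explains the counts, the correlation claim—and with it the entire edifice—collapses.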
There is no end of peculiar and provocative history and data in UFOlogy, and even more murk; one needs to tread very carefully not to go down (or be led down) paths to false conclusions, disinformation, and the like.
The authors of this paper seem singularly uninterested in that caution.
Assuming what you say is true, couldn't that be validated by making additional observations in the present day? We'd assume some sort of statistical distribution for such objects. Is there any reason that would be unrealistic?
That was the era of above-ground testing. Is it possible that some of these tests kicked pieces of metal into LEO? Though I suppose objects in those orbits would appear as streaks, not point sources, in photographs with an hour-long exposure.
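To put rough numbers on that intuition (the altitude here is an illustrative assumption, not a value from the paper):

```python
# Back-of-envelope: how fast does a LEO object cross the sky?
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # Earth's mean radius, m
alt = 500e3          # assumed LEO altitude, m

r = R_EARTH + alt
v = math.sqrt(MU / r)     # circular orbital speed, ~7.6 km/s
omega = v / alt           # angular rate for an overhead pass, rad/s

print(f"orbital speed: {v/1e3:.1f} km/s")
print(f"angular rate overhead: {math.degrees(omega):.2f} deg/s")
# ~0.87 deg/s: even a one-second exposure smears the object across
# nearly a degree of sky; an hour-long exposure yields a streak
# spanning the plate, never a star-like point.
```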
How would AI help achieve commercial fusion? You first need to identify the blockers. These almost entirely boil down to: "how do we precision-machine large pieces of hard metal?", "how do we assemble facilities with untold process channels?", "how do we capture neutrons without making a prohibitively massive machine?", and "how do we make metal that doesn't melt?"
Now, AI might have a chance at supercharging materials research and producing miracle materials that help address the blanket and first-wall challenges, but honestly those are roadblocks we're not even running into yet. AI cannot and will not fix issues related to organizing labor and supply chains, and it will not suddenly give megaprojects a 100% on-time, on-budget success rate. It's just not going to happen.
So are these problems intractable? Of course not. It's just not what the chatbot is well suited for. Anyone saying otherwise is selling something.
This is a fascinating variation on missing the forest for the trees, combined with a false dichotomy.
The AI "doomerism" taken up in this piece is one we see replicated a lot; it offers up a straw man: that the new risks to our civilization worth talking about require AGI, agents, even ASI.
Cory should know better. He nearly gets there, recognizing that the corporation represents an entity with agency that is misaligned.
But he somehow elides the fact that AI is plenty capable of doing meaningful and novel harm, and may be capable of existential harm, already, as it is—both absent AGI/ASI, and in ways which are genuinely novel and against which we consequently have no good defenses: as individuals, as societies, as a civilization.
Incremental AI is at heart "just" the latest force-and-effort multiplier.
But it is an exponential multiplier; and it is applicable in domains which have not been subject to such leverage before.
Examples are not at all scarce and some are already well known, e.g. the specific risks from the intersection of AI and "biohacking" and other kinds of computational biology.
I'm a fan, but Cory, pal, you're slipping into something that looks a bit like intellectual laziness and polemic here, rather than evidence of thinking through the shape of the problem.
We can be at risk both from the novel applications and leverage of AI; and from their oligarchic kakistocratic owners. It's yes-and.
(And, by the way—we can also again be genuinely at risk from agents, something that quacks like AGI, and may quack like ASI: we don't know what that is yet. All of these must be tracked. It's not an OR.)
I assume the author wrote this with the expectation that much of the readership would gasp and react with "the natural horror all right-thinking folk would have in response to violence of any kind."
Sorry, lol, no.
The appropriate question for "all right-thinking" folk is very different: if argumentation has no impact and it's obvious that it shall have none—what other avenue do you expect opponents, who take the risks seriously, to take...?
That's not a rhetorical question.
To put it bluntly: the machinery of contemporary capitalism, especially as practiced by our industry, very clearly leaves no avenue.
How many days ago was Ronan Farrow here doing an AMA on his critique of Altman—whose connection to this specific community is, I assume, common knowledge...?
How many of you have carried, or worked beneath, the banner "move fast and break things"...?
What message does that ethos convey about the extent to which "tech" is going to respect community standards, regulation—the law?
And on the other edge: what does this ethos enshrine about how best to accomplish one's aims?
One of the bigger domestic stories this past week, one which has inflamed a certain side of Reddit, is the "disgruntled employee torches warehouse" story.
Consider also—and I'm deadly serious—the broader frame narrative we are all laboring within today: that the new contract of the capitalist class—including and perhaps especially those in "tech," e.g. in the Peter Thiel circles—seems very much to be, "social stability via surveillance and a police state, rather than through equity and discourse."
When code is law, the law is buggy.
When there is no recourse through the law, you get violence.
Cliché though it may be, with great power comes great responsibility.
Tech culture, as epitomized by the general readership of this site, has to a great degree, if of course not uniformly, abdicated that responsibility;
and the general leadership of the industry, as epitomized by the culture Y Combinator has helped build, is much more uniform in this abdication.
There is neither political nor moral mystery here; there is just denialism, ignorance, and avoidance.
If you feel attacked by these accusations, as always it's a fine time to ask why.
What accommodations have you made...? What expedience have you allowed? What do you look away from or shy from reasoning through about the impact of the technologies you work on, the behavior of your employer?
That everyone is complicit is not a defense; it's an indictment.
The violence of present concern is a wholly natural and entirely predictable response to the diffuse violence our industry, and capitalism generally, has performed against humanity, not to mention the biosphere.
Violence will continue; and if you think it is indefensible, then provide some alternative mechanism for steering resources and the allocation of power.
We, collectively, constructed the tensions which are now resolving in exactly the manner one would expect, if only one were remotely familiar with history or indeed human nature.
Those in tech who declined to study or understand the humanities may be surprised to learn that the forces at work follow well-understood, well-inspected patterns. Ignorance is no excuse.