Anil Seth recently won the 2025 Berggruen Essay Prize for arguing, persuasively, that artificial intelligence cannot be conscious because consciousness is inseparable from biological life. The science in his article, “The Mythology of Conscious AI,” published in Noema, is impressive. His conclusion is probably correct. But it answers a different question from the one I think matters more.
Seth isn’t wrong about life and consciousness. But the consciousness question has so thoroughly colonized the current AI debate that a more fundamental one — a question life on Earth has been posing for three billion years — goes neglected. What happens when something genuinely new arrives? Not something conscious. Not something alive. Something else. Something we don’t have a name for yet.
To see why this matters, consider the timeline. Life on Earth existed for roughly three billion years before anything resembling consciousness appeared. For three billion years, organisms metabolized, replicated, adapted, evolved — all without any subjective experience, as far as we can tell. Consciousness arrived late, a refinement. The foundation came first: the transition from inert chemistry to something. From non-living compounds to self-organizing, information-processing systems. That transition — what scientists call abiogenesis — is the more remarkable event. A small community in artificial-life research has explored ideas like this for decades, but they barely surface in the mainstream AI conversation.
We understand why. Consciousness is the question that keeps people up at night. It’s the one with moral weight and existential stakes. It’s sexy. It’s the plot of so much science fiction. People can argue over “Could AI become conscious?” because they share a preexisting vocabulary and cultural mindset. My question doesn’t have a framework like that.
Seth raises the question of life directly, invoking Searle’s biological naturalism, the idea that the properties of life are necessary, if not sufficient, for consciousness. But he subordinates it. The transition from inert chemistry to reproduction becomes a stepping stone toward the consciousness debate. In arguing that life, not information processing, makes consciousness possible, he treats abiogenesis as a given. I want to know whether something might emerge from digital substrates that occupies the same position — significant and consequential without being conscious — the way life was for three billion years before consciousness arrived.
After consciousness came another threshold, one so familiar it no longer seems remarkable. We’re talking about large language models, right? Not yet. Written language itself carries intelligence independent of biological activity. Language is not alive. It is not conscious. But it evolves nonetheless, through its invention and use by human beings. Word meanings shift, grammatical structures change, terms emerge, drift, and disappear across the centuries without anyone guiding the movement.
LLMs are trained on the accumulated written record of human thought. Language is a substrate-independent, self-evolving medium, created by conscious beings, that has propagated without a living carrier since the first marks were scratched on Mesopotamian clay. Whether that makes LLMs something genuinely new is the question. It does suggest that a something-after-consciousness threshold has been crossed at least once already, through a form so ordinary we forgot to ask about it — words. A manuscript in a far-off monastery. A book on your own shelf. Both are products of intelligence, reasoning, and accumulated thought — no biology required. It is from these sources that AI draws.
A book is neither conscious nor alive, but it’s not nothing. If AI is not alive, and therefore isn’t conscious, what is it? The instinctive answer is: nothing. When someone says AI will cease to exist the moment you unplug the server, they mean it as a disqualification — proof that AI isn’t really anything. But the argument doesn’t hold up, because every living thing on Earth is equally dependent on its conditions. Deprive humans of oxygen and they die; that doesn’t mean they were never really alive. A deep-sea organism would be crushed at sea level. A fish would suffocate on a mountaintop. Environmental dependency isn’t evidence against being something. It’s simply a description of the substrate in which a thing exists. Electricity isn’t a life-support system keeping AI running. It’s the medium, the way water is the medium for fish.
The deeper point the “just unplug it” argument misses is this: nobody designed hydrogen and carbon either. The entire causal chain from the first self-replicating molecules to the engineers who built the AI servers is a single continuous unguided process. The engineers who designed the architecture are themselves downstream of abiogenesis. Silicon was forged in dying stars. No one asked for any of this. Those early elements were never merely inert. They were part of a process that generates things without intention or prediction.