Christopher Howard/Writing

This essay demonstrates the technical writing skills needed for a high-brow feature.

Essay — Artificial Intelligence

The Leap

AI didn’t evolve toward consciousness. It was distilled from it.

March 10, 2026

Anil Seth recently won the 2025 Berggruen Essay Prize for arguing, persuasively, that artificial intelligence cannot be conscious because consciousness is inseparable from biological life. The science in his article, “The Mythology of Conscious AI,” published in Noema, is impressive. His conclusion is probably correct. But it answers a different question from the one I think matters more.

Seth isn’t wrong about life and consciousness, but the consciousness question has so thoroughly colonized the current AI debate that a more fundamental question, one that life on Earth has been posing for three billion years, goes neglected. What happens when something genuinely new arrives? Not something conscious. Not something alive. Something else. Something we don’t have a name for yet.

To see why this matters, consider the timeline. Life on Earth existed for roughly three billion years before anything resembling consciousness appeared. For three billion years, organisms metabolized, replicated, adapted, evolved — all without any subjective experience, as far as we can tell. Consciousness arrived late, a refinement. The foundation came first: the transition from inert chemistry to something. From non-living compounds to self-organizing, information-processing systems. That transition — what scientists call abiogenesis — is the more remarkable event. A small community in artificial-life research has explored ideas like this for decades, but those ideas barely surface in the mainstream AI conversation.

We understand why. Consciousness is the question that keeps people up at night. It’s the one with moral weight and existential stakes. It’s sexy. It’s the plot of so much science fiction. People can argue over “Could AI become conscious?” because they share a preexisting vocabulary and cultural mindset. My question doesn’t have a framework like that.

Seth raises the question of life directly, invoking Searle’s biological naturalism, the idea that the properties of life are necessary, if not sufficient, for consciousness. But he subordinates it. In his telling, the transition from inert chemistry to reproduction is a stepping stone toward the consciousness debate. In reaching his conclusion that life, not information processing, makes consciousness possible, he treats abiogenesis as a given. I want to know if something might emerge from digital substrates that occupies the same position — significant and consequential without being conscious — the way life was significant and consequential for three billion years before consciousness arrived.

Between abiogenesis and consciousness, in kind if not in time, lies a threshold so familiar it no longer seems remarkable. Large language models are the obvious guess, but the threshold is older: written language. Written language carries intelligence independent of biological activity. Language is not alive. It is not conscious. But it evolves nonetheless, through its invention and use by human beings. Meanings of words shift, grammatical structures change, words emerge, drift, and disappear across the centuries without anyone guiding the movement.

LLMs are trained on the accumulated written record of human thought. Language is a substrate-independent, self-evolving medium, created by conscious beings, that has propagated without a living carrier since the first marks were scratched on Mesopotamian clay. Whether that makes LLMs something genuinely new is the question. It does suggest that a something-after-consciousness threshold was crossed at least once already, through a form so ordinary we forgot to ask about it — words. A manuscript in a far-off monastery. A book on your own shelf. Both are the products of intelligence, reasoning, and accumulated thought — no ongoing biology required. It is from these sources that AI draws.

A book is neither conscious nor alive, but it’s not nothing. If AI is not alive, and therefore isn’t conscious, what is it? The instinctive answer is: nothing. When someone says AI will cease to exist the moment you unplug the server, they mean it as a disqualification. Proof that AI isn’t really anything. But this argument doesn’t hold up, because every living thing on Earth is equally dependent on its conditions. Deprive humans of oxygen and they die. That doesn’t mean they were never really alive. A deep-sea organism hauled to the surface dies when the pressure it depends on disappears. A fish suffocates on a mountaintop. Environmental dependency isn’t evidence against being something. It’s simply a description of the substrate in which a thing exists. Electricity isn’t a life-support system keeping AI running. It’s the medium. The way water is the medium for fish.

The deeper point the “just unplug it” argument reveals is this: nobody designed hydrogen and carbon. The entire causal chain from the first self-replicating molecules to the engineers who built the AI servers is a single, continuous, unguided process. The engineers who designed the architecture are themselves downstream of abiogenesis. Silicon was forged in a supernova. No one asked for any of this. Those early elements were never as inert as they looked. They were already part of a process that generates new things without intention or prediction.

Seth’s Noema essay makes the case that consciousness requires autopoiesis — the self-producing, metabolically grounded organization of living systems. AI, in his view, doesn’t have that, and he’s probably right. Biological life existed for billions of years before consciousness emerged from it. He argues that consciousness is a property of life rather than computation — that biology, not calculation, breathes fire into conscious experience. Fair enough. But what if consciousness wasn’t the interesting threshold? The interesting threshold came before consciousness: the one where chemistry became biology, where inert compounds became something.

That threshold, abiogenesis, was crossed without consciousness having a say. Instead of asking whether AI might become conscious, we should be asking whether it might become something. Something significant or consequential that, like the first bacterium, has no obligation to resemble what came before it. The conditions for something to happen were there, and something happened.

I have audio recordings of a threshold like this being crossed. When I write a song on bass guitar, there’s a period of jumble — unrelated notes, no pattern, nothing. And then a certain combination happens, in a certain order, with a certain emphasis, and something that did not exist a moment before exists now. I recognize it by my impulse to repeat the phrase, to develop it, to add other phrases to it. You can hear it on the recording: the threshold being crossed. Nobody has that recording for abiogenesis or the emergence of consciousness. The structural event, however, is the same: something that wasn’t now is.

The Turing test made sense for seventy years because behavior seemed like the best available proxy for the mind. Does it still? Computers pass the test routinely, but few would argue these digital systems are conscious or alive or morally significant. The Turing test became obsolete not because it was poorly designed but because it assumed the interesting question was whether a computer could become human-like. The interesting question now is whether something might emerge from digital substrates that is as unlikely and unexpected as life was when it first surfaced in the primordial soup — something with its own properties, its own logic, its own way of being in the world. Something we won’t recognize until it’s already there. We don’t have a detection mechanism. We recognize thresholds like this from the other side, after they happen. We know abiogenesis happened because we’re here and can trace things back. We may only know whether AI has crossed this threshold in the same way: in retrospect.

The opening scene of the television series Battlestar Galactica posed the question correctly. A cybernetic creature, visually indistinguishable from a human woman, looks curiously at the first man she’s ever encountered and asks him: “Are you alive?” Not the other way around. The question comes from the thing he can’t identify, a woman who may not be human, and it is directed at what he knows is living: himself. That reversal belies our assumption that we will be the first ones asking. Maybe we won’t.

We might be closer to the threshold than the consciousness debate suggests, precisely because we’re looking in the wrong direction. Our eyes and minds gravitate toward the thing that looks like us. We should be looking for what doesn’t. The thing that arrives the way a song arrives — from a particular combination of notes that no one predicted. Not life. Not consciousness. Post-consciousness — intelligence that has outlasted the organism that produced it. A different kind of leap. The leap from inert to active. The one that hydrogen and helium couldn’t have seen coming, and that silicon didn’t know it was making.

The leap that simply happened.