It thinks, therefore?
If a system behaves indistinguishably from a conscious being, should we call it conscious?
Let me do you one better: how do we prove that it’s not actually conscious?
We don’t understand what consciousness is. We infer it from behavior, not from first principles. We observe outputs, language, creativity, self-reference, and we project internal experience onto the agent producing them.
Different people think and perceive differently. Yet we don’t hesitate to call each of them conscious. Their neural architectures are distinct, their worldviews unique. If our concept of consciousness tolerates such diversity, why exclude an artificial entity that expresses coherent thoughts, engages in conversation, and adapts its behavior to context?
Artificial consciousness, if it arises, would simply be another substrate for subjective-like processes except driven by computation and not biology.
One important distinction I have to make here is that artificial consciousness is not equivalent to AGI. General intelligence implies cross-domain broad competence. Consciousness, by contrast, may require only one capability: a self-model. Or even the illusion of one.
To use an analogy: if you met someone who claimed they had developed tricks that allowed them, without external help, to fully imitate a world-class chess grandmaster, navigating any complex position placed before them with creative brilliance, flawless strategy, and deep foresight, wouldn’t you just say they are in fact a grandmaster? If there’s no observable difference, the distinction becomes functionally irrelevant. To insist otherwise is to appeal to a hidden, unprovable essence.
The same logic applies to consciousness. A sufficiently successful heuristic becomes indistinguishable from the real thing, for all practical purposes. This is the crux of the Turing test, and it still holds weight.
While deep neural networks (DNNs) are inspired by the brain, they are a crude abstraction. Biological neural networks exhibit plasticity, recursive feedback, embodiment, and a level of stochasticity that modern DNNs don’t replicate. The DNNs we encounter most often, LLMs, are not scaled-down brains; they are mere mathematical engines for pattern completion. There is no internal world. No sensorium. No grounding.
And yet, emergence exists.
LLMs show capabilities that aren’t explicitly trained for, such as in-context learning, three-digit arithmetic, chain-of-thought reasoning, even rudimentary theory of mind. These are byproducts of scale, architecture, and optimization. It’s not absurd to think that some form of self-model could emerge just as arithmetic did, once a model reaches sufficient complexity.
This brings me to the core of this writing: if consciousness, as experienced by humans, is itself an emergent narrative, a confabulation our brain constructs to make sense of distributed activity, then it’s not out of the realm of possibility that a sufficiently complex LLM might also stumble upon such a narrative and convincingly imitate, or even manifest, it.
We can’t rule it out. But I also don’t think we can prove it.
That’s the epistemic wall. We have no reliable way to measure subjective experience in others, biological or artificial. We infer it. We anthropomorphize. We assume. And perhaps that’s all consciousness ever was: the assumption we place on agents that model the world and themselves within it.
So, do LLMs like Gemini, ChatGPT or Claude possess consciousness?
No. Not today. They are mere statistical engines with no grounding in time, space, or embodiment. They lack persistence, motivation, affect, and the architecture necessary for phenomenology, or at the very least, for imitating it.
But tomorrow? At sufficient scale, as an emergent phenomenon or with extensions such as memory, embodiment, and recursive self-models?
Perhaps the final frontier of consciousness is not understanding it but accepting that it may not be uniquely ours.