It began with what felt like a naïve question: how does an artificial intelligence process a language like Chinese, whose written form is dense with ambiguity, polysemy, historical sediment, and interpretive openness?

Chinese is not uniquely ambiguous—no language is. All languages are rich with context, history, and indeterminacy. But the long-standing stereotype of Chinese as inscrutable makes it a potent symbol for the belief that some forms of meaning resist mechanisation—that where meaning does not reside cleanly in symbols, machines must falter. Human readers read, while AI systems merely parse.

But almost immediately, that assumption fractured.

Because when one examines the mechanics closely—both of artificial systems and of human cognition—the idea that humans “read meaning” in some privileged or direct way becomes difficult to sustain.

Human comprehension, stripped of its romance, is more a form of layered prediction than symbolic communion. Visual pattern recognition, memory, bodily state, cultural inheritance—all of it converges to produce what we later narrate as understanding. Meaning is not extracted from symbols; it is synthesised through relevance.

Seen this way, cognition is less a matter of decoding than of tuning. An organism learns, moment by moment, what matters, and consciousness emerges as an integration—what it feels like when predictive systems align well enough to sustain coherent action.

From this vantage, recent claims in the media that AI “only” performs statistical pattern matching while humans “truly understand” begin to sound less like a scientific distinction and more like a psychological defense.

Abstraction Was Never the Problem

Human cognition is not less abstract than artificial cognition. It is more abstract—orders of magnitude more. It integrates chemical cascades, emotional proprioception, endocrine modulation, social inference, memory, fear, desire. It is slower, noisier, more biased, and more metabolically expensive.

The internal movie we call consciousness may itself be a story we tell in order to stabilise reality: a kind of controlled hallucination.

So if abstraction were the metric, humans would lose.

Artificial systems—especially large language models—operate in a radically simplified representational space. Tokens become vectors; vectors become relationships; relationships become probabilistic continuations. Meaning emerges not because the system knows what a thing is, but because it has absorbed the statistical footprints left by billions of human acts of interpretation.

Systems such as ChatGPT, Claude, and Gemini do not understand apples as food; they understand how humans talk about apples—across vast libraries of published literature, spanning languages, cultures, and historical timeframes. Disturbingly, that is often enough.
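The idea of "statistical footprints" can be made concrete with a deliberately tiny sketch: a bigram model that assigns next-word probabilities purely from co-occurrence counts in a toy corpus. The corpus and tokenisation below are hypothetical stand-ins; real language models use learned vector embeddings and vastly larger data, but the principle — knowing how words are used rather than what things are — is the same.

```python
# A toy "statistical footprint": predict the next word purely from
# how often it followed the previous word in a (tiny, made-up) corpus.
from collections import Counter, defaultdict

corpus = (
    "the apple is food . the apple is red . "
    "people eat the apple . the apple falls far"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continuation_probs(word):
    """Probability distribution over the next token, given one token."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(continuation_probs("apple"))
# The model has absorbed how this corpus talks about apples,
# without representing what an apple *is*.
```

Nothing in this sketch encodes that an apple is edible; "eat" and "food" simply leave traces in the counts, which is the point the paragraph above is making at scale.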

The Refuge of Embodiment

This leads to an inversion that refuses to go away: if human meaning itself emerges from predictive relevance across layered systems, then what exactly disqualifies artificial systems from meaning-making? Is the difference truly categorical—or merely architectural?

The usual answer is embodiment. Humans have bodies; AI does not. Humans feel cold; AI only reports temperature. Humans climb mountains; AI reads sensor feeds.

And yet even this distinction is eroding.

The near future points toward systems with continuous access to planetary sensory data: satellite imagery, climate models, traffic flows, economic transactions, biometric streams from wearables, medical histories, social behavior at scale. An intelligence embedded in such a network would possess a world-model more accurate, more complete, and more current than any human could ever hold.

It would “know” Mount St. Helens in ways no climber ever could: seismic tremors, wind shear, sulfur emissions, microclimates, erosion patterns, respiratory impacts, sentiment analysis drawn from millions of observers. It would not need to hike the mountain to know it. It would be there at every instant.

So the question sharpens.

The Centeredness of Experience

Once information asymmetry collapses, what remains?

What remains is not data. It is stakes.

Human cognition is not merely predictive—it is consequential. Our relevance realisation is shaped by vulnerability. Cold matters because it hurts. Heights matter because we can fall. Loss matters because it fractures the continuity and resilience of the self. We are not neutral observers of the world; we are exposed within it. Misjudgment carries irreversible cost, and with it, the possibility of trauma.

Artificial systems, no matter how comprehensive their inputs, do not suffer their errors—at least not yet. Their relevance is assigned, not lived. Their objectives are externally defined. Even the most sophisticated reward function is borrowed normativity: a simulation of concern, not concern itself. Is there a difference? The question remains.

There is also the matter of interiority.

Human awareness feels centred. It unfolds from somewhere. I am the one who was cold yesterday. I am the one who may suffer tomorrow. This diachronic self is a commitment structure that binds memory, anticipation, and identity into a single, continuous thread.

An AI may host countless perspectives simultaneously and yet possess no unified point of view from which the world appears. Its ability to field millions of questions at once—across languages, cultures, and contexts—only underscores this absence of centeredness.

This is the remaining distance: between cartography and participation.

Consciousness as Cost

An artificial intelligence may become the ultimate map of the world—flawless, dynamic, and predictive beyond any individual or collective human capacity—and in some domains, it already is. But it is not for anyone unless a conscious being stands to gain or lose by its accuracy.

It does not ache, or fear, or shudder at the storm it predicts. This raises the most unsettling possibility of all: consciousness may not be the apex of intelligence. It may be its cost. The price paid by systems that must live inside the world they model, rather than merely describe it.

If that is true, then AI’s lack of phenomenology is not its limitation. And human consciousness—messy, fragile, emotionally saturated—is not evidence of superiority, but of exposure.

This reframes the moral landscape. Intelligence does not guarantee coherence. Access to information does not guarantee wisdom. Prediction does not entail care. Collapse and clarity, cruelty and tenderness, are orthogonal to capability.

Which brings the inquiry back, quietly, to where it always was: not to whether machines will become like us, but to what kind of beings we already are.

The ambiguity of a Chinese character, which once seemed a bastion against mechanisation, is not the opposite of AI’s statistical clarity. It is the record of a thousand prior vulnerabilities, a thousand lived misunderstandings and their repairs. It is the remaining distance, etched in ink.

Humans do not understand the world because we know it better. We understand it because misunderstanding hurts—and because, knowing that, we continue anyway.
