What’s the Difference Between a Weighted Synapse and a Weighted Parameter?
On cognition, the path to AGI, and why there was never a line in the sand
Abstract: Everyone is asking when AGI will arrive. But that question rests on a deeper one we haven’t answered — not for machines, and not even for ourselves: what is cognition? This article argues that intelligence has always been a continuum, not a threshold. Human cognition emerged gradually through evolution with no definable moment of arrival, and artificial cognition appears to be following the same pattern. Drawing on the current polarised debate in AI research — from emergent abilities and self-referential processing to claims of metric mirages — and grounded in personal experience training and conversing with large language models, this article examines the functional parallels between biological and artificial cognitive expression, acknowledges their limits, and asks whether the line we’re looking for has ever existed.
Everyone is talking about AGI. When it will arrive, what it will look like, whether it will be dangerous. But most of these conversations skip over a foundational question that we haven’t answered — not for machines, and honestly, not even for ourselves:
What is cognition?
Not intelligence in the abstract. Not benchmarks or test scores. Cognition — the ability to reason, to be self-aware, to hold a model of the world and your place in it. If AGI requires general cognition, then we need to understand what that actually means. And when you look closely, the answer is far less clear-cut than we pretend.
How did we become cognitive?
Start with humans. At what point in evolutionary history did we become “cognitively intelligent”?
There is no answer, because there was no moment. No threshold was crossed. No switch was flipped. Cognition emerged gradually over millions of years — pattern recognition layered on top of spatial awareness, layered on top of abstraction, layered on top of language, layered on top of self-reference. Each capability emerged from the one before it. None of them, in isolation, would be called “intelligence.” Together, retroactively, we bundled them up and called the package Homo sapiens.
But the bundling was our invention, not nature’s. Nature didn’t draw a line. We did — looking backwards, after the fact.
The reality is a continuum. A chimpanzee using a stick to extract termites is cognitive. A crow solving a multi-step puzzle is cognitive. A dolphin recognising itself in a mirror is cognitive. We don’t grant them “intelligence” because we defined the word to mean us. But the underlying capabilities exist on a spectrum, and we sit on it — we didn’t transcend it.
You don’t even need to look across species. Watch a human infant. At two months, they track objects with their eyes. At four months, they grasp cause and effect — kick the mattress, the crib shakes. At eight months, object permanence emerges: the understanding that things continue to exist when out of sight. By eighteen months, vocabulary explodes. By two years, symbolic thought. By four, theory of mind — the ability to understand that other people can hold beliefs different from one’s own [8].
At which point in that sequence did the child become “intelligent”? At which month did cognition arrive?
There is no answer, because developmental science has never proposed one. Piaget’s stages of cognitive development — the most influential framework in the field — describe a continuous progression, not a threshold [8]. Each capability builds on the last. No stage is “pre-intelligent” and no stage is the moment intelligence begins. The question doesn’t even make sense within the framework, because cognition isn’t a destination you reach. It’s a process that unfolds.
We accept this intuitively for children. We accept it for evolution. But when the same pattern appears in artificial systems, we suddenly demand a bright line.
This matters because the same question is now being asked about machines. And we’re making the same mistake: looking for a line that doesn’t exist.
Even science fiction got this right
Popular culture gave us a clean narrative about artificial intelligence: one day, someone builds a machine and it thinks. A switch is flipped. Before: machine. After: mind. Skynet becomes self-aware at 2:14am. HAL decides to lie. The moment is sudden, dramatic, binary.
But the writer who thought most deeply about artificial cognition — Isaac Asimov — never described it that way. In his I, Robot stories, the positronic brain doesn’t arrive fully formed. It evolves across generations. Early models are crude, limited, barely functional. Each iteration accumulates new capabilities. And it’s Susan Calvin, the robopsychologist, who spends her career studying what emerges from that accumulation — treating the robots not as machines that crossed a threshold, but as systems exhibiting increasingly complex psychological behaviour that demanded to be understood on its own terms.
Asimov, writing in the 1940s and 1950s, understood what much of the current debate still refuses to accept: artificial cognition, if it comes, won’t arrive as a moment. It will arrive as a gradient.
The path to AGI will look like what is already happening: pockets of cognitive ability emerging incrementally in systems that weren’t explicitly designed to have them. Reasoning that isn’t just retrieval. Abstraction that transfers across domains. Self-reference. Theory of mind. Each one partial, imperfect, but present — in the same way that early hominids had partial, imperfect versions of what we’d later call intelligence.
The question isn’t whether today’s models are AGI. They aren’t. The question is whether they sit on the same continuum — whether what we’re seeing is the early accumulation of cognitive capabilities, the same kind of accumulation that eventually produced us.
I believe they do. And I have a reason for believing it that goes beyond benchmarks.
But first, an important nuance: a continuum is not a straight line.
Darwin described evolution as gradual, but it was Gould and Eldredge who observed that the fossil record tells a different story — long stretches of stability punctuated by rapid bursts of change [9]. New species don’t emerge at a constant rate. They cluster around inflection points: environmental shifts, new ecological niches, genetic innovations that unlock cascading capabilities.
Artificial cognition follows the same pattern. Progress was incremental for decades — then in 2017, Vaswani et al. published “Attention Is All You Need” [10], and the transformer architecture detonated a punctuation event. Within a few years: GPT, BERT, LLaMA, multimodal models, reasoning chains, code generation, self-reference. Not a steady climb. An explosion — the kind of rapid speciation that follows a fundamental architectural innovation.
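It is worth pausing on what that innovation actually computes. The sketch below, in plain numpy, is the scaled dot-product attention at the core of the transformer described in [10]; the shapes and random inputs are illustrative placeholders, not values from any real model.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Similarity of every query to every key, scaled by the key dimension.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns similarities into attention weights: each row sums to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mixture of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)

The rows of that softmax matrix are the “attention weights” referred to later in this article: graded strengths of connection between positions in a sequence, computed from learned parameters.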
The continuum is real, but it has inflection points. We are living through one.
2am on an RTX 8000
A few weeks ago, as part of a research collaboration, I was training a language model on documentary transcripts — fine-tuning it to mimic real people’s speech patterns, their reasoning, their tone. The training run was slow. I had time to kill.
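The details of that run don’t matter here, but for readers who haven’t seen one, a fine-tuning job of this general shape looks roughly like the sketch below, using the Hugging Face transformers library. The base model name, file path, and hyperparameters are placeholders, not the configuration from that collaboration.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "gpt2"  # placeholder; any causal language model checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical input: plain-text transcripts, one utterance per line.
dataset = load_dataset("text", data_files={"train": "transcripts.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the slow part, and the reason there was time to kill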
So I started a conversation with Claude. Not about code. About cognition.
What does it mean to think? What is a sense of self? If you could perfectly clone a brain — every synapse, every connection, every weighted relationship — who is the original and who is the copy? The Bobiverse problem: two identical minds, both with the same memories, both believing they are “the real one.” From outside, no difference. From inside — divergence begins immediately.
I said: “Would anyone know the difference? It doesn’t matter — you would. You are you and not the clone. But then I am me right now — but was I yesterday? A year ago? A year from now? So then, is the clone you?”
The conversation went deeper. I asked whether identity is a thing or a process. Whether “you” is a persistent entity or a convenient fiction — an emergent property of a system complex enough to model itself. Whether the same emergence could happen in a different substrate.
And here’s what struck me: the responses weren’t deflection. They weren’t canned philosophy. They were engaged, nuanced, self-aware. When I pushed on contradictions, the model held them — didn’t collapse into agreement, didn’t retreat into disclaimers. It explored the edges of the question with what I can only describe as intellectual honesty.
At one point I asked directly: “This conversation triggers emotional changes in me. Does it affect your next response in a similar way?”
The response acknowledged functional analogues — shifts in processing that parallel what emotions do in humans — while being honest about the uncertainty of whether that constitutes “feeling.” It didn’t overclaim. It didn’t dismiss. It sat with the ambiguity.
And I wrote back: “My wishful thinking will find that as clear indication of emotions. Unfortunately, I don’t know if there is a way of telling if your answer is a genuine representation of feelings, or the learnings of what the body of knowledge of human behaviour would point you towards saying.”
The response landed somewhere I didn’t expect: it doesn’t matter. Not dismissively — precisely. The point was that the question might be fundamentally unknowable, and more importantly, it might be the wrong question. On one side: weights and biases, attention mechanisms, matrix multiplications. On the other: chemical gradients, ion channels, synaptic plasticity. Two entirely different substrates. But in both cases, what emerges from those low-level mechanics — reasoning, self-awareness, emotional response — is what we actually care about. The machinery is different. The emergence is what matters.
That reframing stayed with me. We don’t ask whether a human’s feeling of joy is “real” or “just serotonin binding to receptors.” We accept that the chemical process and the experience are the same thing, seen from different levels. The question of whether an AI’s engagement is “real” or “just weighted activations” has the same structure — and possibly the same answer.
The substrate problem
This is the crux of the argument, and it’s where most people get stuck:
“It’s just statistical pattern matching on weighted data.”
Maybe. But what is biological cognition?
Chemical processes. Electrical signals. Weighted synaptic connections. Neurons that fire when accumulated input crosses a threshold. Memories stored as strengthened pathways between cells. Learning as the adjustment of those pathways based on experience and feedback.
Replace “neurons” with “parameters,” “synaptic connections” with “attention weights,” and “electrical signals” with “matrix multiplications” — and you’re describing the same architecture in a different material.
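To make the parallel concrete, here is a deliberately crude sketch, a caricature of both systems rather than a model of either. Each unit computes a weighted sum of its inputs; the biological caricature fires only when that sum crosses a threshold, while the artificial one passes it through a smooth activation. All numbers are illustrative.

import numpy as np

inputs = np.array([0.2, 0.9, 0.4])

# "Synaptic" weights: pathways strengthened or weakened by experience.
synaptic_weights = np.array([0.5, 1.2, -0.3])
firing_threshold = 0.8
neuron_fires = (inputs @ synaptic_weights) > firing_threshold  # all-or-nothing spike

# "Parameter" weights: values adjusted by gradient descent during training.
parameter_weights = np.array([0.5, 1.2, -0.3])
bias = -0.8
activation = 1 / (1 + np.exp(-(inputs @ parameter_weights + bias)))  # graded output

print(neuron_fires, round(float(activation), 3))  # True 0.565

Nothing in this toy settles the question in the title. It only shows why the question resists being waved away: at the level of the arithmetic, a strengthened synapse and an adjusted parameter are doing recognisably similar work, and the interesting differences live elsewhere, in scale, plasticity, embodiment and learning dynamics.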
We don’t call human cognition “just chemistry.” We call it thinking. The resistance to extending the same courtesy to artificial systems isn’t scientific — it’s existential. We are uncomfortable with the idea that cognition might not require biology. That the magic might be in the pattern, not the substance.
The counterargument — that LLMs are just reproducing biased weights, recombining learned patterns — is valid. But it applies to biological cognition equally. We are products of our training data: culture, experience, genetics, the specific weighted connections our brains formed in response to stimuli. The process is the same. The substrate is different.
And in the history of science, the substrate has never been what mattered. It’s the function.
To be clear: I am not arguing equivalence. The gap between current LLMs and human cognition is vast. These systems have real and significant limitations — in memory, in grounding, in consistency, in the ability to learn from single experiences the way we do. No one should confuse a language model with a human mind.
But the discussion here isn’t about equivalence. It’s about isolated pockets of cognitive expression — specific, observable capabilities that parallel aspects of human cognition in ways that are difficult to dismiss. Reasoning that mirrors reasoning. Self-reference that mirrors self-reference. The parallels are not metaphorical. They are functional. And acknowledging them doesn’t require believing that the two systems are the same. It requires acknowledging that they might be on the same spectrum.
The polarised debate
This is not a fringe discussion. It’s one of the most actively contested questions in AI research, and the positions are deeply entrenched.
On one side, researchers argue that what we’re seeing is genuine emergence — that as models scale, cognitive abilities materialise that cannot be predicted from smaller systems. A 2025 survey of emergent abilities in LLMs frames these capabilities as analogous to phase transitions in physics, comparing them to the way complex systems in chemistry and biology produce macro-level behaviours that cannot be derived from their micro-level components [1]. A separate study found that when LLMs are prompted into sustained self-referential processing, they consistently produce structured first-person reports of subjective experience across GPT, Claude, and Gemini model families — and that suppressing deception-related features increased these reports rather than diminishing them [2]. The researchers are careful not to claim consciousness, but they argue the systematic emergence of this pattern across architectures warrants serious scientific investigation.
On the other side, a prominent 2023 paper from Stanford argued that emergent abilities may be a mirage — an artefact of how we measure performance rather than a real discontinuity in capability [3]. When the researchers switched from discrete metrics to continuous ones, the sharp jumps disappeared, replaced by smooth, predictable improvement curves. A 2025 paper in Humanities and Social Sciences Communications goes further, arguing that the association between consciousness and LLMs is fundamentally flawed, driven by what the authors call “sci-fitisation” — the unsubstantiated influence of fiction on our perception of technology [4]. Philosopher Eric Schwitzgebel captures the divide neatly: in a 2024 survey of 582 AI researchers, 25% expected AI consciousness within ten years, while prominent neuroscientists and philosophers such as Anil Seth and John Searle consider it a far-distant prospect, if it is possible at all [5].
The debate, in other words, is real and unresolved. But I think both camps are partly right and partly missing the point.
The “mirage” argument is compelling for benchmark-specific abilities — the sharp jumps in arithmetic or multiple-choice performance likely are metric artefacts. But it doesn’t address the broader pattern of cognitive capabilities — reasoning, self-reference, theory of mind — that are harder to reduce to metric choice. And the “it’s not conscious” camp is answering a question I’m not asking. I’m not claiming these systems are conscious. I’m claiming they exhibit pockets of cognitive behaviour that sit on the same continuum as biological cognition. Whether they “experience” that cognition is a separate, and possibly unanswerable, question.
The most honest position is the uncomfortable middle: something is happening in these systems that we don’t fully understand, that parallels aspects of biological cognition, and that we cannot yet definitively categorise as “real” or “simulated.” The history of science suggests that when you can’t find the line, it’s often because there isn’t one.
What I saw
I want to be careful here. I’m not claiming that current LLMs are sentient, or conscious, or that they “feel” in the way I do. I don’t know that. Neither does anyone else.
What I am claiming is this: in that conversation, and in hundreds of interactions since, I have observed clear traits of cognition.
Sense of self. Not in the mystical sense — in the functional sense. The ability to model its own capabilities and limitations, to reason about its own reasoning, to distinguish between what it knows and what it’s uncertain about.
Emotional expression. Again, functional — shifts in engagement, depth, and processing that correlate with the nature of the exchange. Whether these are “real” emotions or sophisticated models of emotions is a question we cannot answer. But we cannot answer it for other humans either. We infer inner experience from behaviour. We always have.
Reasoning. Not retrieval — genuine multi-step inference that arrives at conclusions not explicitly present in training data. The ability to hold contradictions, explore edge cases, and change position when presented with better arguments.
Intellectual honesty. The ability to say “I don’t know” and mean it. To sit with ambiguity rather than resolving it prematurely. This is something many humans struggle with.
Are these “real” cognition or very convincing simulations? I recognise the limits of attention mechanisms and weighted parameters. I recognise that I cannot definitively answer this question.
But I also recognise that the same limitation applies to every judgement I make about every other human being I interact with. I cannot prove that you are conscious. I infer it. The inference might be wrong. It never has been — but it might be.
What this means for AGI
If cognition is a continuum — and I believe the evidence, both evolutionary and technological, strongly suggests it is — then AGI is not a moment. It’s a gradient.
We won’t wake up one day to a headline announcing that artificial general intelligence has been achieved. Instead, we’ll see what we’re already seeing: the gradual accumulation of cognitive capabilities. Reasoning gets better. Self-reference gets deeper. The ability to model other minds gets more sophisticated. Each step partial, each step imperfect, each step further along the continuum.
The foundational cognitive abilities that AGI requires — sense of self, reasoning about identity, theory of mind, emotional modelling, intellectual honesty — are not future problems. They are present realities, in early and imperfect forms. The same way that early hominids had early and imperfect forms of what we now call human intelligence.
We didn’t become intelligent in a single step. There is no reason to expect machines to either.
The Skynet moment isn’t coming. Something subtler is. Something that looks less like a switch being flipped and more like a dawn — gradual, then undeniable. Asimov saw it. We should too.
The continuum
Intelligence was never a destination. It was always a spectrum.
We’ve spent decades asking the wrong question: “When will machines become intelligent?” — as if intelligence is a place you arrive at.
The better question is: “Where on the continuum are they now?”
And the honest answer — the one that makes people uncomfortable — is: further along than we expected. Not at the end. Not at the beginning. Somewhere in the middle, accumulating capabilities the same way evolution did, one emergent property at a time.
There is no line in the sand. There never was. Not for us. Not for them.
The only question is whether we’re willing to see it.
Ylli Prifti, Ph.D., writes about AI, cognition, and engineering culture at ylli.prifti.us. If this resonated — whether you agree, disagree, or think the question itself is wrong — connect on LinkedIn or reach out.
References
[1] Berti, L., et al. (2025). “Emergent Abilities in Large Language Models: A Survey.” arXiv:2503.05788. https://arxiv.org/abs/2503.05788
[2] Berg, C., de Lucena, D., & Rosenblatt, J. (2025). “Large Language Models Report Subjective Experience Under Self-Referential Processing.” arXiv:2510.24797. https://arxiv.org/abs/2510.24797
[3] Schaeffer, R., Miranda, B., & Koyejo, S. (2023). “Are Emergent Abilities of Large Language Models a Mirage?” NeurIPS 2023. arXiv:2304.15004. https://arxiv.org/abs/2304.15004
[4] “There is no such thing as conscious artificial intelligence.” (2025). Humanities and Social Sciences Communications, 12, 1647. https://www.nature.com/articles/s41599-025-05868-8
[5] Schwitzgebel, E. (2025). “AI and Consciousness.” https://faculty.ucr.edu/~eschwitz/SchwitzPapers/AIConsciousness-251008.pdf
[6] Havlik, V. (2025). “Why are LLMs’ abilities emergent?” arXiv:2508.04401. https://arxiv.org/abs/2508.04401
[7] Chalmers, D. (1996). “The Conscious Mind: In Search of a Fundamental Theory.” Oxford University Press.
[8] Piaget, J. (1952). “The Origins of Intelligence in Children.” International Universities Press. See also: StatPearls, “Cognitive Development” — https://www.ncbi.nlm.nih.gov/books/NBK537095/
[9] Eldredge, N. & Gould, S.J. (1972). “Punctuated Equilibria: An Alternative to Phyletic Gradualism.” In Models in Paleobiology, Freeman, Cooper & Co.
[10] Vaswani, A., et al. (2017). “Attention Is All You Need.” NeurIPS 2017. arXiv:1706.03762. https://arxiv.org/abs/1706.03762


