Technologies (and I use this term in the broadest sense possible) used to stay in their lane. The steam engine moves pulleys; the pen leaves ink trails on paper; a computer runs programs with defined goals and heuristics. Even microcontroller GPIOs (general-purpose input/output pins) aren’t really that general - you control them to toggle (or be toggled) on and off. The point I’m trying to make, to paraphrase Maslow’s hammer, is that we build technologies to be hammers: tools with specific tasks in mind.
Today, AI sprawls across informational landscapes like an invasive species, adapting, interacting, and sometimes displacing what came before, with the promise that one day a new letter, G, will be sandwiched between that A and I. In this post, I want to discuss what happens when we introduce new tools into existing systems, and who benefits from the chaos that follows.
## The Fundamental Assumption
The AGI pursuit assumes “general intelligence” exists as a coherent, fully-definable target that we can formalise and reproduce. This is pure fantasy. Decades of cognitive science show that intelligence cannot be reduced to a single, crystallised property. At best, it’s a post-hoc label slapped onto a messy collection of context-dependent capacities.
For example, Howard Gardner identified eight cognitive modalities (bodily-kinesthetic, interpersonal, verbal-linguistic, logical-mathematical, naturalistic, intrapersonal, visual-spatial, and musical), while Robert Sternberg distinguished between componential (i.e., being good at school), experiential (i.e., being creative, flexible, innovative), and practical (i.e., being able to adapt to new contexts) intelligences. To be clear, I am not endorsing these theories; I am merely showing that the debate is thriving and definitely not settled.
I could add emotional intelligence, swarm intelligence, and morphological computation to the pile, but you get the point: “intelligence” is whatever we decide to call intelligent in a given context, shaped entirely by embodied circumstance and power relations.
I started ruminating about all this after reading that Yann LeCun recently declared he’s “not interested in LLMs anymore”. He argues these systems lack persistent memory, genuine world models, and any real capacity for planning. On this, I agree. His proposed alternative - Objective-Driven AI built on Joint Embedding Predictive Architectures (say that three times in a row in front of a mirror for maximum effect) - inches, theoretically, toward sensory grounding, but it risks repeating the same mistake: treating embodiment as richer input rather than as the foundation of cognition itself.
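For the curious, here’s roughly the shape of the joint-embedding idea in code. To be clear, this is my own toy sketch, not LeCun’s actual architecture - the layer sizes, dimensions, and training loop are invented for exposition - but it shows the core move: prediction happens in representation space, never in raw input space.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 64

# Two encoders and a predictor; prediction error is measured between
# *embeddings*, never between raw inputs.
context_encoder = nn.Sequential(nn.Linear(128, dim), nn.ReLU(), nn.Linear(dim, dim))
target_encoder = nn.Sequential(nn.Linear(128, dim), nn.ReLU(), nn.Linear(dim, dim))
predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

opt = torch.optim.Adam(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

def jepa_step(context, target):
    s_x = context_encoder(context)      # embed what was observed
    with torch.no_grad():               # targets give no gradient; in real JEPAs the
        s_y = target_encoder(target)    # target encoder is typically an EMA of the other
    loss = ((predictor(s_x) - s_y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy "sensory" batch: two noisy views of the same (random) scene.
x = torch.randn(16, 128)
print(jepa_step(x, x + 0.1 * torch.randn(16, 128)))
```

Nothing in that loop ever touches a world, though - it is still prediction over representations of data - which is exactly why I suspect the embodiment worry survives the architectural upgrade.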
As I argue in my PhD, physical bodies don’t somehow ‘host’ intelligence. There isn’t some Ratatouille-style homunculus in your brain processing information and directing action. Instead, bodies constitute intelligence by interacting with their environments: passive dynamic walkers achieve stable bipedal locomotion through mechanical designs that exploit gravity; octopuses distribute computation across compliant limbs; soft robots outsource thinking to material properties. Intelligence emerges from the dynamic coupling of body, brain, and environment, not through abstract computation.
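If the passive dynamic walker claim sounds abstract, here’s the textbook toy version: the rimless wheel, a spoked wheel rolling down a slope. A minimal sketch of its stride-to-stride return map (parameters picked arbitrarily) shows gaits converging to a stable rhythm with no sensors and no controller - the mechanics do all the ‘thinking’:

```python
import math

# The rimless wheel: the simplest passive dynamic walker (see e.g. Tedrake's
# Underactuated Robotics notes). Each stride gains kinetic energy falling
# down the slope and loses some at spoke impact; the stride-to-stride map
# settles into a stable gait with zero sensing or control.
g, l = 9.81, 1.0                          # gravity, spoke length
alpha = math.radians(20)                  # half-angle between spokes
gamma = math.radians(8)                   # slope of the ground

# Speed-squared gained swinging from one spoke touchdown to the next
gain = (2 * g / l) * (math.cos(gamma - alpha) - math.cos(gamma + alpha))

def stride(omega):
    """Swing phase (energy conservation), then inelastic spoke impact."""
    return math.cos(2 * alpha) * math.sqrt(omega**2 + gain)

# Launch at wildly different speeds (each fast enough to carry the spoke
# over vertical); every gait converges to the same rhythm.
for omega0 in (0.8, 1.6, 3.0):
    omega = omega0
    for _ in range(30):
        omega = stride(omega)
    print(f"launch at {omega0:.1f} rad/s -> settles near {omega:.3f} rad/s")
```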
## Recursion as Control
I won’t belabour the point that AI systems don’t filter information neutrally; rather, they engineer epistemic environments where authentic and synthetic experience become indistinguishable. Modern AI functions as what Lorenzo Arico politely calls “epistemic agents”: systems that structure and filter information in ways that shape societal understanding (if enough people use them - and how could you escape them, when they show up basically every time you open your computer, your phone, or the Internet in general!).
Large language models learn patterns from vast corpora of human text, developing sophisticated-looking models of discourse without ever experiencing the reality that discourse describes or is embedded in. They’re statistical engines that predict what comes next based on what came before: stochastic parrots. The classic example of this recursive dynamic is the social media feed: recommendation algorithms create filter bubbles where users encounter information that confirms existing beliefs, while learning from user behaviour to make those bubbles ever more effective. The tools we use to interpret reality become products of the same systems generating the phenomena we’re trying to understand.
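To see how mechanical the parroting is, here’s a stochastic parrot in its purest form: a bigram model that learns nothing but conditional word frequencies. A deliberately tiny caricature, not a claim about any real system:

```python
import random
from collections import Counter, defaultdict

random.seed(1)

# A parrot in ~15 lines: count which word followed which in the corpus,
# then generate by sampling "what usually comes next". No grounding, no
# world model - just conditional frequencies.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Scale the corpus up by a few trillion tokens and the output gets eerily fluent, but the underlying move - sample what usually comes next - is the same.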
The recursive quality transforms how knowledge gets constituted. As AI systems colonise infrastructure (e.g., search engines, educational platforms, research tools), they start influencing how knowledge itself gets structured. Generative AI now creates synthetic media that mimics real people and events with (sometimes) startling accuracy, undermining traditional methods for distinguishing authentic from artificial content. As I type this into my trusty Grammarly, this whole paragraph is getting flagged as AI-generated, even though I am writing it myself! But I digress.
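Here’s a deliberately cartoonish simulation of that recursion (all numbers invented): fit a model to data, publish samples from the fit, fit the next generation to those samples, repeat.

```python
import random
import statistics

random.seed(3)

# Each "generation" of a model is fitted to samples published by the one
# before it. With finite samples every refit inherits the previous fit's
# sampling error, so the estimate drifts away from the real world it was
# originally meant to describe (a cartoon of what's known as model collapse).
mu, sigma, n = 0.0, 1.0, 30               # generation zero: the real world
for gen in range(25):
    synthetic = [random.gauss(mu, sigma) for _ in range(n)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    if gen % 6 == 0:
        print(f"generation {gen:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```

Sometimes the drift is slow, sometimes the spread collapses quickly; the point is that no generation after the first is answerable to the original distribution.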
## Distributed Intelligence is Actually the Point
Abandon the quest for a singular, disembodied “general intelligence” and, I believe, something MUCH more interesting emerges. Swarm intelligence (and agent-based modelling) shows that complex patterns arise from decentralised systems of simple agents following local rules. Individual robots with minimal cognitive capacity collectively achieve aggregation, pattern formation, and collective perception beyond any individual’s capability.
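As a sketch of how little it takes (all parameters invented): thirty agents in a unit square, each following a single local rule, with no leader and no global map.

```python
import math
import random

random.seed(2)

# One local rule per agent: drift a little toward the average position of
# neighbours within sensing range. No leader, no global map, no shared
# memory - clusters emerge anyway.
N, radius = 30, 0.3
agents = [(random.random(), random.random()) for _ in range(N)]

def mean_pairwise_distance(pts):
    pairs = [math.dist(a, b) for i, a in enumerate(pts) for b in pts[i + 1:]]
    return sum(pairs) / len(pairs)

for t in range(301):
    moved = []
    for x, y in agents:
        neigh = [p for p in agents if math.dist((x, y), p) < radius]  # includes self
        cx = sum(p[0] for p in neigh) / len(neigh)   # neighbourhood centroid
        cy = sum(p[1] for p in neigh) / len(neigh)
        moved.append((x + 0.1 * (cx - x), y + 0.1 * (cy - y)))  # small step toward it
    agents = moved
    if t % 100 == 0:
        print(f"t={t:3d}: mean pairwise distance = {mean_pairwise_distance(agents):.3f}")
```

Run it and the spread shrinks as clusters form; widen the sensing radius and the clusters merge into one. Aggregation is a property of the rule-plus-environment, not of any individual agent.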
Crucially, these approaches don’t attempt to replicate human cognition at all. They solve problems through spatial organisation, social inference, and emergent dynamics. They’re differently intelligent, because intelligence was always plural, always contextual, always embodied.
In other words, intelligence becomes distributed, hybrid, fundamentally social. This would terrify the AGI crowd because it suggests intelligence was never about individual genius but about relationships, infrastructure, and who controls the means of coordination.
If intelligence is genuinely distributed, embodied, and relational, then the question becomes: who designs the relationships? Who controls the infrastructure? Who benefits when coordination becomes algorithmic? The AGI fantasy sells us individual genius as the endpoint, conveniently obscuring these questions of power and access.
Perhaps the real project isn’t building smarter (whatever that might mean) machines but cultivating systems where intelligence remains plural, tools stay accountable to their contexts, and the means of coordination don’t concentrate in the hands of those wielding hammers and treating everything as a nail.