The ghost in the swarm

Distributed intelligence as the silent sea change

We stand today mesmerized by the titans of AI. GPT, Claude, Gemini, Grok – names whispered with a reverence once reserved for deities or scientific laws. These intelligences, gargantuan neural networks trained in compute domes on oceans of digital information, represent a monumental achievement. They have mastered language, art, and coding; science, we are told, will soon follow. Their dominance is fueled not only by their impressive capabilities but also by unprecedented capital investment, generational entrepreneurs, established data moats, and the relative conceptual simplicity of designing and controlling a single, albeit massive, entity. However, as we push the limits of this paradigm, and capital and talent continue to flood the industry, a nagging question whispers from the periphery: Are we building magnificent cathedrals facing away from the sea?

I argue that our current fascination with the centralized "God-Model," while understandable, risks obscuring a more fundamental, potentially far more impactful, shift underway. This shift isn’t about simply making our current AI bigger or faster. It's about shattering the monolithic architecture itself. It's about embracing distributed intelligence – not as a niche solution for privacy or edge cases, but as the necessary substrate for the next epoch of intelligent systems, particularly those destined to leave the clean room of the datacenter and grapple with the messy, unpredictable physics of reality.

Let's be clear: this isn't a call to dismantle the cathedrals we've already erected. The immense power and general capabilities of these models are foundational and absolutely essential. It is less about wholesale replacement and more about sophisticated integration. It's about evolution.

This isn't just a technical debate for engineers. It strikes at the heart of how we define intelligence, how we perceive control, and ultimately, how we will coexist with intelligences fundamentally different from our own individual, isolated minds. For the investor looking for alpha, the scientist seeking the next frontier, or the curious mind trying to map the future, overlooking this transition is akin to studying astronomy while ignoring gravity.

The constraints of reality

The triumphs of large, centralized models are undeniable, but, so far, they are triumphs within a specific context: the digital realm, characterized by curated data, negligible physical interaction costs, and latency that is a nuisance rather than an existential threat. Transposing this architecture directly onto the physical world – the world of atoms, friction, and unavoidable light-speed delays – reveals inherent limitations, hairline fractures in the monolith.

The physics of speed

Imagine billions of robotic hands, autonomous vehicles, environmental sensors. Each needs to react now. The time it takes for sensor data to travel to a central brain, be processed, and return as an action command is often an eternity when dealing with physical dynamics. A drone adjusting to a sudden gust, a robotic surgeon countering a tremor, a logistics bot avoiding a collision – these demand intelligence that lives at the point of action. Centralized processing, even with 5G or 6G, faces irreducible latency limits imposed by physics itself.
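To make the mismatch concrete, here is a back-of-envelope sketch. The numbers – a 1,500 km fiber path to a hypothetical regional datacenter, a 5 ms budget for queuing and inference, a 1 kHz motor-control loop – are illustrative assumptions, not measurements:

```python
C = 299_792_458            # speed of light in vacuum, m/s
FIBER_SPEED = C * 2 / 3    # signals in optical fiber travel at roughly 2/3 c

def round_trip_ms(distance_km, processing_ms=5.0):
    """Propagation delay there and back, plus an assumed processing budget."""
    one_way_s = distance_km * 1000 / FIBER_SPEED
    return 2 * one_way_s * 1000 + processing_ms

cloud_delay = round_trip_ms(1500)  # to a (hypothetical) regional datacenter
control_period = 1.0               # a 1 kHz motor-control loop has 1 ms per tick

print(f"cloud round trip ~{cloud_delay:.0f} ms vs. {control_period:.0f} ms control budget")
```

Even before congestion, retries, or model inference at scale, physics alone puts the remote loop an order of magnitude outside the control budget.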

The data bottleneck

The firehose of high-dimensional, real-time sensor data (vision, lidar, tactile, auditory, proprioceptive) from a world populated by embodied agents will dwarf the entire current internet dataset. Attempting to funnel this raw sensory stream back to central servers for storage and processing is not just economically daunting, it's likely physically impossible at scale. The world simply generates too much information, too quickly.
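A rough calculation makes the point. The fleet size and per-robot data rate below are deliberately round assumptions for the sake of argument, not real measurements:

```python
ROBOTS = 1_000_000        # assumed fleet size
MB_PER_SECOND_EACH = 100  # assumed raw sensor stream per robot (cameras + lidar)

fleet_bytes_per_s = ROBOTS * MB_PER_SECOND_EACH * 1_000_000
fleet_pb_per_day = fleet_bytes_per_s * 86_400 / 1e15  # seconds/day, bytes -> PB

print(f"fleet output ~{fleet_pb_per_day:,.0f} PB/day")
```

Under these assumptions the fleet emits thousands of petabytes per day of raw sensation – a stream no central ingestion pipeline could plausibly absorb, which is why filtering, aggregation, and learning must happen at the edge.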

The fragility of the single point

Centralized systems are inherently brittle. Server outages, network disruptions, cyberattacks, even software bugs in the core model can cascade, paralyzing entire fleets, factories, or cities dependent on that single brain. Robustness in the face of failure, damage, or unforeseen circumstances demands redundancy and graceful degradation – hallmarks of decentralized systems. Nature doesn't run on a single server farm.

The need for local adaptation

True mastery of the physical world requires continuous adaptation to local, specific conditions. A robot arm's optimal movement strategy subtly changes as its joints wear. Grasping success depends on the unique friction coefficient of this object, right now. Learning purely from fleet-wide averages pushed down from a central model is too slow, too coarse. Agents need the capacity to learn in situ, refining their models based on their own unique experiences and immediate environment. Centralized training struggles to capture this granular, continuous, embodied learning.

These aren't mere engineering hurdles to be overcome with faster networks or bigger servers. They suggest a fundamental architectural mismatch. The centralized model excels at global knowledge synthesis from static datasets; the physical world demands local, real-time responsiveness and adaptation within a dynamic, high-entropy environment.

Intelligence unchained: what does it mean for thinking to be distributed?

So, what is this distributed intelligence? Forget simply running multiple copies of the same program. Think, instead, of an ecosystem of computation. Imagine a network of agents, nodes, or processes, each possessing potentially limited individual capabilities and information, but whose interactions, governed by specific protocols, give rise to sophisticated, adaptive, and resilient system-level behavior.

Key facets distinguish this paradigm:

  • Local sensing, local acting: Agents operate primarily on information available in their immediate vicinity. The world is perceived through a local lens.
  • Decentralized coordination: Control is often implicit, emergent. There may be no conductor waving a baton. Global coherence arises from local rules of engagement, negotiation, or information exchange – consensus algorithms, gossip protocols, market-based negotiation, bio-inspired signaling like stigmergy (think ants leaving pheromone trails), or learned communication strategies.
  • The network is the mind: Intelligence isn't solely encapsulated within any single agent; it resides fundamentally between them – in the connections, the protocols, the patterns of interaction. The structure of the network, the rules of communication, the feedback loops – these are components of the system's intelligence.
  • Emergent capabilities: This is the magic and the mystery. Complex, unpredictable, yet often highly effective global behaviors (flocking, collective construction, distributed problem-solving, resilient infrastructure) bubble up from simple local interactions. This isn't just complicated behavior; it can be qualitatively new behavior – sometimes termed strong emergence – irreducible to the sum of its parts and often arising through processes of self-organization. It's where complexity theory meets computation.
  • Learning spread thin: This is where distributed intelligence truly diverges. Learning isn't confined to a monolithic training phase. Agents learn continuously, from their own experience, from observing neighbors, or through carefully curated information sharing. Crucially, agents learn while interacting, meaning the learning landscape itself is non-stationary – my learning changes your environment, which changes your learning, and so on, creating complex co-evolutionary dynamics.

Imagine intelligence less like a single, unified consciousness and more like the intricate, dynamic equilibrium of a rainforest – countless individual actors, local interactions, resource flows, and feedback loops creating a resilient, adaptive whole far greater than the sum of its parts.

The coming swarm: robotics, embodiment, and the distributed imperative

Now, consider the imminent explosion of robotics and embodied AI – autonomous vehicles navigating chaotic streets, drone swarms mapping disaster zones, collaborative robots assembling complex products, nanobots potentially operating within biological systems. For these systems, distributed intelligence isn't an option; it's the native language.

Autonomous cars can't rely solely on a central traffic controller. They need peer-to-peer negotiation for lane changes, intersection management, and emergency maneuvers, potentially forming dynamic, self-organizing traffic flows. We see nascent forms of this already in traffic coordination applications that leverage real-time user data, or in federated learning approaches where models train locally on devices without centralizing raw data. Robots on an assembly line need to coordinate hand-offs, avoid collisions, and dynamically reallocate tasks if one machine fails – all requiring rapid, local communication and decision-making. A central scheduler becomes an impossible bottleneck. A distributed network of sensors (mobile or static) can build a far richer, more resilient, and more timely understanding of a complex environment (e.g., tracking pollution plumes, monitoring seismic activity) than any single sensor reporting back to base. Information fusion happens within the network. Tasks requiring multiple agents (lifting heavy objects, complex construction) necessitate tightly coupled, real-time communication and shared situational awareness that centralized loops struggle to provide.
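The federated-learning idea mentioned above can be sketched in a few lines. This is a toy, FedAvg-flavored illustration with made-up devices and data; real federated averaging typically also weights each device's update by its sample count:

```python
def local_fit(xs, ys):
    """Least-squares slope through the origin, computed on-device."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Three hypothetical devices, each holding a private slice of the same
# underlying trend (y ≈ 3x). Raw data never leaves a device.
device_data = [
    ([1, 2, 3], [3.1, 5.9, 9.2]),
    ([2, 4],    [6.2, 11.8]),
    ([1, 5],    [2.9, 15.1]),
]

local_weights = [local_fit(xs, ys) for xs, ys in device_data]
global_weight = sum(local_weights) / len(local_weights)  # only parameters travel
```

Only a handful of numbers – the fitted weights – cross the network, yet the shared model recovers the fleet-wide trend.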

The intelligence these systems exhibit will feel different. Less articulate monologue, more coordinated physical action. Less abstract reasoning, more situated, embodied competence. Less about knowing that, more about knowing how, learned through interaction and distributed trial-and-error.

Fundamental shifts: how distributed intelligence changes the game

Embracing distributed intelligence isn't just adopting new software patterns. It forces us to confront and revise some of our most fundamental assumptions about intelligence, control, and design.

From architect to gardener

The dream of the omniscient programmer dictating every action dissolves. Designing intelligence systems is less like building a clockwork mechanism and more like cultivating a garden or designing an economy. You define the agents' basic capabilities (the "physics"), craft the interaction protocols (the "social laws" or "market rules"), and perhaps provide incentive structures or environmental constraints. Then, you observe, nudge, and prune. Control becomes indirect, probabilistic, focused on shaping the conditions for desirable emergence. As an investor, the value shifts from owning the "best plant" to owning the "most fertile soil" or the "best gardening tools."

The myth of the global optimum

Optimization usually aims for a single, best solution across the entire system. In complex, dynamic distributed systems, this concept often breaks down. Local optimization by individual agents, interacting under specific rules, may lead to states that are dynamically stable, resilient, and good enough – analogous to game-theoretic equilibria or Pareto optimal configurations – but not necessarily the theoretical global optimum. Furthermore, the very definition of optimum might be constantly shifting due to the system's interaction with a changing environment and the co-evolution of its agents. Think less finding the peak of a static mountain, more surfing a constantly evolving wave.
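A toy hill-climbing run illustrates the point: a greedy local optimizer settles on whichever peak is nearest, not necessarily the tallest. The two-peak landscape below is invented purely for illustration:

```python
import math

def landscape(x):
    """Two peaks: a local one near x = 2 (height 3), a global one near x = 8 (height 5)."""
    return 3 * math.exp(-(x - 2) ** 2) + 5 * math.exp(-(x - 8) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    """Greedy local search: move only if a neighboring point is better."""
    for _ in range(iters):
        x = max((x - step, x, x + step), key=landscape)
    return x

peak_from_left = hill_climb(0.0)   # settles on the nearby local peak (~2)
peak_from_right = hill_climb(6.0)  # happens to reach the global peak (~8)
```

Which peak an agent finds depends entirely on where it starts – and in a co-evolving system, the landscape itself keeps moving under its feet.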

Intelligence as a systemic process

We must radically rethink what "intelligence" means. If it resides in the interactions and emergent patterns, it becomes less like a static property of an object (like mass) and more like a dynamic process (like weather). It's something the system does, collectively, rather than something a single component is. This challenges our anthropocentric bias towards singular consciousness as the template for all intelligence.

New challenges in trust and verification

How do you understand, debug, or trust a system whose critical behavior emerges unpredictably from millions of interactions? Explainability shifts from interpreting the weights of one model to understanding the dynamics of the system – a problem more familiar to ecologists or economists than traditional software engineers. Verification might involve statistical analysis, agent-based modeling, and perhaps entirely new mathematical frameworks. Trust becomes less about deterministic guarantees and more about observed resilience, predictable patterns of behavior, and robust safety protocols governing interactions.

Aligning emergent goals

Aligning a single AI with human values is hard enough. How do you align a swarm where goals might be implicit, emergent, or even conflicting across agents optimizing locally? Undesirable collective behavior (emergent bias, flash crashes, destructive competition) might arise even if individual agents seem benign. Ensuring beneficial outcomes requires alignment at the level of interaction protocols and system dynamics, a vastly more complex challenge than simply programming an objective function.

Dancing with ghosts: how we change alongside distributed minds

This transformation isn't just about the machines; it's about us. Our tools shape us, and tools embodying distributed intelligence will reshape our cognition, our society, and our place in the world.

New expertise needed

The most valuable human skills will shift. Designing interaction protocols, understanding complex systems dynamics, simulating and predicting emergent phenomena, ethical governance of autonomous swarms, interpreting the behavior of computational ecosystems – these become paramount. Systems thinking transitions from a useful skill to a fundamental requirement.

Investment focus shifts

Value may accrue not just to those building the best individual agents, but to those creating the platforms, protocols, simulation tools, and operating systems that enable effective distributed intelligence ecosystems. Edge hardware and communication technologies remain crucial.

Living with(in) the swarm

Our interaction models will change. We won't just command individual robots; we'll set objectives for collectives, monitor swarm health via complex dashboards, perhaps even use augmented reality to visualize the invisible flows of information and influence within distributed intelligence systems operating around us. Imagine urban planners interacting with simulations of autonomous traffic flow, or emergency responders guiding a search-and-rescue swarm through high-level directives influencing local agent behavior.

Rethinking ethics and responsibility

Our legal and ethical frameworks, built around individual agency and intent, are woefully unprepared. Who is responsible when an emergent behavior causes harm? How do we imbue systemic values? Can a swarm have rights, or responsibilities? We need new philosophical and legal constructs to grapple with agency and accountability that are diffuse and emergent.

Cognitive re-wiring

Interacting regularly with systems whose behavior is emergent, probabilistic, and non-deterministic may subtly reshape our own thinking. We might become more intuitive, Gärdenfors-style reasoners (operating perhaps more on conceptual spaces – geometric mental models – than purely symbolic logic), akin to how an experienced firefighter 'feels' the danger in a burning building rather than calculating precise probabilities – better at grasping complex interdependencies, feedback loops, and tipping points. We might also face psychological challenges in dealing with intelligences that are powerful yet fundamentally alien, lacking the comprehensible "mind" we project onto centralized AIs. Will we learn to "feel" the emergent patterns?

Riding the wave

The current AI narrative is dominated by the impressive, comprehensible power of centralized models. They offer clarity, a sense of control, and staggering capabilities in the digital sphere. But the unyielding constraints of physics and the sheer complexity of real-world interaction beckon us towards a different, complementary architecture.

The future likely lies in sophisticated hybrid systems. We can envision a symbiotic relationship: centralized oracles might act as strategic planners, knowledge repositories, or system designers – providing global context, distilling deep knowledge, running intensive simulations to design or verify local interaction protocols – while swarms of locally intelligent agents handle the tactical, real-time interaction, adaptation, and execution within the physical world. The God-Model might provide the map, but the swarm navigates the territory.

This perspective also raises a profound question: Could our very quest for a singular, human-like Artificial General Intelligence housed in a centralized substrate be a category error when considering interaction with the physical universe? Is it possible that robust, scalable, adaptive intelligence in the face of physical reality is intrinsically distributed, emergent, and embodied? Perhaps the God-Model is destined to remain an oracle, powerful in its abstract domain, but ultimately incapable of truly dancing with the messy, contingent reality we inhabit. The intelligence that thrives there might necessarily look more like an ecosystem than an ego.

Distributed intelligence – messy, complex, sometimes counter-intuitive – mirrors the strategies life has used for billions of years to create resilient, adaptive systems capable of navigating uncertainty. It trades the illusion of perfect top-down control for the reality of robust, emergent adaptation.

As we stand at this inflection point, the crucial task is not merely to build bigger centralized models, but to learn the principles of designing, understanding, and interacting with computational ecosystems. For the investor, the scientist, the builder, and the philosopher alike, the future won't just be coded; it will emerge. The ghost is already stirring in the swarm. Are we prepared to listen?

Appendix: A (slightly) deeper dive into the foundations of distributed intelligence

The challenges posed by latency, bandwidth, robustness, and local adaptation aren't novel discoveries; they are echoes of fundamental problems explored for decades across various scientific and engineering domains. Recognizing this history underscores why distributed intelligence isn't just an alternative, but often a necessity when computation meets the physical world.

  • Early seeds: complexity from local rules. Even before modern computers, the concept of complex global patterns emerging from simple, local interactions was being explored. John von Neumann's work on self-replicating automata in the 1940s laid theoretical groundwork. Later, John Conway's Game of Life (1970) provided a starkly visual demonstration: simple rules applied locally on a grid generated incredibly complex, unpredictable global dynamics, hinting that sophisticated behavior doesn't always require central orchestration. This principle resonates directly with the idea of local adaptation and emergent collective function.
  • Robustness and decentralization by design. The very origins of the internet's precursor, ARPANET, were partly motivated by the need for a communication network resilient to single points of failure (a concern in the Cold War context). This led to foundational work in packet switching and decentralized routing. Concurrently, computer scientists like Leslie Lamport developed crucial algorithms (e.g., Paxos) for achieving consensus in distributed systems, tackling the inherent difficulties of coordinating actions reliably when components might fail or messages might be delayed – directly addressing the fragility concern of centralized models. Ideas like Byzantine Fault Tolerance further explored how systems could function even with malicious or faulty actors.
  • The rise of agents and emergence. In the 1980s, the focus sharpened on autonomous agents. Craig Reynolds' "Boids" (1986) became iconic, demonstrating how compellingly realistic flocking behavior could emerge from agents following just three simple local rules (separation, alignment, cohesion). This wasn't just a graphics trick; it was a powerful illustration of decentralized coordination achieving a global goal without explicit leadership, directly relevant to managing swarms of robots or sensors where centralized micromanagement is impossible due to latency. More recent computational demonstrations, such as OpenAI's work showing emergent sophisticated tool use and cooperative/competitive strategies (like hide-and-seek) in simulated multi-agent environments trained via reinforcement learning, further highlight how complex, unexpected behaviors can arise purely from agent interactions and learning within a defined environment, without being explicitly programmed. The field of Multi-Agent Systems, championed by researchers like Victor Lesser and Michael Wooldridge, formalized the study of how independent agents coordinate, negotiate, and solve problems collectively, laying groundwork for complex robotic teams or distributed sensor networks. Crucially, much of this thinking drew inspiration from, and contributed back to, the burgeoning field of complex adaptive systems, famously studied at interdisciplinary centers like the Santa Fe Institute, which provided vital theoretical frameworks for understanding emergence, adaptation, and self-organization across diverse domains.
  • Learning from nature: swarm intelligence and robotics. Observing social insects like ants and bees provided rich inspiration. Marco Dorigo's work on Ant Colony Optimization in the early 1990s showed how simulated ants, using indirect communication via pheromone trails, could find optimal paths – a decentralized approach to complex problem-solving. Similarly, Particle Swarm Optimization by Kennedy and Eberhart leveraged simulated social cooperation. This biological inspiration directly fueled Swarm Robotics, moving beyond simulation. Projects like the Kilobots at Harvard demonstrated thousands of simple, cheap robots collectively forming shapes or sorting objects, showcasing robustness (system degrades gracefully if some units fail) and scalability without needing a central controller dictating every move. This directly tackles the challenge of deploying large numbers of physical agents.
  • Dealing with data deluge and physical constraints. The proliferation of wireless sensor networks forced engineers to confront the data bottleneck head-on. With thousands of low-power sensors, transmitting all raw data centrally was often infeasible due to bandwidth and energy constraints. This drove innovation in local data processing, aggregation, and event detection within the network, minimizing communication needs. This paradigm is crucial for any large-scale deployment of sensors in the physical world. Furthermore, the field of Cyber-Physical Systems emerged to explicitly address the tight integration of computation, networking, and physical processes, acknowledging the critical role of real-time response and distributed control in systems ranging from smart grids to autonomous vehicles.
  • Distributed learning and adaptation. Even within the AI community focused on large models, distribution has been key. Techniques like distributed reinforcement learning or evolutionary algorithms using "island models" (where separate populations evolve and occasionally exchange individuals) were employed to speed up training, explore diverse solution spaces, and enhance robustness. Google DeepMind's Population Based Training merges parallel optimization and sequential model updates, implicitly leveraging distributed search to achieve better results faster, hinting at how local adaptation within a larger learning framework can be powerful. While often used for training efficiency today, these methods hold keys to enabling continuous, in-situ learning for deployed physical agents. NASA's conceptual work on multi-rover coordination (like the ANTS concept) for planetary exploration also envisioned decentralized decision-making to cover vast areas efficiently and robustly.
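The Game of Life, mentioned in the first item above, is small enough to sketch in full – one local rule, applied everywhere, is the entire "program." A minimal implementation, storing live cells as a set of coordinates:

```python
from itertools import product

def step(alive):
    """One Game of Life generation; `alive` is a set of (x, y) live cells."""
    counts = {}
    for (x, y) in alive:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    # A cell lives next turn with exactly 3 neighbors, or 2 if already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

blinker = {(1, 0), (1, 1), (1, 2)}                 # a vertical bar of three cells
assert step(blinker) == {(0, 1), (1, 1), (2, 1)}   # it flips horizontal...
assert step(step(blinker)) == blinker              # ...and oscillates forever
```

No cell "knows" it belongs to an oscillator; the pattern exists only at the system level – the recurring theme of this appendix, in a dozen lines of code.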

Taken together, this rich history provides not just analogies, but tested principles and engineering solutions. From the mathematical elegance of cellular automata to the practical necessities of network design and the bio-inspired coordination of swarm robotics, the message is consistent: centralized control faces fundamental limits when confronted with the scale, speed, uncertainty, and inherent distributed nature of the physical world. The "ghost in the swarm" isn't a futuristic fantasy; it's the culmination of decades of understanding how to build resilient, adaptive, and effective systems by embracing decentralization.