We haven’t had an “Internet moment” in decades. And no, gen-so-called-AI isn’t one either, but that’s not the point of this essay. Yes, tech moves faster than ever (trite trope), with AI features proliferating across products and each company racing to bolt intelligence onto its existing offerings. And yet, despite this frenetic, self-inflicted activity, something feels stuck. I argue we are mistaking reaction speed for concrete goals. (Side note: this is where I could go on a side rant about teleological behaviour – the cybernetic kind – but I’ll skirt around it and simply provide a reference for you to peruse at your own leisure. TL;DR, the goals we pursue might not actually be goals at all.)
Roughly speaking, there are two ways of innovating: (i) you concentrate your bets (pick a few promising ideas, resource them properly, and optimise for execution), or (ii) you expand your surface area, funding hundreds of experiments with the implicit understanding that the Law of Large Numbers will sort the winners from the losers. Both approaches sound sensible. Both fail to deliver actual disruption.
The focused approach looks (relatively) rational on paper: you minimise variance, concentrate resources, and maximise expected value within a defined opportunity space. In other words, portfolio theory cosplaying as innovation strategy. But true disruption requires exploring space orthogonal to your current offerings, and focused bets, by their very nature, optimise within existing frameworks. You end up with feature proliferation without category creation – your apps can now do 47 things but create no new verbs, no new ways of thinking or, really, anything new at all. The gizmo that integrates with your existing product suite is fundable; the thing that makes your product suite obsolete is not.
The alternative, expanding innovation surface area (i.e., through the VC model), seems to address this. Fund hundreds of startups, let markets determine winners, and trust in evolutionary pressure to surface breakthrough ideas. But look closer, and you see it’s basically still optimisation, albeit in a different guise. You are parallelising the conveyor belt / sausage factory, but the destination remains exactly the same: exits, acquisitions, fitting into existing market categories. You’re still not actually exploring orthogonal space.
The VC model treats startups as independent experiments. Succeed or fail, each bet resolves separately. The idea that the failed warehouse tool’s data handling could solve the nursing scheduler’s core problem never enters the equation, because there’s no mechanism for it to happen. (All references to real companies / solutions are purely coincidental, by the by.)
The Missing Metabolism
What we’re missing isn’t more bets or better bets, it’s circulation between them: feedback loops that allow ideas, infrastructure, and capabilities to flow laterally across apparently unrelated experiments. Think about that (fictional) warehouse SaaS again. It failed at its stated goal, but in the process it developed sophisticated techniques for handling inconsistent input (e.g., people typing with gloves, multiple data entry conventions, chaos as the necessary baseline condition). That capability is a substrate. It could be remixed, recombined, or deployed in entirely different contexts. Nursing scheduler. Restaurant reservations. Field research data collection. Emergency response coordination.
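To make “capability as substrate” slightly less abstract, here is a toy sketch in Python. Everything in it is invented for illustration (the companies are fictional, and `normalise_shift_code` is not anyone’s real API): a messy-input normaliser born inside the warehouse tool gets reused, unchanged, by the nursing scheduler.

```python
# Hypothetical sketch only: all names here are invented for illustration.

def normalise_shift_code(raw: str, known_codes: set[str]) -> str | None:
    """Map messy, gloves-on-keyboard input to a canonical code, if any."""
    cleaned = "".join(ch for ch in raw.lower() if ch.isalnum())
    for code in known_codes:
        # A crude containment check stands in for whatever fuzzy matching
        # the (fictional) warehouse team actually built.
        canonical = code.replace("_", "")
        if canonical in cleaned or cleaned in canonical:
            return code
    return None

# Warehouse context: shift codes typed with gloves on.
print(normalise_shift_code("  nIGHt SHIFT ", {"night_shift", "day_shift"}))  # night_shift

# Nursing context: same substrate, entirely different domain.
print(normalise_shift_code("ICU nights!!", {"icu_nights", "ward_days"}))     # icu_nights
```

The point isn’t the string matching, obviously; it’s that the artefact only travels from one experiment to another if something in the portfolio lets it travel.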
But this requires seeing failure differently. Not as an endpoint but as a material. Not as a signal to shut down and move on, but as the production of capabilities that could enable emergence elsewhere in your innovation ecosystem.
This is where Deleuze and Guattari become relevant (BOOOOO, I hear you screaming from the back of the class). Emergence, properly understood, is the “diachronic construction of functional structures in complex systems that achieve a synchronic focus of systematic behavior by constraining component interactions”. Let me elaborate.
When I talk about emergence, I’m not waving my hands and saying ‘complexity, therefore magic’. I mean something quite specific: over time, complex systems build up functional structures that start to shape how the whole thing behaves. That’s the diachronic bit.
At some point, those accumulated structures begin to exert real constraints on the parts inside the system. They channel what can and can’t happen at the level of individual components. That’s where you get a synchronic focus: at any given moment, the system shows a coherent pattern of behaviour not because anyone designed it top-down, but because these very structures limit and coordinate what the parts can do. TL;DR, emergence has two sides: the long-term construction of patterns, and the way those patterns, once established, lock in and focus the behaviour of everything that lives inside them. To compress the definition even further, emergence is what happens when heterogeneous elements interact over time in ways that generate new stable patterns.
But, crucially, emergence requires connection between these heterogeneous elements: lateral propagation across domains, not hierarchical optimisation within them.
The implications of all this are slightly uncomfortable (especially in a system that prioritises IP and trade secrets above all else): you need shared infrastructure across your innovation portfolio, i.e., the actual accumulated capabilities from previous experiments. You need permission structures that allow teams to cannibalise and recombine components from “failed” projects. You need timescales that don’t force premature resolution of experiments.
Most importantly, you need to optimise for interconnection rather than independence. The value of your innovation surface area isn’t the sum of individual bets; it’s the potential for unexpected (re)combinations between them. This is fundamentally different from portfolio thinking: you’re not hedging risk across independent ventures but building a metabolic system where each experiment can become feedstock (or fertiliser) for others.
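A back-of-the-envelope way to see why interconnection, not independence, is the thing to optimise for: the sum of independent bets grows linearly with the number of experiments, while the number of possible recombinations grows roughly quadratically for pairs and faster still for triples. A minimal sketch (purely illustrative counting, not a valuation model):

```python
from math import comb

# Purely illustrative: potential recombinations vs. independent bets.
for n_bets in (10, 50, 200):
    pairs = comb(n_bets, 2)    # two experiments cross-pollinating
    triples = comb(n_bets, 3)  # three-way recombinations
    print(f"{n_bets} bets: {pairs} pairs, {triples} triples")

# 10 bets: 45 pairs, 120 triples
# 50 bets: 1225 pairs, 19600 triples
# 200 bets: 19900 pairs, 1313400 triples
```

Most of those combinations are worthless, of course; the argument is only that the option value lives in the connections, which is exactly what a portfolio of sealed, independent bets forecloses.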
This is hard because it requires different success metrics. The warehouse SaaS that “failed” but produced transferable capabilities is more valuable than the one that failed cleanly. The nursing scheduler that succeeds by incorporating those capabilities represents emergence that your innovation accounting somehow needs to capture. But how do you value an ecosystem’s combinatorial potential? How do you measure permeability?
Bell Labs understood this, at least for a while. So did Xerox PARC, before the parent company failed to metabolise what PARC produced. We are definitely not limited by the technology today; we have better tools for sharing and recombining than either of those institutions did. The challenge is structural and organisational: it requires building organisations (or portfolios of ‘em) that can maintain surface area while letting capabilities, insights, and components flow laterally across domains, recombining in ways that weren’t planned for, couldn’t have been planned for, and generating the kinds of categorical shifts that focused optimisation precludes by its very design.
Bit of a non sequitur right there, as I don’t have fully fleshed-out answers, but I’d be curious about your opinion, dear reader. Reach out!