The Orchestration Fallacy: Why "Digital Workers" Need Minds, Not Message Routers

A response to Dries Buytaert's "The Orchestration Shift"

Dries Buytaert recently published "The Orchestration Shift", announcing his investment in Activepieces and arguing that "business logic is moving out of individual applications and into the orchestration layer." His thesis: as AI agents become more capable, we'll need better orchestration platforms to coordinate them.

He's half right. And being half right about a paradigm shift is worse than being completely wrong—it leads you to build elaborate scaffolding for a building that's about to be demolished.

The Thesis

Buytaert's argument is straightforward:

  1. Marketing technology has become a fragmented mess of disconnected tools
  2. Orchestration platforms (n8n, Zapier, Activepieces) create a coordinating layer
  3. AI will evolve from simple automation to "digital workers"
  4. Therefore, orchestration infrastructure becomes critical

He frames this as transforming the "martech stack" into a "martech network," with orchestration as the intelligent fabric connecting everything.

It's a reasonable investment thesis for 2024. It's also profoundly short-sighted about where AI is actually heading.

The Critical Thinking Failures

Failure #1: Confusing Scaffolding with Architecture

Orchestration platforms like n8n and Activepieces exist because current AI systems are cognitively incomplete. Large language models are stateless—they don't remember previous interactions, can't learn from outcomes, and don't pursue goals across sessions. They're powerful reasoning engines with amnesia.

Orchestration tools are prosthetic memory and goal-management for systems that lack both. They're the wheelchair we built because we haven't figured out how to give AI legs.

Buytaert's thesis assumes the wheelchair will become more important as the patient gets stronger. The opposite is true: once AI systems have genuine cognitive capabilities—persistent memory, belief revision, goal-directed behavior—the external scaffolding becomes obsolete.

Failure #2: "Digital Workers" Without Specifying What Makes Them Work

The article gestures toward AI agents that "understand context and execute complex tasks autonomously" but never addresses the hard question: how?

What does it mean to "understand context"? In current orchestration tools, context is data passed between workflow steps—a JSON payload moving through a pipeline. That's not understanding; that's plumbing.

Genuine contextual understanding requires:

  • Persistent memory: Knowing what happened yesterday, last week, last year
  • Belief systems: Holding views about what's true, with confidence levels and sources
  • Learning: Updating strategies based on outcomes, not just executing predefined paths
  • Goal commitment: Pursuing objectives across sessions, not just responding to triggers

None of this exists in orchestration platforms. They route messages between tools. They don't think.
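To see the gap, compare the state each paradigm actually carries. The sketch below is illustrative Python, not any real system's schema; every field name is an assumption made for the example.

from dataclasses import dataclass, field

# Hypothetical sketch: the state a cognitive agent would carry between
# sessions, versus the transient payload an orchestration step receives.

@dataclass
class Belief:
    claim: str           # what the agent holds true
    confidence: float    # how strongly it holds it (0.0 to 1.0)
    source: str          # where the belief came from

@dataclass
class Goal:
    description: str
    committed: bool = True    # persists across sessions until achieved or dropped

@dataclass
class AgentState:
    episodes: list[str] = field(default_factory=list)        # persistent memory
    beliefs: list[Belief] = field(default_factory=list)      # belief system
    goals: list[Goal] = field(default_factory=list)          # goal commitment
    lessons: dict[str, float] = field(default_factory=dict)  # learning: strategy -> success rate

# An orchestration step, by contrast, sees only a payload and forgets it:
payload = {"step": "send_email", "input": {"to": "lead@example.com"}}

The four fields of AgentState map one-to-one onto the four requirements above. The payload maps onto none of them.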

Failure #3: Centralized Orchestration vs. Emergent Coordination

Buytaert's model assumes a central orchestrator—a conductor directing the symphony. This is the enterprise software mindset applied to AI: control flows through a single point, workflows are designed by humans, changes require reconfiguration.

But the most robust coordination systems we know, in nature and in complex-systems research, don't work this way. Ant colonies coordinate millions of agents through stigmergy—leaving traces in the environment that influence others' behavior. No central controller. No predefined workflows. Emergent intelligence from simple rules and shared memory.

The future of AI coordination isn't a better message router. It's cognitive agents that coordinate through shared beliefs, memory traces, and goal markets. The orchestration emerges from agent behavior, not from workflow definitions.

Failure #4: Assuming AI Needs Human-Designed Workflows

Here's a task: "Monitor competitor pricing and adjust our prices to stay competitive."

In the orchestration paradigm, a human designs this workflow:

Schedule trigger
  → Scrape competitor sites
  → Parse prices
  → Compare to our prices
  → Calculate adjustments
  → Update database
  → Notify team

The human decides the steps. The tool executes them. When the competitor changes their website structure, the workflow breaks. A human fixes it. Repeat.

This isn't automation. It's human intelligence with extra steps.

A genuine digital worker would receive the goal, not the procedure. It would:

  • Figure out how to get competitor prices (and remember which approaches work)
  • Learn when competitors typically change prices
  • Develop beliefs about pricing strategies and their effectiveness
  • Adapt when things break, without human intervention
  • Get better at the task over time

The difference isn't incremental. It's categorical. One is a tool. The other is a worker.
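To make the contrast concrete, here is a deliberately minimal sketch in Python. Everything in it is hypothetical: the strategy names, the success-rate bookkeeping, the ten-percent exploration rate. The point is only that the workflow encodes steps while the agent encodes a goal plus learned experience.

import random

# The orchestration paradigm: a human-authored sequence of steps. When any
# step's assumptions break (a competitor redesigns their site), it fails
# until a human repairs it.
def pricing_workflow(scrape, parse, compare, apply_changes, notify):
    prices = parse(scrape())
    apply_changes(compare(prices))
    notify()

# A hypothetical goal-holding agent: it remembers which approaches have
# worked and prefers them, so a broken approach is routed around rather
# than escalated to a human.
class PricingAgent:
    def __init__(self):
        # strategy -> [successes, attempts]; persisted across sessions
        self.outcomes = {"scrape_html": [0, 0],
                         "public_api": [0, 0],
                         "price_feed": [0, 0]}

    def choose_strategy(self) -> str:
        if random.random() < 0.1:                 # occasionally explore
            return random.choice(list(self.outcomes))
        # Otherwise exploit: highest smoothed success rate wins.
        return max(self.outcomes,
                   key=lambda s: (self.outcomes[s][0] + 1) / (self.outcomes[s][1] + 2))

    def record(self, strategy: str, success: bool) -> None:
        self.outcomes[strategy][0] += int(success)
        self.outcomes[strategy][1] += 1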

Failure #5: The Open Source Red Herring

Buytaert spends considerable space arguing that orchestration infrastructure should be open source, drawing parallels to Linux, MySQL, and WordPress. He's right that critical infrastructure benefits from open development.

But this argument is orthogonal to whether orchestration platforms are the right abstraction. Open source message routers are still just message routers. The question isn't who controls the plumbing—it's whether plumbing is what we actually need.

What "Digital Workers" Actually Require

Let's be specific about what genuine autonomous AI agents need:

1. Epistemically Grounded Memory

Not a database of facts, but a memory system that tracks:

  • Confidence levels (how sure are we?)
  • Provenance (where did this come from?)
  • Truth categories (is this a fact, an opinion, an inference?)
  • Temporal dynamics (is this still true? when does it expire?)

Agents need to know not just what they believe, but why they believe it and how much they should trust it.
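As a sketch, with illustrative field names rather than any particular system's schema, such a memory entry might look like this:

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical sketch of an epistemically grounded memory entry.

@dataclass
class MemoryEntry:
    claim: str                      # "Competitor X charges $19/month"
    confidence: float               # 0.0 to 1.0: how sure are we?
    provenance: str                 # where did this come from?
    category: str                   # "fact" | "opinion" | "inference"
    observed_at: datetime           # when we learned it
    expires_at: Optional[datetime]  # temporal dynamics: prices go stale

    def is_current(self, now: datetime) -> bool:
        # A belief past its expiry should be re-verified, not trusted.
        return self.expires_at is None or now < self.expires_at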

2. Belief Revision

When new information contradicts existing beliefs, what happens? In current AI systems: nothing. The model doesn't update. The context window doesn't persist.

Real agents need formal belief revision—the ability to rationally update their beliefs when evidence demands, while maintaining consistency and preserving justified beliefs.
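Continuing the MemoryEntry sketch above, here is a toy revision step in the spirit of AGM belief revision. It is not a faithful implementation of the AGM postulates; contradiction detection is passed in as a parameter precisely because that is the hard, domain-specific part.

def revise(beliefs, incoming, contradicts):
    """Return a consistent belief set after considering `incoming`.

    `contradicts(a, b)` is a caller-supplied test for whether two
    MemoryEntry claims conflict.
    """
    blockers = [b for b in beliefs
                if contradicts(b, incoming) and b.confidence >= incoming.confidence]
    if blockers:
        return beliefs    # a better-supported belief blocks the new claim
    # Otherwise retract the weaker contradicting beliefs and adopt it.
    kept = [b for b in beliefs if not contradicts(b, incoming)]
    kept.append(incoming)
    return kept

Justified beliefs survive untouched; only beliefs that actually conflict with stronger evidence are retracted.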

3. Goal-Directed Behavior (BDI Architecture)

The Belief-Desire-Intention architecture from agent theory provides a model:

  • Beliefs: What the agent holds true about the world
  • Desires: What the agent wants to achieve
  • Intentions: What the agent is committed to doing

Critically, intentions persist. An agent pursuing a goal doesn't forget about it between invocations. It maintains commitment until the goal is achieved or abandoned for good reason.
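A minimal deliberation cycle, structured after the classic BDI loop but with every type here a placeholder, might look like:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Intention:
    desire: str                            # the goal this intention serves
    achieved: Callable[[set[str]], bool]   # test against current beliefs

def deliberate(beliefs: set[str], desires: set[str],
               intentions: list[Intention]) -> list[Intention]:
    # Intentions persist: drop one only when its goal is achieved.
    intentions = [i for i in intentions if not i.achieved(beliefs)]
    # Adopt an intention for any desire not already being pursued.
    pursuing = {i.desire for i in intentions}
    for d in desires - pursuing:
        intentions.append(Intention(d, lambda b, d=d: d in b))
    return intentions

The returned list is carried into the next cycle, which is the whole point: commitment survives the invocation that created it.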

4. Continuous Learning

Not just "fine-tuning" in the ML sense, but genuine learning from every interaction:

  • What worked? What failed?
  • Which strategies are effective for which domains?
  • What has this user corrected me about?
  • How should I adjust my approach?

This learning must persist and influence future behavior.
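One way to make that concrete: persist outcomes and corrections to disk so the next session starts where the last one ended. A sketch, with a hypothetical file path and schema:

import json
from pathlib import Path

LESSONS = Path("lessons.json")    # hypothetical persistent store

def load_lessons() -> dict:
    if LESSONS.exists():
        return json.loads(LESSONS.read_text())
    return {"strategy_stats": {}, "corrections": []}

def record_outcome(lessons: dict, strategy: str, success: bool) -> None:
    # "What worked? What failed?" -- per-strategy bookkeeping.
    stats = lessons["strategy_stats"].setdefault(strategy, {"wins": 0, "tries": 0})
    stats["wins"] += int(success)
    stats["tries"] += 1

def record_correction(lessons: dict, note: str) -> None:
    # "What has this user corrected me about?" -- keep it, don't relearn it.
    lessons["corrections"].append(note)

def save_lessons(lessons: dict) -> None:
    LESSONS.write_text(json.dumps(lessons, indent=2))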

5. Emergent Coordination

Multiple agents working together shouldn't require a human to design the coordination protocol. Through shared memory, reputation systems, and goal markets, agents should coordinate the way effective teams do—through communication, shared understanding, and aligned incentives.
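A stigmergic sketch of that idea: agents never call each other directly; they write weighted traces into shared memory, and each agent independently follows the strongest trail. All names and the decay rate are illustrative.

from typing import Optional

shared_traces: dict[str, float] = {}    # task -> accumulated trace strength

def leave_trace(task: str, strength: float) -> None:
    # An agent that found value in a task reinforces its trace.
    shared_traces[task] = shared_traces.get(task, 0.0) + strength

def pick_task() -> Optional[str]:
    # Each agent independently gravitates to where others found value.
    return max(shared_traces, key=shared_traces.get) if shared_traces else None

def decay(rate: float = 0.1) -> None:
    # Old traces fade, so coordination adapts with no central update step.
    for task in shared_traces:
        shared_traces[task] *= 1.0 - rate

No controller ever tells an agent what to do; the coordination is a side effect of agents reading and writing the same memory.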

The Actual Paradigm Shift

Buytaert sees this:

Applications → Orchestration Layer → Integration

The actual shift is:

Applications → Cognitive Agents → Emergent Coordination

The orchestration layer doesn't become more important—it gets absorbed into agents that don't need external coordination because they have internal cognitive architecture.

This isn't science fiction. The theoretical foundations exist: BDI agent architectures from the 1990s, AGM belief revision from the 1980s, stigmergic coordination from swarm intelligence research. What's new is the possibility of implementing these at scale with modern LLMs as the reasoning substrate.

What This Means for Investment

If you believe orchestration platforms are the future, you're betting that AI will remain cognitively incomplete indefinitely—that we'll always need external scaffolding to compensate for AI's lack of memory and goals.

That's a defensible short-term bet. It's a terrible long-term bet.

The trajectory is clear: AI systems are gaining memory (RAG, vector databases, context extensions), learning capabilities (fine-tuning, RLHF, continuous learning), and goal-directed behavior (agents, function calling, autonomous systems). Each advancement makes orchestration platforms less necessary.

The winning investment isn't in better message routers. It's in the cognitive infrastructure that makes message routers obsolete.

The Uncomfortable Truth

The orchestration shift thesis is comfortable for the enterprise software industry. It says: the future looks like the present, but with better plumbing. Your existing mental models apply. Your existing skills transfer. You just need to add an orchestration layer.

The actual shift is less comfortable. It says: AI agents are becoming cognitive entities that don't fit the "tool" mental model. They're not services to be orchestrated—they're workers to be managed. And the management paradigm isn't workflow design; it's goal-setting, belief alignment, and outcome evaluation.

This requires new abstractions, new architectures, and new ways of thinking about human-AI collaboration. Orchestration platforms are training wheels. Eventually, you take them off.

Conclusion

Dries Buytaert is a thoughtful technologist, and his advocacy for open source infrastructure is valuable. But "The Orchestration Shift" mistakes the current moment for the destination.

Today's AI systems need orchestration because they lack cognition. The response isn't to build better orchestration—it's to build cognition. Memory systems that enable persistent, grounded beliefs. Agent architectures that enable goal-directed behavior. Learning systems that enable continuous improvement. Coordination mechanisms that emerge from agent interaction rather than human workflow design.

The orchestration shift isn't wrong. It's just the last chapter of the old book, not the first chapter of the new one.

The real shift isn't about orchestration. It's about giving AI minds instead of just connections.


This is a response to Dries Buytaert's "The Orchestration Shift". I work on cognitive architectures for AI systems, including persistent memory systems and autonomous agent frameworks.