Viable Signals #2 — February 2026

Autonomy vs. Control — What an AI Startup Learned from the 1970s

Something unusual has happened. A company called Dragonscale cited Stafford Beer in an article about AI governance. If you don't know the name: Beer was a British management theorist who answered a fundamental question in the 1970s — how do organizations remain capable of action under uncertainty?

His answer: cybernetics. Not a buzzword, but the science of control and communication in complex systems. From it, Beer developed the Viable System Model (VSM) — an organizational model with five functions: operational units, coordination between them, control and resource allocation, strategic environmental monitoring, and an overarching identity. The core: the balance between autonomy and control.
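
To make the five functions tangible, here is a minimal Python sketch of a viable system as plain data. The class and field names are shorthand for this newsletter, not Beer's notation and not any vendor's API.

    from dataclasses import dataclass

    @dataclass
    class OperationalUnit:
        """System 1: a unit doing the actual work, with local autonomy."""
        name: str
        decisions_owned: list[str]  # decisions it takes without sign-off

    @dataclass
    class ViableSystem:
        """Beer's five functions as plain fields (shorthand, not his notation)."""
        units: list[OperationalUnit]  # System 1: operations
        coordination: str             # System 2: keeps units from colliding
        control: str                  # System 3: resources and quality audits
        intelligence: str             # System 4: watches the environment
        identity: str                 # System 5: purpose and norms

    org = ViableSystem(
        units=[
            OperationalUnit("support_agent", ["answer_ticket"]),
            OperationalUnit("sales_agent", ["draft_quote"]),
        ],
        coordination="shared ticket queue prevents duplicate replies",
        control="weekly audit of answer quality and spend",
        intelligence="watch product changes that invalidate stock answers",
        identity="resolve customer issues within policy",
    )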

Why This Is Becoming Relevant Now

Dragonscale is building exactly that balance into a product: "Goal-Native AI," an architecture for "Governed Autonomy." They draw on Beer's core principle: operational units need as much autonomy as possible and as much control as necessary. They also reference Ashby and Wiener, two more cybernetics pioneers.
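
What might that principle look like in code? A deliberately tiny sketch, with entirely hypothetical names rather than Dragonscale's actual interface: the agent acts alone inside a declared scope and escalates everything outside it.

    from dataclasses import dataclass

    @dataclass
    class Action:
        kind: str    # e.g. "send_email", "issue_refund"
        cost: float  # impact of the action, in currency or risk units

    # Per-kind cost ceiling the agent may spend without asking (illustrative).
    AUTONOMY_SCOPE = {"send_email": 0.0, "issue_refund": 50.0}

    def govern(action: Action) -> str:
        """Approve autonomously inside the declared scope, escalate outside it."""
        limit = AUTONOMY_SCOPE.get(action.kind)
        if limit is not None and action.cost <= limit:
            return "execute"   # as much autonomy as possible
        return "escalate"      # as much control as necessary

    print(govern(Action("issue_refund", 20.0)))   # -> execute
    print(govern(Action("issue_refund", 500.0)))  # -> escalate
    print(govern(Action("delete_account", 0.0)))  # -> escalate (out of scope)

The design choice worth noticing: control lives in the boundary, not in per-action approval. Widening AUTONOMY_SCOPE grants autonomy; the escalation path is the safety net.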

What's notable isn't that a single company cites Beer — that happens occasionally in academic papers. What's notable is that an enterprise AI vendor is incorporating these ideas into a concrete product proposal. Cybernetics is migrating from theory into practice.

The Other Side: IBM Reinvents the Wheel

At the same time, IBM co-organized the FAST Workshop at AAAI 2026: "Foundations of Agentic Systems Theory." Around 20 papers on topics like emergence, interaction, agent monitoring — everything Beer systematically described 50 years ago.

None of the papers reference Beer, VSM, or cybernetics in their titles or abstracts.

This isn't an accusation — it's a pattern. The AI community is solving problems that have been worked on in organizational cybernetics for decades. But knowledge doesn't flow across disciplinary boundaries.

What This Means for You

If you're a leader thinking about AI governance, you face a fundamental question: Do you control your AI agents top-down — or give them autonomy with clear boundaries?

Beer's answer: autonomy with boundaries. Not as a leap of faith, but as a structural principle. Concretely: built-in coordination between agents (so they don't block each other), regular quality checks (so errors are caught early), and strategic environmental monitoring (so the system responds to changes before they become crises).
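
A toy version of those three mechanisms, mapped onto Beer's Systems 2, 3, and 4. Function names and thresholds are illustrative, not a reference implementation:

    # System 2: grant a shared resource only if no other agent holds it.
    def coordinate(claims: dict[str, str], agent: str, resource: str) -> bool:
        holder = claims.setdefault(resource, agent)
        return holder == agent

    # System 3: flag outputs whose quality score falls below a threshold.
    def audit(scores: list[float], threshold: float = 0.8) -> list[int]:
        return [i for i, s in enumerate(scores) if s < threshold]

    # System 4: detect drift from a baseline before it becomes a crisis.
    def scan_environment(signal: float, baseline: float,
                         tolerance: float = 0.2) -> bool:
        return abs(signal - baseline) > tolerance

    claims: dict[str, str] = {}
    print(coordinate(claims, "agent_a", "crm_write"))  # True: first claim wins
    print(coordinate(claims, "agent_b", "crm_write"))  # False: blocked, retries
    print(audit([0.95, 0.62, 0.85]))                   # [1]: second output flagged
    print(scan_environment(1.3, baseline=1.0))         # True: act before crisis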

The parallel to your company is direct: the balance between autonomy and control that you need for your teams is the same one you need for your AI agents. The companies that understand this first won't be facing the same question again in ten years.

Three Sources to Go Deeper

  1. Dragonscale: "What Is Goal-Native AI?" (Feb 24, 2026) — The article bringing Beer into AI governance
  2. IBM FAST Workshop (AAAI 2026, Jan 27) — Around 20 papers, zero cybernetics
  3. Espejo & Harnden (1989): The Viable System Model — For those who want it from the source

Viable Signals is published 2-3 times per week. Curated by Norman Hilbert (Supervision Rheinland) with support from the Viable System Generator.