Evolving Simplex: A Language for Cognitive Agents

Part 2 of the Simplex Evolution Series

In Part 1, I laid out the vision for the Mnemonic Hive—a cognitive architecture that gives AI persistent memory, genuine beliefs, and goal-directed behavior. Now let's get into the implementation: how we're evolving Simplex to support this natively at the language level.

The question isn't whether you can build cognitive agents with existing tools—you can hack together anything with enough Python and prayer. The question is whether the language helps or fights you. I want a language where agents, beliefs, and memories are first-class citizens, not afterthoughts bolted on with libraries.

Design Philosophy

Before diving into syntax, let me articulate the principles driving these extensions:

1. Agents Extend Actors, Not Replace Them

Simplex already has a robust actor model for concurrent, fault-tolerant systems. Agents are actors with additional cognitive state—beliefs, desires, intentions. You can still use plain actors when you don't need cognition. The cognitive layer is additive.

2. Memory Operations Are As Fundamental As Variable Assignment

In most languages, persistence is a library concern. You call db.save() and hope for the best. In cognitive systems, memory is intrinsic. The language should treat remember as naturally as it treats let.

3. Beliefs Have Types

Not just "this is a string" types—epistemic types. Is this an absolute truth or a contextual one? What's the confidence level? Where did it come from? The type system should enforce these distinctions at compile time where possible.

4. Autonomy Is Gradual

Not every component needs to be fully autonomous. Some specialists just respond to queries. Some agents run continuously. The language should support the full spectrum without forcing everything into one paradigm.

The Agent Construct

Here's what an agent looks like in the evolved Simplex:

agent ResearchAgent {
    // Beliefs backed by memory
    beliefs: BeliefStore { backing: memory },

    // Goals with priority queue
    desires: GoalQueue { max_concurrent: 5 },

    // Current committed plans
    intentions: IntentionStack,

    // Autonomous execution loop
    autonomous {
        tick_rate: 100.ms,

        loop {
            let observations = perceive()
            for obs in observations {
                beliefs.revise(obs)
            }

            if let Some(goal) = desires.select() {
                if !intentions.has_plan_for(goal) {
                    intentions.commit(goal)
                }

                if let Some(action) = intentions.next_action() {
                    act(action)
                }
            }
        }
    },

    // Message handlers (actor interface)
    receive AssignGoal(goal: Goal) {
        desires.adopt(goal)
    }
}

Let me unpack what's happening here:

The agent keyword declares a cognitive entity. Under the hood, this compiles to an actor with additional state management for beliefs, desires, and intentions.

The beliefs block declares how the agent's belief state is managed. Here it's backed by the shared memory system, which means beliefs persist across sessions and can be queried semantically.

The autonomous block is the key innovation. This defines a background loop that runs independently of message reception. The agent doesn't just respond to prompts—it actively pursues its goals even when no one is talking to it.

The belief-desire-intention (BDI) cycle—perceive, revise beliefs, deliberate, plan, act—is explicit in the code. You can see the cognitive architecture in the structure of the loop.
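
To make the compilation story concrete, here's a rough Python sketch of what the autonomous block might desugar to at runtime. Every class and method name below is mine, invented purely for illustration; it shows the shape of the BDI loop, not the actual Simplex runtime API.

# Illustrative sketch only: roughly what the autonomous block of ResearchAgent
# might desugar to. Every class and method name here is an assumption made
# for this example, not the Simplex runtime API.
import heapq
import itertools
import time


class ResearchAgentRuntime:
    def __init__(self, perceive, act, plan_for, tick_rate=0.1):
        self.beliefs = {}                # proposition -> confidence
        self.desires = []                # heap of (-priority, order, goal)
        self.intentions = []             # stack of (goal, remaining actions)
        self._order = itertools.count()  # tie-breaker for equal priorities
        self.perceive = perceive         # () -> list of (proposition, confidence)
        self.act = act                   # action -> None
        self.plan_for = plan_for         # goal -> list of actions
        self.tick_rate = tick_rate       # seconds; 100 ms in the declaration

    # Actor interface: the receive AssignGoal(...) handler
    def assign_goal(self, goal, priority=0):
        heapq.heappush(self.desires, (-priority, next(self._order), goal))

    def tick(self):
        # 1. Perceive and revise beliefs
        for proposition, confidence in self.perceive():
            self.beliefs[proposition] = confidence

        # 2. Deliberate: select the highest-priority goal, if any
        if not self.desires:
            return
        _, _, goal = self.desires[0]

        # 3. Commit to a plan if we don't already have one for this goal
        if not any(g == goal for g, _ in self.intentions):
            self.intentions.append((goal, self.plan_for(goal)))

        # 4. Act on the next step of the current intention
        goal, actions = self.intentions[-1]
        if actions:
            self.act(actions.pop(0))
        else:
            self.intentions.pop()        # plan exhausted: goal is done
            heapq.heappop(self.desires)

    def run(self):
        while True:
            self.tick()
            time.sleep(self.tick_rate)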

Belief Literals and Types

Beliefs aren't just strings. They're typed propositions with metadata:

// Absolute truth with high confidence
let fact = believe "Python was created by Guido van Rossum"
    with confidence 0.95
    as Absolute
    from source("wikipedia")

// Contextual truth scoped to domains
let preference = believe "React is best for component-based UIs"
    with confidence 0.8
    as Contextual { domains: ["frontend", "web"] }

// Inferred belief from observations
let inference = infer "User prefers concise responses"
    with confidence 0.6
    from [
        observation("User edits responses shorter"),
        observation("User thanks for brief answers")
    ]

The type system tracks these distinctions. You can't accidentally treat an opinion as an absolute truth:

// This function only accepts high-confidence absolute truths
fn assert_identity<T>(belief: Belief<T, HighConfidence, Absolute>) -> T {
    belief.content
}

// Compile error: can't pass an Opinion to assert_identity
let my_opinion = believe "Tabs are better than spaces" as Opinion
assert_identity(my_opinion)  // Error!
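
You can approximate this guarantee in a host language today with phantom type parameters. Here's a minimal Python sketch checked statically by a tool like mypy; the Belief, Absolute, and Opinion names mirror the Simplex example above, but the encoding itself is my assumption, not the Simplex type system.

# Sketch: truth categories as phantom type parameters, so a static checker
# (e.g. mypy) rejects passing an Opinion where an Absolute belief is required.
# The names mirror the Simplex example; the encoding is an assumption.
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")


class Absolute: ...      # truth-category marker types
class Contextual: ...
class Opinion: ...


C = TypeVar("C", Absolute, Contextual, Opinion)


@dataclass(frozen=True)
class Belief(Generic[T, C]):
    content: T
    confidence: float
    source: str | None = None


def assert_identity(belief: Belief[T, Absolute]) -> T:
    # Only the category is enforced statically here; a confidence bound
    # would need a runtime check or a further type-level refinement.
    return belief.content


fact: Belief[str, Absolute] = Belief(
    "Python was created by Guido van Rossum", 0.95, "wikipedia"
)
opinion: Belief[str, Opinion] = Belief("Tabs are better than spaces", 0.5)

assert_identity(fact)     # OK
assert_identity(opinion)  # mypy error: Belief[str, Opinion] is not Belief[str, Absolute]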

Memory Primitives

Memory operations are language primitives, not library calls:

// Store a memory with full control
remember {
    content: "User prefers functional programming patterns",
    memory_type: Preference,
    truth_category: Opinion,
    confidence: 0.75,
    tier: LongTerm,
    domains: ["programming", "style"],
    tags: ["preference", "coding-style"]
}

// Shorthand forms
remember "User prefers TypeScript" as Preference with confidence 0.8
remember fact "Python 3.12 released October 2023" from "python.org"

// Search memories with filters
let relevant = recall
    about "authentication patterns"
    from tiers [LongTerm, Persistent]
    where confidence >= 0.6
    limit 20

// Forget (soft delete) with audit trail
forget memory_id("outdated-api-info")
    reason: "API deprecated"

The remember/recall/forget primitives map directly to the three-tier memory architecture. The compiler ensures type safety and generates efficient code for semantic search.
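
For a rough sense of the semantics the compiler targets, here's a minimal Python sketch of a three-tier store behind remember, recall, and forget. The tier names follow the architecture above; the substring match stands in for real semantic search, and the whole API shape is my assumption for illustration.

# Minimal sketch of a three-tier memory store behind remember/recall/forget.
# Substring matching stands in for semantic search; the API shape is an
# assumption for illustration, not the Simplex memory substrate.
import time
import uuid
from dataclasses import dataclass, field
from enum import Enum


class Tier(Enum):
    SHORT_TERM = "short_term"
    LONG_TERM = "long_term"
    PERSISTENT = "persistent"


@dataclass
class Memory:
    content: str
    tier: Tier
    confidence: float = 1.0
    tags: list[str] = field(default_factory=list)
    id: str = field(default_factory=lambda: uuid.uuid4().hex)
    created_at: float = field(default_factory=time.time)
    deleted_reason: str | None = None    # soft delete keeps an audit trail


class MemoryStore:
    def __init__(self):
        self._memories: dict[str, Memory] = {}

    def remember(self, content, tier=Tier.LONG_TERM, confidence=1.0, tags=()):
        m = Memory(content, tier, confidence, list(tags))
        self._memories[m.id] = m
        return m.id

    def recall(self, about, tiers=None, min_confidence=0.0, limit=20):
        hits = [
            m for m in self._memories.values()
            if m.deleted_reason is None
            and about.lower() in m.content.lower()
            and (tiers is None or m.tier in tiers)
            and m.confidence >= min_confidence
        ]
        return sorted(hits, key=lambda m: -m.confidence)[:limit]

    def forget(self, memory_id, reason):
        self._memories[memory_id].deleted_reason = reason    # soft delete


store = MemoryStore()
mid = store.remember("User prefers TypeScript", confidence=0.8, tags=["preference"])
print([m.content for m in store.recall("TypeScript", min_confidence=0.6)])
store.forget(mid, reason="example cleanup")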

Goal and Plan Declarations

Goals are first-class entities with preconditions and success criteria:

goal AnalyzeCodebase {
    description: "Understand the structure of the codebase",
    priority: High,

    preconditions: [
        belief "Have access to repository" with confidence >= 0.9,
        resource "compute" available
    ],

    success_conditions: [
        belief "Understand main modules" with confidence >= 0.8,
        belief "Identified design patterns" with confidence >= 0.7,
        memory exists with tags ["codebase-analysis", "complete"]
    ],

    subgoals: [
        goal ScanFileStructure,
        goal IdentifyEntryPoints,
        goal MapDependencies,
        goal AnalyzePatterns
    ],

    on_success {
        remember "Completed codebase analysis"
            tags: ["codebase-analysis", "complete"]
    },

    on_failure(reason) {
        remember "Failed analysis: {reason}"
            tags: ["codebase-analysis", "failed"]
    }
}
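
Under the hood, a goal like this is data plus two predicates over the agent's beliefs and memory. Here's a hedged Python sketch of how preconditions and success conditions might be evaluated; the Goal shape and the lookups are my assumptions, and resource preconditions are left out.

# Sketch: evaluating a goal's preconditions and success conditions against
# belief and memory state. The Goal shape and the lookup helpers are
# assumptions; resource checks (e.g. "compute available") are omitted.
from dataclasses import dataclass, field


@dataclass
class BeliefCondition:
    proposition: str
    min_confidence: float


@dataclass
class Goal:
    name: str
    preconditions: list[BeliefCondition] = field(default_factory=list)
    success_conditions: list[BeliefCondition] = field(default_factory=list)
    required_tags: list[str] = field(default_factory=list)   # "memory exists with tags"


def conditions_hold(conditions, beliefs):
    """beliefs maps proposition -> confidence."""
    return all(beliefs.get(c.proposition, 0.0) >= c.min_confidence for c in conditions)


def can_adopt(goal, beliefs):
    return conditions_hold(goal.preconditions, beliefs)


def is_satisfied(goal, beliefs, memory_tags):
    """memory_tags is a set of tag tuples present in the memory store."""
    return (
        conditions_hold(goal.success_conditions, beliefs)
        and (not goal.required_tags or tuple(goal.required_tags) in memory_tags)
    )


analyze = Goal(
    name="AnalyzeCodebase",
    preconditions=[BeliefCondition("Have access to repository", 0.9)],
    success_conditions=[
        BeliefCondition("Understand main modules", 0.8),
        BeliefCondition("Identified design patterns", 0.7),
    ],
    required_tags=["codebase-analysis", "complete"],
)

beliefs = {"Have access to repository": 0.95}
print(can_adopt(analyze, beliefs))            # True: preconditions hold
print(is_satisfied(analyze, beliefs, set()))  # False until the analysis completes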

Plans specify how to achieve goals:

plan AnalyzeCodebasePlan for AnalyzeCodebase {
    steps: [
        step "Clone repository" {
            action: git.clone(repo_url),
            effects: [file_system.has(repo_path)]
        },
        step "Scan files" {
            action: fs.walk(repo_path).filter(code_files),
            effects: [belief "Have file list" with confidence 1.0]
        },
        step "Analyze each file" {
            action: parallel(files.map(analyze_file)),
            effects: [belief "Files analyzed" with confidence 0.9]
        }
    ],

    on_step_failure(step, error) {
        match error {
            Transient => retry step with backoff,
            Permanent => replan from current_state,
            Critical => abandon goal with reason error
        }
    }
}
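
The on_step_failure handler maps naturally onto a retry/replan/abandon loop. Here's a rough Python sketch of that execution policy; the error classification and the step representation are my assumptions, not the Simplex runtime.

# Sketch: executing plan steps with the retry / replan / abandon policy that
# on_step_failure describes. The error kinds and step representation are
# assumptions for illustration.
import time
from enum import Enum


class ErrorKind(Enum):
    TRANSIENT = "transient"
    PERMANENT = "permanent"
    CRITICAL = "critical"


class StepError(Exception):
    def __init__(self, kind, message=""):
        super().__init__(message)
        self.kind = kind


def execute_plan(steps, replan, max_retries=3):
    """steps: list of zero-argument callables; replan(i) -> new step list."""
    i = 0
    while i < len(steps):
        attempts = 0
        while True:
            try:
                steps[i]()
                break                                # step succeeded
            except StepError as err:
                if err.kind is ErrorKind.TRANSIENT and attempts < max_retries:
                    attempts += 1
                    time.sleep(0.1 * 2 ** attempts)  # retry with backoff
                elif err.kind is ErrorKind.PERMANENT:
                    steps, i = replan(i), -1         # replan from the current state
                    break
                else:
                    raise                            # critical: abandon the goal
        i += 1

In this sketch, abandoning just means letting the exception propagate to whatever owns the intention, which is where something like the goal's on_failure handler would run.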

The Mnemonic Hive

Individual agents are powerful. Collectives are transformative:

mnemonic hive ResearchHive {
    // Reactive inference specialists
    specialists: [
        Summarizer,
        CodeAnalyzer,
        Reasoner,
        FactExtractor
    ],

    // Autonomous BDI agents
    agents: [
        PlannerAgent,
        ResearchAgent,
        FactCheckerAgent
    ],

    // Shared memory substrate
    memory: SharedMemoryConfig {
        short_term: { ttl: 48.hours, max_entries: 50000 },
        long_term: { max_entries: 1000000, index: HNSW },
        persistent: { enable_fts: true, enable_hnsw: true }
    },

    // Coordination mechanisms
    goal_market: GoalMarketplace {
        assignment: BiddingAssignment,
        reputation: ReputationSystem
    },

    // Stigmergic traces
    stigmergy: StigmergicConfig {
        trace_on_action: true,
        decay_rate: 0.1
    },

    // Continuous learning
    evolution: EvolutionConfig {
        triggers: [InteractionCount(100), TimeElapsed(24.hours)],
        lora: LoRAConfig { rank: 16, alpha: 32 }
    }
}

The mnemonic hive keyword declares a collective intelligence. It specifies:

  • Specialists: Reactive components that respond to queries
  • Agents: Autonomous components that pursue goals
  • Shared memory: The cognitive substrate all components access
  • Coordination: How goals get assigned and tracked
  • Stigmergy: Indirect coordination through memory traces
  • Evolution: Continuous learning configuration
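
To ground the configuration, here's a coarse Python sketch of how a hive runtime might wire these pieces together: shared memory first, then specialists and agents that all hold a reference to it, then a goal marketplace that hands open goals to the highest bidder. Every name in it is an assumption for illustration, not the Simplex runtime.

# Coarse sketch of hive wiring: shared memory, then specialists and agents
# that share it, then a bidding-based goal marketplace. All names are
# illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class SharedMemory:
    # Stands in for the tiered store; a real config would mirror SharedMemoryConfig.
    entries: list = field(default_factory=list)


@dataclass
class GoalMarketplace:
    """Assigns each open goal to whichever agent bids highest for it."""
    open_goals: list = field(default_factory=list)

    def post(self, goal):
        self.open_goals.append(goal)

    def assign(self, agents):
        if not agents:
            return
        for goal in list(self.open_goals):
            bids = [(agent.bid(goal), agent) for agent in agents]
            best_bid, winner = max(bids, key=lambda b: b[0])
            if best_bid > 0:
                winner.adopt(goal)
                self.open_goals.remove(goal)


class Hive:
    def __init__(self, specialist_factories, agent_factories):
        self.memory = SharedMemory()   # the shared substrate comes first
        self.specialists = [make(self.memory) for make in specialist_factories]
        self.agents = [make(self.memory) for make in agent_factories]
        self.goal_market = GoalMarketplace()

    def step(self):
        # One coordination round: hand out open goals, then let each agent tick.
        self.goal_market.assign(self.agents)
        for agent in self.agents:
            agent.tick()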

Why This Matters

You could build all of this with Python and a bunch of libraries. I've done it. It's miserable.

The problem isn't capability—it's friction. When cognitive operations are afterthoughts, you spend more time fighting the language than building the system. When they're first-class:

  • The type system catches errors you'd otherwise find at runtime
  • The syntax guides design toward good patterns
  • The compiler optimizes memory operations you'd otherwise hand-tune
  • The runtime handles persistence you'd otherwise implement badly

A language designed for cognitive agents makes cognitive agent development feel natural instead of heroic.

What's Next

These language extensions are designed but not yet fully implemented. The Simplex compiler needs updates for the new constructs, and the runtime needs the memory substrate integration.

But the specification is complete. We have a formal grammar (47 new productions), type system rules, and operational semantics. When we're ready to build, we know exactly what we're building.

The goal is a language where building cognitive agents is as natural as building web services. Where memory and beliefs are as fundamental as variables and functions. Where autonomous behavior is a language feature, not a hack.

That's the evolution of Simplex. That's the foundation for the Mnemonic Hive.


This is Part 2 of the Simplex Evolution Series. See Part 1: The Mnemonic Hive for the cognitive architecture vision.