Today I'm releasing Simplex v0.5.0 as open source software under the MIT license. The complete source code—compiler, toolchain, runtime, and standard library—is now available on GitHub for anyone to use, modify, and build upon.
But this isn't just an open source announcement. Version 0.5.0 introduces two architectural innovations that I believe represent a paradigm shift in how we build AI-native applications: the Anima—a cognitive soul for intelligent agents—and per-hive SLM provisioning—a memory-efficient architecture where specialists share a single language model.
Why Open Source, Why Now?
Simplex began as an experiment: what would a programming language look like if it were designed from the ground up for AI? Not a language with AI bolted on as an afterthought, but one where cognitive agents, belief systems, and collective intelligence are first-class citizens.
After months of development, the language has matured to the point where it's ready for the world. The self-hosted compiler works. The toolchain is complete. The cognitive architecture is solid. It's time to see what others can build with it.
The MIT license means you can use Simplex for anything—commercial products, research, personal projects. No strings attached. Fork it, extend it, embed it. The only requirement is that you include the license notice.
Get the Source: github.com/senuamedia/simplex-lang
The Anima: Giving Agents a Soul
The word anima comes from Latin, meaning "soul" or "breath of life." In Jungian psychology, the anima represents the unconscious feminine aspect of the male psyche—the bridge between conscious and unconscious, between logic and intuition.
I've borrowed this term for Simplex because it captures something essential about what I'm trying to build: AI agents that aren't just stateless function-callers, but entities with memory, beliefs, desires, and intentions. Agents with an inner life.
What Is an Anima?
In Simplex, an Anima is a cognitive core that can be attached to any specialist agent. It provides:
- Cognitive Memory: Four distinct memory types—episodic (what happened), semantic (what things mean), procedural (how to do things), and working (current context)
- Belief System: Graduated confidence levels with rational revision. Agents can hold beliefs, update them based on evidence, and detect contradictions
- Desires & Intentions: Goal-directed behavior through the BDI (Beliefs-Desires-Intentions) architecture
- Identity: A stable sense of self that persists across interactions
// Create an anima with personal memory
let security_soul = Anima::new(30); // 30% belief threshold
// Teach it about itself
security_soul.learn("I specialize in code security analysis", 0.95, "self");
security_soul.learn("SQL injection is a critical vulnerability", 0.99, "training");
// Give it purpose
security_soul.desire("Find security vulnerabilities before they reach production", 0.95);
security_soul.intend("Analyze every code change for injection attacks", 0.90);
// Attach to a specialist
specialist SecurityAnalyzer {
    model: "simplex-cognitive-7b";
    anima: security_soul;

    fn analyze(code: String) -> SecurityReport {
        // The anima's memory and beliefs are automatically
        // injected into the inference context
        infer("Analyze this code for security issues: " + code)
    }
}
The key insight is that the anima's context flows automatically into every inference call. The agent doesn't just respond to the current prompt—it responds as itself, drawing on its memories, filtered through its beliefs, guided by its intentions.
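For example, invoking the specialist defined above might look like the following; the call syntax and the `summary()` method on `SecurityReport` are assumptions for illustration, not confirmed API:

// Calling the specialist like an ordinary function (call syntax is an assumption)
let report = SecurityAnalyzer.analyze("fn login(user: String, pw: String) { ... }");

// summary() is a hypothetical convenience method on SecurityReport
println(report.summary());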
Belief Thresholds: The Epistemology of Agents
Not all beliefs are created equal. Simplex implements a three-tier threshold system:
| Level | Threshold | Meaning |
|---|---|---|
| Anima | 30% | Personal belief. The agent accepts this as part of its worldview |
| Hive | 50% | Shared belief. Propagated to other specialists in the same hive |
| Divine | 70% | Axiomatic belief. Foundational truth that cannot be easily revised |
This graduated system means agents can hold tentative beliefs that they're willing to revise, while also maintaining core convictions that define their identity. A security analyzer might have an axiomatic belief that "user input is never to be trusted" while holding a tentative belief about whether a specific library is safe.
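In code, the tier a belief occupies follows from the confidence attached when it is learned. Here is a minimal sketch reusing the `security_soul` anima from above; mapping confidence values directly onto the three thresholds is my reading of the table, and the `fastjson` library is a made-up example:

// 0.99 clears the 70% divine threshold: axiomatic, not easily revised
security_soul.learn("User input is never to be trusted", 0.99, "training");

// 0.60 clears the 50% hive threshold: shared with other specialists in the hive
security_soul.learn("Parameterized queries are our standard mitigation", 0.60, "review");

// 0.35 only clears the 30% anima threshold: personal and open to revision
security_soul.learn("The fastjson library is probably safe in our usage", 0.35, "observation");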
Per-Hive SLM Provisioning: One Model, Many Minds
The second major innovation in v0.5.0 is the per-hive SLM architecture. This solves a practical problem that has plagued AI agent systems: resource consumption.
The Problem with Traditional Approaches
In most multi-agent systems, each agent loads its own copy of the language model. If you have 10 specialists in a hive, you need 10 model instances. With a 7B parameter model at 4-bit quantization, that's 41 gigabytes of memory just for the models.
This is wasteful. The specialists in a hive are typically working on related tasks, using similar reasoning patterns. They don't need separate models—they need shared cognition with differentiated context.
The Simplex Solution
In Simplex v0.5.0, each hive provisions exactly ONE shared language model. All specialists within that hive share the same model instance, but each brings its own context through its anima and the collective HiveMnemonic.
// Create shared consciousness for the hive
let mnemonic = HiveMnemonic::new(
    100,  // episodic memory capacity
    500,  // semantic memory capacity
    50    // 50% belief threshold for hive-level beliefs
);
// Teach the collective
mnemonic.learn("Our team uses Result types for error handling", 0.95);
mnemonic.learn("Security review is required before merge", 0.99);
// Create ONE shared model for the entire hive
let hive_slm = HiveSLM::new(
    "CodeReviewHive",
    "simplex-cognitive-7b", // 4.1 GB, loaded ONCE
    mnemonic
);
// Create specialists that share the model
let security = Specialist::new("SecurityAnalyzer", hive_slm, security_anima);
let quality = Specialist::new("QualityReviewer", hive_slm, quality_anima);
let perf = Specialist::new("PerformanceOptimizer", hive_slm, perf_anima);
// All three specialists share ONE model instance
// But each has its own anima (personal memory/beliefs)
// And all share the HiveMnemonic (collective memory)
The result: 10 specialists = 1 model load = 4.1 GB, not 41 GB.
How Context Flows
When a specialist makes an inference call, the context is assembled in layers:
- HiveMnemonic: Collective memories and beliefs shared by all specialists
- Anima: Personal memories, beliefs, and intentions specific to this specialist
- Current Query: The immediate task at hand
This layered approach means each specialist sees the same shared context but interprets it through its own lens. The security analyzer focuses on vulnerabilities; the performance optimizer focuses on bottlenecks. Same information, different perspectives.
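Concretely, the assembled context for the security analyzer in the hive example above would conceptually contain something like the following; the layout is illustrative, not the runtime's literal prompt format:

// [hive layer]  "Our team uses Result types for error handling"
//               "Security review is required before merge"
// [anima layer] "I specialize in code security analysis"
//               "Find security vulnerabilities before they reach production"
// [query]       "Analyze this code for security issues: ..."
//
// The performance optimizer receives the same hive layer but its own anima
// layer, which is what makes the two perspectives diverge.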
Cross-Specialist Knowledge Sharing
When one specialist discovers something important, it can contribute to the shared consciousness:
// Security specialist finds a vulnerability
security.contribute_to_mnemonic(
"SQL injection found in process_user_input function"
);
// Now quality and performance specialists see this
// in their next inference context
This is how hives achieve emergent intelligence. Individual specialists make discoveries; the collective remembers; all specialists benefit.
The Complete v0.5.0 Toolchain
The open source release includes the complete Simplex toolchain, all written in Simplex itself:
| Tool | Purpose | Version |
|---|---|---|
| sxc | Native compiler with LLVM backend | 0.5.0 |
| sxpm | Package manager with SLM provisioning | 0.5.0 |
| cursus | Bytecode virtual machine | 0.5.0 |
| sxdoc | Documentation generator | 0.5.0 |
| sxlsp | Language server for IDE integration | 0.5.0 |
The package manager now includes model provisioning commands:
sxpm model list # List available cognitive models
sxpm model install # Install a model
sxpm model remove # Remove a model
sxpm model info # Show model details
Built-in Models
Simplex ships with three built-in models optimized for cognitive tasks:
- simplex-cognitive-7b (4.1 GB): Full cognitive capabilities, recommended for production
- simplex-cognitive-1b (700 MB): Lightweight alternative for resource-constrained environments
- simplex-mnemonic-embed (134 MB): Embedding model for semantic memory and similarity search
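For example, provisioning the recommended production model on a new machine might look like this; the argument format is an assumption based on the commands above:

sxpm model install simplex-cognitive-7b
sxpm model info simplex-cognitive-7b    # confirm size and capabilities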
Language Highlights
For those new to Simplex, here's what makes the language distinctive:
Rust-Inspired Safety with AI-Native Extensions
// Familiar syntax for systems programmers
fn process_request(req: Request) -> Result<Response, Error> {
    let user = req.authenticate()?;        // Error propagation
    let data = fetch_data(user.id).await?; // Async/await

    match validate(data) {
        Ok(valid) => Ok(Response::success(valid)),
        Err(e) => Err(Error::validation(e)),
    }
}
First-Class Actors
actor OrderProcessor {
    pending: Vec<Order> = vec![];

    receive {
        NewOrder(order) => {
            self.pending.push(order);
            self.process_batch();
        }
        GetStatus => self.pending.len()
    }
}

fn main() {
    let processor = spawn OrderProcessor {};
    send processor NewOrder(order);
    let count = ask processor GetStatus;
}
Cognitive Specialists
specialist DataAnalyst {
    model: "simplex-cognitive-7b";

    fn analyze(data: DataFrame) -> Insights {
        infer("Analyze this dataset for patterns: " + data.summary())
    }

    fn explain(finding: Finding) -> String {
        infer("Explain this finding in plain English: " + finding)
    }
}
Platform Support
Pre-built binaries are available for:
- Linux x86_64: Full toolchain
- Linux arm64: Full toolchain
- macOS x86_64: Full toolchain (Intel)
- macOS arm64: Full toolchain (Apple Silicon)
- Windows x86_64: LLVM IR output only (native binaries coming soon)
Getting Started
# Clone the repository
git clone https://github.com/senuamedia/simplex-lang.git
cd simplex-lang
# Build from source (requires clang, Python 3)
./build.sh
# Or download pre-built binaries from the releases page
# Verify installation
sxc --version
# sxc 0.5.0
# Create your first program
echo 'fn main() { println("Hello, Simplex!"); }' > hello.sx
sxc run hello.sx
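From there, a natural next step is a first cognitive program. Here is a minimal sketch, assuming the lightweight model has been installed with `sxpm model install simplex-cognitive-1b` and that specialists can be invoked like ordinary functions:

// greeter.sx -- a first cognitive program (call syntax is an assumption)
specialist Greeter {
    model: "simplex-cognitive-1b";

    fn greet(name: String) -> String {
        infer("Write a one-line friendly greeting for " + name)
    }
}

fn main() {
    println(Greeter.greet("Simplex"));
}

Run it the same way as hello world: `sxc run greeter.sx`.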
From Vision to Reality
Building a programming language is one thing. Proving it works in the real world is another.
With this open source release, I'm moving from concept to application. The architectural foundations are solid—the anima, the per-hive SLM provisioning, the belief systems, the collective intelligence patterns. Now it's time to put them to the test in production scenarios: multi-agent code review pipelines, autonomous research assistants, adaptive customer service systems, real-time market analysis hives.
The next phase isn't about adding more features to the language. It's about deploying the language. Measuring latency, memory consumption, belief convergence rates. Watching how specialists actually collaborate when given real problems. Learning what works and what needs refinement. The only way to validate a vision this ambitious is to build real systems and measure real outcomes.
Build With Us
If you're working on an AI project that pushes beyond simple prompt-response patterns—something that requires persistent memory, coordinated specialists, or genuine cognitive architecture—I want to hear from you.
Looking for Early Adopters
Whether you're building autonomous agents, cognitive assistants, or AI-native applications that don't fit the LangChain mold, Simplex offers a fundamentally different approach. I'm actively seeking collaborators and early adopters who want to build their next AI solution with cutting-edge technology designed from the ground up for intelligent systems.
Join the Community
Simplex is just getting started. If you're interested in AI-native programming, cognitive architectures, or just want to explore a different way of thinking about software, I invite you to:
- Star the repo: github.com/senuamedia/simplex-lang
- Read the docs: Comprehensive language specification and tutorials included
- File issues: Bug reports, feature requests, and questions welcome
- Contribute: PRs for bug fixes, new features, and documentation improvements
The age of AI-native programming is here. Let's build it together.