Simplex 0.13.0 "Completion & Foundations" is the release where the backlog hits zero. Eight previously incomplete tasks—fully closed. Five critical compiler bugs—fixed. The test suite grows to 162/162 passing, and the language gains ~13,500 lines of new code spanning complex numbers, matrix algebra, a complete dual numbers system, an HTTP client, a JSON parser, SLM native bindings, and contract logic for neural gates. This is the complete build.
## Closing the Backlog
Most releases are about what's new. This one is about what's finished. Since v0.8.0, certain tasks have been partially implemented—designed, started, shipped in phases, but never fully closed. v0.13.0 closes all of them:
| Task | Description | Status |
|---|---|---|
| TASK-001 Phase 2 | Contract logic for neural gates | Closed |
| TASK-003 Phases 4-7 | Training pipeline completion | Closed |
| TASK-005 Phases 2-5 | Dual numbers completion | Closed |
| TASK-013-A | Belief-gated receive | Closed |
| TASK-017 | API documentation system | Closed |
| TASK-019 Bug 3 | Async/await exit code 240 | Closed |
| Compiler bugs | 5 critical codegen issues | Closed |
| SLM bindings | Native GGUF model interface | Closed |
Zero open items. Zero known regressions. The slate is clean.
## Compiler Stability: Five Critical Fixes
A language is only as trustworthy as its compiler. v0.13.0 fixes five bugs that could cause crashes, segfaults, or incorrect code generation:
### 1. `vec_set` Redefinition

The LLVM IR generator could emit duplicate `define` statements for `vec_set` when multiple modules used vector operations. LLVM rejects duplicate definitions, causing link failures. The fix deduplicates all runtime function definitions at the IR level.
### 2. Undefined `Get` Variable in Actor Patterns

Actor message pattern matching generated references to an undefined `Get` variable when handling ask-style messages. The codegen now correctly maps message names to their pattern match arms.
### 3. Async Exit Code 240
The async state machine could crash with exit code 240 when state wasn't preserved across await points. The fix ensures state persistence through every suspension, with ready-tagging for completed futures and invalid state detection instead of silent crashes.
### 4. Belief/Epistemic Module Segfaults
Programs using belief systems or epistemic logic could segfault due to incorrect memory layout in the belief store. The runtime now correctly aligns belief structures and validates access patterns.
### 5. Missing FFI Declarations

Native C function calls could fail when the compiler omitted required FFI `declare` statements. The declaration emitter now scans for all external function references and generates complete FFI declarations.
### Why This Matters
These aren't edge cases. They're bugs that surface in real programs—programs with actors, async code, and belief systems. Fixing them isn't a feature. It's the difference between a language you experiment with and a language you build with.
## Contract Logic for Neural Gates

v0.12.0 introduced contracts (`requires`, `ensures`, `invariant`) for regular functions. v0.13.0 extends them to neural gates, adding a fourth keyword: `fallback`.

```
neural gate should_prune(weight: f64, threshold: f64) -> bool
    requires weight >= 0.0
    ensures result == false || weight < threshold
    invariant threshold > 0.0
    fallback false  // safe default if contract violated
{
    // sigmoid during training, hard comparison during inference
}
```
The fallback keyword is specific to neural gates. When a contract violation occurs during training—say, a NaN propagates into the weight—instead of crashing, the gate returns its fallback value. Training continues. The violation is logged. The model self-corrects on the next gradient step.
This matters because neural gate contracts operate in a fundamentally different context than function contracts. A violated precondition on binary_search is a programmer error. A violated precondition on a neural gate might be a transient training instability. The fallback mechanism acknowledges this distinction.
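The runtime behaviour can be sketched in plain Python. This is an illustrative model only: the `gated` decorator stands in for the compiler-generated contract checks, and all names here are hypothetical, not the Simplex codegen.

```python
import math

def gated(requires, ensures, fallback):
    """Wrap a gate function: on contract violation, return the fallback
    value instead of raising, mirroring the `fallback` keyword."""
    def decorate(fn):
        def wrapper(*args):
            if not requires(*args):          # precondition violated (e.g. NaN weight)
                return fallback
            result = fn(*args)
            if not ensures(result, *args):   # postcondition violated
                return fallback
            return result
        return wrapper
    return decorate

@gated(
    requires=lambda w, t: w >= 0.0 and not math.isnan(w),
    ensures=lambda r, w, t: (r is False) or (w < t),
    fallback=False,
)
def should_prune(weight, threshold):
    return weight < threshold

print(should_prune(0.1, 0.5))           # normal path: True
print(should_prune(float("nan"), 0.5))  # violated precondition: fallback False
```

Note that `weight >= 0.0` is already false for NaN under IEEE 754, so a NaN weight falls through to the fallback without any explicit NaN check in the contract itself.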
## Complex Numbers: 650 Lines
The new simplex-std::complex module provides a complete complex number implementation. This isn't a toy—it's the foundation for quantum gate simulation and signal processing.
```
use simplex_std::complex::{Complex, euler, polar};

// Construct from real and imaginary parts
let z1 = Complex::new(3.0, 4.0);   // 3 + 4i
let z2 = Complex::new(1.0, -2.0);  // 1 - 2i

// Full arithmetic
let sum = z1 + z2;       // 4 + 2i
let product = z1 * z2;   // 11 - 2i
let quotient = z1 / z2;  // -1 + 2i

// Euler's formula: e^(iπ) + 1 = 0
let z = euler(PI);
// z.real ≈ -1.0, z.imag ≈ 0.0

// Polar form
let (r, theta) = z1.to_polar();  // r = 5.0, θ = 0.9273...
let z3 = polar(5.0, 0.9273);     // back to 3 + 4i

// Conjugate and magnitude
let conj = z1.conjugate();  // 3 - 4i
let mag = z1.magnitude();   // 5.0

// Complex trigonometry
let w = z1.sin();  // sin(3 + 4i)
```
Every operation is implemented with correct IEEE 754 handling, including edge cases for division near zero and trigonometric functions with large imaginary components.
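The arithmetic above is easy to spot-check against Python's built-in `complex` type and `cmath` module, which implement the same IEEE 754 semantics:

```python
import cmath
import math

z1 = complex(3.0, 4.0)   # 3 + 4i
z2 = complex(1.0, -2.0)  # 1 - 2i

assert z1 * z2 == complex(11.0, -2.0)  # (3+4i)(1-2i) = 11 - 2i
assert z1 / z2 == complex(-1.0, 2.0)   # (3+4i)(1+2i)/5 = (-5+10i)/5

# Euler's formula: e^(iπ) = -1 (up to rounding in π)
z = cmath.exp(1j * math.pi)
assert abs(z.real + 1.0) < 1e-12 and abs(z.imag) < 1e-12

# Polar round trip: 3+4i has r = 5, θ = atan2(4, 3) ≈ 0.9273
r, theta = cmath.polar(z1)
assert abs(r - 5.0) < 1e-12
assert abs(cmath.rect(r, theta) - z1) < 1e-12

# Conjugate and magnitude
assert z1.conjugate() == complex(3.0, -4.0)
assert abs(z1) == 5.0
```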
## Matrix & Linear Algebra: 1,457 Lines
The simplex-std::matrix module provides dense matrix operations, LU decomposition, and—notably—quantum gate constructors.
### Core Operations
```
use simplex_std::matrix::{Matrix, matmul, transpose, determinant, inverse};

let a = Matrix::from([
    [1.0, 2.0],
    [3.0, 4.0],
]);

let b = Matrix::from([
    [5.0, 6.0],
    [7.0, 8.0],
]);

let c = matmul(&a, &b);     // [[19, 22], [43, 50]]
let at = transpose(&a);     // [[1, 3], [2, 4]]
let det = determinant(&a);  // -2.0
let a_inv = inverse(&a)?;   // [[-2, 1], [1.5, -0.5]]
let tr = a.trace();         // 5.0
```
### LU Decomposition
LU decomposition with partial pivoting enables efficient solving of linear systems, matrix inversion, and determinant computation. A single factorization can be reused for multiple right-hand sides:
```
use simplex_std::matrix::{lu_decompose, lu_solve};

let (l, u, perm) = lu_decompose(&a)?;

// Solve Ax = b for multiple b vectors
let x1 = lu_solve(&l, &u, &perm, &b1)?;
let x2 = lu_solve(&l, &u, &perm, &b2)?;
```
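To make the factor-once, solve-many workflow concrete, here is a minimal pure-Python LU decomposition with partial pivoting. It is a sketch only; `lu_decompose` and `lu_solve` here are standalone helpers mirroring the shape of the Simplex API, not the actual implementation.

```python
def lu_decompose(a):
    """Factor PA = LU with partial pivoting.
    Returns (l, u, perm); perm[i] is the original row moved to position i."""
    n = len(a)
    u = [row[:] for row in a]
    l = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    for k in range(n):
        # pivot: largest |entry| in column k at or below the diagonal
        p = max(range(k, n), key=lambda i: abs(u[i][k]))
        u[k], u[p] = u[p], u[k]
        l[k], l[p] = l[p], l[k]
        perm[k], perm[p] = perm[p], perm[k]
        l[k][k] = 1.0
        for i in range(k + 1, n):
            m = u[i][k] / u[k][k]
            l[i][k] = m
            for j in range(k, n):
                u[i][j] -= m * u[k][j]
    return l, u, perm

def lu_solve(l, u, perm, b):
    """Solve Ax = b with a precomputed factorization."""
    n = len(b)
    # forward substitution: Ly = Pb
    y = [0.0] * n
    for i in range(n):
        y[i] = b[perm[i]] - sum(l[i][j] * y[j] for j in range(i))
    # back substitution: Ux = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(u[i][j] * x[j] for j in range(i + 1, n))) / u[i][i]
    return x

a = [[1.0, 2.0], [3.0, 4.0]]
l, u, perm = lu_decompose(a)           # factor once...
x1 = lu_solve(l, u, perm, [5.0, 6.0])  # ...reuse for many right-hand sides
x2 = lu_solve(l, u, perm, [1.0, 1.0])
```

Factoring is O(n³) while each solve is O(n²), which is exactly why reusing one factorization across right-hand sides pays off.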
### Quantum Gate Constructors
This is where the matrix module meets the future. Simplex ships with constructors for the standard quantum gates, represented as complex matrices:
| Gate | Constructor | Description |
|---|---|---|
| Hadamard | `gate::hadamard()` | Creates equal superposition |
| Pauli-X | `gate::pauli_x()` | Quantum NOT (bit flip) |
| Pauli-Y | `gate::pauli_y()` | Rotation around Y-axis |
| Pauli-Z | `gate::pauli_z()` | Phase flip |
| CNOT | `gate::cnot()` | Controlled-NOT (entanglement) |
| Phase | `gate::phase(theta)` | Arbitrary phase rotation |
| T-gate | `gate::t_gate()` | π/8 phase gate |
```
use simplex_std::matrix::gate;
use simplex_std::matrix::kronecker;

// Build a quantum circuit: H ⊗ I then CNOT
let h = gate::hadamard();
let i = Matrix::identity(2);
let hi = kronecker(&h, &i);  // Hadamard on qubit 0, identity on qubit 1
let cnot = gate::cnot();

// Apply to |00⟩ state
let state = Matrix::col_vec([1.0, 0.0, 0.0, 0.0]);  // |00⟩
let result = matmul(&cnot, &matmul(&hi, &state));
// result is Bell state: (|00⟩ + |11⟩) / √2
```
The Kronecker product enables composing single-qubit gates into multi-qubit operations. Combined with eigenvalue estimation via power iteration, this module provides the numerical foundation for quantum circuit simulation entirely in Simplex.
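The circuit above can be reproduced in a few lines of plain Python (list-of-lists matrices; `matmul` and `kronecker` are local helpers mirroring the Simplex functions, not the actual module) to confirm the Bell-state amplitudes:

```python
def matmul(a, b):
    """Dense matrix product of list-of-lists matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def kronecker(a, b):
    """Kronecker product: lifts single-qubit gates to multi-qubit ones."""
    return [[a[i][j] * b[k][l]
             for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

s = 2 ** -0.5
h = [[s, s], [s, -s]]                    # Hadamard
i2 = [[1.0, 0.0], [0.0, 1.0]]            # identity
cnot = [[1.0, 0.0, 0.0, 0.0],            # controlled-NOT
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
        [0.0, 0.0, 1.0, 0.0]]

hi = kronecker(h, i2)                    # H on qubit 0, I on qubit 1
state = [[1.0], [0.0], [0.0], [0.0]]     # |00⟩ as a column vector
result = matmul(cnot, matmul(hi, state))
# amplitudes [1/√2, 0, 0, 1/√2]: the Bell state (|00⟩ + |11⟩)/√2
```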
## Dual Numbers Completion
v0.12.0 shipped forward-mode AD with single-variable dual numbers. v0.13.0 completes the system with two new types and a high-level differentiation API:
### MultiDual: N-Dimensional Gradients
Single dual numbers compute one partial derivative at a time. MultiDual carries an N-dimensional derivative vector, computing the full gradient in a single forward pass:
```
use simplex_ad::{MultiDual, gradient};

// f(x, y) = x² + x*y + sin(y)
fn f(vars: &[MultiDual]) -> MultiDual {
    let x = vars[0];
    let y = vars[1];
    x * x + x * y + y.sin()
}

// Compute gradient at (3.0, 1.0)
let grad = gradient(f, &[3.0, 1.0]);
// grad[0] = ∂f/∂x = 2x + y = 7.0
// grad[1] = ∂f/∂y = x + cos(y) = 3.5403...
```
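Forward-mode multi-dual arithmetic is compact enough to sketch in Python. This is a minimal `MultiDual` with only `+`, `*`, and `sin` (the real type supports the full operator set); it reproduces the gradient above:

```python
import math

class MultiDual:
    """A value plus an N-dimensional derivative vector: one forward
    pass through f yields the full gradient."""
    def __init__(self, value, grad):
        self.value, self.grad = value, grad

    def __add__(self, other):
        return MultiDual(self.value + other.value,
                         [a + b for a, b in zip(self.grad, other.grad)])

    def __mul__(self, other):  # product rule
        return MultiDual(self.value * other.value,
                         [self.value * b + other.value * a
                          for a, b in zip(self.grad, other.grad)])

    def sin(self):             # chain rule
        return MultiDual(math.sin(self.value),
                         [math.cos(self.value) * a for a in self.grad])

def gradient(f, point):
    n = len(point)
    # seed variable i with the i-th unit derivative vector
    seeds = [MultiDual(v, [1.0 if j == i else 0.0 for j in range(n)])
             for i, v in enumerate(point)]
    return f(seeds).grad

# f(x, y) = x² + x·y + sin(y)
f = lambda v: v[0] * v[0] + v[0] * v[1] + v[1].sin()
grad = gradient(f, [3.0, 1.0])
# grad ≈ [7.0, 3.5403]: ∂f/∂x = 2x + y, ∂f/∂y = x + cos(y)
```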
### Dual2: Second-Order Derivatives
Dual2 carries value, first derivative, and second derivative—enough for Hessian computation without nesting dual numbers:
```
use simplex_ad::{Dual2, hessian};

fn g(x: Dual2) -> Dual2 {
    x * x * x  // f(x) = x³
}

let x = Dual2::variable(2.0);
let result = g(x);
// result.value  = 8.0  (f(2))
// result.deriv1 = 12.0 (f'(2) = 3x²)
// result.deriv2 = 12.0 (f''(2) = 6x)
```
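The second-order product rule is the only new machinery over plain dual numbers. A minimal Python `Dual2` with just multiplication (illustrative, not the Simplex type) reproduces the x³ example:

```python
class Dual2:
    """Carries value, first derivative, and second derivative."""
    def __init__(self, value, d1=0.0, d2=0.0):
        self.value, self.d1, self.d2 = value, d1, d2

    @staticmethod
    def variable(x):
        return Dual2(x, 1.0, 0.0)  # dx/dx = 1, d²x/dx² = 0

    def __mul__(self, other):
        # (fg)'  = f'g + fg'
        # (fg)'' = f''g + 2f'g' + fg''
        return Dual2(self.value * other.value,
                     self.d1 * other.value + self.value * other.d1,
                     self.d2 * other.value + 2.0 * self.d1 * other.d1
                     + self.value * other.d2)

x = Dual2.variable(2.0)
r = x * x * x  # f(x) = x³
# r.value = 8.0, r.d1 = 12.0 (3x² at 2), r.d2 = 12.0 (6x at 2)
```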
### The diff Module
The high-level diff module provides gradient(), jacobian(), and hessian() functions that handle all the dual number machinery internally. You pass a function and a point; you get back derivatives:
```
use simplex_ad::diff;

let grad = diff::gradient(f, &[1.0, 2.0, 3.0]);  // ∇f
let jac = diff::jacobian(g_vec, &[1.0, 2.0]);    // J(g)
let hess = diff::hessian(h, &[1.0, 2.0]);        // H(h)
```
This completes the automatic differentiation story that began in v0.8.0. From single dual numbers to full Hessian computation—all forward-mode, all zero tape overhead, all compiled to the same assembly as hand-written derivatives.
## Training Pipeline Completion
The training subsystem that started with Self-Learning Annealing in v0.9.0 is now fully realised. v0.13.0 adds three final components:
### Meta-Optimizer
The meta-optimizer orchestrates all five learnable schedules—learning rate, temperature, pruning rate, quantization level, and distillation temperature—in a single training run. Each schedule's MLP receives shared training state, and their meta-gradients are computed jointly:
```
use simplex_training::{MetaOptimizer, LearnableSchedule};

let meta = MetaOptimizer::new()
    .with_learnable_lr()
    .with_learnable_temperature()
    .with_learnable_pruning()
    .with_learnable_quantization()
    .with_learnable_distillation()
    .meta_lr(0.001);

let result = meta.train(&model, &dataset).await?;
// All 5 hyperparameters adapted jointly throughout training
```
### Staged Compression Pipeline
Model compression is most effective when done in stages: train the full model, then prune, then quantize, then distill. Each stage uses a learnable schedule. The pipeline automates this progression:
```
use simplex_training::pipeline::{CompressionPipeline, Stage};

let pipeline = CompressionPipeline::new()
    .stage(Stage::Train { epochs: 100 })
    .stage(Stage::Prune { target_sparsity: 0.5 })
    .stage(Stage::Quantize { target_bits: 4 })
    .stage(Stage::Distill { teacher: &full_model });

let compressed = pipeline.run(&model, &data).await?;
```
### Curriculum Learning
Progressive difficulty scheduling presents easy examples first, gradually increasing complexity as the model improves. The curriculum itself is learnable—the meta-optimizer adjusts the difficulty ramp based on training loss:
```
use simplex_training::curriculum::{Curriculum, DifficultyFn};

let curriculum = Curriculum::new()
    .difficulty_fn(|example| example.complexity_score())
    .initial_difficulty(0.2)  // start with easiest 20%
    .learnable(true);         // meta-optimize the ramp
```
## Belief-Gated Receive
Actors in Simplex can hold beliefs—probabilistic assertions about the world. Belief-gated receive extends the actor mailbox so that messages are only processed when the actor's beliefs meet a threshold:
```
actor Analyst {
    beliefs: BeliefStore,

    receive fn handle_report(report: Report)
        when self.beliefs.confidence("market_stable") > 0.7
    {
        // Only processes reports when confidence in market
        // stability exceeds 70%
        self.analyze(report);
    }

    receive fn handle_alert(alert: Alert)
    // No belief gate - always processes alerts
    {
        self.escalate(alert);
    }
}
```
When a message arrives but the belief threshold isn't met, the actor suspends. When beliefs update (from new evidence, other messages, or learning), the runtime wakes suspended actors and re-evaluates their gates. This required new suspend and wake declarations in the codegen layer, along with a comprehensive test suite for edge cases around belief threshold boundaries.
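The suspend-then-wake cycle can be modelled with a toy Python mailbox. All names here are hypothetical illustrations of the scheduling idea, not the Simplex runtime:

```python
from collections import deque

class GatedMailbox:
    """Toy model of belief-gated receive: a message whose gate fails is
    parked; a belief update wakes parked messages for re-evaluation."""
    def __init__(self):
        self.beliefs = {}
        self.parked = deque()
        self.handled = []

    def deliver(self, msg, gate=None):
        if gate is None or gate(self.beliefs):
            self.handled.append(msg)         # gate open: process now
        else:
            self.parked.append((msg, gate))  # gate closed: suspend

    def update_belief(self, key, confidence):
        self.beliefs[key] = confidence
        # wake every suspended message and re-check its gate
        waiting = [self.parked.popleft() for _ in range(len(self.parked))]
        for msg, gate in waiting:
            self.deliver(msg, gate)

box = GatedMailbox()
stable = lambda b: b.get("market_stable", 0.0) > 0.7
box.deliver("report-1", gate=stable)  # parked: confidence starts at 0.0
box.deliver("alert-1")                # no gate: handled immediately
box.update_belief("market_stable", 0.9)
# box.handled == ["alert-1", "report-1"]
```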
## HTTP Client: 912 Lines
Simplex had an HTTP server since v0.9.0. Now it has a client. The simplex-std::http_client module provides the standard HTTP verbs with TLS support:
```
use simplex_std::http_client::{Client, Request};

let client = Client::new();

// Simple GET
let response = client.get("https://api.example.com/data").await?;
let body = response.text()?;

// POST with JSON
let response = client
    .post("https://api.example.com/submit")
    .header("Authorization", "Bearer token123")
    .json(&payload)
    .await?;
let result: ApiResponse = response.json()?;

// PUT, DELETE follow the same pattern
client.put(url).json(&update).await?;
client.delete(url).await?;
```
TLS is handled transparently—HTTPS URLs just work. Headers are first-class, and JSON convenience methods handle serialization and deserialization.
## JSON Parser: 941 Lines
The new simplex-std::json module provides full JSON parsing, stringification, and a builder pattern for construction:
```
use simplex_std::json::{Json, JsonBuilder};

// Parse
let data = Json::parse(raw_string)?;
let name = data.get("user").get("name").as_string()?;
let age = data.get("user").get("age").as_i64()?;

// Build
let obj = JsonBuilder::object()
    .field("name", "Rod")
    .field("active", true)
    .field("scores", JsonBuilder::array()
        .push(95)
        .push(87)
        .push(92)
        .build())
    .build();

// Stringify
let output = obj.to_string();
// {"name":"Rod","active":true,"scores":[95,87,92]}
```
Nested traversal with .get() chaining handles deeply nested structures without intermediate variables. Type-safe extraction methods (as_string, as_i64, as_f64, as_bool) return errors on type mismatches rather than silently coercing.
## SLM Native Bindings: 405 Lines
The final piece of the cognitive architecture puzzle: native bindings to load and run small language models directly from Simplex, without shelling out to Python or calling an external API.
```
use simplex_slm::{Model, Tokenizer};

// Load a GGUF model file
let model = Model::load("model.gguf")?;

// Tokenize input
let tokenizer = Tokenizer::from_model(&model)?;
let tokens = tokenizer.encode("What is the capital of France?");

// Run inference
let output = model.generate(&tokens, max_tokens: 128)?;
let text = tokenizer.decode(&output);

// Embedding similarity
let emb1 = model.embed("cat")?;
let emb2 = model.embed("dog")?;
let similarity = cosine_similarity(&emb1, &emb2);  // ~0.83
```
The C runtime layer (405 lines) handles GGUF file validation, a handle table for safe model lifecycle management, and tokenizer bindings. Cosine similarity for embedding comparison is built in. This is what connects Simplex's specialists and hives to actual language models—no FFI wrappers, no external dependencies, just use simplex_slm.
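Cosine similarity itself is a one-liner over embedding vectors. A Python sketch with toy vectors (illustrative values, not real model embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: 1.0 means
    identical direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

emb_cat = [0.8, 0.5, 0.1]  # toy 3-dimensional "embeddings"
emb_dog = [0.7, 0.6, 0.2]
sim = cosine_similarity(emb_cat, emb_dog)  # ≈ 0.983 for these toy vectors
```

Because cosine similarity normalizes both vectors, it compares direction rather than magnitude, which is why it is the standard metric for embedding comparison.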
## API Documentation: 100% Coverage
The sxdoc tool now generates complete API documentation with two new flags:
- `sxdoc --manifest` — Machine-readable API manifest (JSON) for tooling integration
- `sxdoc --category <name>` — Filter documentation by module category
Documentation coverage has gone from 85% to 100%. Every public function, type, and module in the standard library is documented.
## Test Suite: 162/162
Eight new tests join the suite, covering the new standard library modules, belief-gated receive, and compiler fixes. Every test passes.
**162 / 162** tests passing — up from 154 in v0.12.0.
| Category | v0.12.0 | v0.13.0 | Change |
|---|---|---|---|
| Language | 42 | 42 | — |
| Standard Library | 27 | 31 | +4 (complex, matrix, JSON, HTTP) |
| AI/Cognitive | 18 | 19 | +1 (belief-gated receive) |
| Neural | 16 | 17 | +1 (contract logic) |
| Types | 12 | 12 | — |
| Toolchain | 11 | 11 | — |
| Training | 8 | 9 | +1 (meta-optimizer) |
| Runtime | 8 | 8 | — |
| Integration | 7 | 8 | +1 (SLM bindings) |
| Other (Basics, Learning, Async, Actors, Observability) | 15 | 15 | — |
| Total | 154 | 162 | +8 |
## Toolchain Sync
All seven tools are updated to v0.13.0:
- sxc — Compiler with all five bug fixes and contract codegen for neural gates
- sxpm — Package manager with new standard library module registration
- cursus — Test runner handling 162 tests
- sxdoc — Documentation generator with `--manifest` and `--category` flags
- sxlsp — Language server with complex/matrix type awareness
- sxfmt — Formatter handling the `fallback` keyword and new module syntax
- sxlint — Linter with contract completeness checks for neural gates
## Breaking Changes
One minor rename: `temp_attention.sx` has been renamed to `temperature_attention.sx`. If you import this module, update your `use` statement:
```
// Before
use temp_attention;

// After
use temperature_attention;
```
Everything else is backwards compatible.
## Migration Guide

### From v0.12.0 to v0.13.0

- Rebuild all tools: `./build.sh`
- Rename `temp_attention` imports if applicable
- Update version imports to `"0.13.0"`
## New Modules to Explore
```
use simplex_std::complex;      // Complex numbers
use simplex_std::matrix;       // Linear algebra & quantum gates
use simplex_std::http_client;  // HTTP client with TLS
use simplex_std::json;         // JSON parse/build/stringify
use simplex_ad::{MultiDual, Dual2, diff};  // Full AD system
use simplex_slm;               // SLM native bindings
```
## What's Next: The Quantum Bridge
With the backlog at zero and the standard library complete, the path forward is clear. The complex numbers module, the matrix algebra with quantum gate constructors, and the Kronecker product aren't just nice-to-haves—they're the foundation for TASK-021: the Quantum Bridge. Quantum circuit simulation, natively in Simplex, with the same differentiability guarantees that make neural gates work.
But that's the next release. This one is about something rarer: finishing what you started.
## Try It Today
There's a particular satisfaction in a release where the headline isn't a new feature but a completed promise. Every task planned. Every task shipped. Every test green. Simplex 0.13.0 is the foundation—solid, tested, and ready for what comes next.