Two releases, one story. Simplex 0.11.0 "Module System" gave the language real multi-file projects with cross-module imports. Simplex 0.12.0 "Complete Language" built on that foundation to deliver neural gate codegen, forward-mode automatic differentiation, contract verification, and a runtime that has grown to 19,589 lines with over 300 functions. The result: 154 out of 154 tests passing—100% of the language specification implemented.
v0.11.0: The Module System
Every language eventually faces the same question: how do you compose code across files? Until v0.11.0, Simplex programs lived in single files. That constraint was always temporary, but removing it required solving a surprisingly deep problem: how do you import functions from compiled LLVM IR without duplicating declarations?
Cross-Module Imports
The answer is a use statement that understands compiled output. When you write use mathlib;, the compiler doesn't look for source code—it looks for the compiled .ll file, extracts function signatures, and generates the correct LLVM declare statements automatically.
// mathlib.sx - compiled separately
fn add(a: i64, b: i64) -> i64 {
a + b
}
fn multiply(a: i64, b: i64) -> i64 {
a * b
}
// main.sx - imports from compiled module
use mathlib;
fn main() {
let result = add(3, multiply(4, 5));
println(result); // 23
}
The Key Insight
Rather than building a separate header/declaration system (C's approach) or requiring source-level analysis (Rust's approach for non-compiled crates), Simplex treats compiled LLVM IR as the source of truth. The parser extracts function signatures directly from .ll files, generating declare statements that match exactly. One source of truth, zero drift.
How Module Resolution Works
The module resolver follows a deterministic search path:
- Look for `<module>.ll` in the source file's directory
- Look for `<module>.ll` in the project's `lib/` directory
- Look for `<module>.ll` in the Simplex standard library path
The compiler is now source-directory-aware: it knows where your file lives and resolves modules relative to that location. This means multi-directory projects work naturally without configuration files or build scripts.
Automatic Declaration Generation
This is where it gets interesting. When the compiler encounters use mathlib;, it:
- Locates `mathlib.ll`
- Parses the LLVM IR to extract `define` statements
- Converts each definition to its corresponding `declare`
- Injects these declarations into the current module's IR output
; Generated automatically from `use mathlib;`
declare i64 @add(i64 %0, i64 %1)
declare i64 @multiply(i64 %0, i64 %1)
; Your code compiles as normal, calling these functions
define i64 @main() {
%1 = call i64 @multiply(i64 4, i64 5)
%2 = call i64 @add(i64 3, i64 %1)
; ...
}
No manual declarations. No header files. No extern blocks. Just use and call.
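The define-to-declare conversion is mechanical enough to sketch. Here is a hypothetical Python version of the extraction step, operating on IR text shaped like the example above (it is not the compiler's actual parser, and real `.ll` files have attributes and metadata this toy regex ignores):

```python
import re

# A `define` header runs from the keyword up to the opening brace;
# everything between is the signature we need for a matching `declare`.
DEFINE_RE = re.compile(r"define\s+(?P<sig>[^{]+?)\s*\{")

def declares_from_ir(ir: str) -> list[str]:
    """Turn each function definition in compiled IR into a declaration."""
    return [f"declare {m.group('sig')}" for m in DEFINE_RE.finditer(ir)]

ir = """
define i64 @add(i64 %0, i64 %1) {
  %3 = add i64 %0, %1
  ret i64 %3
}
"""
print(declares_from_ir(ir))  # ['declare i64 @add(i64 %0, i64 %1)']
```

Because the declarations are derived from the compiled output itself, they cannot drift out of sync with the definitions the way hand-written headers can.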
Backwards Compatibility
v0.11.0 is fully backwards compatible with v0.10.x. Single-file programs compile identically. The module system is purely additive—if you never write use, nothing changes.
v0.12.0: The Complete Language
If v0.11.0 was about composing code, v0.12.0 is about completing it. This release touches every layer of the stack: compiler, runtime, standard library, and toolchain. The codename "Complete Language" isn't marketing—it's the test suite reporting 154/154.
154 / 154
Tests passing — 100% of the language specification implemented
Neural Gate Codegen
Neural gates are Simplex's answer to a question most languages don't even ask: what if control flow could learn?
A neural gate is a conditional branch whose threshold is a sigmoid activation. During training, the sigmoid produces soft probabilities—gradients flow through smoothly. During inference, it snaps to a hard threshold—zero overhead, deterministic execution.
neural gate should_prune(weight: f64, threshold: f64) -> bool {
// During training: sigmoid(weight - threshold)
// - Differentiable, gradients flow through
// - Soft decision enables gradient-based learning
//
// During inference: weight > threshold
// - Hard comparison, zero overhead
// - Deterministic, predictable branching
}
The compiler handles the mode switching automatically. When training mode is active, it emits sigmoid-based soft decisions. When inference mode is active, it emits simple comparisons. Same source code, two different codegen paths, selected at compile time.
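The two codegen paths can be modeled directly. This Python sketch shows the behavioral difference between the training and inference variants of a gate (it is an illustration of the semantics described above, not Simplex's runtime):

```python
import math

def neural_gate(weight: float, threshold: float, training: bool) -> float:
    """Soft sigmoid decision in training, hard comparison in inference."""
    if training:
        # Differentiable: output lies in (0, 1), gradients flow through
        return 1.0 / (1.0 + math.exp(-(weight - threshold)))
    # Deterministic: a plain comparison, no sigmoid cost at all
    return 1.0 if weight > threshold else 0.0

# Near the threshold, the soft decision hovers around 0.5;
# the hard decision snaps to exactly 0 or 1.
print(neural_gate(0.6, 0.5, training=True))   # ≈ 0.525
print(neural_gate(0.6, 0.5, training=False))  # 1.0
```

The key property is that both paths agree in the limit: as the sigmoid sharpens, the soft decision converges to the hard comparison, so a gate trained with soft decisions behaves consistently at inference time.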
f64 Type-Aware Arithmetic
Floating-point arithmetic in a systems language is harder than it looks. You need the compiler to know when a value is f64 so it emits fadd instead of add, fmul instead of mul. Simplex 0.12.0 infers float type from four sources:
| Source | Example | How It Works |
|---|---|---|
| Variable type annotation | `let x: f64 = 3.14;` | Explicit type flows to all operations on `x` |
| Struct field type | `point.x + point.y` | Field's declared type determines arithmetic instruction |
| Function return type | `let v = compute();` | Return type of `compute()` propagates to `v` |
| Cast expression | `x as f64` | Explicit cast marks result as floating-point |
This also includes proper float negation using IEEE 754 sign bit flip (fneg), and math function remapping so that cos, sin, exp, ln, sqrt, tanh, pow, and abs all emit the correct LLVM intrinsics.
Forward-Mode Automatic Differentiation
Building on the dual number foundation from v0.8.0, the runtime now provides a complete forward-mode AD system. Dual numbers carry a value and its derivative simultaneously—every arithmetic operation propagates both:
use simplex_ad::{Dual, dual_var, dual_const};
// Create a variable: value = 2.0, derivative = 1.0
let x = dual_var(2.0);
// Chain rule applied automatically through every operation
let y = x.sin() * x.exp();
// y.value = sin(2) * exp(2) = 0.9093 * 7.389 = 6.719
// y.deriv = cos(2)*exp(2) + sin(2)*exp(2) = exp(2)*(cos(2) + sin(2))
// = 7.389 * (-0.4161 + 0.9093) = 3.644
The runtime supports dual-number variants of all major operations: add, mul, div, sin, cos, exp, ln, sqrt, tanh, sigmoid, and powi. This is what makes neural gates differentiable during training—the sigmoid flows through dual numbers, producing exact analytical gradients with zero tape overhead.
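The mechanics are easy to reproduce. This is a minimal Python dual-number class, a sketch of the same forward-mode idea (not Simplex's `simplex_ad` runtime), covering just the operations used in the example above:

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    """A value paired with its derivative; every op propagates both."""
    value: float
    deriv: float

    def __mul__(self, other: "Dual") -> "Dual":
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    def sin(self) -> "Dual":
        # Chain rule: (sin u)' = cos(u) * u'
        return Dual(math.sin(self.value), math.cos(self.value) * self.deriv)

    def exp(self) -> "Dual":
        # Chain rule: (e^u)' = e^u * u'
        e = math.exp(self.value)
        return Dual(e, e * self.deriv)

x = Dual(2.0, 1.0)        # a variable: seed derivative is 1.0
y = x.sin() * x.exp()
print(round(y.value, 3))  # 6.719
print(round(y.deriv, 3))  # 3.644
```

Note what is absent: no tape, no graph, no backward pass. The derivative falls out of the forward evaluation itself, which is why forward-mode AD has essentially zero bookkeeping overhead.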
Contract Verification
Simplex 0.12.0 introduces a contract system inspired by Eiffel's design-by-contract philosophy. Contracts express invariants that the compiler and runtime enforce:
fn binary_search(arr: &[i64], target: i64) -> Option<usize>
requires arr.is_sorted()
ensures result.is_none() || arr[result.unwrap()] == target
{
let mut lo = 0;
let mut hi = arr.len();
while lo < hi
invariant lo <= hi && hi <= arr.len()
{
let mid = lo + (hi - lo) / 2;
if arr[mid] == target {
return Some(mid);
} else if arr[mid] < target {
lo = mid + 1;
} else {
hi = mid;
}
}
None
}
Three contract types are supported:
- `requires` — Preconditions checked before function entry
- `ensures` — Postconditions checked after function return
- `invariant` — Loop invariants checked at every iteration
Contracts are checked at runtime in debug builds and can be stripped in release builds. They serve as executable documentation—a specification that runs.
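To make the runtime-checking model concrete, here is a hedged Python sketch of precondition and postcondition enforcement via a decorator. Simplex compiles these checks directly into the function rather than wrapping it; the decorator is only an analogy:

```python
def contract(requires=None, ensures=None):
    """Wrap a function with runtime pre/postcondition checks."""
    def wrap(fn):
        def checked(*args):
            if requires is not None:
                assert requires(*args), "precondition violated"
            result = fn(*args)
            if ensures is not None:
                assert ensures(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(requires=lambda a, b: b != 0.0,
          ensures=lambda result, a, b: abs(result * b - a) < 1e-9)
def divide(a: float, b: float) -> float:
    return a / b

print(divide(6.0, 3.0))  # 2.0
# divide(1.0, 0.0) would fail the precondition check
```

The analogy also explains the release-build story: just as Python's `assert` statements vanish under `-O`, stripped contracts leave only the bare function body behind.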
Reference and Dereference Operators
v0.12.0 adds proper reference (&x) and dereference (*x) operators with correct LLVM IR generation. This completes the pointer story for Simplex, enabling pass-by-reference semantics and efficient data sharing without copies.
The Runtime: 19,589 Lines
The runtime has grown from a thin layer of memory management into a comprehensive systems platform. At 19,589 lines and over 300 functions, it provides everything a Simplex program needs without reaching for C libraries.
Actor System
A full actor runtime with spawn/send/ask messaging, mailbox queues, and a global actor registry. Actors are lightweight—thousands can run concurrently without thread-per-actor overhead.
actor Counter {
count: i64,
fn increment(&mut self) {
self.count = self.count + 1;
}
fn get(&self) -> i64 {
self.count
}
}
let counter = spawn(Counter { count: 0 });
send(counter, "increment");
let value = ask(counter, "get"); // 1
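The spawn/send/ask trio maps onto a familiar shape: a mailbox queue drained by a loop. This Python sketch is a toy illustration (one thread per actor, unlike the real runtime, which multiplexes thousands of actors without thread-per-actor overhead):

```python
import queue
import threading

class CounterActor:
    """Toy actor: a state-owning loop fed by a mailbox queue."""
    def __init__(self):
        self.count = 0
        self.mailbox: queue.Queue = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply = self.mailbox.get()  # messages processed one at a time
            if msg == "increment":
                self.count += 1
            elif msg == "get":
                reply.put(self.count)

    def send(self, msg):
        self.mailbox.put((msg, None))        # fire-and-forget

    def ask(self, msg):
        reply: queue.Queue = queue.Queue()
        self.mailbox.put((msg, reply))       # send, then block on the reply
        return reply.get()

counter = CounterActor()
counter.send("increment")
print(counter.ask("get"))  # 1
```

Because the mailbox is FIFO and only the actor's own loop touches `count`, no locks are needed around the state: serialization through the mailbox is the concurrency model.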
Supervision Trees
Actors crash. The question is what happens next. Simplex borrows Erlang's "let it crash" philosophy with supervision trees that automatically restart failed actors:
- One-for-one: Only the crashed actor restarts
- One-for-all: All siblings restart when one crashes
Combined with a circuit breaker pattern, the system degrades gracefully under failure rather than cascading.
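The circuit breaker half of that story is simple to sketch. This hypothetical Python version shows the core state machine (closed until a failure threshold, then open and rejecting calls); the real runtime's thresholds and reset policy may differ:

```python
class CircuitBreaker:
    """Reject calls after repeated failures instead of cascading them."""
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.max_failures

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: call rejected")
        try:
            result = fn()
        except Exception:
            self.failures += 1  # count consecutive failures
            raise
        self.failures = 0       # any success closes the circuit again
        return result
```

An open circuit converts a slow, repeated failure into a fast, explicit one, which is exactly the graceful degradation the supervision tree needs downstream failures to look like.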
Work-Stealing Scheduler
The async runtime uses a work-stealing scheduler. Each worker thread maintains a local deque of tasks. When a thread runs out of work, it steals from the back of another thread's deque. This provides near-optimal load balancing without central coordination.
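The stealing rule is the whole trick, and it fits in a few lines. This Python sketch models just the deque discipline described above (a worker takes from the front of its own deque and steals from the back of a peer's); the actual scheduler is concurrent and lock-free in ways this toy is not:

```python
from collections import deque

class Worker:
    """One scheduler worker with a local double-ended task queue."""
    def __init__(self):
        self.tasks: deque = deque()

    def next_task(self, peers: list["Worker"]):
        if self.tasks:
            return self.tasks.popleft()  # local work: front of own deque
        for peer in peers:
            if peer.tasks:
                return peer.tasks.pop()  # idle: steal from the back of a peer
        return None                      # nothing to do anywhere

a, b = Worker(), Worker()
a.tasks.extend(["t1", "t2", "t3"])
print(b.next_task([a]))  # 't3' — stolen from the back of a's deque
print(a.next_task([b]))  # 't1' — a's own front
```

Taking from opposite ends is what keeps contention low: the owner and a thief only collide when a deque is nearly empty, so no central coordinator is needed.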
Neural and ML Runtime
The neural subsystem supports training mode toggling, sigmoid activation, a gradient tape for reverse-mode AD, a gate registry for neural gates, and network pruning. This is the runtime machinery that makes neural gates work—the compiler emits calls into this subsystem, and the runtime handles mode switching, gradient tracking, and gate evaluation.
Complete Runtime Subsystems
| Subsystem | Capabilities |
|---|---|
| Actors | Spawn, send, ask, mailbox, actor registry |
| Supervision | One-for-one, one-for-all, circuit breaker |
| Scheduler | Work-stealing with per-thread deques |
| JSON | Parse, stringify, object/array manipulation |
| HashMap | SxString-safe hashing and comparison |
| Neural/ML | Training mode, sigmoid, gradient tape, gate registry, pruning |
| Contracts | Requires, ensures, invariant verification |
| Speculative Execution | Lazy contexts, branch tracking, weighted results |
| Weighted References | wref with GC tracking |
| Dual Numbers | Full AD: add, mul, div, sin, cos, exp, ln, sqrt, tanh, sigmoid, powi |
| Observability | Counters, gauges, histograms, span-based tracing |
| Logging | Structured logging with levels, console/file/JSON output |
| Utilities | Timer, UUID v4, TOML parsing, f64 arithmetic |
SQLite3 Dependency Removed
One notable subtraction: the SQLite3 dependency is gone. The runtime no longer requires -lsqlite3 at link time. Simplex programs compile and link with no external C library dependencies beyond libc. Fewer dependencies, fewer surprises.
Standard Library Additions
Three new standard library modules ship with v0.12.0:
lib/strings.sx — StringBuilder
String concatenation in a loop is O(n²). StringBuilder collects fragments and joins them in a single pass for O(n) string building:
use strings;
let sb = StringBuilder::new();
sb.append("Hello");
sb.append(", ");
sb.append("world");
let result = sb.build(); // "Hello, world" - single allocation
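The same idea translates directly to Python, which makes the complexity argument easy to see. This is a sketch of the pattern, not the `lib/strings.sx` implementation:

```python
class StringBuilder:
    """Collect fragments cheaply; pay for the final string exactly once."""
    def __init__(self):
        self._parts: list[str] = []

    def append(self, s: str) -> None:
        self._parts.append(s)        # amortized O(1), no copying yet

    def build(self) -> str:
        return "".join(self._parts)  # one O(n) pass, one final allocation

sb = StringBuilder()
sb.append("Hello")
sb.append(", ")
sb.append("world")
print(sb.build())  # Hello, world
```

The naive alternative, `result = result + fragment` in a loop, copies the entire accumulated string on every iteration, which is where the O(n²) comes from.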
lib/safety.sx — Safe Memory
Utilities for safe memory management—bounds-checked access, null-safe operations, and RAII-style resource management patterns.
lib/llm.sx — GGUF Format
A GGUF format specification library for working with quantized language models. This is the bridge between Simplex's cognitive architecture and the broader ecosystem of open-weight models.
Toolchain Updates
Every tool in the Simplex toolchain has been updated across both releases:
| Tool | Purpose | Notable Changes |
|---|---|---|
| `sxc` | Compiler | Module resolution, neural gate codegen, f64 type inference |
| `sxpm` | Package manager | Multi-file project support |
| `cursus` | Test runner | 154 tests across 14 categories |
| `sxdoc` | Documentation generator | API docs with manifest and search index |
| `sxlsp` | Language server | Cross-module go-to-definition |
| `sxfmt` | Formatter | 3,173 lines, handles all new syntax |
| `sxlint` | Linter | Contract and neural gate lint rules |
Test Suite: 154 Tests Across 14 Categories
The test suite is the real proof. Every feature described above has corresponding tests. Every test passes.
| Category | Tests | Coverage |
|---|---|---|
| Language | 42 | Core syntax, semantics, control flow |
| Standard Library | 27 | Strings, safety, collections |
| AI/Cognitive | 18 | Specialists, hives, belief systems |
| Neural | 16 | Neural gates, differentiability, IR |
| Types | 12 | Type system, generics, pattern matching |
| Toolchain | 11 | Compiler, linker, formatter |
| Runtime | 8 | Memory, scheduling, GC |
| Training | 8 | Annealing, optimizers, schedules |
| Integration | 7 | End-to-end workflows |
| Basics | 6 | Arithmetic, variables, functions |
| Learning | 4 | Self-learning, meta-gradients |
| Async | 3 | Async/await, futures |
| Actors | 1 | Actor spawn/send/ask |
| Observability | 1 | Metrics, tracing |
| Total | 154 | |
Compiler Fixes
Beyond new features, v0.12.0 fixes several subtle codegen issues that surfaced as the test suite expanded:
- Variable shadowing: Inner scopes now correctly shadow outer variables without corrupting the outer binding
- Nested closure codegen: Closures inside closures now capture the correct environment
- Deduplicated declarations: The new `emit_stdlib_decl` system prevents duplicate LLVM `declare` statements when multiple modules import the same runtime functions
These aren't glamorous fixes, but they're the kind of correctness work that separates a prototype from a language you can trust.
Migration Guide
From v0.10.x to v0.11.0
No breaking changes. Add use statements to start using multi-file projects:
// Split your monolith into modules
use mathlib; // imports from mathlib.ll
use utilities; // imports from utilities.ll
From v0.11.0 to v0.12.0
One build change: remove -lsqlite3 from your linker flags if present. Everything else is additive.
# Before (v0.11.0)
sxc main.sx -o main -lsqlite3
# After (v0.12.0)
sxc main.sx -o main
New Features to Try
// Neural gates
neural gate should_activate(signal: f64, threshold: f64) -> bool { }
// Contracts
fn divide(a: f64, b: f64) -> f64
requires b != 0.0
ensures result * b == a
{
a / b
}
// Forward-mode AD
use simplex_ad::{dual_var};
let x = dual_var(3.0);
let y = x * x; // y.value = 9.0, y.deriv = 6.0
What's Next
With 154/154 tests passing, the language specification is complete. The road ahead shifts from "can it do X?" to "how well does it do X?":
- Associated types — `type Output` in traits for more expressive generic programming
- `&mut self` syntax — Ergonomic mutable method receivers
- Self-hosted compiler — The struct field lookup workaround is the last known gap before full self-hosting
- Performance — Optimization passes, inlining heuristics, and codegen improvements
The foundation is laid. Now we build on it.
Try It Today
A programming language is complete when every test passes. A programming language is real when people build things with it. Simplex 0.12.0 clears the first bar. The next chapter is yours to write.