Today I'm announcing two releases that fundamentally change what a programming language can be. Simplex v0.6.0 introduces Neural Gates—control flow that learns from data. Simplex v0.7.0 adds Real-Time Learning—models that adapt during runtime without retraining. Together, they enable a new paradigm: code that gets smarter the longer it runs.
The Problem with Static Code
Traditional programs are frozen at deployment. You write logic, compile it, ship it. If the world changes—user preferences shift, data patterns evolve, markets move—your code doesn't adapt. You retrain models offline, redeploy, hope you got it right.
This creates a fundamental mismatch. Real systems operate in dynamic environments. Static code cannot keep up.
What if the code itself could learn? What if decision logic optimized itself from experience? What if models improved from every interaction without being taken offline?
That's what Simplex v0.6.0 and v0.7.0 deliver.
Neural Gates: Learnable Control Flow (v0.6.0)
Every if statement makes a binary choice. The threshold is hardcoded. The decision boundary is fixed at compile time.
Neural Gates change this. They're differentiable decision points that can be trained from data:
// Traditional: hardcoded threshold
if confidence > 0.7 {
    approve_transaction()
} else {
    flag_for_review()
}

// Neural Gate: learned threshold
neural_gate route_transaction(confidence: f64, risk: Tensor) -> Decision {
    branch approve { Decision::Approve }
    branch review { Decision::Review }
    branch reject { Decision::Reject }
}
During training, the gate learns optimal decision boundaries from labeled examples. During inference, it compiles to zero-overhead conditionals—as fast as hand-written if statements.
How It Works
Neural Gates use the Gumbel-Softmax trick to enable gradient flow through discrete choices:
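In train mode, each branch carries a learned logit $\ell_i$, and a relaxed one-hot choice over $k$ branches is sampled as

$$y_i = \frac{\exp\!\big((\ell_i + g_i)/\tau\big)}{\sum_{j=1}^{k} \exp\!\big((\ell_j + g_j)/\tau\big)}, \qquad g_i = -\log(-\log u_i), \quad u_i \sim \mathrm{Uniform}(0, 1)$$

As the temperature $\tau \to 0$ this approaches a hard, discrete choice; at higher temperatures, gradients flow to every branch in proportion to its probability. (This is the standard Gumbel-Softmax construction; the temperature schedule sxc uses is an implementation detail.) The three compiler modes map onto this relaxation: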
| Mode | Behavior | Use |
|---|---|---|
| sxc --mode=train | Differentiable execution. Gradients flow through all branches, weighted by probability. | Training from labeled data |
| sxc --mode=infer | Discrete execution. Gates compile to standard conditionals. Zero overhead. | Production deployment |
| sxc --mode=profile | Track gate decisions with statistics. See which branches fire and how often. | Debugging and analysis |
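In infer mode the relaxation disappears entirely. As an illustration only (not actual sxc output, and assuming the risk tensor has been reduced to a scalar feature), a trained route_transaction gate behaves like ordinary branching on constants learned during training:

// Illustrative lowering of a trained gate in --mode=infer.
// The thresholds are hypothetical values baked in at compile time.
fn route_transaction_inferred(confidence: f64, risk_score: f64) -> Decision {
    if confidence > 0.81 && risk_score < 0.31 {
        Decision::Approve
    } else if confidence > 0.42 {
        Decision::Review
    } else {
        Decision::Reject
    }
}

No tensor operations remain, which is what makes the "as fast as hand-written if statements" claim plausible.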
Contracts for Safety
Learned behavior introduces uncertainty. What if the gate makes a dangerous choice? Simplex enforces safety through contracts:
neural_gate route_payment(amount: f64, risk: Tensor) -> PaymentRoute
    requires amount > 0.0
    requires risk.is_valid()
    ensures result.is_auditable()
    fallback PaymentRoute::ManualReview
{
    branch instant { PaymentRoute::Instant { fee: amount * 0.001 } }
    branch standard { PaymentRoute::Standard { fee: 0.50 } }
    branch review { PaymentRoute::ManualReview }
}
- requires: Preconditions that must hold before the gate fires
- ensures: Postconditions guaranteed on the output
- fallback: Safe default when confidence is below threshold or contracts fail
The compiler verifies contracts statically where possible. Runtime checks catch dynamic violations. The gate cannot violate safety invariants, no matter what it learns.
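To make that concrete, here is one way to picture the enforcement around route_payment (a conceptual desugaring, not the compiler's actual lowering; the _checked and _gate names are invented for illustration):

// Conceptual desugaring of contract enforcement around the gate
fn route_payment_checked(amount: f64, risk: Tensor) -> PaymentRoute {
    // requires: a violated precondition diverts to the fallback
    if !(amount > 0.0) || !risk.is_valid() {
        return PaymentRoute::ManualReview
    }
    let result = route_payment_gate(amount, risk)   // the learned gate body
    // ensures: a violated postcondition also diverts to the fallback
    if !result.is_auditable() {
        return PaymentRoute::ManualReview
    }
    result
}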
Hardware Targeting
Neural Gates automatically target the right hardware:
@gpu
neural_gate batch_classifier(inputs: List<Embedding>) -> List<Label> {
    // Tensor operations run on GPU
    inputs.map(|e| classify(e))
}

@cpu
fn process_result(label: Label) -> Action {
    // Branching logic runs on CPU
    match label {
        Label::Safe => Action::Allow,
        Label::Suspicious => Action::Review,
        Label::Malicious => Action::Block,
    }
}

@npu
neural_gate cognitive_inference(prompt: String) -> Response {
    // SLM inference runs on NPU
    slm.generate(prompt)
}
Real-Time Learning: Adaptive Models (v0.7.0)
Neural Gates learn decision logic. But what about the models inside specialists? Traditionally, you'd take them offline, retrain on new data, redeploy.
The simplex-learning library changes this. Models adapt during runtime:
use simplex_learning::{OnlineLearner, StreamingAdam}

specialist RecommendationEngine {
    var learner: OnlineLearner,

    fn init() {
        self.learner = OnlineLearner::new(
            optimizer: StreamingAdam { lr: 0.001 },
            window_size: 100,
        )
    }

    receive Recommend(user_id: String, context: Context) -> List<Item> {
        self.learner.forward(context)
    }

    receive UserClicked(user_id: String, item: Item) {
        // Learn from positive feedback in real time
        let loss = self.learner.compute_loss(item, positive: true)
        self.learner.step(loss)
    }

    receive UserDismissed(user_id: String, item: Item) {
        // Learn from negative feedback
        let loss = self.learner.compute_loss(item, positive: false)
        self.learner.step(loss)
    }
}
Every user interaction teaches the model. No retraining. No redeployment. The system improves continuously.
Streaming Optimizers
Traditional optimizers assume batch training with full datasets. simplex-learning provides streaming variants designed for real-time adaptation:
| Optimizer | Description | Best For |
|---|---|---|
| StreamingSGD | SGD with momentum, constant memory | Fast updates, predictable behavior |
| StreamingAdam | Adam with bounded memory for moment estimates | General-purpose, adaptive learning rate |
| StreamingAdamW | AdamW with decoupled weight decay | Preventing catastrophic forgetting |
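For orientation, the update these streaming variants perform is the familiar Adam step (a standard formulation, not anything specific to simplex-learning):

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2, \qquad \theta_t = \theta_{t-1} - \eta\,\frac{\hat m_t}{\sqrt{\hat v_t} + \epsilon}$$

where $\hat m_t = m_t/(1-\beta_1^t)$ and $\hat v_t = v_t/(1-\beta_2^t)$. Each step touches only the current gradient $g_t$ and two running moment estimates per parameter, which is why cost and memory stay constant no matter how long the stream runs.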
Safety Constraints
Online learning is risky. What if bad data causes the model to diverge? What if adversarial inputs corrupt weights?
SafeLearner wraps any learner with runtime safety bounds:
use simplex_learning::{SafeLearner, SafeFallback, OnlineLearner, StreamingAdam}

let safe_learner = SafeLearner::new(
    learner: OnlineLearner::new(optimizer: StreamingAdam { lr: 0.001 }),
    // Maximum parameter change per step
    max_delta: 0.1,
    // Roll back if validation accuracy drops below this
    validation_threshold: 0.95,
    // Keep checkpoints for recovery
    checkpoint_interval: 100,
)

// Define fallback behavior
let fallback = SafeFallback {
    on_divergence: FallbackAction::Rollback,      // Return to last good state
    on_repeated_failure: FallbackAction::Alert,   // Notify operators
    max_failures: 5,                              // Halt after 5 consecutive failures
}
The system cannot degrade below safety thresholds. If learning goes wrong, it automatically recovers.
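Putting those parameters together, one guarded update conceptually looks like this (a sketch of the control flow implied above; every method name not shown earlier is invented for illustration):

// Conceptual control flow of one SafeLearner update (illustrative)
fn guarded_step(self, loss: Loss) {
    let delta = self.inner.proposed_update(loss)
    // max_delta: clamp so no single step moves parameters too far
    self.inner.apply(delta.clamp_norm(self.max_delta))
    // validation_threshold: roll back if accuracy drops below the bound
    if self.validate() < self.validation_threshold {
        self.restore_checkpoint()                 // on_divergence: Rollback
        self.failures += 1
        if self.failures >= self.max_failures {
            self.halt_and_alert()                 // on_repeated_failure: Alert
        }
    } else {
        self.failures = 0
        // checkpoint_interval: keep periodic recovery points
        if self.steps % self.checkpoint_interval == 0 {
            self.save_checkpoint()
        }
    }
}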
Federated Learning Across Hives
Multiple specialists can learn collaboratively while preserving privacy:
use simplex_learning::{FederatedLearner, FederatedConfig, Aggregation}

let federated = FederatedLearner::new(
    config: FederatedConfig {
        // Only share gradients, never raw data
        aggregation: Aggregation::SecureAverage,
        // Differential privacy budget
        epsilon: 1.0,
        // Minimum participants required before aggregating
        min_participants: 3,
        // Sync frequency
        sync_interval: Duration::seconds(60),
    }
)

// Local update
let local_gradients = learner.compute_gradients(batch)

// Share with the federation (privacy-preserving)
federated.contribute(local_gradients)

// Receive aggregated knowledge from other hives
let global_update = federated.receive_aggregate()
learner.apply_update(global_update)
Each hive learns from its local data. The federation combines knowledge without exposing private information.
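For intuition, the textbook construction behind a clipped, differentially private average (one way SecureAverage could be realized; the exact mechanism is an implementation detail) is: each hive clips its gradient to norm $C$, the clipped gradients are averaged, and calibrated Gaussian noise is added:

$$\tilde g = \frac{1}{n} \sum_{i=1}^{n} \mathrm{clip}(g_i, C) + \mathcal{N}\!\left(0,\; \sigma^2 C^2 I\right)$$

with $n \geq$ min_participants and $\sigma$ chosen (together with a failure probability $\delta$) to meet the $\epsilon$ budget. No raw data, and no individual unclipped gradient, ever leaves a hive.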
Why This Matters: A Fundamental Shift
We're used to thinking of code and models as separate:
- Code: Static logic written by humans. Deterministic. Frozen at compile time.
- Models: Learned patterns from data. Probabilistic. Trained offline.
Simplex v0.6.0 and v0.7.0 dissolve this boundary. With Neural Gates, code logic itself is learned. With Real-Time Learning, models update continuously. The result:
Programs that improve the longer they run.
This isn't incremental. It's a different kind of software—one that adapts, learns, and evolves in production.
Use Cases: Systems That Learn
Let me walk through concrete examples of what this enables.
1. Adaptive Customer Service
A support system that learns from every interaction:
specialist SupportRouter {
    var learner: SafeLearner,

    receive Route(ticket: Ticket) -> Specialist {
        // Gate learns optimal routing from resolution data
        neural_gate select_specialist(ticket.embedding, ticket.urgency) -> Specialist
            fallback Specialist::GeneralSupport
        {
            branch billing { Specialist::Billing }
            branch technical { Specialist::Technical }
            branch account { Specialist::Account }
            branch escalate { Specialist::Senior }
        }
    }

    receive Resolution(ticket_id: String, resolved: bool, time_to_resolve: Duration) {
        // Learn from outcomes
        if resolved && time_to_resolve < Duration::minutes(10) {
            self.learner.reward(ticket_id, positive: true)
        } else {
            self.learner.reward(ticket_id, positive: false)
        }
    }
}
The system starts with reasonable defaults. Over time, it learns which ticket patterns route best to which specialists. Resolution times improve automatically.
2. Dynamic Pricing Engine
Prices that adapt to market conditions in real-time:
specialist PricingEngine {
    var learner: OnlineLearner,
    var market_feed: MarketDataStream,

    receive GetPrice(product: Product, context: PricingContext) -> Price {
        // Combine product features with real-time market signals
        let market_state = self.market_feed.current()
        neural_gate compute_price(product.embedding, context, market_state) -> Price
            requires context.is_valid()
            ensures result.price > product.cost   // Never sell below cost
            fallback product.list_price
        {
            branch premium { adjust_price(product, multiplier: 1.2) }
            branch standard { product.list_price }
            branch discount { adjust_price(product, multiplier: 0.9) }
            branch clearance { adjust_price(product, multiplier: 0.7) }
        }
    }

    receive SaleCompleted(product_id: String, price: Price, sold: bool) {
        // Learn from sales outcomes
        self.learner.feedback(product_id, sold)
    }

    receive MarketUpdate(data: MarketData) {
        // Continuously adapt to market conditions
        self.market_feed.update(data)
    }
}
The pricing engine learns optimal price points from actual sales. It adapts to market conditions—economic indicators, competitor pricing, seasonal patterns—without manual rule updates.
3. Environmental Response System
A system that adapts to environmental changes:
specialist EnvironmentalController {
    var learner: SafeLearner,
    var weather_feed: WeatherDataStream,
    var energy_feed: EnergyPriceStream,

    receive Optimize(building: Building) -> HVACSettings {
        let weather = self.weather_feed.forecast_24h()
        let energy_prices = self.energy_feed.forecast_24h()
        let occupancy = building.predicted_occupancy()
        neural_gate optimize_hvac(weather, energy_prices, occupancy) -> HVACSettings
            requires building.is_operational()
            ensures result.temperature.in_range(18.0, 26.0)   // Comfort bounds
            ensures result.energy_cost < building.budget      // Budget constraint
            fallback HVACSettings::default()
        {
            branch pre_cool { pre_condition_building(weather, energy_prices) }
            branch standard { maintain_comfort(occupancy) }
            branch eco_mode { minimize_energy(weather) }
            branch peak_shift { shift_load_off_peak(energy_prices) }
        }
    }

    receive SensorUpdate(building_id: String, sensors: SensorData) {
        // Learn from actual comfort and energy outcomes
        let comfort_score = compute_comfort(sensors)
        let energy_efficiency = compute_efficiency(sensors)
        self.learner.feedback(building_id, comfort_score, energy_efficiency)
    }

    receive WeatherActual(data: WeatherData) {
        // Improve weather prediction accuracy
        self.weather_feed.calibrate(data)
    }
}
The building learns optimal HVAC strategies from weather patterns, energy prices, and occupancy. It pre-cools before price spikes. It shifts loads off-peak. It improves efficiency over seasons.
4. Fraud Detection That Evolves
Security that adapts to new attack patterns:
specialist FraudDetector {
    var learner: SafeLearner,
    var pattern_memory: EpisodicMemory,

    receive Analyze(transaction: Transaction) -> RiskLevel {
        // Recall similar past transactions
        let similar = self.pattern_memory.recall(transaction.embedding, limit: 10)
        neural_gate assess_risk(transaction, similar) -> RiskLevel
            requires transaction.is_valid()
            fallback RiskLevel::ManualReview   // When uncertain, escalate
        {
            branch safe { RiskLevel::Low }
            branch suspicious { RiskLevel::Medium }
            branch likely_fraud { RiskLevel::High }
            branch confirmed_pattern { RiskLevel::Block }
        }
    }

    receive FraudConfirmed(transaction_id: String) {
        // Learn from confirmed fraud
        let transaction = self.get_transaction(transaction_id)
        self.learner.feedback(transaction, fraudulent: true)
        self.pattern_memory.remember(transaction, MemoryType::Episodic)
    }

    receive FalsePositive(transaction_id: String) {
        // Learn from false alarms
        let transaction = self.get_transaction(transaction_id)
        self.learner.feedback(transaction, fraudulent: false)
    }
}
The detector learns from every confirmed fraud and false positive. New attack patterns are incorporated automatically. The system stays ahead of evolving threats.
5. Content Personalization
A content system that learns individual preferences:
specialist ContentPersonalizer {
    var learner: OnlineLearner,
    var user_memory: Map<String, Anima>,

    receive GetFeed(user_id: String) -> List<Content> {
        let anima = self.user_memory.get(user_id)
        let preferences = anima.beliefs.filter(|b| b.confidence > 0.5)
        // Generate candidates
        let candidates = self.generate_candidates(preferences)
        // Score each candidate with the learned gate, keeping (content, score) pairs
        candidates.map(|content| {
            let score = neural_gate should_show(content, anima) -> f64
            {
                branch highly_relevant { 1.0 }
                branch somewhat_relevant { 0.6 }
                branch fill_content { 0.3 }
                branch skip { 0.0 }
            }
            (content, score)
        })
        .filter(|(_, score)| score > 0.2)
        .sort_by(|(_, score)| -score)
        .take(20)
        .map(|(content, _)| content)
    }

    receive Engagement(user_id: String, content_id: String, action: EngagementAction) {
        let anima = self.user_memory.get(user_id)
        let content = self.get_content(content_id)
        match action {
            EngagementAction::Click => self.learner.reward(content_id, 0.3),
            EngagementAction::Read => self.learner.reward(content_id, 0.6),
            EngagementAction::Share => self.learner.reward(content_id, 1.0),
            EngagementAction::Hide => self.learner.reward(content_id, -0.5),
        }
        // Update the user's anima with new beliefs
        anima.update_belief(f"prefers_{content.category}", action.positive())
    }
}
Every click, read, share, and hide teaches the system. Personalization improves with each interaction. No batch retraining required.
The Bigger Picture
These releases represent something I've been working toward since the beginning of Simplex: software that participates in its own improvement.
Traditional software is a one-way street. Developers write, compile, deploy. The system executes exactly what was written, forever, until someone updates the code.
Simplex systems are different. They:
- Learn decision boundaries from data (Neural Gates)
- Adapt models from interactions (Real-Time Learning)
- Accumulate knowledge across the hive (Federated Learning)
- Remember experiences that inform future behavior (Anima)
- Maintain safety invariants regardless of what they learn (Contracts)
The result: systems that get smarter over time, automatically, safely.
Getting Started
Both releases are available now. Get the latest toolchain from GitHub, then add the learning library to your project:
# Add simplex-learning to your project
sxpm add simplex-learning
# Install dependencies
sxpm install
What's Next
With v0.6.0 and v0.7.0 complete, focus shifts to building applications that demonstrate these capabilities. Expect:
- Reference implementations of adaptive systems
- Expanded standard library modules
- Performance optimization for production workloads
- Tooling for debugging learned behavior
The foundation is in place. Now we build on it.
Simplex v0.7.0 is available now.