
MAXIM

Math & Statistical Cognition

How Maxim Thinks About Numbers, Detects Patterns, and Learns Temporal Rhythms

Humans process numbers through two distinct neural systems. The intraparietal sulcus (IPS) provides fast, approximate magnitude sense — you instantly know 47 is "about 50." The angular gyrus handles precise symbolic computation — it's why you can calculate 47 × 13 = 611. Maxim implements both, and wires them into a pattern-emergence detector that distinguishes signal from noise.

The Dual Number System

Biological Inspiration

Neuroscience has identified two distinct systems for numerical cognition. The dorsal stream (IPS, intraparietal sulcus) processes magnitude, approximate quantity, and spatial numerosity — it's fast, parallel, and "good enough." The ventral stream (angular gyrus) handles exact arithmetic, symbolic manipulation, and mathematical facts — it's slow, sequential, and precise. Damage to one leaves the other intact: patients with IPS lesions can still do algebra, patients with angular gyrus damage can still estimate quantities.

Maxim implements both systems and lets them collaborate. The IPS runs constantly (microseconds per assessment), providing a "gut feeling" about data patterns. When the IPS is uncertain, it escalates to the Angular Gyrus for precise computation. This mirrors the biological dual-pathway.

IPS — Fast Path

Dorsal stream. "Where?" and "How much?"

  • Speed: Microseconds
  • Runs: Every analysis cycle (~5 seconds)
  • Handles: 90% of cases (clearly random or clearly patterned)
  • Methods: assess_randomness, detect_trend, detect_anomaly
  • Output: Confidence score (0.0–1.0)

Angular Gyrus — Slow Path

Ventral stream. "What exactly?" and "Have I seen this before?"

  • Speed: Milliseconds
  • Runs: Only when IPS is uncertain
  • Handles: Ambiguous cases (confidence 0.3–0.65)
  • Methods: analyze (R²), mat_multiply, eigenvalues, solve_system
  • Output: Exact values + persistent memory

IPS: Fast Approximate Assessment

The IPS provides three assessment capabilities, each answering a different question about incoming data:

assess_randomness — "Is there a pattern here?"

The core capability. Uses two complementary statistical tests to determine if a data sequence contains structure or is just noise:

Wald-Wolfowitz Runs Test

Convert values to binary (above/below median), count consecutive "runs." Random sequences have a predictable number of runs.

  • Too few runs → values cluster on one side → trending
  • Too many runs → values alternate rapidly → cyclic
  • Expected runs → no structure detected → random
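The runs test described above can be sketched in a few lines of plain Python. This is an illustrative sketch, not Maxim's actual implementation; the median split, z-score normalization, and tie handling follow the standard Wald-Wolfowitz formulation:

```python
import math

def runs_test_z(values: list[float]) -> float:
    """Wald-Wolfowitz runs test: z-score of observed run count vs. expectation."""
    s = sorted(values)
    n = len(s)
    median = (s[(n - 1) // 2] + s[n // 2]) / 2
    signs = [v > median for v in values if v != median]  # drop ties at the median
    n1 = sum(signs)           # values above the median
    n2 = len(signs) - n1      # values below the median
    if n1 == 0 or n2 == 0:
        return 0.0            # degenerate: everything on one side
    # A "run" is a maximal streak of same-side values
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    expected = 2 * n1 * n2 / (n1 + n2) + 1
    variance = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - expected) / math.sqrt(variance)
```

A steadily increasing sequence produces only two runs (far below expectation, z < -1.96); a strictly alternating sequence produces the maximum number of runs (z > 1.96).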

Lag-1 Autocorrelation

Measures how much each value depends on the previous value. It is independent of the runs test and catches different patterns.

  • High positive → values persist (clustering)
  • High negative → values alternate (oscillation)
  • Near zero → no serial dependence (random)
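Lag-1 autocorrelation is equally compact. Again a sketch (not the project's code), using the standard normalized covariance at lag one:

```python
def lag1_autocorr(values: list[float]) -> float:
    """Lag-1 autocorrelation: serial dependence of each value on its predecessor."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values)
    if var == 0:
        return 0.0  # constant sequence: no dependence to measure
    cov = sum((values[i] - mean) * (values[i + 1] - mean) for i in range(n - 1))
    return cov / var
```

A ramp yields a strongly positive value (persistence), an alternating sequence a strongly negative one (oscillation).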

Both signals combine into a pattern_confidence score (0.0 = definitely random, 1.0 = definitely patterned), which drives pattern classification:

  • RANDOM: confidence < 0.4 (example: random.random() × 100 values)
  • TRENDING: runs_z < -1.96, too few runs (example: tool success rate declining over days)
  • CYCLIC: runs_z > 1.96, too many runs (example: activity cycling between high/low states)
  • CLUSTERING: |autocorrelation| > 0.3 (example: bursts of failures followed by recovery)

detect_trend — "Which way is it going?"

Determines if a sequence is increasing, decreasing, or stable. Uses linear regression for magnitude and direction. Returns trend direction, slope, and confidence.
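A least-squares slope over the sample index is enough to sketch this. Illustrative only; the function name mirrors the method above but the signature is an assumption:

```python
def detect_trend(values: list[float]) -> tuple[str, float]:
    """Least-squares slope over index: returns (direction, slope)."""
    n = len(values)
    xs = range(n)
    mx = (n - 1) / 2                     # mean of 0..n-1
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, values))
    slope = sxy / sxx
    if slope > 1e-9:
        return "increasing", slope
    if slope < -1e-9:
        return "decreasing", slope
    return "stable", slope
```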

detect_anomaly — "Is this value unusual?"

Flags individual values that deviate significantly from established baselines, using rolling statistics (mean, standard deviation) so that sudden regime changes are caught quickly.
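A rolling z-score check captures the idea. This is a sketch under assumed defaults (window of 20, 3-sigma threshold), not the actual detector:

```python
import math

def anomaly_score(history: list[float], value: float,
                  window: int = 20, threshold: float = 3.0) -> bool:
    """Flag value if it deviates > threshold std-devs from the rolling baseline."""
    recent = history[-window:]
    mean = sum(recent) / len(recent)
    std = math.sqrt(sum((v - mean) ** 2 for v in recent) / len(recent))
    if std == 0:
        return value != mean  # flat baseline: any change is anomalous
    return abs(value - mean) / std > threshold
```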

Angular Gyrus: Precise Mathematical Memory

Biological Inspiration

The angular gyrus sits at the junction of the temporal, parietal, and occipital lobes. It's involved in exact arithmetic, mathematical fact retrieval, and symbolic processing. Brain imaging shows it activates when you recall that 7 × 8 = 56 (a fact, not a computation). It maintains a "math vocabulary" — learned mathematical knowledge that speeds future calculations.

Maxim's Angular Gyrus is both a computation engine and a mathematical memory. It stores learned patterns as persistent MathMemory records with an associative graph for recall, and it implements the MemoryLayer protocol alongside Hippocampus and ATL.

Mathematical Memory Categories

Each MathMemory record is classified into a category that determines how it's used:

FACT

Known constants and relationships. "Pi is approximately 3.14159." Recalled instantly without recomputation.

RELATIONSHIP

Proportional or causal links. "Temperature IS_PROPORTIONAL_TO energy." Captures how quantities relate to each other.

METHOD

Multi-step algorithms. "To solve a system: decompose, substitute, back-solve." Higher-order than formulas.

CONSTANT

Named numerical values. "Euler's number e = 2.71828." Seeded at startup, never expires.

PATTERN

Learned statistical patterns. "tool_navigate success declining (R²=0.73, slope=-0.02/day)." Created by StatisticianAgent via IPS→AG escalation.

Associative Graph

MathMemory records are connected via spreading activation. Recalling "linear regression" also activates "R²", "slope", and related patterns.
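Spreading activation over such a graph can be sketched as a decayed breadth-first pass. The graph shape, decay factor, and depth here are assumptions for illustration:

```python
def spread_activation(graph: dict[str, list[str]], seed: str,
                      depth: int = 2, decay: float = 0.5) -> dict[str, float]:
    """Activate the seed concept, then propagate decayed activation to neighbors."""
    activation = {seed: 1.0}
    frontier = [seed]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for nbr in graph.get(node, []):
                a = activation[node] * decay
                if a > activation.get(nbr, 0.0):  # keep the strongest activation
                    activation[nbr] = a
                    next_frontier.append(nbr)
        frontier = next_frontier
    return activation
```

Recalling "linear regression" then surfaces "R²" and "slope" at half strength, and their neighbors at a quarter.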

Matrix Operations

Four matrix operations are exposed via the Angular Gyrus, all delegating to the linear algebra module with lazy imports:

  • mat_multiply(A, B): matrix multiplication A × B → ExactResult with result matrix
  • mat_eigenvalues(M): eigenvalue decomposition (symmetric) → ExactResult with eigenvalue list
  • solve_system(A, b): solve Ax = b (linear system) → ExactResult with solution vector
  • mat_determinant(M): matrix determinant → ExactResult with scalar value

All results include both verbal (natural language) and code (Python snippet) representations, matching how the angular gyrus bridges language and computation in the brain.

Operation Aliases

The MathTool accepts natural-language operation names and normalizes them to canonical forms before routing to the Angular Gyrus. This lets agents (and users) say sqrt 25 instead of compute(power, [25, 0.5]):

  • sqrt, square_root → compute(power, [value, 0.5])
  • cube_root, cbrt → compute(power, [value, 1/3])
  • squared → compute(power, [value, 2])
  • cubed → compute(power, [value, 3])
  • factorial → direct computation (0–170)
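The normalization amounts to a lookup table. A minimal sketch, assuming a table-driven design (the table name and function signature are illustrative, not the MathTool's actual API):

```python
# Hypothetical alias table mirroring the list above
ALIASES: dict[str, tuple[str, float]] = {
    "sqrt": ("power", 0.5),
    "square_root": ("power", 0.5),
    "cube_root": ("power", 1 / 3),
    "cbrt": ("power", 1 / 3),
    "squared": ("power", 2),
    "cubed": ("power", 3),
}

def normalize(op: str, value: float) -> tuple[str, list[float]]:
    """Map a natural-language alias to a canonical (operation, args) pair."""
    if op in ALIASES:
        canonical, exponent = ALIASES[op]
        return canonical, [value, exponent]
    return op, [value]  # already canonical: pass through
```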

Pre-LLM Instant Math

For simple arithmetic and unary math, Maxim answers before the LLM is ever called. Regex-based evaluators in the LLMWorker catch common patterns and return instant results with zero inference latency:

Binary Arithmetic

Matches number op number patterns.

  • "what is 1+1" → 1 + 1 = 2
  • "5 * 3" → 5 * 3 = 15
  • "10 / 0" → division by zero

Unary + Compound

Matches unary ops, optionally followed by a binary trailing op.

  • "square root of 25" → √25 = 5
  • "square root of 25 plus 3" → √25 + 3 = 8
  • "8 cubed" → 8³ = 512

This pre-LLM path also serves as a fallback when the LLM times out — if the timed-out question was a simple math query, the user still gets a correct answer instead of a generic "I'm not sure" response.
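The regex-based fast path can be sketched as follows. The patterns here cover only the binary and square-root cases from the examples above; the real evaluators in the LLMWorker presumably handle more forms:

```python
import math
import re

BINARY = re.compile(r"(-?\d+(?:\.\d+)?)\s*([+\-*/])\s*(-?\d+(?:\.\d+)?)")
UNARY = re.compile(r"square root of\s+(-?\d+(?:\.\d+)?)(?:\s+plus\s+(-?\d+(?:\.\d+)?))?")

def instant_math(text: str):
    """Answer simple arithmetic before any LLM call; None means 'defer to LLM'."""
    t = text.lower().strip()
    if m := UNARY.search(t):
        result = math.sqrt(float(m.group(1)))
        if m.group(2):                     # optional trailing "plus N"
            result += float(m.group(2))
        return result
    if m := BINARY.search(t):
        a, op, b = float(m.group(1)), m.group(2), float(m.group(3))
        if op == "/" and b == 0:
            return "division by zero"
        return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]
    return None                            # not simple math: escalate to LLM
```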

Linear Algebra Foundation

Underpinning both the Angular Gyrus matrix operations and the SCN oscillator is a pure-Python linear algebra module. No numpy dependency — Maxim runs on embedded hardware where heavyweight libraries aren't available.

Module: maxim.math.linalg

# Types
Vec = list[float]        # 1D vector
Mat = list[list[float]]  # 2D matrix (row-major)

# 25 functions across 5 categories:
Vector ops (7):   vec_add, vec_sub, vec_scale, vec_dot, vec_norm, vec_normalize, vec_elementwise_mul
Construction (3): mat_zeros, mat_identity, mat_from_diag
Matrix ops (7):   mat_add, mat_sub, mat_scale, mat_transpose, mat_mul, mat_vec_mul, mat_shape
Solvers (5):      determinant, solve, inverse, eigenvalues_symmetric, _qr_decompose
Utilities (3):    mat_frobenius_norm, mat_is_symmetric, mat_clamp

Numerical Stability

Small matrices on embedded hardware need careful numerics. The module employs four stability techniques:

  • Partial pivoting in Gaussian elimination — prevents catastrophic amplification from small pivots
  • Householder QR decomposition — numerically stable (preferred over Gram-Schmidt)
  • Wilkinson shifts for eigenvalue convergence — avoids slow convergence on nearly-equal eigenvalues
  • Tolerance constants — 1e-12 for singularity detection, 1e-15 for zero-vector guards
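Partial pivoting is the most consequential of these techniques. As a self-contained sketch of a pure-Python solver in the same spirit (not the module's actual `solve`):

```python
def solve(A: list[list[float]], b: list[float], tol: float = 1e-12) -> list[float]:
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]  # augmented matrix [A | b]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot magnitude
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < tol:
            raise ValueError("singular matrix")
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):       # eliminate below the pivot
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):        # back-substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x
```

Without the pivot swap, a tiny leading element would be divided into every row below it, amplifying rounding error catastrophically.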

The flagship use case is the SCN oscillator: the coupling matrix × phase vector evolution that drives temporal rhythm learning. Without the linalg module, eigenvalue decomposition of the coupling matrix (used for dominant rhythm analysis) would require numpy.

The StatisticianAgent: Pattern-Emergence Detection

Biological Inspiration

The posterior parietal cortex (where IPS and angular gyrus reside) feeds quantitative analysis into the prefrontal cortex for executive decision-making. The StatisticianAgent is this bridge — it turns raw event streams into statistical context that influences goal selection. The IPS handles fast approximate assessment; the Angular Gyrus is reserved for precise multi-metric analyses that require algebraic precision.

The StatisticianAgent watches the AgentBus for tool outcomes and goal completions, building per-metric time series. Each tracked metric gets its own PatternDetector — a finite state machine that progresses from "I don't know yet" to "there's definitely a pattern here" (or "this is just noise").

PatternDetector State Machine

FSM Transitions

OBSERVING ──(confidence > 0.4)──────────> PATTERN_FORMING
OBSERVING ──(50+ obs, confidence < 0.3)─> CONFIRMED_RANDOM (reduce monitoring)
PATTERN_FORMING ──(sustained > 0.65)────> CONFIRMED_PATTERN (publish insight, store in AG memory)
PATTERN_FORMING ──(uncertain 8 steps)───> ESCALATED_TO_AG (precise R², autocorr, recall AG memory)
ESCALATED_TO_AG ──(R² > 0.5)────────────> CONFIRMED_PATTERN
ESCALATED_TO_AG ──(R² < 0.2)────────────> CONFIRMED_RANDOM

The Five States

OBSERVING

Collecting data, too early to tell. Needs at least 15 observations before the IPS runs its first assessment.

FORMING

IPS pattern_confidence crossed 0.4 — something may be emerging. Stays here while confidence fluctuates.

ESCALATED

IPS stayed uncertain for 8 consecutive steps (confidence 0.3–0.65). Angular Gyrus invoked for precise R², autocorrelation, and memory recall.

CONFIRMED

Non-random structure verified. A StatisticalInsight is published to the bus, and the pattern is stored as an AG MathMemory record for future recall.

RANDOM

Sequence confirmed as noise. Monitoring frequency reduced to every 10th observation. Can re-enter OBSERVING if new data shows structure.

IPS → Angular Gyrus Escalation

The escalation pathway is where the dual number system truly integrates. When IPS can't decide, the Angular Gyrus takes over with a three-step process:

  1. Check AG memory first — "Have I seen this pattern before?" Recall from the AG associative graph with category=PATTERN. If a prior MathMemory record matches (same metric, same trend direction), fast-track confirmation and reinforce the memory.
  2. Compute precisely — If no memory match, run full linear regression via analyze("linear") to get exact R² and autocorrelation confidence intervals. This uses the linalg module internally.
  3. Store or reject — R² > 0.5 → store as a new PATTERN MathMemory record (becomes retrievable for future fast-tracking). R² < 0.2 → declare random. In between → stay escalated, retry next cycle.
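The three steps above can be condensed into a sketch. The memory layout, function names, and return convention are assumptions; the R² thresholds are the ones stated in the text:

```python
def escalate(metric: str, series: list[float], ag_memory: dict, analyze) -> tuple[str, float]:
    """IPS→AG escalation: recall first, compute R² otherwise, then store or reject."""
    prior = ag_memory.get(metric)              # step 1: check AG memory
    if prior is not None:
        prior["observation_count"] += 1        # reinforce the recalled memory
        return "confirmed", prior["r2"]        # fast-track, no recomputation
    r2 = analyze(series)                       # step 2: precise linear regression
    if r2 > 0.5:                               # step 3: store or reject
        ag_memory[metric] = {"r2": r2, "observation_count": 1}
        return "confirmed", r2
    if r2 < 0.2:
        return "random", r2
    return "escalated", r2                     # still ambiguous: retry next cycle
```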

AG Memory Lifecycle

First encounter: IPS uncertain → AG computes R² → stores PATTERN memory

Second encounter: IPS uncertain → AG recalls memory → "I've seen this, R²=0.73" → fast confirm

Over time: observation_count grows, confidence increases, pattern becomes established

Pattern breaks: AG detects mismatch → reduce memory confidence × 0.7 → may compress/remove

Bus Integration

The StatisticianAgent is fully decoupled from other agents via the bus:

Subscribes to (input)

  • ToolResult — tracks success/failure rate per tool
  • GoalCompleted — tracks goal outcome patterns

Publishes (output)

  • StatisticalInsight — on CONFIRMED_PATTERN (actionable alert)
  • StatisticalSummary — periodic context + ranked analysis suggestions for MemoryAgent

MemoryAgent subscribes to StatisticalSummary and populates StructuredContext.statistical_context and statistical_suggestions, which the LLM sees during goal proposal. This means the agent's reasoning is informed by detected patterns and data-type-aware analysis recommendations.

Analysis Suggestion Engine

Rather than generic "investigate patterns" guidance, the StatisticianAgent generates specific, ranked analysis suggestions based on what it actually sees. This is entirely deterministic — no LLM involved. The engine has three stages:

1. Data Type Inference (MetricDataType)

Each metric is classified by naming convention first (fast path), then by value distribution analysis (fallback):

BINARY

*:success, *:fail, or values ⊆ {0, 1}

RATE

*rate*, *ratio*, or all values in [0.0, 1.0]

LATENCY

*latency*, *duration*, or non-negative values > 1.0

CONTINUOUS

Everything else (unbounded floats, signed values)
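The name-first, distribution-fallback classification above can be sketched directly. The exact name patterns are illustrative (the text gives globs like *rate*); the check order matters, since binary values are also within [0.0, 1.0]:

```python
def infer_data_type(name: str, values: list[float]) -> str:
    """Classify a metric by naming convention first, then by value distribution."""
    n = name.lower()
    # BINARY first: {0, 1} values are also valid rates, so order matters
    if n.endswith((":success", ":fail")) or set(values) <= {0.0, 1.0}:
        return "binary"
    if "rate" in n or "ratio" in n or all(0.0 <= v <= 1.0 for v in values):
        return "rate"
    if "latency" in n or "duration" in n or (all(v >= 0 for v in values) and max(values) > 1.0):
        return "latency"
    return "continuous"  # unbounded floats, signed values
```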

2. Decision Matrix (FSM State × Data Type → Suggestion)

The suggestion depends on both what the FSM has determined and what kind of data it's looking at:

  • OBSERVING: no suggestion (insufficient signal)
  • FORMING: assess_randomness (binary/rate), anomaly (latency), trend (continuous); priority 0.5
  • ESCALATED: analyze linear for all data types (high priority, active uncertainty); priority 0.8
  • CONFIRMED: recall_memory (if AG memory exists) or analyze (for characterization); priority 0.6
  • RANDOM: no suggestion (low value)

Temporal context from the SCN oscillator adjusts priority: +0.15 when temporal_anomaly > 0.5.

3. Ranked Output (AnalysisSuggestion)

Each suggestion carries full context for prompt construction:

AnalysisSuggestion(
    metric="tool:navigate:success",    # Which metric
    tool_call="math",                  # Which tool to invoke
    operation="assess_randomness",     # Specific operation
    rationale="binary metric showing emerging pattern (mean=0.60)",  # Why this analysis
    priority=0.5,                      # 0.0-1.0 ranking
    data_type="binary",                # Inferred type
    fsm_state="PATTERN_FORMING",       # Current FSM state
)

Top 5 suggestions are published with each StatisticalSummary. MemoryAgent serializes them into StructuredContext.statistical_suggestions, and both ExecAgent (up to 3) and LLMWorker prompts consume them as ranked, actionable guidance — replacing the previous generic "use math tool to investigate patterns" advice.

SCN Coupled Oscillator Network

Biological Inspiration

The suprachiasmatic nucleus (SCN) in the hypothalamus is the brain's master clock — a network of ~20,000 coupled oscillators synchronized via gap junctions and neurotransmitters. "Monday mornings" isn't a lookup in two separate tables — it's an emergent rhythm from learned coupling between circadian and weekly oscillators. The Kuramoto model, which Maxim implements, is the standard mathematical framework for studying such coupled oscillator networks.

The existing SCN in Maxim provides temporal bin indexing — 47 bins across 4 timescales for fast set-based lookup ("what memories happened at 9am?"). The oscillator network adds a fundamentally different capability: learned temporal coupling that makes rhythms interact and predict each other.

Four Oscillators

  • 🌅 Circadian: period 1 day, ω = 1.0
  • 📅 Weekly: period 7 days, ω = 1/7
  • 🌙 Monthly: period 30 days, ω = 1/30
  • ☀️ Annual: period 365.25 days, ω = 1/365.25

Kuramoto Dynamics

Each oscillator has a phase θ that evolves according to the Kuramoto model:

Phase Evolution

dθᵢ/dt = ωᵢ + (K/N) ∑ⱼ W[i][j] · sin(θⱼ − θᵢ)

ωᵢ = natural frequency of oscillator i
K  = global coupling strength (default 0.1)
W  = 4×4 coupling matrix (learned via Hebbian rule)
N  = number of oscillators (4)

When oscillators have similar phases, sin(θj − θi) ≈ 0 and they don't influence each other. When out of sync, the coupling term pulls them toward alignment (or repulsion, if weights are negative).

Hebbian Coupling Learning

The coupling matrix W isn't static. It learns from observations via a Hebbian rule — "neurons that fire together wire together":

Hebbian Learning

ΔW[i][j] = η · cos(θᵢ − θⱼ)

When oscillators co-activate (similar phases): cos ≈ 1 → strengthen coupling
When anti-phase: cos ≈ -1 → weaken coupling
η = 0.01 (learning rate), decay = 0.999 per step
Weights bounded: [-0.5, 2.0] (mild inhibition to strong excitation)

This is how "Monday mornings" emerge. If events consistently happen when both the circadian oscillator (morning phase) and weekly oscillator (Monday phase) are active, the coupling between them strengthens. Eventually, the system predicts "Monday mornings" as a single emergent rhythm, not an intersection of two independent lookups.

Soft Phase Reset

Each observation nudges the oscillators toward the observed temporal signature, but how much depends on experience:

Blend Factor

blend = max(0.1, 1 / (1 + 0.01 × observation_count))

Early (obs ≈ 0): blend ≈ 1.0 → trust observation fully
After 100 obs:   blend ≈ 0.5 → blend observation with model
After 1000 obs:  blend ≈ 0.1 → trust model, resist disruption

This mirrors biological entrainment — a newborn's circadian rhythm is easily disrupted by light changes, but an adult's clock resists jet lag because it has strong learned coupling.
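The blend formula is a one-liner, shown here only to make the floor behavior concrete:

```python
def blend_factor(observation_count: int) -> float:
    """Experience-weighted trust in a new observation vs. the learned model."""
    # The 0.1 floor means even a mature model never ignores observations entirely
    return max(0.1, 1 / (1 + 0.01 * observation_count))
```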

Key Capabilities

  • phase_coherence(): Kuramoto order parameter r ∈ [0,1]. r=1 means all oscillators synchronized, r<0.5 means spread. Indicates how "settled" the temporal model is.
  • predict_next_occurrence(): forward-simulates oscillator dynamics without mutation. "When will the circadian oscillator next reach morning phase?" Returns hours until target.
  • temporal_anomaly_score(): circular distance between predicted and observed phases. 0 = perfectly expected, 1 = maximally surprising for this moment in time.
  • coupling_eigenvalues(): eigenvalues of the coupling matrix; the dominant eigenvalue reveals the strongest learned rhythm. Uses linalg.eigenvalues_symmetric.

SCN Integration: Bins + Oscillator

The oscillator lives alongside the existing bin system, not replacing it. Every call to scn.register(memory_id, signature) feeds both:

SCN Architecture

SCN
  BIN INDICES (existing, unchanged)   OSCILLATOR NETWORK (new, optional)
  ─────────────────────────────────   ──────────────────────────────────
  24 hour bins                        phases[4]
  7 day bins                          coupling[4×4]
  4 week bins                         frequencies[4]
  12 month bins
  query_hour()                        predict_next()
  query_day()                         phase_coherence()
  get_bins()                          temporal_anomaly_score()
  is_sole_rep()                       coupling_eigenvalues()

Enabled via: scn.enable_oscillator(config)
Disabled by default — zero impact on existing consumers
Persistence: v2.0 format, backward-compatible with v1.0

How It All Connects

The math framework isn't isolated — it's wired into the core cognitive loop. Here's the full data flow:

Math ↔ Agent Pipeline Integration

Bus events (input):
  ToolResult, GoalCompleted ──bus──> StatisticianAgent (per-metric series)

StatisticianAgent internals:
  IPS fast path:        assess_randomness(), detect_trend()
  AG slow path:         analyze("linear"), recall/store memory
  Suggestion engine:    infer_data_type(), FSM × type matrix, rank by priority
  SCN temporal context: anomaly weighting, phase correlation

Output:
  StatisticalInsight       → on CONFIRMED
  StatisticalSummary       → periodic → MemoryAgent (+ AnalysisSuggestions, data_type_breakdown)
  knowledge_context        → merged ATL + AG → LLM
  PromotionCandidate       → SemanticPromoter → ATL
  statistical_suggestions  → ExecAgent + LLMWorker prompts

Integration Points

  • MemoryAgent: populates statistical_context, active_pattern_count, and statistical_suggestions in StructuredContext via bus subscription
  • ExecAgent / LLM: sees statistical patterns and ranked analysis suggestions in prompt; data-type-aware guidance like "math assess_randomness on tool:navigate:success [binary]" instead of generic advice
  • SemanticPromoter: StatisticianAgent is a PromotionSource; confirmed patterns become ATL semantic concepts with cross-layer edges to AG records
  • SCN Bins: oscillator enriches temporal indexing with coupling-based prediction, without changing any bin query behavior
  • MathTool: agent-accessible math operations routed through IPS (compare, trend, anomaly) and Angular Gyrus (compute, analyze, mat_multiply, eigenvalues, solve_system, determinant). Natural-language aliases (sqrt, square_root, cube_root, squared, cubed, factorial) auto-normalize to canonical operations. LLM prompts include PEMDAS decomposition guidance for multi-step expressions, using store_value/recall_value for intermediates

A Concrete Example

The robot has been using navigate for an hour, and success rate is declining:

  1. IPS detects pattern_confidence rising: 0.12 → 0.35 → 0.52 → PatternDetector enters FORMING
  2. Confidence oscillates around 0.45 for 8 steps — IPS uncertain. Escalated to AG.
  3. AG checks memory: no prior "navigate decline" pattern. Runs analyze("linear"): R² = 0.73, slope = -0.02/step.
  4. AG stores new MathMemory(PATTERN, "stat:tool:navigate:success") with R² and slope.
  5. CONFIRMED_PATTERN. StatisticalInsight published to bus.
  6. Suggestion engine infers data_type=binary (from *:success naming), generates AnalysisSuggestion(operation="recall_memory", priority=0.6) since AG memory exists.
  7. MemoryAgent receives StatisticalSummary with suggestions, adds to LLM context: "navigate tool declining (R²=0.73)" + ranked analysis recommendation.
  8. ExecAgent sees "SUGGESTED ANALYSES: math recall_memory on tool:navigate:success [binary]" — proposes a goal that avoids navigate, choosing an alternative approach.
  9. Next session: IPS detects the same pattern. AG recalls the MathMemory record → fast-track confirmation (no recomputation).