
MAXIM

An Artificial Brain from Biological Blueprints

What is Maxim?

Maxim is an agentic robotics framework that gives robots something resembling a mind. Not just reflexes. Not just if-then rules. An actual cognitive architecture inspired by how mammalian brains work, adapted for embodied AI agents that can perceive, remember, decide, and learn.

The Problem with Robot Brains

Most robots today are either brilliant or stupid, rarely anything in between. Industrial robots execute perfect movements but can't adapt. LLM-powered robots can reason but lack genuine understanding of their bodies. They don't feel where they are in space. They don't remember what worked before.

Maxim bridges this gap by stealing shamelessly from neuroscience. If evolution spent hundreds of millions of years perfecting certain cognitive architectures, why reinvent them?

The Biological Blueprint

Maxim implements computational models of several brain structures, each handling a specific cognitive function, from the memory-indexing Entorhinal Cortex and Hippocampus to the reward-predicting Nucleus Accumbens.

Operating Modes

Maxim's behavior is controlled by three independent dimensions: processing states (awake vs sleep), operational modes (passive, active, singularity), and strategies (observe, explore, research, assist, reflect, learn). These combine freely to produce configurations like "active exploration" or "passive reflection."
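
These three dimensions can be pictured as independent enumerations that compose freely into a single configuration. Here is a minimal sketch in Python; the class and member names are assumptions for illustration, not Maxim's actual types:

```python
from enum import Enum
from dataclasses import dataclass

class ProcessingState(Enum):
    AWAKE = "awake"
    SLEEP = "sleep"

class OperationalMode(Enum):
    PASSIVE = "passive"
    ACTIVE = "active"
    SINGULARITY = "singularity"

class Strategy(Enum):
    OBSERVE = "observe"
    EXPLORE = "explore"
    RESEARCH = "research"
    ASSIST = "assist"
    REFLECT = "reflect"
    LEARN = "learn"

@dataclass(frozen=True)
class AgentConfig:
    """One point in the 2 x 3 x 6 = 36-configuration space."""
    state: ProcessingState
    mode: OperationalMode
    strategy: Strategy

    def label(self) -> str:
        return f"{self.mode.value} {self.strategy.value}"

# "Active exploration" is one of the combinations named in the text.
active_exploration = AgentConfig(ProcessingState.AWAKE,
                                 OperationalMode.ACTIVE,
                                 Strategy.EXPLORE)
print(active_exploration.label())  # → active explore
```

Because the dimensions are orthogonal, adding a new strategy multiplies the configuration space without touching the other two axes.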

Live mode is the full embodied runtime with a self-evolving intent system—the agent shapes its own personality through experience. Exploration mode adds curiosity modeling and budget tracking. Sleep isn't a mode but a processing state that enables memory consolidation while monitoring for wake keywords.
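
The sleep state described above can be sketched as a loop that consolidates memory until a wake keyword is heard. Everything here (`WAKE_KEYWORDS`, `should_wake`, `sleep_cycle`) is an illustrative assumption, not Maxim's API:

```python
# Hypothetical wake keywords; a real deployment would configure these.
WAKE_KEYWORDS = {"maxim", "wake up", "hello"}

def should_wake(transcript: str) -> bool:
    """Return True if any wake keyword appears in a heard utterance."""
    text = transcript.lower()
    return any(kw in text for kw in WAKE_KEYWORDS)

def sleep_cycle(heard_utterances, consolidate):
    """Consolidate memories between utterances until a wake keyword arrives.

    Returns the utterance that triggered the wake-up, or None if the
    input stream ended while still asleep.
    """
    for utterance in heard_utterances:
        if should_wake(utterance):
            return utterance  # hand control back to the awake state
        consolidate()  # replay/compress recent experience in the background
    return None
```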

Read the full Operating Modes guide →

How It All Connects

The magic isn't in any single component. It's in how they integrate. Perception flows into memory. Memory informs attention. Attention shapes what gets remembered. Reward signals update causal models. Pain teaches avoidance.

Perception → Memory → Attention → Decision → Action → Outcome
    ↑                                                     ↓
    └──────────────────── Learning ←──────────────────────┘
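
The loop above can be reduced to a toy cycle in which each action's outcome reweights the memory that drove it. Every function and field below is a stand-in stub, not part of Maxim:

```python
def run_cycle(percept, memory):
    """One pass through the loop: perceive, remember, attend, decide, act, learn."""
    memory.append(percept)                                   # Perception → Memory
    focus = max(memory, key=lambda p: p["salience"])         # Memory → Attention
    action = "approach" if focus["reward"] > 0 else "avoid"  # Attention → Decision
    outcome = focus["reward"]                                # Action → Outcome (stubbed)
    focus["salience"] += outcome                             # Outcome → Learning
    return action

memory = []
print(run_cycle({"object": "mug", "salience": 0.5, "reward": 1.0}, memory))  # → approach
```

The key property, mirrored from the prose: attention shapes what gets acted on, and outcomes feed back to reshape what attention will favor next time.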

A Concrete Example

A robot is asked to "find the coffee mug." Here's what happens:

  • The Entorhinal Cortex retrieves similar past experiences (maybe "finding cups" or "kitchen tasks")
  • The Hippocampus provides context: "Last time, the mug was on the counter"
  • The SCN notes it's morning, when mugs are often near the coffee maker
  • The Attention Network prioritizes scanning kitchen areas
  • The Nucleus Accumbens predicts: "Counter search usually succeeds"
  • If the robot moves too fast and jerks, Pain Detection signals discomfort
  • Success or failure updates all relevant memories and predictions
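
The walkthrough above can be sketched as a pipeline of stubs, one per brain structure. None of the names or heuristics below are Maxim's actual implementation; they only show how the components' outputs could chain together:

```python
def find_object(query, episodic_memory, hour):
    """Toy version of the 'find the coffee mug' pipeline."""
    # Entorhinal Cortex: retrieve similar past experiences
    similar = [e for e in episodic_memory if query in e["task"]]
    # Hippocampus: context from the most recent matching episode
    last_location = similar[-1]["location"] if similar else None
    # SCN: circadian prior (mornings → near the coffee maker)
    circadian_hint = "coffee maker" if hour < 12 else None
    # Attention Network: rank candidate search locations
    candidates = [loc for loc in (last_location, circadian_hint, "kitchen") if loc]
    # Nucleus Accumbens: predicted success, here just the past win rate
    wins = sum(1 for e in similar if e["success"])
    confidence = wins / len(similar) if similar else 0.0
    return candidates[0], confidence

episodes = [
    {"task": "find mug", "location": "sink", "success": False},
    {"task": "find mug", "location": "counter", "success": True},
]
where, conf = find_object("mug", episodes, hour=9)
print(where, conf)  # → counter 0.5
```

In the real system the final step, success or failure, would write back into `episodes`, closing the learning loop described in the last bullet.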

Why This Matters

This isn't just academic curiosity. Biological inspiration provides:

  • Robustness: Brains evolved to handle noisy, unpredictable environments
  • Efficiency: Evolution optimized for energy constraints
  • Continuous Learning: Real memory systems adapt without catastrophic forgetting
  • Embodied Grounding: Cognition tied to sensorimotor experience
  • Safety: Pain and fear are ancient, battle-tested harm avoidance systems

Maxim runs on the Reachy Mini humanoid robot, processing real visual input, real motor commands, real interactions with humans. It's not a simulation. It's not a thought experiment. It's a working system that learns from every interaction.