MAXIM
Imagination
Real-Time Entity Design from Novel Percepts
Shipped in v0.7.0 — I1 + I2
The Concept
Biological Inspiration
When the brain encounters something unfamiliar, it constructs a mental model from prior experience to reason about the novel entity before physically interacting with it. The Default Network, which activates during rest and mind-wandering, gates this process: imagination fires during low-arousal idle states, the same way you don't daydream while fighting.
When the agent encounters a novel entity mentioned in percept text that has no existing SEM component, the imagination system designs one in real-time. The result is a fully functional entity with sensors, affordances, and failure modes — registered ephemerally for the session and available for interaction through auto-generated tools.
Why This Matters
Without imagination, the agent can only interact with entities that were pre-authored as YAML components. Narration that mentions a "rusted padlock" or "crystal chandelier" would have no tools, no sensors, no failure modes — the agent could only talk about them, never touch them. Imagination closes this gap by designing components on-the-fly.
The Pipeline
Each stage is a gate. If any gate fails, the pipeline short-circuits gracefully — the agent falls back to verbal-only interaction with the entity.
Entity Extraction
Lightweight NLP heuristics extract entity-like noun phrases from narration text. No external model required — this runs on pure string processing.
Extracted
Physical objects, creatures, weapons, environmental features, items, vehicles, NPCs
Examples: "rusty padlock", "crystal chandelier", "ancient tome", "iron golem"
Filtered Out
Abstract concepts, body parts, clothing, emotions, time references, generic pronouns
Examples: "courage", "left arm", "leather boots", "dread", "morning"
Two strategies work in parallel:
- Sentence-level intro patterns — catches "You see a rusty gate", "A massive golem blocks the path", "There is a glowing orb"
- Head-noun scanning — matches against a curated indicator vocabulary of entity-like words (weapon, creature, door, chest, etc.)
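A minimal sketch of the two strategies, assuming pure string processing as described above. The pattern set and indicator vocabulary here are illustrative stand-ins, not the real `trigger.py` lists:

```python
import re

# Illustrative indicator vocabulary; the real curated list is much larger.
HEAD_NOUNS = {"gate", "golem", "orb", "padlock", "chandelier", "tome", "door", "chest"}
# Illustrative filter list: abstractions, body parts, emotions, time references.
BLOCKLIST = {"courage", "dread", "morning", "arm", "boots"}

INTRO_PATTERNS = [
    re.compile(r"\byou (?:see|notice|spot) (?:a|an|the) ([\w ]+?)(?=[.,;]|$)", re.I),
    re.compile(r"\bthere is (?:a|an) ([\w ]+?)(?=[.,;]|$)", re.I),
    re.compile(r"\b(?:a|an) ([\w ]+?) (?:blocks|guards|stands)", re.I),
]

def extract_entities(text: str) -> set[str]:
    """Two parallel strategies: intro patterns + head-noun scanning."""
    found = set()
    # Strategy 1: sentence-level introduction patterns.
    for pat in INTRO_PATTERNS:
        for m in pat.finditer(text):
            phrase = m.group(1).strip().lower()
            if phrase.split()[-1] not in BLOCKLIST:
                found.add(phrase)
    # Strategy 2: head-noun scan against the indicator vocabulary.
    words = re.findall(r"[a-z]+", text.lower())
    for i, w in enumerate(words):
        if w in HEAD_NOUNS:
            # Pull in up to two preceding modifiers, skipping articles.
            mods = [x for x in words[max(0, i - 2):i] if x not in {"a", "an", "the"}]
            found.add(" ".join(mods + [w]))
    return found
```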
ComponentIndex: Two-Layer Lookup
Before imagining anything, each candidate phrase is checked against the ComponentIndex to see if an existing component already covers it:
Layer 1: Alias Table (O(1))
Exact match against component names and declared synonyms from the component.synonyms YAML field.
"sword" → weapons/rusty_sword
Layer 2: Embedding Similarity
Cosine similarity against all component signature embeddings. Threshold: 0.65. Uses the shared similarity.encoder singleton.
"old iron door" → environments/rusty_gate (0.72)
If either layer finds a match, imagination is skipped for that phrase. This prevents the system from creating duplicate components under different names.
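The two-layer lookup can be sketched as follows. `embed_fn` stands in for the shared `similarity.encoder` singleton, and the toy cosine math is spelled out for clarity; the real `component_index.py` will differ in detail:

```python
import math

SIM_THRESHOLD = 0.65  # layer-2 cosine threshold, per the docs above

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class ComponentIndex:
    """Toy two-layer lookup: exact alias hash, then embedding cosine."""

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn
        self.aliases = {}     # phrase -> component id (names + synonyms)
        self.signatures = {}  # component id -> signature embedding

    def add(self, component_id, names, signature_text):
        for n in names:
            self.aliases[n.lower()] = component_id
        self.signatures[component_id] = self.embed_fn(signature_text)

    def lookup(self, phrase):
        # Layer 1: O(1) alias table.
        hit = self.aliases.get(phrase.lower())
        if hit:
            return hit, 1.0
        # Layer 2: cosine similarity against all signature embeddings.
        q = self.embed_fn(phrase)
        best_id, best = None, 0.0
        for cid, vec in self.signatures.items():
            s = _cosine(q, vec)
            if s > best:
                best_id, best = cid, s
        return (best_id, best) if best >= SIM_THRESHOLD else (None, best)
```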
Thread Safety
The ComponentIndex is protected by an RLock. Multiple threads (AUT + orchestrator) can query it concurrently. Persistence uses .npy + .json sidecar — no pickle, ever.
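The pickle-free persistence format can be sketched like this, assuming NumPy for the vector file; function names are illustrative, not the real module API:

```python
import json
import numpy as np

def save_index(path_stem, ids, embeddings):
    """Persist vectors as .npy and ids as a .json sidecar.
    allow_pickle=False guarantees no object deserialization on load."""
    np.save(f"{path_stem}.npy", np.asarray(embeddings, dtype=np.float32),
            allow_pickle=False)
    with open(f"{path_stem}.json", "w") as f:
        json.dump({"ids": ids}, f)

def load_index(path_stem):
    vectors = np.load(f"{path_stem}.npy", allow_pickle=False)
    with open(f"{path_stem}.json") as f:
        ids = json.load(f)["ids"]
    return ids, vectors
```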
Gates
Three gates prevent imagination from firing at inappropriate times:
Mention Threshold
Default 2 mentions before triggering. A one-off phrase ("you notice a crack in the wall") won't spawn an entity. Repeated mentions signal narrative importance.
DN Arousal Gate
Only fires during low-arousal idle states. Blocked when the Default Network is inhibited or recent interesting events occurred. You don't daydream while fighting.
Energy Budget
Skipped when LLM energy is critical (<10% remaining). Falls back gracefully to verbal-only interaction with the entity.
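Together the three gates reduce to a short predicate. This is a sketch; the defaults mirror the numbers above, but the real gate checks live across the trigger and bio-systems:

```python
def should_imagine(mentions, dn_active, energy_frac,
                   mention_threshold=2, energy_floor=0.10):
    """All three gates must pass; any failure means verbal-only fallback."""
    if mentions < mention_threshold:
        return False  # one-off phrase, not narratively important
    if not dn_active:
        return False  # Default Network inhibited (high arousal)
    if energy_frac < energy_floor:
        return False  # LLM energy critical (<10% remaining)
    return True
```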
Per-Phrase Design Guard
A per-phrase lock prevents concurrent LLM calls for the same entity phrase. In multi-thread setups (AUT + orchestrator), only one thread designs a given entity; the other waits for the result. Thread-safe throughout via RLock.
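One way to implement this guard is a per-phrase event under a shared lock; the first thread designs, later threads wait and reuse the result. A sketch with illustrative names, not the actual implementation:

```python
import threading

class DesignGuard:
    """One design call per phrase: the first thread owns the call,
    later threads block on that phrase's event and read the result."""

    def __init__(self):
        self._lock = threading.RLock()
        self._inflight = {}  # phrase -> threading.Event
        self._results = {}   # phrase -> designed spec

    def design(self, phrase, design_fn):
        with self._lock:
            ev = self._inflight.get(phrase)
            if ev is None:
                # We are the designer for this phrase.
                ev = self._inflight[phrase] = threading.Event()
                owner = True
            else:
                owner = False
        if owner:
            try:
                self._results[phrase] = design_fn(phrase)
            finally:
                ev.set()  # release waiters even if the design call raised
            return self._results.get(phrase)
        ev.wait()
        return self._results.get(phrase)
```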
EntityDesigner
When all gates pass, the ImaginationDesigner wraps the EntityDesigner and makes a single LLM call to generate a complete SEM component specification from the entity phrase and surrounding narrative context.
Quick Validation
After generation, the spec is validated against the SEM protocol: required fields present, sensor ranges valid, modulator params typed, failure triggers well-formed. Invalid specs are discarded — the agent falls back to verbal interaction.
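A validator in this spirit might look like the sketch below. The spec shape here is hypothetical; the real SEM protocol has more fields and stricter typing:

```python
# Hypothetical required fields for an imagined SEM spec.
REQUIRED = ("name", "sensors", "affordances", "failure_modes")

def validate_spec(spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the spec passes."""
    errors = [f"missing field: {f}" for f in REQUIRED if f not in spec]
    for s in spec.get("sensors", []):
        lo, hi = s.get("range", (0, 0))
        if not lo < hi:
            errors.append(f"sensor {s.get('name')}: invalid range {lo}..{hi}")
    for fm in spec.get("failure_modes", []):
        if "trigger" not in fm:
            errors.append(f"failure mode {fm.get('name')}: no trigger")
    return errors
```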
Ephemeral Registration
Imagined entities live in a separate overlay (_ephemeral_index) from the persistent component registry. This separation is architectural:
During Session
- Visible to get(), has(), query()
- Tools registered in current scene scope
- Added to ComponentIndex for dedup
- Full SEM interaction (sensors, affordances, failures)
At Session End
- Cleared via clear_ephemeral()
- Tools deregistered
- Episodes persist (with provenance)
- Causal links get 50% confidence decay
This means the agent learns from imagined interactions (pain avoidance, reward prediction) but with reduced confidence, reflecting the simulated origin.
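The overlay pattern can be sketched in a few lines. Method names follow the docs; the internals are illustrative:

```python
class ComponentRegistry:
    """Persistent components plus an ephemeral overlay for imagined ones."""

    def __init__(self):
        self._persistent = {}       # component id -> spec
        self._ephemeral_index = {}  # imagined overlay, cleared per session

    def register(self, cid, spec):
        self._persistent[cid] = spec

    def register_ephemeral(self, cid, spec):
        # Imagined entities carry provenance from the moment of registration.
        self._ephemeral_index[cid] = {**spec, "imagined": True}

    def get(self, cid):
        # Both layers are visible through the same accessor during a session.
        return self._ephemeral_index.get(cid) or self._persistent.get(cid)

    def has(self, cid):
        return cid in self._ephemeral_index or cid in self._persistent

    def clear_ephemeral(self):
        """Session end: imagined entities vanish; persistent ones remain."""
        self._ephemeral_index.clear()
```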
Provenance Tagging
All learning from imagined entities carries imagined=True provenance.
The 50% decay means the agent retains partial learning ("chandeliers can collapse") but with appropriately reduced confidence compared to verified real-world interactions. If the agent encounters the same entity type again and the interaction confirms the learned pattern, confidence rebuilds naturally through standard Rescorla-Wagner updates.
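The decay-then-rebuild dynamic can be shown in a few lines; the learning rate and values here are illustrative, not the system's actual parameters:

```python
def rw_update(confidence, outcome, lr=0.3):
    """One Rescorla-Wagner step toward the observed outcome (0 or 1)."""
    return confidence + lr * (outcome - confidence)

def session_end_decay(confidence, imagined, decay=0.5):
    """Imagined-provenance links lose half their confidence at session end."""
    return confidence * decay if imagined else confidence

# A causal link learned purely from an imagined interaction...
c = 0.8
c = session_end_decay(c, imagined=True)  # halved to 0.4
# ...rebuilds naturally when later real interactions confirm the pattern.
for _ in range(3):
    c = rw_update(c, outcome=1.0)
```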
Scene-Scoped Tool Registration
Imagined entities get their tools registered into the current scene scope (I3). This means:
- Tools activate when the entity enters the scene and deactivate when it leaves
- An active tool cap prevents prompt overflow from many imagined entities
- The executor gate rejects calls to deactivated tools with informative errors
- Least-recently-used tools are deactivated first when the cap is reached
This integrates naturally with the scene-scoped tool system — imagined entities follow the same lifecycle as pre-authored ones.
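The cap-plus-LRU behavior can be sketched with an ordered map. This is an illustration of the lifecycle described above, not the real `tools/registry.py`:

```python
from collections import OrderedDict

class SceneToolRegistry:
    """Scene-scoped activation with an active-tool cap.
    When the cap is hit, the least-recently-used tool is deactivated."""

    def __init__(self, cap=3):
        self.cap = cap
        self._active = OrderedDict()  # name -> callable, oldest first

    def activate(self, name, fn):
        if name in self._active:
            self._active.move_to_end(name)
        else:
            if len(self._active) >= self.cap:
                self._active.popitem(last=False)  # evict LRU tool
            self._active[name] = fn

    def deactivate(self, name):
        self._active.pop(name, None)

    def call(self, name, *args):
        # Executor gate: reject calls to deactivated tools with a clear error.
        if name not in self._active:
            raise KeyError(f"tool '{name}' is not active in this scene")
        self._active.move_to_end(name)  # mark as recently used
        return self._active[name](*args)
```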
ImaginationCache
A session-scoped cache prevents redundant design attempts:
- Shared across AUT + orchestrator — if the orchestrator's narration mentions "crystal chandelier" and the AUT's perception also extracts it, only one design call fires
- Stores both successes and failures — a failed validation for "vague smoke" won't retry every turn
- Thread-safe via RLock
- Cleared at session end alongside the ephemeral registry
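The cache's key property, that failures are first-class entries distinct from misses, can be sketched with a sentinel. Names here are illustrative, not the real `imagination/cache.py` API:

```python
import threading

class ImaginationCache:
    """Session-scoped, thread-safe cache of design attempts.
    Failures are cached too, so a phrase that failed validation
    won't retry every turn."""

    _MISS = object()  # sentinel: distinguishes "never tried" from "failed"

    def __init__(self):
        self._lock = threading.RLock()
        self._entries = {}  # phrase -> spec, or None for a failed attempt

    def get(self, phrase):
        with self._lock:
            return self._entries.get(phrase, self._MISS)

    def put(self, phrase, spec_or_none):
        with self._lock:
            self._entries[phrase] = spec_or_none

    def clear(self):
        """Called at session end alongside clear_ephemeral()."""
        with self._lock:
            self._entries.clear()
```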
Integration with Bio-Systems
Imagined entities participate in the full bio-pipeline, just like pre-authored ones:
Hippocampus
Episodes from imagined interactions are captured with imagined=True metadata.
NAc
Causal links form from affordance outcomes. 50% confidence decay at session end.
PainBus
Failure modes fire pain signals through the same cascade as real entities.
Cerebellum
Forward models train on imagined affordance outcomes via Rescorla-Wagner.
ATL
Semantic concepts form from imagined entity interactions (modality-tagged).
Acting Coach
Exploration directives include imagined entity affordances in the meta-prompt.
Architecture
| Module | Purpose |
|---|---|
| imagination/trigger.py | Entity noun-phrase extraction, ComponentIndex lookup, design dispatch |
| imagination/designer.py | ImaginationDesigner: wraps EntityDesigner for real-time entity generation |
| imagination/cache.py | Session-scoped ImaginationCache, thread-safe, shared AUT + orchestrator |
| embodiment/component_registry.py | register_ephemeral(), clear_ephemeral(), ephemeral overlay |
| embodiment/component_index.py | Two-layer semantic discovery (alias hash + embedding cosine) |
| tools/registry.py | Scene-scoped activation, active tool cap, executor gate |
| runtime/agent_loop.py | imagination_trigger parameter on run_agentic_loop |
Wiring
The imagination trigger is passed as a parameter to run_agentic_loop and fires after state.update() on every turn. This placement means the agent has already processed the percept and updated its state before imagination considers whether to design a new entity.
Related Systems
Embodiment (SEM Protocol)
The entity format imagination generates. Sensors, modulators, failure modes.
Component Library
65+ pre-authored entities. ComponentIndex searches these before imagining.
Asset Foundry
Batch entity generation with gauntlet testing. Imagination is the real-time equivalent.
Scene-Scoped Tools
How imagined entity tools activate/deactivate with scene changes.