
MAXIM

Imagination

Real-Time Entity Design from Novel Percepts

Shipped in v0.7.0 — I1 + I2

The Concept

Biological Inspiration

When the brain encounters something unfamiliar, it constructs a mental model from prior experience to reason about the novel entity before physically interacting with it. The Default Network, which activates during rest and mind-wandering, gates this process: imagination fires during low-arousal idle states, the same way you don't daydream while fighting.

When the agent encounters a novel entity mentioned in percept text that has no existing SEM component, the imagination system designs one in real-time. The result is a fully functional entity with sensors, affordances, and failure modes — registered ephemerally for the session and available for interaction through auto-generated tools.

Why This Matters

Without imagination, the agent can only interact with entities that were pre-authored as YAML components. Narration that mentions a "rusted padlock" or "crystal chandelier" would have no tools, no sensors, no failure modes — the agent could only talk about them, never touch them. Imagination closes this gap by designing components on-the-fly.

The Pipeline

Imagination Pipeline

  Percept Text
    ↓
  Entity Extraction (NLP heuristics)
    ↓
  ImaginationCache check (already imagined this session?)
    ↓
  ComponentIndex Two-Layer Lookup
    ├── Match found → Skip (already known)
    └── No match → Mention Counter
    ↓
  Count ≥ threshold (default 2)?
    ↓
  DN Arousal Gate (low arousal only)
    ↓
  Energy Budget Check (≥10% remaining)
    ↓
  EntityDesigner LLM Call
    ↓
  Quick Validation (SEM protocol)
    ↓
  register_ephemeral() + ComponentIndex.add()
    ↓
  Scene-Scoped Tool Registration

Each stage is a gate. If any gate fails, the pipeline short-circuits gracefully — the agent falls back to verbal-only interaction with the entity.
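The short-circuit behavior can be sketched as a chain of predicates. This is an illustrative sketch, not MAXIM's actual API: the names `maybe_imagine`, `gates`, and `design_entity` are assumptions, and real gates would be the cache check, mention counter, DN arousal gate, and energy budget described below.

```python
# Hypothetical sketch: each gate is a predicate checked in pipeline order,
# and the first failure short-circuits to the verbal-only fallback.
def maybe_imagine(phrase, gates, design_entity):
    """Run `phrase` through every gate; call the designer only if all pass."""
    for gate in gates:
        if not gate(phrase):
            return None  # graceful short-circuit: verbal-only interaction
    return design_entity(phrase)

# Usage: gates are plain predicates, so the chain is trivially extensible.
gates = [lambda p: p != "courage", lambda p: len(p) > 3]
```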

Entity Extraction

Lightweight NLP heuristics extract entity-like noun phrases from narration text. No external model required — this runs on pure string processing.

Extracted

Physical objects, creatures, weapons, environmental features, items, vehicles, NPCs

Examples: "rusty padlock", "crystal chandelier", "ancient tome", "iron golem"

Filtered Out

Abstract concepts, body parts, clothing, emotions, time references, generic pronouns

Examples: "courage", "left arm", "leather boots", "dread", "morning"

Two strategies work in parallel:

  • Sentence-level intro patterns — catches "You see a rusty gate", "A massive golem blocks the path", "There is a glowing orb"
  • Head-noun scanning — matches against a curated indicator vocabulary of entity-like words (weapon, creature, door, chest, etc.)
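The two strategies above can be sketched with pure string processing. The intro patterns and the indicator vocabulary below are illustrative stand-ins for MAXIM's curated lists, not the actual heuristics.

```python
import re

# Strategy 1 regex: intro phrases like "you see a rusty gate".
INTRO_PATTERN = re.compile(
    r"\b(?:you see|there is|you notice)\s+(?:a|an|the)\s+([a-z]+(?:\s+[a-z]+)?)")
# Strategy 2 vocabulary: entity-like head nouns (illustrative subset).
INDICATOR_NOUNS = {"gate", "golem", "orb", "door", "chest", "padlock", "weapon"}
ARTICLES = {"a", "an", "the"}

def extract_entities(text):
    lowered = text.lower()
    # Strategy 1: sentence-level intro patterns
    found = {m.group(1) for m in INTRO_PATTERN.finditer(lowered)}
    # Strategy 2: head-noun scan against the indicator vocabulary
    words = re.findall(r"[a-z]+", lowered)
    for i, word in enumerate(words):
        if word in INDICATOR_NOUNS:
            if i > 0 and words[i - 1] not in ARTICLES:
                found.add(f"{words[i - 1]} {word}")  # keep one modifier
            else:
                found.add(word)
    return found
```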

ComponentIndex: Two-Layer Lookup

Before imagining anything, each candidate phrase is checked against the ComponentIndex to see if an existing component already covers it:

Layer 1: Alias Table (O(1))

Exact match against component names and declared synonyms from the component.synonyms YAML field.

"sword" → weapons/rusty_sword

Layer 2: Embedding Similarity

Cosine similarity against all component signature embeddings. Threshold: 0.65. Uses the shared similarity.encoder singleton.

"old iron door" → environments/rusty_gate (0.72)

If either layer finds a match, imagination is skipped for that phrase. This prevents the system from creating duplicate components under different names.
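A toy sketch of the two-layer lookup. The alias table, signature embeddings, and the `embed` callable are fabricated for illustration; the real system uses component `synonyms` declarations and the shared `similarity.encoder` singleton.

```python
import math

ALIASES = {"sword": "weapons/rusty_sword"}          # Layer 1 data (toy)
SIGNATURES = {"environments/rusty_gate": [0.9, 0.1, 0.4]}  # Layer 2 data (toy)
THRESHOLD = 0.65

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def lookup(phrase, embed):
    # Layer 1: O(1) exact match on names and declared synonyms
    if phrase in ALIASES:
        return ALIASES[phrase]
    # Layer 2: cosine similarity against all component signature embeddings
    vec = embed(phrase)
    best = max(SIGNATURES, key=lambda name: cosine(vec, SIGNATURES[name]))
    return best if cosine(vec, SIGNATURES[best]) >= THRESHOLD else None
```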

Thread Safety

The ComponentIndex is protected by an RLock. Multiple threads (AUT + orchestrator) can query it concurrently. Persistence uses .npy + .json sidecar — no pickle, ever.

Gates

Three gates prevent imagination from firing at inappropriate times:

🔢

Mention Threshold

Default 2 mentions before triggering. A one-off phrase ("you notice a crack in the wall") won't spawn an entity. Repeated mentions signal narrative importance.

💤

DN Arousal Gate

Only fires during low-arousal idle states. Blocked when the Default Network is inhibited or recent interesting events occurred. You don't daydream while fighting.

🔋

Energy Budget

Skipped when LLM energy is critical (<10% remaining). Falls back gracefully to verbal-only interaction with the entity.

Per-Phrase Design Guard

A per-phrase lock prevents concurrent LLM calls for the same entity phrase. In multi-thread setups (AUT + orchestrator), only one thread designs a given entity; the other waits for the result. Thread-safe throughout via RLock.
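The guard pattern is a registry lock protecting a map of per-phrase locks. This is a minimal sketch assuming a `DesignGuard` wrapper; the class and method names are illustrative, not MAXIM's internals.

```python
import threading

class DesignGuard:
    """Per-phrase lock: only one thread designs a given entity phrase."""

    def __init__(self):
        self._lock = threading.RLock()   # protects the lock map itself
        self._per_phrase = {}            # phrase -> threading.Lock
        self._results = {}               # phrase -> designed spec

    def design_once(self, phrase, design_fn):
        with self._lock:
            phrase_lock = self._per_phrase.setdefault(phrase, threading.Lock())
        with phrase_lock:  # a second thread blocks here, then reuses the result
            if phrase not in self._results:
                self._results[phrase] = design_fn(phrase)
            return self._results[phrase]
```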

EntityDesigner

When all gates pass, the ImaginationDesigner wraps the EntityDesigner and makes a single LLM call to generate a complete SEM component specification from the entity phrase and surrounding narrative context.

LLM Input

  Entity phrase: "crystal chandelier"
  Narrative context: "A massive crystal chandelier hangs from the vaulted ceiling, its facets catching the torchlight."
  Genre: fantasy
  Existing components in scope: [rusty_sword, base_humanoid, ...]

LLM Output (validated SEM spec)

  entity:
    name: crystal_chandelier
    entity_type: environmental_feature
    sensors:
      stability: {unit: ratio, range: [0, 1], initial: 0.9}
      illumination: {unit: ratio, range: [0, 1], initial: 0.7}
      crystal_count: {unit: count, range: [0, 50], initial: 42}
    modulators:
      swing: {params: {force: float}, timeout: 3}
      shatter_crystal: {params: {target: string}, timeout: 2}
      cut_chain: {params: {}, timeout: 5}
    failure_modes:
      collapse: {trigger: {field: stability, op: "<", value: 0.1}, pain_intensity: 0.9}
      darkness: {trigger: {field: crystal_count, op: "<", value: 5}, pain_intensity: 0.2}

Quick Validation

After generation, the spec is validated against the SEM protocol: required fields present, sensor ranges valid, modulator params typed, failure triggers well-formed. Invalid specs are discarded — the agent falls back to verbal interaction.
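A minimal validator along these lines, using the field names from the example spec above. The real SEM protocol checks are richer (typed modulator params, trigger operators); this sketch covers only required fields and sensor ranges.

```python
def validate_spec(spec):
    """Return True if the generated spec passes the quick checks (sketch)."""
    entity = spec.get("entity", {})
    # Required fields must be present
    if not {"name", "entity_type", "sensors"} <= entity.keys():
        return False
    # Every sensor's initial value must sit inside its declared range
    for sensor in entity["sensors"].values():
        lo, hi = sensor.get("range", (0, 0))
        initial = sensor.get("initial")
        if initial is None or not lo <= initial <= hi:
            return False  # invalid spec: discard, fall back to verbal
    return True
```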

Ephemeral Registration

Imagined entities live in a separate overlay (_ephemeral_index) from the persistent component registry. This separation is architectural:

During Session

  • Visible to get(), has(), query()
  • Tools registered in current scene scope
  • Added to ComponentIndex for dedup
  • Full SEM interaction (sensors, affordances, failures)

At Session End

  • Cleared via clear_ephemeral()
  • Tools deregistered
  • Episodes persist (with provenance)
  • Causal links get 50% confidence decay

This means the agent learns from imagined interactions (pain avoidance, reward prediction) but with reduced confidence, reflecting the simulated origin.
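The overlay pattern can be sketched as two dictionaries behind one lookup interface. The method names `register_ephemeral`, `clear_ephemeral`, and the `_ephemeral_index` attribute come from the text above; the internals here are assumptions.

```python
class ComponentRegistry:
    """Sketch of the persistent registry plus ephemeral overlay."""

    def __init__(self):
        self._persistent = {}        # pre-authored YAML components
        self._ephemeral_index = {}   # imagined entities live here

    def register_ephemeral(self, name, spec):
        self._ephemeral_index[name] = spec

    def get(self, name):
        # Overlay semantics: ephemeral entries are visible alongside persistent
        if name in self._ephemeral_index:
            return self._ephemeral_index[name]
        return self._persistent.get(name)

    def has(self, name):
        return name in self._ephemeral_index or name in self._persistent

    def clear_ephemeral(self):
        # Session end: imagined entities vanish; persistent ones remain
        self._ephemeral_index.clear()
```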

Provenance Tagging

All learning from imagined entities carries imagined=True provenance:

Provenance Flow

  Episode from imagined entity interaction:
    episode.metadata["imagined"] = True
    episode.metadata["imagined_entity"] = "crystal_chandelier"

  CausalLink from imagined entity outcome:
    link.metadata["imagined"] = True

  On session end (entity discard):
    NAc.decay_imagined_links(factor=0.5)
    → All links with imagined=True get confidence *= 0.5

The 50% decay means the agent retains partial learning ("chandeliers can collapse") but with appropriately reduced confidence compared to verified real-world interactions. If the agent encounters the same entity type again and the interaction confirms the learned pattern, confidence rebuilds naturally through standard Rescorla-Wagner updates.
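The decay and rebuild dynamics can be sketched as two small functions. `decay_imagined_links` mirrors the provenance flow above; the Rescorla-Wagner learning rate of 0.3 is illustrative, not MAXIM's configured value.

```python
def decay_imagined_links(links, factor=0.5):
    """Session-end decay: halve confidence on every imagined-provenance link."""
    for link in links:
        if link.get("imagined"):
            link["confidence"] *= factor

def rescorla_wagner(confidence, outcome, lr=0.3):
    """Standard update: confidence moves toward the observed outcome (1.0 = confirmed)."""
    return confidence + lr * (outcome - confidence)
```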

Scene-Scoped Tool Registration

Imagined entities get their tools registered into the current scene scope (I3). This means:

  • Tools activate when the entity enters the scene and deactivate when it leaves
  • An active tool cap prevents prompt overflow from many imagined entities
  • The executor gate rejects calls to deactivated tools with informative errors
  • Least-recently-used tools are deactivated first when the cap is reached

This integrates naturally with the scene-scoped tool system — imagined entities follow the same lifecycle as pre-authored ones.
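The active-tool cap with LRU eviction and the executor gate can be sketched with an ordered map. The class name and the cap value are assumptions; only the behavior (cap, LRU eviction, informative rejection) follows the text.

```python
from collections import OrderedDict

class SceneToolScope:
    """Sketch: scene-scoped tool activation with an LRU-evicting cap."""

    def __init__(self, cap=8):
        self.cap = cap
        self._active = OrderedDict()  # tool name -> callable, in LRU order

    def activate(self, name, fn):
        if name in self._active:
            self._active.move_to_end(name)
        self._active[name] = fn
        while len(self._active) > self.cap:
            self._active.popitem(last=False)  # deactivate least-recently-used

    def call(self, name, *args):
        if name not in self._active:
            # executor gate: reject deactivated tools with an informative error
            raise KeyError(f"tool '{name}' is not active in this scene")
        self._active.move_to_end(name)  # touching a tool refreshes its LRU slot
        return self._active[name](*args)
```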

ImaginationCache

A session-scoped cache prevents redundant design attempts:

  • Shared across AUT + orchestrator — if the orchestrator's narration mentions "crystal chandelier" and the AUT's perception also extracts it, only one design call fires
  • Stores both successes and failures — a failed validation for "vague smoke" won't retry every turn
  • Thread-safe via RLock
  • Cleared at session end alongside the ephemeral registry
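The cache's contract can be sketched in a few lines. The class name matches the text; the `get_or_design` method and internals are assumptions. The key design point is that a `None` result (failed validation) is cached too, so a bad phrase never retries every turn.

```python
import threading

class ImaginationCache:
    """Sketch: session-scoped, thread-safe cache of successes AND failures."""

    def __init__(self):
        self._lock = threading.RLock()
        self._entries = {}   # phrase -> spec, or None for a recorded failure

    def get_or_design(self, phrase, design_fn):
        with self._lock:
            if phrase in self._entries:      # hit: success or cached failure
                return self._entries[phrase]
            result = design_fn(phrase)       # at most one design call per phrase
            self._entries[phrase] = result   # None is cached too: no retry
            return result

    def clear(self):
        with self._lock:
            self._entries.clear()  # session end, alongside the ephemeral registry
```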

Integration with Bio-Systems

Imagined entities participate in the full bio-pipeline, just like pre-authored ones:

🧠

Hippocampus

Episodes from imagined interactions are captured with imagined=True metadata.

🎯

NAc

Causal links form from affordance outcomes. 50% confidence decay at session end.

PainBus

Failure modes fire pain signals through the same cascade as real entities.

🧩

Cerebellum

Forward models train on imagined affordance outcomes via Rescorla-Wagner.

📚

ATL

Semantic concepts form from imagined entity interactions (modality-tagged).

🎭

Acting Coach

Exploration directives include imagined entity affordances in the meta-prompt.

Architecture

Module                            Purpose
imagination/trigger.py            Entity noun-phrase extraction, ComponentIndex lookup, design dispatch
imagination/designer.py           ImaginationDesigner: wraps EntityDesigner for real-time entity generation
imagination/cache.py              Session-scoped ImaginationCache, thread-safe, shared AUT + orchestrator
embodiment/component_registry.py  register_ephemeral(), clear_ephemeral(), ephemeral overlay
embodiment/component_index.py     Two-layer semantic discovery (alias hash + embedding cosine)
tools/registry.py                 Scene-scoped activation, active tool cap, executor gate
runtime/agent_loop.py             imagination_trigger parameter on run_agentic_loop

Wiring

The imagination trigger is passed as a parameter to run_agentic_loop and fires after state.update() on every turn. This placement guarantees the agent has already processed the percept and updated its state before imagination considers whether to design a new entity.

Agent Loop Integration

  while running:
      percept = await percept_source.next()
      state.update(percept)

      # Imagination fires here — after state update, before decision
      if imagination_trigger:
          imagination_trigger.process(percept.text, scene_context)

      decision = await exec_agent.decide(state)
      result = await executor.execute(decision)