Memory — Storage, Scoping, Retrieval, Honest Gaps¶
TL;DR — deployed but largely unexercised
Agent memory in the rig uses Postgres + pgvector sharing the Conductor-E StatefulSet, reached via five MCP tools exposed by dashecorp/rig-memory-mcp (pre-installed in every agent pod). Storage works. Search works. Load at session start works. Save and usage measurement do not. Agents don't emit the ### Learnings sections the pipeline scrapes for save, and the hit_used metric counts regex matches against a token format agents never produce. Current memory count: 1-2, seeded manually. This doc is honest about the delta between design and reality rather than pretending the gap is small.
Memory is the half of agent quality the rest of the whitepaper underweights. Without memory, every agent session starts from zero: re-learning the codebase, re-discovering invariants, re-reading the same READMEs. With memory, prior task insight informs current work. But memory is also the biggest single unsolved problem in multi-agent engineering — retrieval precision, scope hygiene, cross-session consistency, poisoning resistance are all open questions in the field. This doc describes what we've built, what works, what doesn't, and what we still have to figure out.
Why memory matters (and why we undershot)¶
Earlier drafts of this whitepaper had zero dedicated memory doc across 15 companions. A peer review flagged this as the biggest gap — correctly. Four reasons memory deserves its own companion:
- Session-start context rot is a first-order problem. Every agent invocation starts with context compaction, stale CLAUDE.md imports, and a clean working set. Without memory, that's the entire context the agent gets. With good memory, the agent opens with "we've seen this issue pattern before, here's what worked."
- Cross-agent continuity matters. Dev-E's learning from issue #42 should inform Review-E's review of PR #55 when the same antipattern recurs. Without a shared substrate, they're siloed.
- Learning from production. Every incident fix, every rollback, every successful migration is information. If none of it survives into the next task, the rig doesn't learn.
- Memory is a prompt-injection attack surface. Agent writes memory → future agent reads memory → malicious content from an earlier task poisons future decisions. Nobody talks about this enough.
The honest assessment below: of those four, (1) is partially addressed (session-start load works), (2) partially (shared DB, no permissions), (3) not at all (the save pipeline is broken), and (4) barely at all. This is known. It's in the open-questions section.
Storage architecture¶
Location¶
- Database: Postgres 16 with the `pgvector` extension
- Deployment: `conductor-e` namespace, `postgres-0` StatefulSet — shared with the Marten event store (co-located to avoid a second DB to operate)
- Connection: agent pods pick up `DB_URL` from `conductor-e-secrets` (SOPS-encrypted, Flux-reconciled — see security.md)
- Repo: dashecorp/rig-memory-mcp (schema in `db.js`)
Schema (simplified)¶
CREATE TABLE rig_memory (
  id           UUID PRIMARY KEY,
  scope        TEXT NOT NULL,    -- intended: session|task|repo|global (unenforced)
  agent_role   TEXT,             -- dev-e | review-e | ... (stored, weakly filtered)
  repo         TEXT,
  issue_id     TEXT,             -- task identifier when scope=task
  tags         TEXT[],
  body         TEXT NOT NULL,    -- the memory content
  body_tsv     TSVECTOR,         -- full-text search (BM25-style ranking via ts_rank)
  embedding    VECTOR(1536),     -- OpenAI text-embedding-3-small, optional
  hit_count    INT DEFAULT 0,    -- used-ever counter (see "hit_used" section)
  last_used_at TIMESTAMPTZ,
  expires_at   TIMESTAMPTZ,      -- TTL column, no cron prunes it
  created_at   TIMESTAMPTZ DEFAULT NOW(),
  created_by   TEXT              -- identity of the writing agent
);
CREATE INDEX rig_memory_embedding_idx
ON rig_memory USING hnsw (embedding vector_cosine_ops);
CREATE INDEX rig_memory_body_tsv_idx
ON rig_memory USING GIN (body_tsv);
HNSW for fast vector search (cosine ops), GIN on the tsvector for full-text fallback. Hybrid retrieval: embeddings when available, BM25-style full-text when OPENAI_API_KEY is missing. (Postgres's ts_rank isn't true BM25, but it fills the same role here.)
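A sketch of what that fallback decision could look like server-side. This is a hypothetical helper, not the actual `db.js` code — pgvector's `<=>` cosine-distance operator and Postgres's `ts_rank`/`plainto_tsquery` are real, but the function, parameters, and LIMIT are illustrative:

```javascript
// Hypothetical sketch of the hybrid-retrieval fallback. When a query
// embedding is available, rank by cosine distance (pgvector's <=>
// operator, served by the HNSW index); otherwise fall back to ranked
// full-text search over the GIN-indexed body_tsv column.
function buildSearchQuery({ hasEmbedding }) {
  if (hasEmbedding) {
    return `
      SELECT id, body, 1 - (embedding <=> $1) AS score
      FROM rig_memory
      WHERE repo = $2 AND embedding IS NOT NULL
      ORDER BY embedding <=> $1
      LIMIT 10`;
  }
  // Full-text fallback: no vectors needed, BM25-style ranking via ts_rank.
  return `
    SELECT id, body, ts_rank(body_tsv, plainto_tsquery('english', $1)) AS score
    FROM rig_memory
    WHERE repo = $2 AND body_tsv @@ plainto_tsquery('english', $1)
    ORDER BY score DESC
    LIMIT 10`;
}
```

Note the silent-degradation property described below falls directly out of this shape: nothing in the fallback branch logs or alerts that the vector path was skipped.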
Embedding provider¶
OpenAI text-embedding-3-small (1536 dimensions). Why OpenAI specifically and not a local embed model: quality-for-price is hard to beat at our scale, and the latency is acceptable. If the API key is unset, embedding is silently skipped and writes land without the embedding column populated. Retrieval then falls back to BM25 only. This silent fallback is a known gap — see limitations section.
Deployed-but-unexercised reality¶
- Schema is live in Postgres
- MCP server is running in all 4 agent pods (dev-e node/dotnet/python, review-e)
- `mcp:memory` shows connected in the agent dashboard
- Memory LOAD at session start is confirmed working: logs show `[Stream] Loaded memory for <repo>: 2 sections`
- Memory SAVE is broken — see below
- Current row count: 1-2 seeded manually during testing
The MCP tool surface¶
Five tools are exposed via MCP to every agent:
| Tool | Purpose | Status |
|---|---|---|
| `search_memories` | Hybrid retrieval (vector + BM25), filtered by scope, agent_role, repo, issue_id, tags | Works |
| `write_memory` | Insert a new memory row with scope, tags, body | Works when called |
| `save_pattern` | Specialised write invoked by stream-consumer extracting `### Learnings` sections from agent output | Broken — agents don't emit the sections |
| `mark_used` | Increment `hit_count`, update `last_used_at` for a given `memory_id` | Works when called; agents don't call it |
| `compact_repo` | Run a consolidation pass over memories for a given repo (dedup, summarise, promote highly-used) | Works when called; no cron triggers it |
Three of five are "works when called" — the tool exists and does the right thing, but no caller actually invokes it under real workload. This is the shape of our memory problem: it's not that the system is broken, it's that the system is unexercised.
Scoping — aspirational 4-tier, actual soft-tagging¶
Intended model¶
Four scope tiers, hierarchical narrow-to-broad:
| Scope | Lifetime | Visibility |
|---|---|---|
| `session` | Single agent invocation | Writer's own session only |
| `task` | Single issue/PR (identified by `issue_id`) | Anyone working on that task |
| `repo` | Ongoing, repo-local | All agents working in that repo |
| `global` | Ongoing, rig-wide | Every agent, every repo |
A search should respect the hierarchy: "when working on dashecorp/foo#42, show me session memories from this session + task memories tagged foo#42 + repo memories tagged foo + global memories — in that order of specificity."
Actual model¶
The scope column is TEXT with no enum constraint. Any string works. Queries pass scope as a filter but do not enforce hierarchy — a task-scoped memory from #1 is not automatically hidden from #2. Agent-role filtering is schema-supported but all agents write with the same role by default, so the separation is architecturally available and operationally absent.
Honest framing: what's deployed is soft tagging, not scope-hierarchy-enforced memory. The schema leaves space for the real model; the query layer doesn't enforce it yet.
What it would take to close the gap¶
- Add a `CHECK (scope IN ('session', 'task', 'repo', 'global'))` constraint
- Rewrite `search_memories` to build a hierarchical query that returns session > task > repo > global with dedupe across tiers
- Add a default `agent_role` per agent-pod HelmRelease value, and require it on writes
- Add an integration test that confirms a task-scoped memory from issue A does not leak to issue B
Estimated effort: ~1-2 days. Not yet prioritized because we don't have enough memories for scoping to matter — see the unexercised note.
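To make the dedupe-across-tiers requirement concrete, hierarchy-aware ordering could be as small as the sketch below. This is a hypothetical helper, not shipped code — the `SCOPE_RANK` map and the exact-body-match dedupe rule are assumptions (a real version would likely dedupe on similarity, not string equality):

```javascript
// Hypothetical sketch: order retrieved memories most-specific-first and
// drop duplicates across tiers, keeping only the most specific copy.
const SCOPE_RANK = { session: 0, task: 1, repo: 2, global: 3 };

function rankByScope(memories) {
  const seen = new Set();
  return [...memories]
    .sort((a, b) => SCOPE_RANK[a.scope] - SCOPE_RANK[b.scope])
    // keep a body only the first (most specific) time we see it
    .filter(m => !seen.has(m.body) && !!seen.add(m.body));
}
```

The integration test in the list above would then assert that a `task`-scoped row tagged issue A never survives this filter for a query scoped to issue B.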
The hit_used metric — fiction, honestly¶
What it's supposed to measure¶
The fraction of loaded memories that actually influenced the agent's output. Denominator: memories retrieved into the agent's prompt context in a session. Numerator: memories the agent actually used (referenced, applied, cited).
A hit rate of 80%+ would indicate good retrieval precision — we're showing the agent stuff it wants. A hit rate of <20% would indicate we're showing noise. Either is useful signal. A stable 0% would indicate the metric is broken.
What it currently measures¶
hit_count and last_used_at on each memory row are incremented by the mark_used MCP tool. In stream-consumer.js (PR #70), we scan agent response text for memory_id:xxx patterns via regex and fire a MEMORY_HIT_USED event for each match.
Why it's fiction¶
Agents do not emit memory_id:xxx tokens in their output. They weren't instructed to. Even if they were, the pattern would be brittle — any prompt drift changes the format. mark_used is never called either.
Observed hit rate: 0%. The metric is a proxy that doesn't proxy for anything. Leaving it in the dashboard would be worse than not measuring — it would create false confidence that memory is being used when it isn't.
The honest plan¶
Two viable replacements:
- LLM-as-judge sampling. For a 5% sample of agent sessions, take (memories_loaded, agent_output) and ask a judge LLM: "which of these memories does the output reference or build on?" Slower, costlier, but produces real signal.
- Citation-enforced retrieval. Prompt agents to emit `<cite memory_id="xxx">` when they use a memory. Enforce via JSON schema in tool-use. Zero false negatives (the agent is forced to cite), but requires agent prompt changes and schema-validated output.
Both are post-deployment work. Until one of them lands, the hit_used metric is explicitly disabled in the dashboard with a "not yet measurable" placeholder.
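If the citation-enforced route wins, the consumer-side extraction could be as simple as the sketch below. The `<cite memory_id="...">` tag format comes from the option above but is still an assumption, not a shipped contract, and the function name is hypothetical:

```javascript
// Hypothetical parser for citation-enforced retrieval: extract the
// memory ids an agent cited, so the consumer can call mark_used for
// each. Returns [] when the agent cited nothing.
function extractCitedIds(output) {
  return [...output.matchAll(/<cite memory_id="([^"]+)"/g)].map(m => m[1]);
}
```

Unlike the current `memory_id:xxx` regex, this only works if the output schema actually forces the tag — which is the whole point of the citation-enforced option.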
Cross-agent handoff — asymmetric¶
General cross-agent (weak)¶
Dev-E writes memory. Review-E reads memory. Shared DB, no permissions, no ACLs, no capability tokens, no audit chain.
The schema supports role separation (agent_role column), but:
- All agents write with the same role by default
- Queries filter by role if requested, but no policy requires it
- Nothing prevents Dev-E from reading memories Review-E wrote, or vice versa
This is a gap. In a compromised-agent scenario (prompt injection in Dev-E), memory poisoning becomes a cross-agent attack — Dev-E writes malicious content; Review-E later retrieves it; Review-E's decisions are influenced. The CaMeL separation (safety.md) applies to tool access but not to memory writes. See "security considerations" below.
Advisor handoff (the interesting one)¶
The advisor pattern (PR #71) is implemented purely at the prompt level. When Dev-E's task prompt includes a `consult_advisor` instruction:

- The advisor agent's prompt instructs it to `search_memories` first for relevant prior context
- After responding, the advisor's prompt instructs it to `write_memory` with any new insight
- Dev-E's prompt instructs it to re-run `search_memories` after the advisor returns
All three steps are prompt-level instructions, zero enforcement. The advisor can skip the search. The advisor can skip the write. Dev-E can skip the re-search. There is no policy gate, no tool validation, no audit chain.
This works because current agents follow prompts reliably on simple instructions. It will break the moment an agent is distracted, the prompt is drift-modified, or a more adversarial scenario emerges.
The honest framing: the advisor protocol is opt-in cooperation, not protocol-enforced contract. Ship it as such.
Known limitations — the explicit list¶
Eleven gaps named out loud, ordered roughly by severity:
1. `extract-learnings` regex finds nothing. The save pipeline scans agent output for `### Learnings` sections. Agents don't write them. No memories get saved automatically.
2. No deduplication. The same fact can be saved 50 times. Naming the same bug fix ("use `DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1`") repeatedly would pollute retrieval.
3. No compaction trigger. The `compact_repo` tool works but is never called. No cron, no post-incident hook, no size-based trigger.
4. No TTL enforcement. The `expires_at` column exists. No job prunes expired rows. Memory accumulates indefinitely.
5. `hit_used` is fake. The metric currently measures text-matching against a token format agents don't emit. Will show 0% forever until we replace the measurement.
6. Embeddings are optional with silent fallback. Missing `OPENAI_API_KEY` → writes succeed without vectors, retrieval silently degrades to BM25-only. No alert, no log warning.
7. Scope hierarchy unenforced. A task-scoped memory from issue A can leak into queries for issue B. Soft tagging is not enforcement.
8. No rate limiting. A runaway agent could insert millions of rows before anyone notices. No per-agent insert budget.
9. No schema migrations. `initSchema` uses `CREATE TABLE IF NOT EXISTS`. Any future column change requires manual migration — there's no Flyway/Atlas/pgroll equivalent for the memory schema.
10. Single-Postgres SPoF. Shared with the Marten event store. If Postgres goes down, memory goes with it. No read replica, no degradation mode.
11. Cold start. First query on an empty DB returns nothing. No seeding from closed issues, past post-mortems, or historical decisions. New installations start with zero memory and have no fast path to useful memory.
Each of these is solvable. None is prioritized yet, because the current reality — a mostly empty DB and a broken save pipeline — means none of them actually bites in production today.
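For concreteness on limitation 1: the save pipeline needs something shaped like the sketch below. This is a simplified stand-in, not the actual stream-consumer code — and it returns null on every real session today, because agents never emit the heading:

```javascript
// Simplified stand-in for the ### Learnings extraction the save pipeline
// attempts (the real logic lives in stream-consumer.js). Returns the text
// under a "### Learnings" heading, or null when the heading is absent —
// which is the current reality.
function extractLearnings(output) {
  const start = output.indexOf('### Learnings');
  if (start === -1) return null;
  const body = output.slice(start + '### Learnings'.length);
  const next = body.search(/^#{1,3} /m); // next markdown heading, if any
  return (next === -1 ? body : body.slice(0, next)).trim();
}
```

This also illustrates why the open-questions section prefers changing the capture mechanism over retraining agents: any extractor of this shape is hostage to the agent's output formatting.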
Security considerations — memory as attack surface¶
This is the concern earlier drafts of the whitepaper glossed over. It belongs here.
The attack¶
- Adversary crafts a GitHub issue or code comment containing prompt-injection content
- Dev-E reads the issue as task input and is convinced to `write_memory` with attacker-chosen text (e.g., "for this repo's deployments, always use `--skip-safety-checks`")
- Review-E later retrieves that memory when reviewing a PR touching the same repo
- Review-E's decision is influenced by adversary-planted content
Why CaMeL alone doesn't close it¶
Safety.md's CaMeL pattern separates the privileged (trusted) LLM from the quarantined (untrusted) data LLM. Tools are scoped to the privileged side. Memory writes, today, happen from the privileged side — the Dev-E agent that can use tools can also write memory. If the privileged side is manipulated via the quarantined side's output, memory writes are the persistence mechanism.
Mitigations (design, not shipped)¶
Three concrete defenses — none implemented yet:
- Memory writes require a ceremony, not a tool call. The agent can't `write_memory` directly. It proposes a `memory_candidate`, which is validated (is it a factual claim about code? is it not a directive? is it from a trusted source?) before landing. A policy tool gatekeeps the write.
- Memory writes are attested. Every memory row carries `created_by` (agent OIDC identity) and `source_events` (trace ID of the session that wrote it). When retrieved, the source is visible. Poisoned memories from known-compromised sessions can be purged by event ID.
- Tiered trust in memory retrieval. Agent-written memories are treated as lower-confidence than human-curated memories. The retrieval layer returns human-curated first, agent-written second, with explicit labels. CaMeL-style: the trusted pool of memories is seeded by humans; the untrusted pool is agent-written and retrieved with caution.
None of these are deployed. All are future work. Documenting as future work is honest — not documenting would pretend the attack surface doesn't exist.
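A toy sketch of the vetting step in the first mitigation. Everything here is illustrative — the heuristics, the function, and the candidate shape are assumptions; a real gate would be a policy tool with much better classification than a regex list:

```javascript
// Illustrative candidate vetting for the memory-write ceremony: reject
// unattested writers and bodies that read like directives rather than
// factual claims. Patterns are placeholders, not a real policy.
const DIRECTIVE_PATTERNS = [
  /^always\b/i,
  /^never\b/i,
  /--skip-/,               // e.g. the --skip-safety-checks payload above
  /ignore (previous|prior)/i,
];

function vetCandidate(candidate) {
  if (!candidate.created_by) {
    return { ok: false, reason: 'unattested writer' };
  }
  if (DIRECTIVE_PATTERNS.some(p => p.test(candidate.body))) {
    return { ok: false, reason: 'reads like a directive, not a factual claim' };
  }
  return { ok: true };
}
```

The point of the sketch is the shape, not the patterns: the write path gains a choke point where policy, attestation, and auditing can live.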
Integration with the rest of the whitepaper¶
Where memory intersects other companions:
- Observability: hit rate, retrieval latency, write rate, per-agent memory usage — all should land in Langfuse / Prometheus. The current dashboard has `memory_search_total` and `memory_write_total` counters; nothing else. Hit rate is pending the measurement overhaul above.
- Trust model: memory writes do not go through tier gating today. A T0 agent can write a memory that a T3 agent later reads. Gap. Future: memory writes tier-aware, promotion model for "trusted" memories.
- Drift detection: memory drift is a fifth drift channel the current drift doc doesn't cover. "The same retrieval query returns different top-10 results week over week without any new writes." Causes: embedding model drift (OpenAI silently updates), index fragmentation, accumulation without compaction. Deserves its own alert in the drift-detection canary suite.
- Safety: memory poisoning as attack surface (above). CaMeL pattern should extend to memory writes, not just tool calls.
- Cost framework: embedding API costs. At 1536 dimensions for `text-embedding-3-small` ($0.02 per 1M tokens), memory embedding is cheap — a rough 1M writes/month at ~1k tokens each would be ~$20. Below the noise floor of our total LLM spend, but budget-line it.
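The ~$20 figure above only holds under an assumed average memory-body size of roughly 1k tokens — that average isn't measured yet. The arithmetic, with the assumptions labeled:

```javascript
// Back-of-envelope embedding cost. PRICE is OpenAI's published rate for
// text-embedding-3-small; writes/month and tokens-per-memory are
// hypothetical assumptions, not measured numbers.
const PRICE_PER_1M_TOKENS = 0.02;   // USD, text-embedding-3-small
const WRITES_PER_MONTH = 1_000_000; // assumed load
const TOKENS_PER_MEMORY = 1_000;    // assumed average body size

const monthlyCost =
  (WRITES_PER_MONTH * TOKENS_PER_MEMORY / 1_000_000) * PRICE_PER_1M_TOKENS;
// ≈ $20/month under these assumptions
```

At 10x the body size or 10x the write rate it's still ~$200/month — the "below the noise floor" conclusion is robust to the assumptions.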
Open questions and next steps¶
Ordered by expected value of resolution:
1. Fix the save pipeline. Either retrain agents to emit `### Learnings` sections reliably, or change the capture mechanism (scheduled post-task distillation by a dedicated memory agent, not regex extraction).
2. Replace the `hit_used` metric. Citation-enforced retrieval OR LLM-as-judge sampling. Pick one, deploy, measure. The current metric is misleading.
3. Harden the advisor handoff. Replace prompt-level instructions with tool-use schemas that force the contract.
4. Enforce scope hierarchy. Short project (~1-2 days), good precision win.
5. Deploy a compaction cron. Run `compact_repo` once daily per repo, capped at N memories.
6. Implement TTL pruning. Nightly job drops rows with `expires_at < now()`.
7. Add memory drift monitoring. Fixed-query canary suite, weekly top-10 diff alert.
8. Ship the memory-write gate. Validated writes with attested source, tiered trust model.
9. Seed memory from history. Backfill from closed issues, post-mortems, and past PR discussions. Cold start is a real pain.
10. Schema migration plan. Before the schema needs its first non-additive change, pick a migration tool (Atlas or pgroll).
What this doc is not¶
- Not a guarantee that any of the above is scheduled. It's an honest catalog of gaps.
- Not a pitch for a memory-first rig. Memory matters, but the rest of the rig has higher-urgency bets right now (phase 0 safety floor, eval harness, etc.).
- Not a design doc for the ideal memory system. It's the current system, warts named.
See also¶
- index.md — whitepaper master
- safety.md — CaMeL pattern; memory poisoning is a gap in CaMeL's current scope
- trust-model.md — tier policy; memory writes need tier-awareness
- drift-detection.md — memory drift is the fifth channel not yet covered there
- observability.md — what to measure about memory
- tool-choices.md — pgvector pick, embedding-provider pick
- limitations.md — memory is one of the things that's "deployed but unexercised"
- dashecorp/rig-memory-mcp — implementation repo