Builder's Notes  ·  April 2026

Trace:
a witness agent

local-first · longitudinal · narrative synthesis

Status: ✓ E2E skeleton running · ⟳ Phase 1 (dispatch loop) · local Ollama · qwen3:14b · iris-axon-lab / trace

01 What Trace is

Most AI memory tools are built around retrieval — you ask, they fetch. Trace is built around a different question: do I feel witnessed?

Trace reads your AI conversation history and ongoing notes, then generates a weekly narrative dispatch — not a summary, not a dashboard, but a short-form piece of writing that reflects your life back to you across time. It notices the tensions you're living inside, the patterns you haven't named yet, the distance between what you said you'd do and what you actually did.

It runs entirely local. No cloud, no subscription, no model that's also the product. The dispatch arrives as a self-contained HTML file in your output folder.

The core design constraint: Trace should feel like a thoughtful friend who has been paying attention — not a therapist, not an analyst, and definitely not a productivity app.

02 Architecture

Four layers, each independently evolvable:

L1 · INPUT · Ingestion
Claude export JSON · voice note transcripts · free-form text input · conversation classifier

L2 · PROCESS · Memory Processing
semantic chunking · episodic extraction · prospective flagging · contradiction detection

L3 · SYNTHESIZE · Narrative Engine
local LLM (Ollama) · warm biographer prompt · tension identification · anti-flattery constraint

L4 · OUTPUT · Dispatch Delivery
HTML dispatch file · Markdown companion · dispatch viewer UI · email (Phase 2)

LLM backend: Ollama · qwen3:14b
Language: Python 3.12+
Output format: HTML + Markdown
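To make the layer boundaries concrete, here is a minimal sketch of the data handoff between L2 and L3 — the names (`Chunk`, `Dispatch`, `bucket_by_week`) are illustrative, not the actual Trace internals:

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """One semantically coherent slice of input (an L2 output)."""
    source: str   # e.g. "claude-export" or "voice-note" (hypothetical tags)
    text: str
    week: int     # week index used to window material for a dispatch

@dataclass
class Dispatch:
    """One weekly narrative piece (an L4 output)."""
    week: int
    narrative: str
    tensions: list[str] = field(default_factory=list)

def bucket_by_week(chunks: list[Chunk]) -> dict[int, list[Chunk]]:
    """Group L2 chunks into the weekly windows the narrative engine reads."""
    buckets: dict[int, list[Chunk]] = {}
    for chunk in chunks:
        buckets.setdefault(chunk.week, []).append(chunk)
    return buckets
```

The point of the dataclass boundary is the "independently evolvable" claim above: the chunker can change how it slices without the synthesizer caring, as long as the `Chunk` contract holds.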

03 Current state

The E2E skeleton is running. Feed it a Claude conversation export and a short personal context file (synthesis_user.txt) and it generates a dated dispatch. The pipeline covers all four layers — ingestion through HTML output — using a local Ollama server so nothing leaves the machine.
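The synthesis step reduces to one call against the local Ollama HTTP API. A minimal sketch, assuming the default endpoint and a non-streaming request — the prompt wording here is illustrative, not the actual "warm biographer" prompt:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_prompt(user_context: str, week_notes: str) -> str:
    """Compose a synthesis prompt (wording is a placeholder, not Trace's)."""
    return (
        "You are a thoughtful friend who has been paying close attention.\n"
        "Do not flatter. Name tensions plainly.\n\n"
        f"About this person:\n{user_context}\n\n"
        f"This week's material:\n{week_notes}\n\n"
        "Write a short narrative dispatch."
    )

def generate_dispatch(prompt: str, model: str = "qwen3:14b") -> str:
    """One non-streaming generation call to the local Ollama server."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything goes through `localhost:11434`, the "nothing leaves the machine" property is enforced by the transport, not by policy.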

What works: the full loop, the dispatch viewer, the sample synthetic dataset (no real personal data required to try it). What's rough: the synthesis prompt needs iteration, the chunking logic is naive, and the dispatch aesthetic will evolve. That's the point of Phase 1 — get real dispatches into real hands.

The tension you've been circling for six weeks isn't really about the job offer. It's about whether the version of yourself that wanted the job is still the version you trust. You keep saying "I'll know when the time is right" — but Trace notices you've said that fourteen times, and the window keeps moving.
— SYNTHETIC SAMPLE DISPATCH · Quill · Week 9 of 16

04 The Tension Map

Every dispatch includes a Tension Map — a visualization of where the internal friction in your life currently lives. Four dimensions, tracked over time. The map animates through the weeks so you can watch the shape of a person's inner life actually move.

Below is a live demo using Quill, a fully synthetic persona: a systems thinker who spent 16 weeks navigating a leadership offer they ultimately declined — and then had to reckon with who they were after the decision.

[Interactive Tension Map · Quill · 16-week arc · synthetic demo, no real data. Radar chart with a week slider; starts at W1 (baseline, tension 15).]

Four axes: Career Drive (ambition / output energy) · Relational Presence (groundedness in relationships) · Identity Continuity (alignment between past and present self) · Wellbeing (embodied baseline). Tension score is a composite of divergence across these dimensions.
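The composite is underspecified above, so here is one plausible reading as a minimal sketch: each axis is a 0–1 score, and tension is the mean absolute divergence from a week-1 baseline, scaled to 0–100. The formula, axis keys, and scaling are all assumptions, not the actual Trace scoring:

```python
AXES = ["career_drive", "relational_presence", "identity_continuity", "wellbeing"]

def tension_score(week: dict[str, float], baseline: dict[str, float]) -> float:
    """Composite tension as mean absolute divergence from baseline, 0-100.

    Assumes each axis score lies in [0, 1]; the exact aggregation is a
    guess at what "composite of divergence" means in the notes.
    """
    divergence = sum(abs(week[a] - baseline[a]) for a in AXES) / len(AXES)
    return round(100 * divergence, 1)
```

Under this reading, a week identical to baseline scores 0, and a single axis swinging hard (say Career Drive spiking while the others hold) moves the composite in proportion — which matches the idea that imbalance, not any single value, is what the map tracks.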

The shape shift matters more than any individual score. Quill's radar collapses inward at peak tension — high Career Drive pulling everything else out of balance — then slowly widens and rounds again in the recovery arc. That shape is the story.

05 What's next

06 Why I built this

The real motivation is epistemically honest: most AI products optimize for the user's stated preferences. Trace is designed to notice the gap between stated and revealed — what you say you value vs. what your traces show you actually doing. That gap is where growth lives.

As a researcher who spent years building knowledge graph infrastructure at scale, I kept noticing that the hardest knowledge problem wasn't retrieval. It was prospective memory — remembering to act on what you already know, in the right context, at the right moment. Trace is the personal version of that problem.

It's also just a project I wanted to exist and couldn't find. So here we are.

Research notes, half-baked ideas. Probably overthought, definitely over-architected.