local-first · longitudinal · narrative synthesis
Most AI memory tools are built around retrieval — you ask, they fetch. Trace is built around a different question: do I feel witnessed?
Trace reads your AI conversation history and ongoing notes, then generates a weekly narrative dispatch — not a summary, not a dashboard, but a short-form piece of writing that reflects your life back to you across time. It notices the tensions you're living inside, the patterns you haven't named yet, the distance between what you said you'd do and what you actually did.
It runs entirely local. No cloud, no subscription, no model that's also the product. The dispatch arrives as a self-contained HTML file in your output folder.
The architecture has four layers, each independently evolvable.
The E2E skeleton is running. Feed it a Claude conversation export and a short personal context file (synthesis_user.txt), and it generates a dated dispatch. The pipeline covers all four layers — ingestion through HTML output — using a local Ollama server, so nothing leaves the machine.
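To make the shape of the synthesis step concrete, here is a minimal sketch of how a local pipeline like this might call Ollama. The function names, prompt wording, and model name are illustrative assumptions, not Trace's actual API; only the Ollama endpoint (`/api/generate`) is the standard one.

```python
import json
import urllib.request

def build_prompt(chunks, user_context):
    """Assemble conversation excerpts and personal context into one
    synthesis prompt. Purely illustrative of the ingestion -> synthesis
    handoff; Trace's real prompt is still being iterated on."""
    return (
        f"Personal context:\n{user_context}\n\n"
        "Recent conversation excerpts:\n" + "\n---\n".join(chunks) +
        "\n\nWrite a short weekly narrative dispatch."
    )

def synthesize(chunks, user_context, model="llama3"):
    """Send the prompt to a local Ollama server and return the dispatch
    text. Nothing leaves the machine: the request goes to localhost."""
    body = json.dumps({
        "model": model,
        "prompt": build_prompt(chunks, user_context),
        "stream": False,  # get one complete response instead of tokens
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The dispatch string returned here would then feed the rendering layer that writes the self-contained HTML file.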
What works: the full loop, the dispatch viewer, the sample synthetic dataset (no real personal data required to try it). What's rough: the synthesis prompt needs iteration, the chunking logic is naive, and the dispatch aesthetic will evolve. That's the point of Phase 1 — get real dispatches into real hands.
Every dispatch includes a Tension Map — a visualization of where the internal friction in your life currently lives. Four dimensions, tracked over time. The map animates through the weeks so you can watch the shape of your inner life actually move.
Below is a live demo using Quill, a fully synthetic persona: a systems thinker who spent 16 weeks navigating a leadership offer they ultimately declined — and then had to reckon with who they were after the decision.
Four axes: Career Drive (ambition / output energy) · Relational Presence (groundedness in relationships) · Identity Continuity (alignment between past and present self) · Wellbeing (embodied baseline). The tension score is a composite of divergence across these dimensions.
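One way to read "a composite of divergence" is spread between the axes in a given week plus week-over-week movement. The sketch below is a hypothetical formula under that assumption — Trace's actual scoring may differ.

```python
from statistics import pstdev

# The four Tension Map axes, each assumed scored in [0, 1] per week.
AXES = ("career_drive", "relational_presence", "identity_continuity", "wellbeing")

def tension_score(week, prev=None):
    """Hypothetical composite: how much the four axes disagree right now,
    plus how far they moved since last week. Illustrative only."""
    values = [week[a] for a in AXES]
    spread = pstdev(values)  # divergence between axes this week
    drift = 0.0
    if prev is not None:
        # mean absolute change per axis since the previous week
        drift = sum(abs(week[a] - prev[a]) for a in AXES) / len(AXES)
    return round(spread + drift, 3)
```

A perfectly balanced, unchanged week scores 0; a week where one axis spikes while another collapses scores high, which matches the intent of watching friction move over time.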
The real motivation is epistemically honest: most AI products optimize for the user's stated preferences. Trace is designed to notice the gap between stated and revealed — what you say you value vs. what your traces show you actually doing. That gap is where growth lives.
As a researcher who spent years building knowledge graph infrastructure at scale, I kept noticing that the hardest knowledge problem wasn't retrieval. It was prospective memory — remembering to act on what you already know, in the right context, at the right moment. Trace is the personal version of that problem.
It's also just a project I wanted to exist and couldn't find. So here we are.
Research notes, half-baked ideas. Probably overthought, definitely over-architected.