
Rekall

AI memory that knows the difference between what you said and what it guessed.

Python · Claude Code · Obsidian · Knowledge Management · Knowledge Graph
389 notes · 1,362 wikilinks · confidence decay: −0.05 / 30 days

The problem

AI memory treats everything it learns about you as equally true, forever. Six months in, you have stale preferences, outdated decisions, and contradictory facts all sitting at the same weight. The AI doesn’t know which patterns you confirmed last week and which it inferred once three months ago. It doesn’t know what it doesn’t know.

What it does

A memory system for Claude Code that knows the difference between what you said and what it guessed. Hooks into every session to capture decisions, learnings, and preferences, then persists them as structured notes in an Obsidian vault. Patterns you reinforce grow stronger. Ones you don’t fade on their own. When two memories contradict each other, it flags the conflict instead of silently picking a side. Before searching the web, it checks what you already know.
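The reinforce-or-fade behavior described above can be sketched as evidence-weighted exponential decay: each confirmation stretches the effective half-life, so a pattern seen once fades in weeks while one confirmed ten times persists for months. This is a minimal illustration; the function name, parameters, and the linear evidence weighting are assumptions, not Rekall's actual API.

```python
from datetime import datetime

def decayed_confidence(confidence: float, evidence_count: int,
                       last_seen: datetime, now: datetime,
                       base_half_life_days: float = 30.0) -> float:
    """Illustrative evidence-weighted decay (not Rekall's real code).

    Confidence halves every `half_life` days; each piece of supporting
    evidence lengthens that half-life, so well-confirmed patterns decay
    much more slowly than one-off observations.
    """
    age_days = (now - last_seen).total_seconds() / 86400
    half_life = base_half_life_days * max(1, evidence_count)
    return confidence * 0.5 ** (age_days / half_life)
```

With this sketch, a memory at 0.7 confidence seen once drops to 0.35 after 30 days, while the same memory confirmed ten times barely moves in that window.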

What makes it non-trivial

  • Confidence scoring (0.3–0.9) separates what you stated from what was inferred. Observed patterns cap at 0.7. Only explicit preferences reach 0.9
  • Evidence-weighted exponential decay. A pattern seen once fades in weeks. One confirmed ten times persists for months. No manual cleanup
  • Contradiction detection during memory compilation. Conflicting memories get surfaced, not silently held
  • Two-phase session extraction: fast skeleton (no LLM cost), then background analysis that pulls out decisions, ideas, and observations
  • Vault researcher hook searches your existing knowledge before hitting the web
  • Seven hooks run in the background: blocking destructive commands, catching secret leaks, and linting vault structure
  • All files are local: grep-able, version-controlled, portable. Built on 389 notes and 1,362 wikilinks in a real, actively used vault
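The confidence tiers above (0.3 floor, 0.7 cap for observed patterns, 0.9 for explicit statements) amount to a provenance-based clamp. A minimal sketch, assuming a two-tier provenance scheme; the function and key names are hypothetical:

```python
def score_memory(provenance: str, raw_score: float) -> float:
    """Clamp a memory's confidence by how it was acquired (illustrative).

    Explicit statements may reach 0.9; inferred patterns cap at 0.7;
    nothing falls below the 0.3 floor.
    """
    caps = {
        "explicit": 0.9,  # the user stated it directly
        "observed": 0.7,  # inferred from session behavior
    }
    cap = caps.get(provenance, 0.7)  # unknown provenance treated as observed
    return max(0.3, min(raw_score, cap))
```

The point of the clamp is that no amount of repetition can promote a guess to the certainty reserved for things the user actually said.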
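Contradiction detection during compilation can be illustrated as pairing memories that assert different values for the same key and surfacing the pairs rather than discarding either side. This sketch uses simple key/value equality; the real compilation step is presumably richer, and all names here are assumptions:

```python
from collections import defaultdict

def find_contradictions(memories: list[dict]) -> list[tuple[dict, dict]]:
    """Illustrative conflict surfacing (not Rekall's actual logic).

    Groups memories by key and returns every pair within a group whose
    values disagree, so the conflict is flagged instead of silently
    resolved in favor of one side.
    """
    by_key: dict[str, list[dict]] = defaultdict(list)
    for memory in memories:
        by_key[memory["key"]].append(memory)

    conflicts = []
    for entries in by_key.values():
        for i, a in enumerate(entries):
            for b in entries[i + 1:]:
                if a["value"] != b["value"]:
                    conflicts.append((a, b))
    return conflicts
```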