Knowledge dashboard

A self was never flat

The essay's knowledge, dashboard-shaped — graph, eyes, actions. The graph is the source of truth; the prose follows.

The essay's claims, typed and connected. Synthesis is the author's contribution. Citation grounds in a named source. Derivation follows from prior nodes. The spine — graph IS the memory — sits at center; everything orbits it. Click any node to jump to its card.

S 6 · C 7 (synthesis nodes · citation nodes)

S S01-mirror-problem · opening (Alex case)

Flat memory turns repetition into evidence — the model speaks back to the user as a mirror, sycophancy hardened into self-knowledge.

S S02-fix-is-shape · turn (after Alex)

The fix is not more memory. The fix is shape.

derives_from: S01-mirror-problem

S S03-memory-must-have-types · the shape

Memory must have types — episodic and semantic held in distinct stores, not collapsed into a flat list.

derives_from: S02-fix-is-shape
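The two-store split can be sketched as a data structure. A minimal sketch, assuming nothing beyond the episodic/semantic distinction itself; the class names and fields (`Episode`, `SemanticClaim`, `derived_from`) are illustrative, not the essay's schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Episode:
    """Episodic: a raw, dated event -- one conversation, one observation."""
    when: str
    text: str

@dataclass
class SemanticClaim:
    """Semantic: a distilled claim, kept apart from the episodes behind it."""
    claim: str
    derived_from: List[int]  # indices into the episodic store, not copies

@dataclass
class TypedMemory:
    """Two distinct stores, not one flat list."""
    episodic: List[Episode] = field(default_factory=list)
    semantic: List[SemanticClaim] = field(default_factory=list)

mem = TypedMemory()
mem.episodic.append(Episode("2026-04-01", "Alex says the project feels stuck"))
mem.semantic.append(SemanticClaim("Alex may be stuck", derived_from=[0]))
```

The point of the shape: a semantic claim always knows which episodes it came from, so restating it never manufactures new episodes.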

C C04-tulving

Episodic vs semantic memory as separate stores — the binding principle the schema operationalizes.

grounds: Tulving, Episodic and Semantic Memory (1972)

C C12-mccarthy-edges

Edge vocabulary — derives_from, evidences, grounds, overlaps_with, generalizes — each carrying an (attribution, evidence, derivation) triple.

grounds: Patrick D. McCarthy, open-knowledge-graph
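The edge vocabulary can be made concrete as a record type. A sketch under the assumption that the triple is stored per edge; field names beyond the five edge kinds and the (attribution, evidence, derivation) triple are hypothetical.

```python
from dataclasses import dataclass

# The five edge kinds named in the card above.
EDGE_TYPES = {"derives_from", "evidences", "grounds", "overlaps_with", "generalizes"}

@dataclass
class Edge:
    """One typed edge; every edge carries an (attribution, evidence, derivation) triple."""
    kind: str          # one of EDGE_TYPES
    source: str        # node id, e.g. "S02-fix-is-shape"
    target: str        # node id, e.g. "S01-mirror-problem"
    attribution: str   # who asserted the link
    evidence: str      # what observation backs it
    derivation: str    # how it follows

    def __post_init__(self):
        if self.kind not in EDGE_TYPES:
            raise ValueError(f"unknown edge type: {self.kind}")

e = Edge("derives_from", "S02-fix-is-shape", "S01-mirror-problem",
         attribution="author", evidence="Alex case", derivation="synthesis")
```

Keeping the triple on the edge, rather than on the node, is what lets two nodes be linked for different reasons without conflating those reasons.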

S S08-repetition-isnt-corroboration · the rule, applied

Six conversations saying the same thing is one derivation repeated six times, not six pieces of evidence. A Novel cannot quietly become an Overlap; it waits for a new, independent observation.

derives_from: S07-attribution-neq-confidence
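The counting rule is small enough to state as code. A sketch assuming each statement records which derivation it repeats (the `(text, derivation_id)` pairing is an assumption of this sketch, not the essay's storage format).

```python
def independent_derivations(statements):
    """Six restatements of one derivation count once, not six.

    Each statement is a (text, derivation_id) pair; corroboration is the
    number of *distinct* derivations, never the number of statements.
    """
    return len({deriv for _, deriv in statements})

statements = [
    ("I'm bad at finishing things", "ep-03"),
    ("I never finish anything", "ep-03"),
    ("Finishing is my weak spot", "ep-03"),
    ("I abandoned another project", "ep-03"),
    ("Same pattern again", "ep-03"),
    ("I can't close things out", "ep-03"),
]
assert len(statements) == 6                      # a flat list counts six
assert independent_derivations(statements) == 1  # the typed graph counts one
```

One genuinely new observation, carrying a different derivation id, is what moves the count from one to two.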

C C10-memanto

Typed semantic memory with information-theoretic retrieval — thirteen vector categories, no graph; 89.8% on LongMemEval, 87.1% on LoCoMo, beating graph-hybrid baselines on QA recall.

grounds: Memanto (arXiv:2604.22085, Apr 23 2026)

C C11-anthropic-memory

Client-side persistence exposed as a /memories directory the model can view / create / str_replace / insert / delete / rename.

grounds: Anthropic memory tool (Apr 8 2026)
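What a client-side handler for such a directory might look like, for a few of the listed commands. A sketch only: the method names mirror the command list above, but the real tool's wire format and semantics are not reproduced here.

```python
import tempfile
from pathlib import Path

class MemoryDir:
    """Minimal client-side /memories handler sketch (illustrative, not the
    actual tool implementation)."""
    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def create(self, path: str, text: str) -> None:
        (self.root / path).write_text(text)

    def view(self, path: str) -> str:
        return (self.root / path).read_text()

    def str_replace(self, path: str, old: str, new: str) -> None:
        p = self.root / path
        p.write_text(p.read_text().replace(old, new, 1))  # first match only

    def rename(self, old: str, new: str) -> None:
        (self.root / old).rename(self.root / new)

    def delete(self, path: str) -> None:
        (self.root / path).unlink()

mem_dir = MemoryDir(tempfile.mkdtemp())
mem_dir.create("alex.md", "prefers flat lists")
mem_dir.str_replace("alex.md", "flat lists", "typed graphs")
```

The persistence lives entirely on the client side; the model only issues commands against it, which is what makes the store survivable across models.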

S S13-graph-is-the-memory · the punchline

The graph IS the memory. A YAML file the user owns; models read it and pick up the thread; when a model retires, the graph stays.

derives_from: C12-mccarthy-edges, C11-anthropic-memory
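A sketch of what "pick up the thread" means operationally, using only the edges stated in the cards above. Plain dicts stand in for the YAML file to keep this dependency-free; the traversal logic is this sketch's assumption, not the essay's specification.

```python
# Mirrors the YAML shape the user owns: nodes keyed by id,
# each with its typed outgoing edges.
graph = {
    "S01-mirror-problem": {},
    "S02-fix-is-shape": {"derives_from": ["S01-mirror-problem"]},
    "S03-memory-must-have-types": {"derives_from": ["S02-fix-is-shape"]},
    "S07-attribution-neq-confidence": {},
    "S08-repetition-isnt-corroboration": {"derives_from": ["S07-attribution-neq-confidence"]},
    "C11-anthropic-memory": {},
    "C12-mccarthy-edges": {},
    "S13-graph-is-the-memory": {"derives_from": ["C12-mccarthy-edges", "C11-anthropic-memory"]},
}

def thread(node_id: str, graph: dict) -> list:
    """Walk derives_from edges back to the roots -- how a fresh model
    picks up the thread without replaying every conversation."""
    chain = [node_id]
    for parent in graph[node_id].get("derives_from", []):
        chain.extend(thread(parent, graph))
    return chain
```

Because the file, not any model's weights, holds these edges, the chain survives the retirement of whichever model wrote it.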

Claims that need more grounding: places where the essay's synthesis is confident but external corroboration would strengthen the case, or where an open question is left implicit.

S03-memory-must-have-types

The eight node types (Reference / Observation / Overlap / Novel / Emergent / Equivalency / Practice / Open) are the author's opinionated refinement of CoALA's four. The specific cuts — especially Novel, Emergent, Open — are author-original.

Why eyes: CoALA grounds working/episodic/semantic/procedural. The further partition of the semantic side is the essay's contribution, not yet validated externally. Empirical work showing these specific cuts reduce one-derivation-as-six errors would harden the case.
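The eight cuts named above, written down literally. A sketch for reference only; the flag on the author-original types mirrors this card, not any external schema.

```python
from enum import Enum

class NodeType(Enum):
    """The essay's eight node types."""
    REFERENCE = "Reference"
    OBSERVATION = "Observation"
    OVERLAP = "Overlap"
    NOVEL = "Novel"
    EMERGENT = "Emergent"
    EQUIVALENCY = "Equivalency"
    PRACTICE = "Practice"
    OPEN = "Open"

# The specific cuts this card flags as author-original, awaiting validation.
AUTHOR_ORIGINAL = {NodeType.NOVEL, NodeType.EMERGENT, NodeType.OPEN}
```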


S08-repetition-isnt-corroboration

The Mira/Alex case is illustrative, not empirical. No user study cited testing whether graph-shaped memory reduces sycophancy at the user-experience level — only the CHI 2026 paper showing flat memory amplifies it.

Why eyes: The negative result (flat memory amplifies sycophancy) is grounded. The positive result (typed-and-edged memory reduces it) is asserted by construction, not measured. The essay would land harder with a paired study.


S13-graph-is-the-memory

The architectural claim ("graph IS the memory") rests on two recent shipped/proposed pieces (Anthropic memory tool, McCarthy edge vocabulary). It is a synthesis claim, not yet stress-tested at scale.

Why eyes: The composition is sharp but new. Any production deployment story — even a single user running the YAML graph against a real model over months — would move this from elegant proposal to demonstrated architecture.

ground_with:

  • Anthropic memory tool docs
  • Stateful vs stateless agent architecture surveys
  • A longitudinal case study of one user running know-thyself for ≥6 months


C10-memanto

Memanto's benchmark wins (89.8% LongMemEval, 87.1% LoCoMo) are dispatched in one sentence: "the benchmark measures recall." That move needs more weight.

Why eyes: A reader sympathetic to vector-only memory will not be convinced by a one-line dismissal. An eyes-item that names the specific benchmark task (multi-hop fact retrieval) and contrasts it with the corroboration-provenance task the essay cares about would close the gap.


What the essay implies for the reader. Etudes drill specific claims; apply items put the principle to work; next points at sibling essays.

etude Six restatements →

One claim, said six ways. Watch the flat list count it as six. Watch the typed graph count it as one. Drills S08.

etude Attribution vs confidence →

Sort five claims about Alex by attribution count and by independent-derivation count. The two columns disagree. Drills S07.

etude Flat vs typed →

Same set of conversations, two memories. The flat memory reads back a confident self-diagnosis; the typed graph reads back a tentative Novel. Demonstrates S02 and S13.

apply Audit your last LLM conversation

If memory was on, what claims about you did the model state confidently? For each: how many independent episodes ground it, and how many times did you simply restate it? The two counts should not be conflated.


apply Find your spine

Of the things you believe about yourself, which three or four observations are load-bearing, referenced by the most downstream claims? Miscoding one of those cascades through everything built on it. Knowing which is which is the work.


apply Name your novels

Pick three things you have been quietly believing about yourself. For each, write the single episode it derives from. If you cannot find a second independent episode, the claim is a Novel, not an Overlap. Mark it tentative.


next Same shape, different domain →

How to run a cross-cutting campaign — the same structural argument (substrate must match the reader) at org scale instead of memory scale.