The Context Window Problem Nobody Talks About
You open a new Claude Code session. You type a few sentences to re-orient it. It rebuilds context from your files and off you go. That works well for an hour. It even works well for a day. But somewhere around session 20 or 30, something quietly breaks.
The AI starts asking questions whose answers were settled weeks ago. It proposes an approach you already tried and abandoned. It rebuilds a utility you wrote in session 4. You find yourself spending the first ten minutes of every conversation re-establishing ground truth you never intended to lose.
This is the context window problem, and it's more insidious than it looks. The issue isn't that Claude Code forgets between sessions — you half-expect that. The issue is that it also forgets within a long session, as older messages get compressed into summaries to make room for new ones. Project history evaporates silently. The AI hasn't crashed — it's just operating on a progressively thinner slice of what you built together.
Context compression is the equivalent of a developer losing their entire commit history mid-sprint. The code is still there. The reasoning behind it isn't.
The answer isn't a bigger context window. It's a better external memory system — one that the AI reads before it does anything, every single session, without you having to ask.
What Claude Code's Markdown Files Actually Do
Claude Code has four distinct roles for Markdown files, and understanding all four is the foundation for everything that follows. Most developers only know about one of them.
CLAUDE.md — Persistent Instructions
This is the most important file. Claude reads it automatically at the start of every session — think of it as a letter you write to Claude that it re-reads every time you open a project. There are three locations, all loaded additively:
- ~/.claude/CLAUDE.md — global, applies to every project on your machine
- <project-root>/CLAUDE.md — applies to this repo only
- <subdirectory>/CLAUDE.md — scoped, applies only when working in that folder
What goes in it: anything you want Claude to always know or always do. Tech stack, coding conventions, deploy rules, tool preferences, things it should never do. You can also use the @path/to/file import syntax to reference other files, keeping large projects from accumulating one giant CLAUDE.md.
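As a sketch, a minimal project-level CLAUDE.md might look like this (the project details, conventions, and imported file are all hypothetical):

```markdown
# CLAUDE.md

## Tech Stack
- TypeScript, Node 20, PostgreSQL
- Tests: vitest; run with `npm test`

## Conventions
- Named exports only; no default exports
- Never commit directly to main; always branch and open a PR

## Imports
@docs/architecture.md
```

Every session in this repo starts with these rules already loaded, with the imported architecture doc pulled in alongside them.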
.claude/commands/<name>.md — Custom Slash Commands
These create your own /commands. The filename becomes the command name — .claude/commands/deploy.md becomes /deploy. The markdown content is the prompt that runs when you invoke it. Use $ARGUMENTS as a placeholder for whatever you type after the command name.
Commands are for repetitive workflows you don't want to re-explain every time: git commit formatting, PR reviews, running tests with specific patterns. They execute immediately; they don't persist behavioral context.
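For example, a hypothetical .claude/commands/review.md (filename and prompt are illustrative, not a built-in) would define a /review command:

```markdown
Review the changes on the current branch against main.

Focus on: $ARGUMENTS

For each issue found, report the file, the line, the problem,
and a suggested fix. Do not modify any files.
```

Typing /review security, error handling substitutes that text for $ARGUMENTS and runs the prompt immediately.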
.claude/skills/<name>/SKILL.md — Skills
Skills are a convention layered on top of Claude Code. They load via the Skill tool and inject a detailed prompt into the session when invoked — packaging expert domain knowledge that would be too large to put in CLAUDE.md. Skills can also be triggered automatically when their description matches keywords in the conversation.
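A SKILL.md follows a simple shape: YAML frontmatter with a name and a description (the description is what enables automatic triggering), followed by the instructions to inject. The skill below is a hypothetical example:

```markdown
---
name: sql-migrations
description: Use when writing or reviewing database schema migrations
---

When creating a migration:
1. Never drop a column in the same release that stops writing to it.
2. Every migration must have a tested rollback path.
3. Batch data backfills (e.g. 1,000 rows at a time) to avoid lock contention.
```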
Memory Files — Claude's Own Cross-Session Notes
~/.claude/projects/<project-id>/MEMORY.md is written and maintained by Claude itself — not by you. When Claude learns something stable about your project or preferences, it saves it here and reads it back next session. You can also tell Claude to "remember this" explicitly.
- CLAUDE.md → You instruct Claude (permanent, per-project or global)
- commands/*.md → You define reusable slash commands
- skills/*/SKILL.md → You package expert behavior patterns
- memory/MEMORY.md → Claude writes notes to itself
The Gap: Instruction Files vs. State Files
Here's the critical distinction that most Claude Code users miss: all four of these files tell Claude how to work. None of them track what has been done and what comes next.
CLAUDE.md is an instruction file. It's excellent for conventions, constraints, and configuration. It is not designed to answer: What features are complete? Which ones are in progress? What's the next priority? What approach did we try last week and why did we abandon it?
This is the gap. And it's the same gap that broke software teams before they adopted ticketing systems — tribal knowledge living in people's heads instead of a written backlog. When someone leaves (or in this case, when context gets compressed), the knowledge disappears with them.
The MASTERPLAN system fills exactly this gap.
Enter the Masterplan System
The system is built on one design principle: context windows get summarized, but files don't. Anything written to disk survives session compression. So the solution is to write project state to disk in a format the AI can read instantly and act on reliably.
It uses two layers:
- MASTERPLAN.md — the dashboard. One table, all work items, full project state. Readable in under a minute.
- plans/*.md — the specs. One file per feature. Full detail: what to build, how to build it, what dependencies it has, how to verify it's done.
The MASTERPLAN.md table is hierarchical: an ↳ arrow prefix marks each child row — features under an epic, sub-features under a feature — and status emojis propagate upward from children to parents. A parent is ✅ Complete only when every child is ✅ Complete, and 🔲 Planned only when every child is 🔲 Planned; any mix means the parent is 🚧 In Progress. The full status of a 40-feature project is visible at a glance.
Here's what a real MASTERPLAN.md table looks like:
My Project – Masterplan
Plan Status
| Plan | Document | Impact | Effort | Status |
|---|---|---|---|---|
| Epic: Auth Platform | — | High | High | 🚧 In Progress |
| ↳ OAuth2 Integration | plans/feature-01-oauth.md | High | Med | ✅ Complete |
| ↳ MFA Enforcement | plans/feature-02-mfa.md | High | Med | 🚧 In Progress |
| ↳ TOTP Support | plans/feature-02a-totp.md | Med | Low | ✅ Complete |
| ↳ SMS Fallback | plans/feature-02b-sms.md | Med | Low | 🔲 Planned |
| ↳ Session Management | plans/feature-03-session.md | Med | Low | 🔲 Planned |
| Epic: API Layer | — | Med | Med | 🔲 Planned |
| ↳ Rate Limiting | plans/feature-04-rate.md | Med | Low | 🔲 Planned |
Each plan file follows a consistent structure: Status, Summary, Dependencies, Approach, Scope, Steps, Verification, and Human Steps. That last section is where the system becomes genuinely powerful — but we'll get to that.
It's an Agile Board for AI Work
The structural analogy to Agile is deliberate and exact. If you've worked with Jira, Linear, or any sprint board, this maps cleanly:
| Masterplan Level | Agile Equivalent | Has Plan File? | Status Source |
|---|---|---|---|
| Epic — major area of work | Epic / Program Increment | No (container row only) | Derived from children |
| Feature — deliverable | Story / Ticket | Yes — full spec | Set directly |
| Sub-feature — independent layer | Sub-task / Subtask | Yes — when truly parallel | Set directly |
The Impact and Effort columns in MASTERPLAN.md function as story points and priority. High impact / low effort items float to the top of the backlog. The status emoji system maps directly to a sprint board: ✅ Done, 🔲 To Do, 🚧 In Progress, ⏸ Blocked or Deferred.
But the most Agile-adjacent feature is Human Steps in each plan file. These are the acceptance criteria — actions a human must take that can't be automated. Before Claude starts implementation, it surfaces the "Before starting" table and waits for explicit confirmation. At defined checkpoints mid-implementation, it pauses again. After completion, it presents a final checklist before marking anything ✅ Complete.
Human Steps in a plan file are the AI equivalent of a developer waiting for PR review before merging. The gate is built into the spec, not left to convention.
A plan file with Human Steps looks like this:
Feature 02b – SMS Fallback
Status: 🔲 Planned
Summary
Adds SMS as a fallback MFA method when TOTP is unavailable. Integrates Twilio for message delivery. Required before MFA rollout to non-technical users.
Dependencies
- Requires: Feature 02a (TOTP Support) ✅
- Unlocks: Feature 02 (MFA Enforcement) completion
Human Steps — Before Starting
| # | What to do | Where | Done? |
|---|---|---|---|
| 1 | Add TWILIO_API_KEY to environment | .env / secrets manager | ☐ |
Human Steps — After Completion
| # | What to do | Where | Done? |
|---|---|---|---|
| 1 | Verify SMS delivery in staging | Twilio logs | ☐ |
| 2 | Update runbook with new recovery flow | Confluence | ☐ |
The CLAUDE.md Snippet That Changes Everything
You activate the system by adding one snippet to your project's CLAUDE.md. Once it's there, every session — in any IDE, on any machine — begins by reading MASTERPLAN.md before doing anything else. No manual setup, no "here's what we're building" briefing each time.
Project Planning
This project uses the masterplan system. Follow these rules in every session:
- Read MASTERPLAN.md first. Before doing any work, read MASTERPLAN.md to understand what is complete, in progress, planned, or disabled. Never re-implement something already marked ✅ Complete.
- Read the plan file before starting. Any time you are about to work on a feature or phase, read its plan file in plans/ first. The plan is the spec — do not deviate from it without flagging the discrepancy to the user.
- Surface Human Steps before touching anything. If the plan has a Human Steps section, present the "Before starting" table to the user and wait for explicit confirmation before writing a single line.
- Keep MASTERPLAN.md current. After any status change, update the MASTERPLAN.md row and propagate status upward through parent rows.
That four-rule snippet does more work than it looks. Rule 1 prevents re-implementation — a specific failure mode where Claude rebuilds something it helped you build three sessions ago because it has no record that it already exists. Rule 2 makes the plan file the binding spec rather than a suggestion. Rule 3 forces the AI to treat Human Steps as hard gates. Rule 4 keeps the dashboard current so the next session starts with accurate state.
What This Unlocks System-Wide
The cumulative effect of the system is session independence. Once bootstrapped, any Claude session — fresh conversation, new IDE window, different machine — picks up exactly where the last one left off. The project state lives in files, not in conversation history.
Practically, this means:
- No re-explaining. "Read MASTERPLAN.md" replaces the five-minute context dump that opens every session on a mature project.
- No re-implementation. ✅ Complete in MASTERPLAN.md is a hard signal — the AI won't rebuild it.
- Audit trail by default. Each plan file is both the spec written before implementation and the record of decisions made during it. You never retrofit documentation.
- Parallelism that actually works. When features are truly independent (sub-features), multiple agents can work on them in separate worktrees simultaneously, each reading only the plan file relevant to their unit.
- Domain agnosticism. The system doesn't care whether your project is software, infrastructure, a content calendar, or a marketing campaign. Plans contain whatever the domain demands — code, copy, config, process steps, links.
It also scales cleanly. A five-feature project and a fifty-feature project both use the same one-table dashboard. The ↳ hierarchy keeps parent-child relationships readable without clicking into individual files. The full project state is visible in under a minute regardless of size.
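The parallel-agent setup is plain git worktrees. A sketch, using a throwaway demo repo and branch names borrowed from the example plan files (your project would already have its own repo and branches):

```shell
# Demo repo setup — your project would already exist with these branches
git init -q demo && cd demo
git -c user.email=dev@example.com -c user.name=dev commit --allow-empty -q -m "init"
git branch feature/02a-totp
git branch feature/02b-sms

# One worktree per independent sub-feature; each agent gets its own checkout
git worktree add ../demo-totp feature/02a-totp
git worktree add ../demo-sms  feature/02b-sms
git worktree list
```

Each agent is then pointed at exactly one plan file: "Read plans/feature-02a-totp.md and implement it."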
Getting Started in 10 Minutes
Bootstrap the system once per project. There are exactly three artifacts to create:
- MASTERPLAN.md at the project root — use the table structure above, populate with whatever phases or features you already know about
- plans/ folder — add plan files as features are scoped, never pre-create empty ones
- CLAUDE.md snippet — add the four-rule block under a ## Project Planning heading (or create CLAUDE.md if it doesn't exist)
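The whole bootstrap is a couple of shell commands. The file contents below are a deliberately minimal sketch — populate the table with your real epics and features, and use the full four-rule snippet from above rather than the one-line stand-in here:

```shell
mkdir -p plans

# Dashboard skeleton with one hypothetical epic and feature
cat > MASTERPLAN.md <<'EOF'
# My Project – Masterplan

## Plan Status

| Plan | Document | Impact | Effort | Status |
|---|---|---|---|---|
| Epic: Auth Platform | — | High | High | 🔲 Planned |
| ↳ OAuth2 Integration | plans/feature-01-oauth.md | High | Med | 🔲 Planned |
EOF

# Append the planning rules (creates CLAUDE.md if it doesn't exist)
cat >> CLAUDE.md <<'EOF'

## Project Planning

This project uses the masterplan system. Read MASTERPLAN.md first in every session.
EOF
```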
Commit all three together so the system is live from the first commit. From that point, the workflow is: write the plan file before writing code — the same discipline as writing the ticket before the PR. The plan is the spec. Don't retrofit it as documentation after the fact.
When you start work on a feature, tell Claude to read its plan file. It will surface the Human Steps, confirm them with you, work through the Scope table as a checklist, and update MASTERPLAN.md when it's done. The next session starts with accurate state and no briefing required.
The context window is finite. The filesystem isn't. Every architecture decision, every abandoned approach, every acceptance criterion — write it to disk. Make the AI read it back. That's the whole system.
The gap between what Claude Code offers natively and what long-running projects actually need isn't a gap in the AI's capability — it's a gap in the infrastructure around it. MASTERPLAN.md closes that gap. Not with a plugin, not with a paid service, but with two markdown files and a four-line instruction.