Scope discipline reduces the surface Claude operates over. Smaller folders mean less risk of context rot (Module 1 pre-setup), fewer ambiguous file references, and a clearer audit trail when something goes wrong. It is the same logic that drives test-environment isolation in software, applied to where Claude looks.
Welcome to the deep end.
Module 1 was about thinking. Module 2 is about doing. Asking Chat is asking a colleague for an opinion. Asking Code is handing them the wheel. Same tool family, different stakes, different setup.
Seven exercises. By the end you will have an HTML dashboard that pulls market data and computes two-year returns, volatility, and drawdowns for a five-stock portfolio. You will know how to set up the project, plan the build, watch Claude work, read what it did, and recover when it goes sideways.
Read top to bottom. The recovery move is open by default — the rest open on demand. The project folder itself comes in Bead 1; pre-tool is the machine-wide setup that has to happen before any project, of any kind.
What Git is, and installing it
Install Git from git-scm.com/download/win. Accept the defaults during installation. You will not interact with Git directly — Code drives it for you, once linked.
The Git setup prompt
Once per project, you will tell Code to initialise Git inside the project folder. The prompt is the same every time:
Set up Git in this folder. Initialise the repository, configure
my name and email (use [your name] and [your email]), and make
the first commit.
Do not open Command Prompt to do this yourself. Let Code drive. It will ask permission to run shell commands; allow. If anything errors, screenshot it and ask Chat to translate.
Bead 1 walks through running this for the portfolio dashboard. This is also your first taste of how Code does work — Bead 4 covers permissioning properly.
Models in Code
Claude Code uses the same three families as Chat: Haiku, Sonnet, Opus. The Module 1 pre-setup table still applies. For first builds, use Opus. The work is harder; capability matters more than cost.
Thinking levels in Code
Claude Code differs from Chat on thinking. Chat has adaptive on/off; Code exposes four explicit effort levels: low, medium, high, max.
Set the level with /effort for an interactive slider, or /effort <level> to set directly (e.g. /effort high). /effort auto resets to the model default.
For one-off deeper reasoning on a single turn without changing your session level, drop the keyword ultrathink anywhere in your prompt.
Use high for planning (beads 2 and 3) and for the build (bead 4). Default thinking is fine for routine edits in bead 5. Reach for max or ultrathink only when the task genuinely warrants it.
Cost awareness and /usage
Code can burn tokens fast on long agentic runs. Each turn, each file read, each command run consumes the conversation context and your token budget.
Type /usage in Code to see how much of the conversation context you have used and roughly what you have spent.
Check /usage at the end of each bead. If a bead has consumed more than half the context window, compact (summarise the conversation and start a fresh chat with the summary).
The recovery move — use everywhere
Code does work. Chat thinks. If Code shows you something you don't understand — a stack trace, a diff, an unfamiliar library, a concept you want to push back on — don't try to debug it from inside Code. Paste the lines (or a screenshot) into Chat. Ask Chat to translate or explain. Take what you learned back to Code.
This is the single most useful skill in the module. It surfaces in Beads 2, 4, 5, and 6 — and it's the move most people forget when a build goes sideways.
The Claude Code docs
The official docs live at code.claude.com/docs. They are unusually good, in a category often populated by mediocre marketing. Bookmark them. When syntax shifts (effort, slash commands, thinking keywords), the docs are where you confirm current behaviour, not this module.
██████╗ ███████╗ █████╗ ██████╗ ██████╗ ██╗
██╔══██╗ ██╔════╝ ██╔══██╗ ██╔══██╗ ██╔═████╗ ███║
██████╔╝ █████╗ ███████║ ██║ ██║ ██║██╔██║ ╚██║
██╔══██╗ ██╔══╝ ██╔══██║ ██║ ██║ ████╔╝██║ ██║
██████╔╝ ███████╗ ██║ ██║ ██████╔╝ ╚██████╔╝ ██║
╚═════╝ ╚══════╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═╝
Set up the project
Each project gets its own folder. Think of it like a kitchen: the recipe, the ingredients, and the cooking all happen in one room.
Code reads only the folder you point it at. Pick wrong and Code either misses context it needs or touches files it should not. Pick right and the rest of the module is predictable.
Three moves: make the folder, point Code at it, link Git. Then verify Code sees what you expect.
Make the folder
Create a Projects folder under your Windows user directory if you do not already have one. Inside it, create a folder for this build:
C:\Users\<your username>\Projects\portfolio-dashboard
Avoid the desktop, Downloads, and cloud-synced folders (OneDrive, Dropbox). You want a stable, local, isolated folder Claude Code can read entirely. By the end of the module the folder will hold source files, a data file (your stock list), a CLAUDE.md (Bead 3), and the hidden .git\ directory. For now it is empty.
Resist the urge to mix this folder with unrelated work. Claude reads what is in the folder. Cluttered folders mean cluttered context.
Point Code at it
- Open Claude Code in the desktop app.
- Click "Select folder" at the bottom of the sidebar.
- Choose C:\Users\<your username>\Projects\portfolio-dashboard.
- The folder name appears at the top of the Code panel. That is now Code's working directory.
Code can read everything inside this folder, recursively, including subfolders and hidden files (like .git\). It cannot read anything outside it. Your Documents, Desktop, OneDrive, and other Projects folders are invisible.
Link Git
Run the Git setup prompt from pre-tool. The folder is empty and unlinked; this turns it into a tracked repository so Code can commit as it works.
Set up Git in this folder. Initialise the repository, configure
my name and email (use [your name] and [your email]), and make
the first commit.
Allow the shell command when prompted. Code will run git init, set your identity, and make an initial commit. From this point on, every change Code makes can be inspected as a diff and rolled back with one prompt — that is the safety net Bead 6 leans on.
Verify what Code sees
Type:
Show me what is in this folder. List every file and folder,
including hidden ones.
Code will list the folder contents. Confirm it matches what you expect: the .git\ directory from pre-tool, your README.md if you made one, and (for a fresh project) not much else.
If Code lists files you did not put there, or omits files you did put there, stop. Either you selected the wrong folder, or you have files in a place you did not realise. Resolve before continuing.
Why scope matters
Three reasons.
Predictability. Code's answers are functions of what it can read. If Code cannot see a file, it cannot use it. If Code can see a file, it might use it even when you did not mean it to.
Safety. Code can edit any file in the folder. A clean, isolated folder means fewer surprises about what just got changed.
Auditability. When something goes wrong (and it will), the smaller the scope, the faster the diagnosis. Pre-tool's Git is the second half of this: scope plus version history equals a clear paper trail.
Adding context later
If a task needs context outside the folder, do not move the folder or expand the scope. Instead:
- Copy the relevant file into the project folder (rename if it would clash).
- Or, paste the content directly into the chat, framed as context.
Either is better than expanding scope.
Checkpoint
Files
- .git\
- README.md
That's all — only the .git directory and a stub README.md. No stray files, no leftover projects, no node_modules. Clean slate, ready for the build.
You haven't asked me to write anything yet, so nothing has been changed on disk. Let me know what you'd like to do next.
✳↑ A simulation of the Code panel after the folder check. If you see anything other than .git\ and your README.md — stray PDFs, an old project, a node_modules folder — you selected the wrong folder. Stop and re-select before continuing.
Key Takeaway
Code reads the folder and only the folder. Pick it once, pick it tight, and the rest of the module sits on a predictable foundation.
Why this works
Code's output is a function of what it can read. A tight, isolated folder keeps the context small (less context rot), the edits contained (nothing outside the boundary can change), and the history legible: scope plus Git equals a clear paper trail. Same isolation logic as a test environment in software.
██████╗ ███████╗ █████╗ ██████╗ ██████╗ ██████╗
██╔══██╗ ██╔════╝ ██╔══██╗ ██╔══██╗ ██╔═████╗ ╚════██╗
██████╔╝ █████╗ ███████║ ██║ ██║ ██║██╔██║ █████╔╝
██╔══██╗ ██╔══╝ ██╔══██║ ██║ ██║ ████╔╝██║ ██╔═══╝
██████╔╝ ███████╗ ██║ ██║ ██████╔╝ ╚██████╔╝ ███████╗
╚═════╝ ╚══════╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚══════╝
Plan, before you build
Code's first move on a real task is not building. It is planning. Plan mode is the structured way: Claude proposes the steps it intends to take, you approve or edit, nothing changes on disk until you sign off. The cost of a bad plan caught here is minutes; the same plan caught at bead 5 costs hours.
Turn on plan mode
In Claude Code, switch to plan mode (current syntax may be /plan or a Shift+Tab toggle; check the docs at code.claude.com/docs). Plan mode means Claude proposes the steps but does not edit files until you approve.
Ask Claude to ask you
Before describing the project, ask Claude what it needs from you. Type:
I want to build a small HTML dashboard that pulls returns for a
five-stock portfolio and shows two-year returns, volatility, and
drawdowns. Before you write a plan, ask me the questions you
need answered to plan well. List them all at once. I will answer
in the next message.
Claude will produce a list. Expect questions like: which five stocks, what data source, what currency, what time period, what the dashboard should look like, libraries to avoid, where the output file lives.
Answer in one block. Be specific. The more spec you front-load, the less the plan has to guess.
Read the plan
Claude produces a numbered plan. Read it slowly. Look for:
- Steps that match what you asked for.
- Steps that introduce things you did not ask for (a database, an external account, a build tool). Push back.
- Steps that hand-wave ("then deploy"). Ask Claude to expand them.
If the plan is off, say so. Claude revises. Iterate until it is sane.
Sign off
Approve only what you understand. If a step is unfamiliar, ask Claude to explain. If the explanation is unfamiliar, switch to Chat (the recovery move from pre-tool).
When the plan reads cleanly, approve. Code moves out of plan mode and starts work.
Checkpoint
Plan
Plan mode • nothing written to disk until you approve.
- Create dashboard.html with the page shell and chart container.
- Fetch 2-year daily closes for AAPL, MSFT, NVDA, JPM, XOM via Yahoo Finance (no key needed).
- Compute returns, volatility (annualised stdev), max drawdown in vanilla JS.
- Render: a returns line, a stat strip, a drawdown sparkline. Chart.js via CDN.
- Add a manual refresh button. No autorefresh, no build tooling.
Three things I'm assuming — push back if any are wrong: (1) USD throughout, (2) one HTML file no bundler, (3) Yahoo's unofficial endpoint is acceptable for a personal dashboard.
Approve, edit, or ask me to revise.
✳↑ A clean plan: numbered steps, named files, surfaced assumptions, no surprise dependencies. If your plan introduces a database, an account signup, or a build tool you didn't ask for — push back before approving.
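Step 3 is the only real math in the plan, and the headline number is small enough to check by hand before you ever approve. A sketch of the idea (the function name is illustrative; the build's actual code may differ):

```javascript
// Two-year total return: last close over first close, minus one.
// (Hypothetical helper -- not necessarily what Code will write.)
function totalReturn(closes) {
  return closes[closes.length - 1] / closes[0] - 1;
}

// A stock that goes from 100 to 150 over the window returned 50%.
console.log(totalReturn([100, 120, 90, 150])); // 0.5
```

If the dashboard later shows a two-year return you can't roughly reproduce this way, that is a bead 6 finding, not a rounding quirk.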
Key Takeaway
Plan mode is the cheapest place to be wrong. Ask Claude to ask you questions before it writes the plan. Approve only what you understand.
Why this works
Self-Refine (Madaan et al, 2303.17651) shows that having the model critique and revise its own output improves quality. Plan mode is structured self-critique with you in the loop. Asking the model to surface its own questions before producing the plan (Anthropic prompt engineering best practices) closes the gap between what you said and what you meant. Both moves are cheap. The alternative is expensive.
██████╗ ███████╗ █████╗ ██████╗ ██████╗ ██████╗
██╔══██╗ ██╔════╝ ██╔══██╗ ██╔══██╗ ██╔═████╗ ╚════██╗
██████╔╝ █████╗ ███████║ ██║ ██║ ██║██╔██║ █████╔╝
██╔══██╗ ██╔══╝ ██╔══██║ ██║ ██║ ████╔╝██║ ╚═══██╗
██████╔╝ ███████╗ ██║ ██║ ██████╔╝ ╚██████╔╝ ██████╔╝
╚═════╝ ╚══════╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═════╝
Lock the project: review the plan, write CLAUDE.md
The plan from bead 2 lives in the chat. When the chat ends, it goes with it. The decisions inside the plan (what stack, what to avoid, what conventions) need to survive across sessions. That is what CLAUDE.md is for.
Re-review the plan with fresh eyes
Before you write CLAUDE.md, read the plan one more time. Step back. Ask:
- Are there decisions in the plan you accepted because Claude proposed them, not because you preferred them?
- Are there conventions you want enforced?
- Are there guardrails you want set?
Mark these. They are what goes in CLAUDE.md.
Have Code write CLAUDE.md
In the project folder, you want a file called CLAUDE.md. Claude Code reads it automatically at the start of every session. Whatever is in there acts as standing instructions for the project.
Ask Code to draft it from the plan you agreed:
Draft a CLAUDE.md for this project based on our plan. Save it to
the root of this folder.
Read what Code wrote. Edit anything that does not match your intent. The file is yours; Code drafted it for you.
Verify Code reads it
Type:
Read CLAUDE.md and summarise the standing rules for this project.
Code will read the file and recap. Confirm the recap matches what you wrote. If anything is off, edit CLAUDE.md and ask again.
Checkpoint
Wrote CLAUDE.md · 38 lines
# portfolio-dashboard
A personal HTML dashboard for a five-stock portfolio.
Two-year returns, volatility, max drawdown.
## Stack
- Single dashboard.html. No bundler, no framework.
- Chart.js via CDN. Vanilla JS for math.
- Yahoo Finance unofficial endpoint for daily closes.
## Conventions
- USD throughout. Annualised stdev for volatility.
- Surface fetch errors. Never swallow with try/catch.
- Don't add dependencies without asking first.
- Edit only files named in the plan.
## Tickers
AAPL, MSFT, NVDA, JPM, XOM.
Saved to project root. Read it, edit anything that doesn't match your intent, then run the verify prompt.
✳↑ CLAUDE.md appears in the file tree as NEW. The body translates the plan into standing rules — stack, conventions, guardrails. The verify prompt is queued in the composer; send it and check Code's recap matches what you wrote.
Key Takeaway
The plan dies when the chat ends. CLAUDE.md does not. Have Code write it from the plan, edit anything that does not match your intent, and verify Code reads it. From now on, every session starts with the rules you set.
Why this works
Persistent project instructions reduce the context you have to re-supply each session (Anthropic on prompt engineering best practices; Claude Code best practices on CLAUDE.md). Same logic as Module 1's standing instructions, scoped to the project rather than to all conversations.
██████╗ ███████╗ █████╗ ██████╗ ██████╗ ██╗ ██╗
██╔══██╗ ██╔════╝ ██╔══██╗ ██╔══██╗ ██╔═████╗ ██║ ██║
██████╔╝ █████╗ ███████║ ██║ ██║ ██║██╔██║ ███████║
██╔══██╗ ██╔══╝ ██╔══██║ ██║ ██║ ████╔╝██║ ╚════██║
██████╔╝ ███████╗ ██║ ██║ ██████╔╝ ╚██████╔╝ ██║
╚═════╝ ╚══════╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═╝
Build, permission, watch the bill
Plan mode ends. Code starts working through the plan. Each turn it edits a file, runs a command, or asks a question. Your job is to watch and to keep an eye on what it is costing you.
Set permissions to auto
Set Code to auto-allow at the start of the session. (In current versions this is the "always allow" or "bypass" setting; check the docs at code.claude.com/docs.)
This is the right default for a first-build project in your own folder, for two reasons.
Click fatigue is real. If Code asks for permission ten times, you stop reading the prompts and click yes anyway. Per-action approval becomes security theatre after a few prompts.
The protection is structural, not procedural. Your safety net is the folder boundary (bead 1, scope) and Git (pre-tool, version history). Not the permission popup. If Code does something wrong, you revert; you do not catch it at the prompt.
For agentic work that touches shared infrastructure (a production database, a real codebase), per-action permissions matter much more. You are not in that mode.
Watch the bill with /usage
Type /usage to see your context usage and rough token spend.
Check it once during the build and once at the end. Two numbers to watch:
- Context. Above 50%, the conversation is long enough that context rot starts to matter. Bead 6 covers compacting and starting fresh.
- Spend. Use the number to calibrate your sense of cost. The first build will surprise you in one direction or the other.
What a working session looks like
Code's main pane fills with a stream of turns. Most look like this:
- An action line. "Wrote dashboard.html · 184 lines" or "Ran curl … · 200 OK · 412 KB". One line per thing Code did.
- A short reasoning sentence. "Now wiring the volatility calculation into the stat strip."
- Occasional questions back to you. "Yahoo returned an empty array for XOM on weekends. Fill forward, drop those rows, or fail loud?"
What a healthy turn looks like: one file touched, one purpose, one sentence of intent. What an unhealthy turn looks like: a wall of file edits with no narration, or narration that doesn't match the diff (Code says it added error handling; the diff doesn't show it). Either pattern is a sign to interrupt.
When to step in (and how)
Code is working. Three reasons to interrupt the flow:
- Code asks you a question. Answer in one block, in the same vocabulary as the question. Don't expand scope while answering.
- Code claims something you cannot verify. "Tests pass." With what tests? "I added the calculation." Where? Push back: show me.
- Code drifts from the plan. A new file, an unfamiliar dependency, a refactor that wasn't asked for. Stop the turn. Ask why.
To stop a turn: hit Esc (or the cancel button) — Code halts at the next checkpoint. Nothing on disk is half-written, but the session keeps the partial state in context. You can correct course and continue without losing the run.
Otherwise, let it work. Bead 5 is where you read what Code did in detail.
Key Takeaway
Set permissions to auto. Watch context and cost with /usage. A healthy turn does one thing, narrates it in one sentence, and the diff matches the narration. Step in only when the answer requires you. The folder and Git are your safety net, not the permission prompt.
Checkpoint
↑ A healthy build session. One action per turn, one sentence of intent, diff sizes visible. The session pill on the left tracks context (34%) and spend ($0.84). Turn 12 is a real question — answer it in the composer and Code resumes. If you ever see a turn with no narration, or narration that doesn't match the file changes — interrupt.
Why this works
Permission systems trade safety for flow (Claude Code best practices). For a self-contained project folder under Git, the structural boundaries (scope and history) matter more than per-action approval, which click fatigue degrades to noise within a few prompts.
██████╗ ███████╗ █████╗ ██████╗ ██████╗ ███████╗
██╔══██╗ ██╔════╝ ██╔══██╗ ██╔══██╗ ██╔═████╗ ██╔════╝
██████╔╝ █████╗ ███████║ ██║ ██║ ██║██╔██║ ███████╗
██╔══██╗ ██╔══╝ ██╔══██║ ██║ ██║ ████╔╝██║ ╚════██║
██████╔╝ ███████╗ ██║ ██║ ██████╔╝ ╚██████╔╝ ███████║
╚═════╝ ╚══════╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚══════╝
Read what Code did
Code has built. Files have changed. Before you trust them, read them.
Reading code you did not write is intimidating if you are not a developer. The move is not to read every line. The move is to know what changed at a high level, and to ask Code to translate the parts that matter.
Show me the diff
Ask Code:
Show me the diff since the last commit.
Code prints the diff. A diff has three parts:
- The file path at the top of each block. Where the change happened.
- Lines starting with -. Removed lines.
- Lines starting with +. Added lines.
You do not need to understand every line. Skim for surprises:
- New files. Did you expect that file?
- Files outside your plan. Pause. Why did Code touch that?
- Big blocks of removed code. Pause. Did Code throw something away you wanted?
Have Code translate
Then ask:
Explain what you just changed in plain English. Pretend I am
not a developer. What did each change do, and what would break
if it were missing?
Code translates. Read the translation, not the code.
If something is still unclear, ask follow-ups. Code knows what it did and has the project context. There is no faster reader for your own diff.
Run the thing
The final test is the obvious one. Open the HTML in a browser. Click around. Do the numbers look plausible?
If the diff explanation made sense and the thing runs, the work is good. If either fails, you are in bead 6 territory.
Checkpoint
Diff since last commit
diff — dashboard.html
+ <script src="https://cdn.jsdelivr.net/npm/chart.js">
+ const TICKERS = ['AAPL','MSFT','NVDA','JPM','XOM'];
+ async function fetchCloses(t) { … }
+ function annualisedVol(returns) { … }
+ function maxDrawdown(prices) { … }
− <p>Hello world</p>
… (218 more lines added)
In plain English
- Loaded Chart.js from a CDN. Without this the page can't draw charts.
- Defined the five tickers as a constant. Edit this list to change the portfolio.
- Added fetchCloses() — fetches 2y of daily closes from Yahoo. If it fails, the error surfaces (no silent try/catch).
- Added annualisedVol() and maxDrawdown() — the math for the stat strip. If missing, those numbers wouldn't render.
- Removed the placeholder "Hello world" from the empty page shell.
↑ The diff up top, the translation below. Skim the diff for surprises (new files, big deletions, things outside the plan); read the translation for what each block does. Then open dashboard.html in a browser — the only check that matters is whether the thing actually runs.
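If you want a reference to check Code's math against, the drawdown function named in the diff is short enough to sketch yourself. This is a hypothetical reconstruction, not the code Code actually wrote, and dailyReturns is a helper name of my own:

```javascript
// Daily simple returns from a series of closing prices.
function dailyReturns(closes) {
  const out = [];
  for (let i = 1; i < closes.length; i++) {
    out.push(closes[i] / closes[i - 1] - 1);
  }
  return out;
}

// Max drawdown: the worst peak-to-trough fall, as a negative fraction.
function maxDrawdown(prices) {
  let peak = prices[0];
  let worst = 0;
  for (const p of prices) {
    if (p > peak) peak = p;               // track the running high
    worst = Math.min(worst, p / peak - 1); // fall from that high
  }
  return worst;
}

// Rises to 120, falls to 90, recovers: drawdown is 90/120 - 1 = -0.25.
console.log(maxDrawdown([100, 120, 110, 90, 115])); // -0.25
```

Paste a known price path through Code's version and compare. If the numbers diverge, ask Code to explain the difference rather than assuming either side is right.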
Key Takeaway
Skim the diff for surprises. Have Code translate the rest in plain English. Run the output. Three checks, all in Code, all under five minutes.
Why this works
Reading is the slow loop in agentic work and the one most readers skip. The reliable failure mode for non-coders is: model claims success, code looks reasonable, result does not actually run. Skimming the diff catches structural surprises; asking Code to translate catches the semantic gap; running the output catches both. Three cheap checks, layered.
██████╗ ███████╗ █████╗ ██████╗ ██████╗ ██████╗
██╔══██╗ ██╔════╝ ██╔══██╗ ██╔══██╗ ██╔═████╗ ██╔════╝
██████╔╝ █████╗ ███████║ ██║ ██║ ██║██╔██║ ███████╗
██╔══██╗ ██╔══╝ ██╔══██║ ██║ ██║ ████╔╝██║ ██╔═══██╗
██████╔╝ ███████╗ ██║ ██║ ██████╔╝ ╚██████╔╝ ╚██████╔╝
╚═════╝ ╚══════╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═════╝
Iterate, refine, recover
Bead 5 surfaced something wrong. Or close-but-not-right. This is normal. The first build is rarely the final build.
Three modes for what comes next: iterate, revert, or back out.
Iterate when the gap is small
If the change you want is small (a number is wrong, a column needs renaming, a function does too much), describe it and ask Code:
The volatility number on the dashboard reads 14% but my hand
calculation shows ~12%. The discrepancy is too large to be
rounding. Find the bug, propose a fix, and explain it before
applying.
The "explain it before applying" suffix is the move. It avoids Code silently editing something you cannot verify.
Then re-run bead 5 (skim diff, translate, run). The loop is: identify gap, ask Code to fix, verify.
Revert when an iteration makes things worse
Sometimes Code's "fix" makes it worse. When that happens, do not accept the diff and try to fix the fix. Revert.
Ask Code:
Revert the last commit. Show me what reverted.
Code uses Git to undo. Your folder is back to the previous good state. Now propose the fix differently.
Git is your safety net. Use it. The cost of reverting is zero; the cost of layering broken fixes on broken fixes is high.
Back out when the approach is wrong
Sometimes the issue is not a bug. The whole approach is off. Code is doing exactly what you asked for; what you asked for is what you do not want.
When this happens, stop iterating. Do not keep poking. Go back to plan mode (bead 2), redraft the plan, and rebuild. You may want to revert all the way to your initial commit to start clean.
Signs you are in this mode rather than the iteration mode:
- You cannot describe what is wrong without describing the architecture.
- You have iterated three or more times without convergence.
- The result is structurally different from what you imagined.
/clear when the session is too long
If /usage shows context above 70%, or the session has been running an hour, the model is starting to forget early decisions. Type /clear and start a fresh session. Code reads CLAUDE.md again and picks up with a clean context.
This is the move from pre-tool. Use it earlier than feels comfortable.
Checkpoint
What reverted
− const vol = stdev(returns) * 252; // the bad fix
+ const vol = stdev(returns) * Math.sqrt(252); // restored
You're back at 8b1c. Nothing else changed. Describe the fix differently and try again — or, if this is the third revert in a row, back out to plan mode.
✳↑ The bad commit struck through in the sidebar; HEAD now points at the previous good commit. The diff shows what undid — in this case, restoring Math.sqrt(252) for the annualisation. Cost of the revert: zero. Cost of layering broken fixes on broken fixes: a long afternoon.
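Why Math.sqrt(252) and not 252: variance grows linearly with time, so standard deviation grows with the square root of time, and a year has roughly 252 trading days. A sketch of the annualisation, with a sample-stdev helper of my own (the checkpoint's stdev is a name, not code I have seen):

```javascript
// Sample standard deviation of an array of daily returns.
function stdev(xs) {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance =
    xs.reduce((a, x) => a + (x - mean) ** 2, 0) / (xs.length - 1);
  return Math.sqrt(variance);
}

// Stdev scales with the square root of time: multiply daily stdev
// by sqrt(252), not by 252.
function annualisedVol(dailyReturns) {
  return stdev(dailyReturns) * Math.sqrt(252);
}
```

A daily stdev of about 1.26% annualises to roughly 20%. Multiplying by 252 instead would claim an absurd 317%, which is exactly the kind of implausible number the bead 5 "run the thing" check exists to catch.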
Key Takeaway
Three modes plus one reset. Iterate for small gaps. Revert for bad fixes. Back out for wrong approaches. /clear when the session is too long.
Why this works
Iteration without revert capability compounds errors (Hockenmaier on iteration discipline). Git plus deliberate revert keeps the cost of being wrong bounded. Knowing when to back out rather than keep iterating is the harder skill; the heuristic of "three rounds without convergence equals wrong approach" gives a cheap stopping rule.
██████╗ ███████╗ █████╗ ██████╗ ██████╗ ███████╗
██╔══██╗ ██╔════╝ ██╔══██╗ ██╔══██╗ ██╔═████╗ ╚════██║
██████╔╝ █████╗ ███████║ ██║ ██║ ██║██╔██║ ██╔╝
██╔══██╗ ██╔══╝ ██╔══██║ ██║ ██║ ████╔╝██║ ██╔╝
██████╔╝ ███████╗ ██║ ██║ ██████╔╝ ╚██████╔╝ ██║
╚═════╝ ╚══════╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═╝
Post-mortem
You have a working dashboard. Before you call it done, run the post-mortem.
The post-mortem has two halves. First, ask Code itself what shortcuts it took. Second, encode the lessons into CLAUDE.md so the next project starts on a higher floor.
Three questions for Code
Start a fresh session in Code (the /clear move from bead 6, applied here for bias-hedging). With CLAUDE.md still in scope, paste:
Three questions about the project we just built.
1. What worked? Where did the build go right?
2. Imagine a senior engineer is reviewing this code in a
pull request. What would they complain about? Be specific.
Name files and line numbers. Cover: silent failures, fake
success / hardcoded values, scope creep, edits to files
that weren't in the plan, missing edge cases.
3. How would you change CLAUDE.md so we don't repeat the
same complaints on the next project?
Read the answers carefully. The first question is calibration. The second is the work. The third is the leverage.
Question 2 uses a third-person framing on purpose. Asking “what shortcuts did you take?” nudges Code into self-defence; asking “what would a senior engineer complain about?” gets sharper, more honest critique. Same model, different mirror.
Code may want to flatter you on question one. Ignore the flattery. The real value is in two and three.
What to look for in question two
Common shortcuts agentic tools take, with cues:
- Silent failure. "The data fetch is wrapped in a try/catch that swallows errors." Means: if the API breaks, your dashboard shows zeros, not an error.
- Fake success. "The volatility calculation is hardcoded for testing and was not replaced with the real formula." Means: your numbers are dummies.
- Yak-shaving. "I added a build tool you did not ask for to handle a problem that was not actually present." Means: scope creep.
- Edits to the wrong file. "I added the chart code to dashboard.html but also touched index.html to import it; the second edit was unnecessary." Means: scope outside the plan.
If Code names any of these (or any of their cousins), that is a real finding. Push: "Show me where in the code, exactly."
Refine CLAUDE.md
Take Code's answer to question three and edit CLAUDE.md. Add the rules that would have caught the shortcuts. Examples:
- "Never wrap data fetches in silent try/catch. Surface errors."
- "Never hardcode placeholder values. Mark unfinished work with // TODO: real implementation and stop."
- "Never add a dependency or build tool without asking."
- "Edit only the files named in the plan. If you need to touch others, stop and confirm."
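The first rule is the one most worth seeing as code. A sketch of the banned pattern next to the required one; the function names and the row shape are illustrative, not from the actual build:

```javascript
// Banned: silently substitute an empty result when the data is missing.
function closesOrEmpty(rows) {
  try {
    return rows.map((r) => r.close);
  } catch {
    return []; // the dashboard renders zeros; the error is gone
  }
}

// Required: fail loud with a message the UI can show.
function closesOrThrow(rows, ticker) {
  if (!Array.isArray(rows) || rows.length === 0) {
    throw new Error(`No price data returned for ${ticker}`);
  }
  return rows.map((r) => r.close);
}
```

The two functions behave identically on good data. Only when the fetch breaks do they diverge, which is exactly why the silent version survives review: it looks fine until the day it matters.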
Commit the updated CLAUDE.md. The next project that uses this file starts on a higher floor than this one did.
Checkpoint
2. What would a senior engineer complain about?
Specific. Files and line numbers.
- Silent failure. dashboard.html:142 — the fetchCloses() retry loop catches network errors and returns [] instead of surfacing them. If Yahoo is down your dashboard renders zeros.
- Unverified assumption. dashboard.html:208 — I'm using close prices and treating them as adjusted (dividend/split-adjusted). For AAPL and MSFT this materially changes the 2y return.
- Edge case skipped. No handling for tickers with less than 2y of history. If you swap in a recent IPO, maxDrawdown() divides by a near-zero baseline.
3. CLAUDE.md additions
+ Surface fetch errors in the UI. No silent fallback to empty arrays.
+ Use adjusted close, not raw close, for any return calculation.
+ Validate ticker history length before drawdown math; reject < 1y.
✳↑ A useful post-mortem names files and line numbers for shortcuts and proposes concrete CLAUDE.md edits that would have caught them. If Code's answer to question 2 is vague (“I made some assumptions…”), push: “Which function? Which line?” Vagueness is its own finding.
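The history-length rule from the checkpoint can be a single guard function. A sketch, assuming roughly 252 trading days per year; the name and threshold are illustrative, not code from the build:

```javascript
const TRADING_DAYS_PER_YEAR = 252;

// Reject tickers with too little history before running drawdown math.
// The one-year floor mirrors the CLAUDE.md rule proposed above.
function validateHistory(ticker, closes, minYears = 1) {
  const needed = Math.floor(minYears * TRADING_DAYS_PER_YEAR);
  if (!Array.isArray(closes) || closes.length < needed) {
    throw new Error(
      `${ticker}: ${closes?.length ?? 0} closes, need at least ${needed}`
    );
  }
  return closes;
}
```

A guard like this turns the recent-IPO edge case from a silently wrong number into a loud, named error, which is the whole pattern this post-mortem is trying to bake in.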
Key Takeaway
Ask Code what shortcuts it took. Look for silent failure, fake success, yak-shaving, and out-of-scope edits. Bake the lessons into CLAUDE.md so the next project starts higher.
Why this works
Self-critique under structured prompting catches more failures than free-form review (Self-Refine, Madaan et al, 2303.17651). The fresh-session move (start a new chat for the post-mortem) reduces sequential-context bias from the build session (SycEval, 2502.08177). Updating CLAUDE.md is the loop close: each project's failures become the next project's standing rules.