The Claude Code Field Manual: 13 Battle-Tested Workflows From the Tool's Creator

February 23, 2026 · 11 min read · Sudipta Pathak
AI-Assisted Coding · claude-code · ai · productivity · workflows · cli · guide


Fork. Clone. Learn.

Most developers using Claude Code are operating at maybe 20% of its potential. They open a terminal, type a prompt, wait, approve, repeat. That's like buying a race car and never leaving first gear.

Boris Cherny built Claude Code. He also shared 13 workflows that separate casual users from developers who ship with it daily. I turned all 13 into hands-on exercises backed by a real codebase — a Flask API with intentional bugs, missing features, and code smells for Claude to find and fix.

This is the field manual. Fork the repo, work through the exercises, and you'll have these workflows in muscle memory by end of day.

Repository: claude-code-best-practices

git clone https://github.com/sudiptap/claude-code-best-practices.git
cd claude-code-best-practices
pip install -r requirements.txt
pytest src/tests/ -v  # 37 tests, all green

The repo includes a small Flask todo API with intentional bugs and code smells — division by zero errors, missing features, overly verbose functions. These aren't mistakes. They're teaching tools. Each exercise uses them to demonstrate a tip in practice.


Tip 1: Run Multiple Claude Code Instances in Parallel

Open three terminals. Start Claude Code in each. Give each a different task.

That's it. That's the tip. And it's the one most people miss.

Terminal 1 → "Add a priority filter to GET /todos"
Terminal 2 → "Fix DELETE to return 204 instead of 200"
Terminal 3 → "Write tests for the /todos/stats endpoint"

All three run simultaneously. Each reads the codebase, makes changes, runs tests. Three tasks complete in the time of one.

The key insight: Claude Code instances are independent processes. They won't conflict as long as they work on different files — and in practice, most parallel tasks do.

Try it: Open exercises/01-parallel-instances.md in the repo. It walks you through running three instances on real tasks against the sample app.


Tip 2: Use Web Claude as Your Architect

Claude Code is an executor — it reads files, writes code, runs commands. Claude on the web (claude.ai) is a thinker — it excels at open-ended design discussions.

Use both.

Web Claude (claude.ai):

I have a Flask todo API. I want to add a "tags" feature.
Should tags be a separate table or a comma-separated column?
What are the trade-offs?

Have a back-and-forth. Explore options. Settle on a design.

Terminal Claude Code:

Implement the following design for tags:
[paste the design you agreed on]

Claude Code reads the codebase, understands existing patterns, and implements exactly what you designed. Separation of thinking and doing.
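For concreteness, here's a minimal sketch of what the "separate table" option from that discussion might look like — the table and column names are my own, not read from the repo:

```python
import sqlite3

# Hypothetical schema for the "separate table" design: a tags table plus a
# todo_tags join table, so a todo can carry any number of tags.
SCHEMA = """
CREATE TABLE IF NOT EXISTS tags (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL
);
CREATE TABLE IF NOT EXISTS todo_tags (
    todo_id INTEGER NOT NULL,
    tag_id  INTEGER NOT NULL,
    PRIMARY KEY (todo_id, tag_id)
);
"""

def init_tag_tables(conn):
    """Create the tag tables if they don't exist yet."""
    conn.executescript(SCHEMA)
    conn.commit()
```

Pasting a concrete schema like this into the terminal prompt gives Claude Code far less room to guess than "add tags" would.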

Try it: exercises/02-web-claude.md


Tip 3: Use Opus with Extended Thinking for Hard Problems

Not every task needs the most powerful model. But when you're stuck — a subtle bug, a complex refactor, tangled dependencies — switch to Opus:

/model opus

Then give it the hard problem:

Look at src/utils.py. Think deeply about what's wrong.
What are the actual bugs vs. style issues?
What's the simplest refactoring path?
What edge cases do the tests miss?

Opus with extended thinking will reason through the code step by step, considering angles you might not have. You'll see its thinking process in real time.

Use Sonnet for routine tasks. Switch to Opus when you need depth.

Try it: exercises/03-opus-thinking.md


Tip 4: Create a Shared CLAUDE.md

This is the highest-leverage tip on the list.

CLAUDE.md is a file at your project root that Claude Code reads automatically at the start of every session. It tells Claude your conventions, architecture, and common pitfalls — like onboarding documentation for an AI pair programmer.

Here's what the one in the tutorial repo looks like:

# Project: Todo API

## Architecture
- src/app.py — Flask REST API
- src/models.py — SQLite-backed Todo model
- src/utils.py — Utility functions

## Conventions
- Use snake_case for functions and variables
- Run tests: pytest src/tests/ -v
- Conventional commits: feat:, fix:, docs:

## Known Issues (Intentional for Exercises)
- utils.py:calculate_completion_stats() — division by zero on empty list
- app.py:delete_todo() — returns 200 instead of 204

## Common Mistakes to Avoid
- Do NOT use sqlite3.connect() directly — always use models.get_db()
- Do NOT hardcode the database path
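For example, the convention that "Common Mistakes" section enforces could look like this — a hypothetical sketch, since the repo's actual models.py may differ:

```python
import sqlite3

# Hypothetical sketch of models.get_db(): one helper owns the connection
# details, so route code never calls sqlite3.connect() directly and never
# hardcodes the database path.
DB_PATH = "todos.db"

def get_db(path=DB_PATH):
    """Single place that knows how to open the database."""
    conn = sqlite3.connect(path)
    conn.row_factory = sqlite3.Row  # rows are addressable by column name
    return conn
```

With that rule written down in CLAUDE.md, Claude reaches for the helper instead of reinventing connection handling in every new endpoint.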

Without this, Claude rediscovers your project from scratch every session. With it, Claude hits the ground running.

The critical move: Commit CLAUDE.md to git. Now every developer on your team — and every Claude Code instance — gets the same context automatically.

Try it: exercises/04-shared-claude-md.md — read the existing CLAUDE.md, test that Claude uses it, then extend it with new conventions.


Tip 5: Put Claude in Your Code Reviews

Claude Code can review pull requests — either from the CLI or automatically via GitHub Actions.

From the CLI (review staged changes):

/review

This runs the slash command in .claude/commands/review.md, which tells Claude to check for bugs, style issues, missing tests, and security problems.

Automated via GitHub Actions:

The repo includes .github/workflows/claude-code-review.yml:

- name: Claude Code Review
  uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: |
      Review this pull request. Focus on:
      1. Bugs and logic errors
      2. Test coverage
      3. Adherence to project conventions (see CLAUDE.md)
      4. Security issues

Every PR gets an instant first-pass review before a human even looks at it.

Try it: exercises/05-claude-code-reviews.md


Tip 6: Plan First, Then Auto-Accept

This is a two-phase workflow:

Phase 1 — Plan (slow, careful):

/plan

I want to add a "due date" feature:
- New column in the database
- Updated create/update endpoints
- New GET /todos/overdue endpoint
- Tests for everything

Plan this step by step. Don't change anything yet.

Review Claude's plan. Ask questions. Iterate until you're confident.

Phase 2 — Execute (fast, autonomous):

Toggle auto-accept with Shift+Tab, or:

claude --dangerously-skip-permissions

Now Claude executes the approved plan at full speed — no stopping to ask permission for each file edit.

The result: you get the quality of careful planning with the speed of autonomous execution.

Try it: exercises/06-plan-then-execute.md


Tip 7: Create Custom Slash Commands

Slash commands are reusable prompts stored as markdown files in .claude/commands/. The repo includes four:

.claude/commands/
├── review.md        → /review (code review)
├── simplify.md      → /simplify (find complex code)
├── verify.md        → /verify (run all checks)
└── test-and-fix.md  → /test-and-fix (feedback loop)

Creating your own is trivial. Want a command that generates API docs?

Create .claude/commands/api-docs.md:

Review all route handlers in src/app.py and generate API documentation
for each endpoint. Include method, path, request body, response format,
and a curl example.

Now /api-docs generates formatted documentation every time.

Commands can also take arguments via $ARGUMENTS — for example, in .claude/commands/explain.md:

Explain the function named "$ARGUMENTS" in this codebase.
Find where it's defined, explain what it does, and show where it's called.

Usage: /explain validate_todo_title

Try it: exercises/07-slash-commands.md — examine the existing commands, create your own, and test them.


Tip 8: Build Custom Sub-Agents

Sub-agents are slash commands designed as focused, single-purpose workers. The main difference from regular commands: sub-agents have a clear bounded task and produce structured output.

The /simplify sub-agent in this repo:

You are a code simplification sub-agent. Your job is to find and simplify
overly complex code in this project.

Steps:
1. Read all Python files in src/ (not tests)
2. Evaluate complexity: nesting depth, line count, unnecessary wrappers
3. Show before/after for each function
4. Ask before making changes

Run /simplify against the tutorial app and watch it find the intentionally verbose functions in utils.py — the string concatenation loops, the unnecessary wrappers, the overblown input parsing.
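To make that concrete, here's the kind of before/after a simplification pass typically proposes — the verbose version mirrors the repo's intentional string-concatenation loops, but the function names are mine, not the repo's:

```python
# Before: manual string concatenation in a loop, the kind of code
# /simplify is designed to flag.
def format_titles_verbose(todos):
    result = ""
    for i, todo in enumerate(todos):
        result = result + todo["title"]
        if i < len(todos) - 1:
            result = result + ", "
    return result

# After: same behavior, one idiomatic line.
def format_titles(todos):
    return ", ".join(todo["title"] for todo in todos)
```

The structured output matters: seeing both versions side by side lets you approve or reject each rewrite on its merits.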

The power move: chain sub-agents.

Run /simplify to clean up the code, then /verify to make sure nothing
broke, then run the tests.

Try it: exercises/08-custom-subagents.md


Tip 9: Set Up Post-Tool-Use Hooks

Hooks are shell commands that run automatically after Claude uses a tool. Edit .claude/settings.json:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "if echo \"$CLAUDE_FILE_PATH\" | grep -q 'src/.*\\.py$'; then pytest src/tests/ -x -q 2>&1 | tail -5; fi"
      }
    ]
  }
}

Now every time Claude edits a Python file, the test suite runs automatically. Claude sees the results and self-corrects if something broke.

You can also add safety hooks:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "command": "if echo \"$CLAUDE_COMMAND\" | grep -qE 'rm -rf|git push.*force'; then echo 'BLOCKED' >&2; exit 1; fi"
      }
    ]
  }
}

Quality enforcement on autopilot.

Try it: exercises/09-post-tool-hooks.md


Tip 10: Configure Permissions Properly

Without permissions configured, Claude asks for approval on every command. With them, safe operations run automatically:

{
  "permissions": {
    "allow": [
      "Bash(pytest:*)",
      "Bash(python:*)",
      "Bash(git status:*)",
      "Bash(git diff:*)",
      "Bash(git log:*)",
      "Bash(curl:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)",
      "Bash(git push --force:*)"
    ]
  }
}

Tests run without asking. Git status runs without asking. But rm -rf and force pushes are blocked. The right balance of speed and safety.

Use /permissions in a session to view and modify these interactively.

Try it: exercises/10-permissions.md


Tip 11: Connect Everything with MCP

MCP (Model Context Protocol) lets Claude Code talk directly to external tools — databases, GitHub, Slack, Sentry, Jira.

Since the tutorial app uses SQLite, you can give Claude direct database access:

{
  "mcpServers": {
    "sqlite": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sqlite", "todos.db"]
    }
  }
}

Now Claude can query the database while reading the code:

Query the todos table for any rows with invalid priority values,
then fix the validation code if needed.

Data and code, in one conversation.
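The validation fix such a session produces might look something like this — a sketch, since the allowed priority values and function name are assumptions rather than something read from the repo:

```python
# Hypothetical shape of the validation fix: reject priorities outside the
# allowed set before they ever reach the database.
VALID_PRIORITIES = {"low", "medium", "high"}

def validate_priority(value):
    """Raise if the priority isn't one of the allowed values."""
    if value not in VALID_PRIORITIES:
        raise ValueError(f"invalid priority: {value!r}")
    return value
```

The point of MCP here is that Claude can first confirm which invalid values actually exist in the table, then write validation that matches reality.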

Try it: exercises/11-mcp-tools.md


Tip 12: Use Background Agents for Heavy Lifting

Some tasks take a while. Don't babysit them.

claude -p "Review every function in src/utils.py. Rate complexity 1-5.
Output as a markdown table." > code-review-report.md &

The & runs it in the background. Go do other work. Check code-review-report.md when it's done.

For CI/CD pipelines, use headless mode:

claude -p --dangerously-skip-permissions \
  "Run pytest. If any tests fail, fix them and re-run.
   Output a summary of what you fixed."

Fully autonomous, no terminal needed.

Try it: exercises/12-background-agents.md


Tip 13: Give Claude Feedback Loops

This is the tip that ties everything together.

Without a feedback loop, Claude writes code and stops. You run tests, report failures, ask for fixes. With a feedback loop, Claude does this cycle itself:

Write code → Run tests → See failures → Fix → Re-run → Repeat until green

The repo has a /test-and-fix command that implements this:

1. Run pytest src/tests/ -v
2. If all pass → done
3. If any fail → read the error, fix the code, go to step 1
4. Maximum 5 iterations

Try it on the intentional bug:

The calculate_completion_stats function crashes on empty lists (division by zero).
Fix it so empty lists return zeros. Update the test. Keep running until all tests pass.

Claude will:

  1. Fix the division by zero guard
  2. Update the test from pytest.raises(ZeroDivisionError) to assert correct values
  3. Run pytest
  4. If anything else broke, fix that too
  5. Repeat until 37/37 green
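The core of that fix is a simple guard. Here's a hypothetical reconstruction — the exact return shape is an assumption about the repo's API:

```python
# Hypothetical reconstruction of the fix: guard the empty-list case so the
# function returns zeros instead of dividing by zero.
def calculate_completion_stats(todos):
    total = len(todos)
    if total == 0:
        return {"total": 0, "completed": 0, "completion_rate": 0.0}
    completed = sum(1 for t in todos if t.get("completed"))
    return {"total": total, "completed": completed,
            "completion_rate": completed / total}
```

The feedback loop is what makes this safe: the guard only "counts" once the full suite is green again.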

Try it: exercises/13-feedback-loops.md


The Complete Exercise Map

| # | Tip | Time | Difficulty |
|---|-----|------|------------|
| 1 | Parallel instances | 10 min | Beginner |
| 2 | Web Claude + Code Claude | 15 min | Beginner |
| 3 | Opus extended thinking | 10 min | Beginner |
| 4 | Shared CLAUDE.md | 15 min | Beginner |
| 5 | Code reviews | 20 min | Intermediate |
| 6 | Plan then auto-accept | 15 min | Intermediate |
| 7 | Slash commands | 15 min | Intermediate |
| 8 | Custom sub-agents | 20 min | Intermediate |
| 9 | Post-tool hooks | 15 min | Intermediate |
| 10 | Permissions | 10 min | Beginner |
| 11 | MCP integration | 20 min | Advanced |
| 12 | Background agents | 15 min | Intermediate |
| 13 | Feedback loops | 15 min | Intermediate |

Total: ~3 hours to go from "I use Claude Code sometimes" to "Claude Code is how I ship."

Deploy Yourself

git clone https://github.com/sudiptap/claude-code-best-practices.git
cd claude-code-best-practices
pip install -r requirements.txt
pytest src/tests/ -v

Open exercises/01-parallel-instances.md. Start with Tip 1. Work through all 13. By the end, these workflows won't be tips you read about — they'll be reflexes.


This field manual was built with Claude Code. The repo, the exercises, the blog post — all of it. That's the point.