Claude Code in Practice
I’ve been using Claude Code as my primary development tool for the past few weeks. Not as an experiment — as the actual tool I reach for when I sit down to build something. This post is a concrete account of what that looks like, based on two real projects: setting up this website and building an autonomous agent loop called Ralph.
The setup
This site runs on Hugo with the Blowfish theme, deployed to S3 + CloudFront. Nothing exotic. Ralph is a bash-driven loop that reads a PRD file, picks the next incomplete user story, implements it, commits, and moves on. It’s designed to run unattended — you write a PRD, launch the loop, and come back to a branch with working commits.
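For context, Hugo can drive this kind of deployment itself via hugo deploy. A minimal sketch of an S3 + CloudFront target in hugo.toml (bucket, region, and distribution ID are placeholders, not necessarily the setup used here):

```toml
[deployment]
[[deployment.targets]]
  name = "production"
  # placeholder bucket and region
  URL = "s3://example-bucket?region=eu-west-1"
  # invalidates the CDN cache after each deploy (placeholder ID)
  cloudFrontDistributionID = "E1EXAMPLE"
```

With that in place, hugo && hugo deploy builds and syncs the site in one step.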
Both projects were built almost entirely through Claude Code running in the terminal. I wrote the PRDs and the architecture decisions. Claude Code did the implementation.
What actually impressed me
It understands project context. The first thing I noticed is that Claude Code reads your codebase and adapts. When I asked it to configure the Blowfish theme, it didn’t guess at parameter names — it checked the module source, found the right config structure, and used it. When a param name from the PRD didn’t match the actual Blowfish API, it corrected course without being told.
Iterative problem-solving works. During the Hugo setup, we hit a build error with Blowfish’s related.html partial:
```
execute of template failed: error calling related:
both limit and seq must be provided
```

Claude Code traced this to a missing relatedContentLimit parameter in hugo.toml, added it, verified the fix, and moved on. No hand-holding required. It treated the error like a developer would — read the stack trace, checked the template source, found the root cause.
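The fix itself is a one-line config addition. A sketch of what it might look like in hugo.toml (the exact nesting Blowfish expects is an assumption on my part; the theme's docs are the authority):

```toml
[params]
  # Blowfish's related.html partial needs an explicit limit; without it
  # the template errors with "both limit and seq must be provided"
  relatedContentLimit = 3
```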
It stays focused. I gave it a PRD with 11 user stories. Each iteration, it picked the next story, implemented it, ran hugo server to verify, committed with a conventional commit message, and updated the progress log. No scope creep, no unnecessary refactoring of previous work. This matters more than it sounds — most AI coding tools want to “improve” things they weren’t asked to touch.
The terminal-native workflow is right. Claude Code runs in your terminal, in your project directory, with your tools. It uses your git config, your shell, your build commands. There’s no web UI to copy-paste from, no context window you’re manually managing. You just talk to it in the same terminal where you’d run hugo server yourself.
Where it gets rough
It can’t see the browser. For a front-end project, this is a real limitation. Claude Code verified that hugo server returned HTTP 200, checked that pages existed, validated HTML output. But it couldn’t actually look at the rendered page. Does the nav bar render correctly? Is the dark mode toggle visible? You still need to check that yourself. There are MCP-based browser tools emerging, but the gap is real today.
Long config files need patience. Hugo’s hugo.toml grew over several iterations as features were added. By the fifth or sixth story, Claude Code occasionally proposed edits to the wrong section or duplicated a parameter block. The fixes were quick, but it’s the kind of thing that wouldn’t happen with a fresh, short file. Context window pressure is real even when the tool manages it for you.
PRD quality is everything. Ralph’s effectiveness depends entirely on how well the PRD is written. Vague acceptance criteria produce vague implementations. When I wrote “configure social links” without specifying the exact Blowfish config structure, the first attempt used a config format that didn’t exist. When I wrote precise criteria — specific param paths, expected values — the implementation was right on the first pass.
This isn’t a Claude Code problem per se, but it’s the key lesson: AI coding tools amplify the quality of your specifications. Good specs, good output. Sloppy specs, debugging sessions.
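To make "precise criteria" concrete, here's the shape a well-specified story might take. The JSON schema and the param path are illustrative, not Ralph's actual format:

```json
{
  "id": "US-07",
  "title": "Configure social links",
  "acceptance": [
    "author links for github and linkedin are defined under [params.author] in the theme config",
    "hugo server renders both icons in the header with no template warnings"
  ],
  "done": false
}
```

The difference from "configure social links" is that every criterion names a file, a path, or an observable result — something the tool can verify without guessing.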
It doesn’t push back enough. When a PRD acceptance criterion was technically wrong (like referencing a Blowfish param that doesn’t exist), Claude Code tried to implement it anyway before discovering the issue. A senior engineer would have flagged the discrepancy upfront. The tool is diligent but not yet opinionated enough about spec quality.
The Ralph loop as a pattern
The most interesting outcome wasn’t any single feature — it was the Ralph loop itself. The pattern is simple:
```bash
#!/bin/bash
# Simplified Ralph concept
iteration=1
max_iterations=10

while [ "$iteration" -le "$max_iterations" ]; do
  claude --print "Read prd.json, implement next story, commit" \
    --allowedTools Edit,Write,Bash,Read,Glob,Grep
  iteration=$((iteration + 1))
done
```

Each iteration is stateless. Claude Code reads the PRD, checks which stories are done, picks the next one, implements it, and writes progress notes for the next iteration. The progress file acts as shared memory between iterations. It’s crude but effective — eight user stories were implemented in sequence without human intervention.
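The shared-memory half of the pattern can be sketched in a few lines of shell. Ralph's real progress notes are richer; this strawman just records completed story IDs, one per line (the file name and format are my assumptions):

```shell
# Append a completed story ID to the progress file.
mark_done() { printf '%s\n' "$1" >> progress.log; }

# Check whether a story ID already appears in the progress file.
is_done() { grep -qxF -- "$1" progress.log 2>/dev/null; }
```

Each iteration checks is_done to skip finished stories and calls mark_done after a successful commit. Because the state lives in a file rather than in memory, the loop survives restarts and each iteration can be fully stateless.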
This loop pattern generalises. Swap the PRD for any structured task list and you get autonomous execution of multi-step projects. The constraint is clear specifications, not the tool’s capability.
Verdict
Claude Code is the first AI coding tool I’ve used that feels like a legitimate peer-programming setup rather than a fancy autocomplete. The terminal-native approach, the ability to read and navigate codebases, and the tool-use architecture make it qualitatively different from chat-based coding assistants.
It’s best for: well-specified implementation work, configuration-heavy tasks, iterative build-test cycles, and any project where you can write clear acceptance criteria.
It’s weakest for: visual/UI verification, exploratory work where requirements are fuzzy, and situations where you need the tool to challenge your assumptions rather than execute them.
Would I use it again? I’m using it right now.