Product Documentation with AsciiDoc and Ralph
Product documentation is one of those things every team agrees matters and nobody wants to write. It’s tedious, it goes stale within weeks, and the people who know the system best are usually the worst at sitting down to document it. I’ve been through enough rounds of “we’ll update the docs after launch” to know that without a forcing function, docs don’t happen.
Over the past year, I’ve been combining two tools to attack this problem: AsciiDoc for the output format and an autonomous AI agent loop (Ralph) for the writing. The result is a workflow where I write a structured brief, an agent iterates through the documentation tasks, and I review the output. It’s not magic — but it cuts the effort by roughly 70% while producing documentation that actually matches the codebase.
Why AsciiDoc over Markdown
Markdown is fine for blog posts and READMEs. For product documentation — technical references, integration guides, operations manuals — it falls apart.
Includes. AsciiDoc lets you pull in external files or file fragments. Your API example code lives in an actual source file that gets tested. When the code changes, the documentation stays in sync because it’s including the real file, not a pasted copy.
[source,python]
----
include::example/create_payment.py[tags=basic_usage]
----

Conditional content. Different audiences see different things. With AsciiDoc’s ifdef directives, one source produces an internal operations guide and an external integration guide. No need to maintain two separate documents that inevitably diverge.
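As a sketch, an ops-only admonition gated behind an `internal` attribute (the attribute name is illustrative) looks like this in the source:

```asciidoc
This paragraph appears in every build of the guide.

ifdef::internal[]
NOTE: Ops only: rotate the signing key via the admin console,
not by editing the config file directly.
endif::[]
```

Building with `asciidoctor -a internal guide.adoc` defines the attribute and keeps the block; a plain `asciidoctor guide.adoc` drops it.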
Structured output. AsciiDoc compiles to PDF, HTML, and DocBook via Asciidoctor. You write once, publish everywhere. The PDF output is genuinely good — proper headers, footers, cross-references, and table of contents — without fighting a word processor.
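As a sketch, all three outputs come from the same source file (paths are illustrative; PDF requires the separate asciidoctor-pdf gem):

```shell
# HTML5 is the default backend
asciidoctor -o build/guide.html guide.adoc

# PDF via the asciidoctor-pdf gem
asciidoctor-pdf -o build/guide.pdf guide.adoc

# DocBook 5, e.g. as input to other publishing toolchains
asciidoctor -b docbook5 -o build/guide.xml guide.adoc
```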
Admonitions, tables, cross-references. These all work natively. No plugins, no extensions, no workarounds. For technical documentation, this isn’t optional — it’s essential.
Markdown could handle some of this with extensions, but then you’re locked into a specific Markdown flavour. AsciiDoc has a single specification and a mature toolchain.
The Ralph loop concept
Ralph is an autonomous agent pattern. The idea is simple: you write a PRD (Product Requirements Document) as a JSON file with prioritised user stories, and a loop script feeds those stories to an AI coding agent one at a time. Each iteration, the agent reads the PRD, picks the next incomplete story, implements it, verifies the result, commits, and updates a progress log. The loop repeats until all stories pass or a maximum iteration count is reached.
The key design constraint is that each iteration is stateless — the agent doesn’t carry memory from the previous iteration beyond what’s written in the PRD and the progress log. This makes the loop resilient. If an iteration fails, the next one picks up from the last successful commit.
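A minimal sketch of the loop’s control flow, with the actual agent invocation replaced by a stub (`run_agent`) so the shape is visible — a real script would call your coding agent CLI there:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Toy PRD: two incomplete stories (a real PRD carries titles and criteria).
cat > prd.json <<'EOF'
{"stories": [
  {"id": "DOC-001", "passes": false},
  {"id": "DOC-002", "passes": false}
]}
EOF

run_agent() {
  # Stub for the agent call. A real iteration reads prd.json, implements
  # the next incomplete story, verifies, commits, and updates the log.
  # Here we just mark the first incomplete story as passing.
  sed -i '0,/"passes": false/s//"passes": true/' prd.json
}

MAX_ITER=10
for i in $(seq 1 "$MAX_ITER"); do
  run_agent   # stateless: all state lives in prd.json and the git history
  if ! grep -q '"passes": false' prd.json; then
    echo "all stories pass after $i iterations"
    break
  fi
done
```

With the stub above, the loop exits after two iterations once both stories pass; the cap on `MAX_ITER` is the safety valve for runs that never converge.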
For documentation, this means: break the documentation into discrete sections, describe what each section should contain, and let the agent write them sequentially.
The end-to-end workflow
Here’s what a documentation run looks like in practice.
Step 1: Write the PRD. I define each documentation section as a user story. The acceptance criteria describe the content, the audience, and the constraints.
{
  "id": "DOC-003",
  "title": "Write API authentication guide",
  "description": "Document the OAuth2 flow for API consumers",
  "acceptanceCriteria": [
    "File exists at docs/modules/api/pages/authentication.adoc",
    "Covers client credentials and authorization code flows",
    "Includes working curl examples against the sandbox API",
    "References the error codes table from DOC-002",
    "AsciiDoc cross-references resolve without warnings"
  ],
  "priority": 3,
  "passes": false
}

Step 2: Launch the loop. Ralph iterates through the stories. For each documentation section, it reads the existing codebase for context, writes the AsciiDoc file, verifies that Asciidoctor compiles without errors, and commits.
Step 3: Review. I read the output, adjust tone, fix technical inaccuracies, and merge. The agent gets the structure and coverage right about 80% of the time. I spend my time on accuracy and voice, not on fighting blank-page syndrome.
The resulting AsciiDoc follows whatever structure the PRD specifies. Here’s a typical output fragment:
== Authentication
The API uses OAuth2 for all authenticated endpoints.
Two grant types are supported depending on your integration model.
=== Client credentials flow
Use this flow for server-to-server integrations where
no end-user is involved.
[source,bash]
----
curl -X POST https://sandbox.example.com/oauth/token \
  -d grant_type=client_credentials \
  -d client_id=$CLIENT_ID \
  -d client_secret=$CLIENT_SECRET
----
The response includes an `access_token` valid for 3600 seconds.
See <<error-codes>> for possible failure responses.
NOTE: Sandbox tokens are not valid against the production API.
Obtain separate credentials for each environment.

The agent generates proper AsciiDoc structure — sections, admonitions, source blocks with language hints, cross-references. It doesn’t produce Markdown-in-AsciiDoc-clothing because the PRD criteria specify AsciiDoc conventions.
What I’ve learned
PRD quality determines output quality. This is the same lesson from every AI coding tool, but it’s even more pronounced with documentation. If the acceptance criteria say “document the authentication flow”, you get generic filler. If they say “cover client credentials and authorization code flows with curl examples against sandbox”, you get usable content.
One section per story works. Trying to generate an entire document in one pass produces unfocused output. Breaking it into discrete sections — each with specific criteria — lets the agent focus and lets you review incrementally.
AsciiDoc’s strictness helps. Because Asciidoctor is a proper compiler, build failures catch structural problems immediately. Missing includes, broken cross-references, and malformed tables fail the build. The agent gets this feedback in its iteration loop and fixes problems before committing. With Markdown, these errors would silently produce broken output.
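For example, the loop’s verification step can promote warnings (broken cross-references, missing includes) to build failures before it commits — the path here is illustrative:

```shell
# Fail the iteration on any warning, not just fatal errors
asciidoctor --failure-level=WARN -o /dev/null \
  docs/modules/api/pages/authentication.adoc
```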
It’s not zero-effort. You still need to write the PRD, which means understanding the documentation structure and content requirements. And you still need to review everything — the agent occasionally hallucinates API details or uses outdated parameter names. The value is in eliminating the mechanical writing, not the thinking.
Keep the agent close to the source code. The best results come when the agent can read the codebase alongside writing the docs. It pulls actual class names, method signatures, and config values from the source. Disconnected documentation generation — where the agent works only from the PRD without codebase access — produces vague content that needs heavy editing.
When this works best
This workflow fits documentation that’s structured and technical: API references, integration guides, operations runbooks, architecture decision records. Content where the shape is predictable and the facts can be verified against code.
It’s less useful for conceptual documentation, tutorials, or anything that needs a strong narrative arc. Those benefit from a human writer who understands the reader’s journey. The agent can draft them, but the editing effort starts approaching what you’d spend writing from scratch.
The sweet spot is mature systems that need comprehensive documentation but don’t have it. The codebase exists, the knowledge is in the team’s heads and in the code, and the bottleneck is the writing. Point an agent at it with a good PRD, and you’ll have a first draft by the end of the day.