
How This Book Was Built: Spec-Driven Development

🎯 The Challenge

Hackathon Goal: "Write a book using Docusaurus and deploy it to GitHub Pages using Spec-Kit Plus and Claude Code."

This textbook isn't just about robotics: it's a living demonstration of AI-powered, spec-driven content creation. Every chapter you read was systematically generated following the same reproducible workflow.


🔄 The Workflow

```mermaid
graph LR
    A["/sp.specify"] --> B[spec.md]
    B --> C["/sp.plan"]
    C --> D[plan.md + ADRs]
    D --> E["/sp.tasks"]
    E --> F[tasks.md]
    F --> G["/sp.implement"]
    G --> H[Content + PHRs]
    H --> I{Quality Check}
    I -->|Pass| J[Deploy]
    I -->|Fail| A

    style A fill:#4CAF50
    style C fill:#2196F3
    style E fill:#FF9800
    style G fill:#9C27B0
    style J fill:#4CAF50
```

Every chapter follows this four-step process:

1. Specification (/sp.specify)

Input: Natural language feature description
Output: spec.md with structured requirements

Example:

/sp.specify "Create Chapter 2: ROS 2 Fundamentals covering pub/sub, services, actions, and transforms"

What Happens:

  • Analyzes feature description
  • Generates user stories (e.g., US-P1: Pub/Sub Lab, US-P2: Services & Actions)
  • Defines functional requirements (FR-001 to FR-008)
  • Creates acceptance criteria for each requirement
  • Outputs structured specs/002-ros2-fundamentals/spec.md

Example Output (View Chapter 2 Spec):

## User Stories
- **US-P1**: As a student, I want hands-on pub/sub labs...
- **US-P2**: As a student, I want service/action examples...

## Functional Requirements
- **FR-001**: Provide learning objectives (5 measurable)
- **FR-002**: Explain ROS 2 graph architecture...
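
The structured ID convention above (US-*, FR-*) is what makes a spec machine-checkable. As an illustration only (this is not Spec-Kit Plus internals), a small Python sketch that pulls requirement IDs out of a spec.md-style document:

```python
import re

# Sample text in the spec.md format shown above (hypothetical excerpt).
SPEC_TEXT = """\
## User Stories
- **US-P1**: As a student, I want hands-on pub/sub labs...
- **US-P2**: As a student, I want service/action examples...

## Functional Requirements
- **FR-001**: Provide learning objectives (5 measurable)
- **FR-002**: Explain ROS 2 graph architecture...
"""

def extract_ids(spec_text: str) -> dict[str, list[str]]:
    """Group user-story (US-*) and functional-requirement (FR-*) IDs."""
    ids = re.findall(r"\*\*((?:US|FR)-[A-Z0-9]+)\*\*", spec_text)
    grouped: dict[str, list[str]] = {"US": [], "FR": []}
    for rid in ids:
        grouped[rid.split("-")[0]].append(rid)
    return grouped

print(extract_ids(SPEC_TEXT))
# {'US': ['US-P1', 'US-P2'], 'FR': ['FR-001', 'FR-002']}
```

A check like this could gate traceability, e.g. confirming every FR referenced in tasks.md exists in the spec.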

2. Planning (/sp.plan)

Input: spec.md
Output: plan.md + Architecture Decision Records (ADRs)

Example:

/sp.plan  # Runs in feature branch after spec is approved

What Happens:

  • Analyzes technical requirements
  • Makes architectural decisions (e.g., "Use Black formatter for Python")
  • Generates design artifacts (diagrams, data models)
  • Creates ADRs for significant decisions
  • Outputs specs/002-ros2-fundamentals/plan.md

ADR Suggestion (happens during planning):

📋 Architectural decision detected: ROS 2 Humble as primary distribution
Document reasoning and tradeoffs? Run `/sp.adr ros2-humble-selection`


Example Output (View Chapter 2 Plan):

## Architecture Decisions
1. Use Gazebo Classic 11+ for simulation (stability over features)
2. Black formatter with line length 100 (ROS 2 community standard)
3. Google-style docstrings (clarity for students)
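
Decision 2 above would typically be encoded directly in the project's pyproject.toml so the formatter enforces it automatically. A minimal fragment (the file location and target-version value are assumptions, not confirmed by the plan):

```toml
[tool.black]
line-length = 100          # ADR: ROS 2 community standard
target-version = ["py310"] # Ubuntu 22.04 ships Python 3.10
```

Encoding the decision in config rather than prose means the ADR and the toolchain can never silently drift apart.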

3. Task Generation (/sp.tasks)

Input: spec.md + plan.md
Output: tasks.md with dependency-ordered implementation tasks

Example:

/sp.tasks  # After plan is reviewed and approved

What Happens:

  • Breaks down plan into actionable tasks
  • Orders tasks by dependencies (Phase 1: Setup → Phase 2: Foundation → ...)
  • Marks parallelizable tasks with [P]
  • Includes file paths and acceptance criteria
  • Outputs specs/002-ros2-fundamentals/tasks.md (65 tasks for Chapter 2!)

Example Output (View Chapter 2 Tasks):

## Phase 1: Setup
- [ ] T001 Create directory: chapters/02-ros2-fundamentals/
- [ ] T002 Create subdirectories: assets/, assessments/, lab-01-pubsub/

## Phase 2: Foundation
- [ ] T005 Write learning objectives (5 from spec FR-001-008)
- [ ] T007 [P] Write conceptual overview: ROS 2 graph architecture

## Phase 3: Lab P1 - Pub/Sub
- [ ] T016 **publisher_node.py** (~50 lines, publishes IMU at 50Hz)
- [ ] T017 **subscriber_node.py** (~40 lines, logs messages)
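
Because tasks.md follows a strict checklist convention (task ID, completion checkbox, optional [P] marker), it is easy to process mechanically. A hypothetical sketch, not the tool's actual parser:

```python
import re

# Sample lines in the tasks.md format shown above (illustrative only).
TASKS_MD = """\
## Phase 2: Foundation
- [ ] T005 Write learning objectives (5 from spec FR-001-008)
- [x] T007 [P] Write conceptual overview: ROS 2 graph architecture
"""

# Matches: checkbox state, task ID, optional parallelizable marker.
TASK_RE = re.compile(r"- \[(x| )\] (T\d+)( \[P\])?")

def parse_tasks(text: str) -> list[dict]:
    """Extract task ID, completion state, and [P] marker from checklist lines."""
    tasks = []
    for done, tid, par in TASK_RE.findall(text):
        tasks.append({"id": tid, "done": done == "x", "parallel": bool(par)})
    return tasks

for task in parse_tasks(TASKS_MD):
    print(task)
```

A script like this could compute the "14 of 65 complete" progress numbers quoted later in this page, or feed [P]-marked tasks to parallel workers.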

4. Implementation (/sp.implement)

Input: tasks.md
Output: Actual content (markdown, code, diagrams) + Prompt History Records (PHRs)

Example:

/sp.implement  # Executes tasks systematically

What Happens:

  • Processes tasks in dependency order
  • Creates files (README.md, Python code, diagrams)
  • Validates against acceptance criteria
  • Generates PHRs documenting each step
  • Marks tasks as complete in real-time

Prompt History Records (PHRs) are created automatically:

  • Location: history/prompts/<feature-name>/
  • Format: <ID>-<slug>.<stage>.prompt.md
  • Contains: User input, AI response, files changed, outcome

Example PHRs (View All PHRs):

history/prompts/002-ros2-fundamentals/
├── 001-create-ros2-fundamentals-spec.spec.prompt.md
├── 002-create-ros2-fundamentals-plan.plan.prompt.md
├── 003-create-ros2-fundamentals-tasks.tasks.prompt.md
└── 004-implement-ros2-fundamentals-foundation.implement.prompt.md



📋 Quality Control: The Constitution

Every piece of content is validated against the Constitution (View Full Constitution).

Key Principles:

  1. Embodiment-First: Every concept links to physical constraints
  2. Sim-to-Real Continuity: Simulation and deployment treated as one pipeline
  3. Systems Integration: Cross-cutting concerns addressed (perception → planning → control)
  4. Toolchain Transparency: All versions locked and documented
  5. Assessment by Action: Deliverable artifacts with acceptance tests
  6. Ethical Considerations: Safety, privacy, bias addressed where relevant

Pre-Publication Checklist (must pass ALL):

  • Learning objectives are measurable
  • Code examples tested on Ubuntu 22.04 + ROS 2 Humble
  • Troubleshooting guide has ≥5 common errors
  • Visual aids present (diagrams, code blocks)
  • Reading level appropriate (technical but accessible)
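
A checklist like this can be enforced as a script rather than a convention. The sketch below is a loose illustration of the idea; the field names (objectives_measurable, troubleshooting_errors, ...) are invented here, not the Constitution's actual schema:

```python
def failed_checks(chapter: dict) -> list[str]:
    """Return the failed pre-publication checks; an empty list means publishable."""
    failures = []
    if not chapter.get("objectives_measurable"):
        failures.append("learning objectives not measurable")
    if not chapter.get("code_tested_on_humble"):
        failures.append("code not tested on Ubuntu 22.04 + ROS 2 Humble")
    if chapter.get("troubleshooting_errors", 0) < 5:
        failures.append("troubleshooting guide needs >=5 common errors")
    if not chapter.get("has_diagrams"):
        failures.append("no visual aids")
    return failures

# A draft that passes everything except the troubleshooting threshold.
draft = {
    "objectives_measurable": True,
    "code_tested_on_humble": True,
    "troubleshooting_errors": 3,
    "has_diagrams": True,
}
print(failed_checks(draft))
# ['troubleshooting guide needs >=5 common errors']
```

Returning the list of failures (rather than a bare pass/fail) mirrors the Fail → back-to-/sp.specify loop in the workflow diagram: the author learns exactly what to fix.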

🎯 Why This Matters

Traditional Book Writing

Idea β†’ Draft β†’ Edit β†’ Review β†’ Publish
↑_________________________________|
(repeat until "good enough")

Problems:

  • ❌ No audit trail (why was decision X made?)
  • ❌ Hard to collaborate (who did what?)
  • ❌ Inconsistent quality (different chapters, different standards)
  • ❌ Not reproducible (can't recreate the process)

Spec-Driven Book Creation

Specify → Plan → Tasks → Implement → Validate
   │        │       │        │          │
   ↓        ↓       ↓        ↓          ↓
spec.md  plan.md tasks.md content     PHRs
   │        │                          │
   └────────┴──────→ ADRs ←───────────┘

Advantages:

  • ✅ Reproducible: Follow same /sp.* commands for new chapters
  • ✅ Auditable: PHRs capture every decision and change
  • ✅ Collaborative: Contributors follow same systematic workflow
  • ✅ Quality-Controlled: Constitution enforces consistency
  • ✅ Transparent: ADRs explain architectural choices
  • ✅ Efficient: AI handles grunt work, humans provide direction

📊 Real Example: Chapter 2 Timeline

Total Time: ~6 hours (would take weeks manually)
Tasks: 65 (14 complete, 51 remaining)
PHRs: 3 (one per stage)
Content: ~22KB README, glossary, diagrams, quiz

Stage Breakdown

| Stage | Command | Time | Output |
| --- | --- | --- | --- |
| Spec | /sp.specify "Chapter 2..." | 30 min | spec.md (4 user stories, 8 functional reqs) |
| Plan | /sp.plan | 45 min | plan.md (architecture, design artifacts) |
| Tasks | /sp.tasks | 20 min | tasks.md (65 dependency-ordered tasks) |
| Implement (Phase 1-2) | /sp.implement | 4 hours | Foundation content (objectives, overview, diagrams, quiz) |

Remaining: Labs (P1-P4), assessments, polish


πŸ› οΈ Try It Yourself​

Prerequisites

  1. Install Claude Code
  2. Clone Spec-Kit Plus
  3. Initialize in your repository: sp-init

Create Your Own Chapter

# 1. Specify your feature
/sp.specify "Create Chapter 11: Swarm Robotics covering multi-agent coordination, consensus algorithms, and decentralized control"

# 2. Review generated spec.md, then plan
/sp.plan

# 3. Claude suggests ADRs for significant decisions:
# "📋 Architectural decision detected: ROS 2 DDS vs custom protocol"
/sp.adr dds-vs-custom-protocol # Create ADR

# 4. Generate tasks
/sp.tasks

# 5. Implement
/sp.implement

# 6. PHRs are created automatically at each step!

Result:

  • specs/011-swarm-robotics/spec.md
  • specs/011-swarm-robotics/plan.md
  • specs/011-swarm-robotics/tasks.md
  • chapters/11-swarm-robotics/README.md (+ labs, diagrams)
  • history/prompts/011-swarm-robotics/*.prompt.md (PHRs)
  • history/adr/*.md (ADRs for significant decisions)

📚 Case Studies

Chapter 1: Introduction to Physical AI

Status: ✅ Complete (100%)
PHRs: 9 (View All)
Highlights:

  • Systematic /sp.* workflow from spec to polish
  • 8 implementation phases (setup → foundation → diagrams → assessments → polish)
  • Constitution-validated quality
  • Result: 1975-line comprehensive introduction

View Chapter 1 Complete Workflow →

Chapter 2: ROS 2 Fundamentals

Status: 🟡 Partial (22% - foundation complete, labs pending)
PHRs: 3 (View All)
Highlights:

  • 65 tasks generated (14 complete)
  • Modular design (4 independent labs: P1-P4)
  • Demonstrates parallel task execution with [P] markers
  • TODO tracking for transparency

View Chapter 2 Progress →

Chapter 3: Simulation Environments

Status: 🚧 Planned (spec + plan + tasks complete, awaiting implementation)
PHRs: 3 (View All)
Next: /sp.implement when ready


🔗 Explore the Artifacts

Prompt History Records (PHRs)

See every decision: PHR Gallery - 16 records and counting

Architecture Decision Records (ADRs)

Understand the "why": ADR Index - 2 major decisions documented

Quality Standards

See the guardrails: Constitution - The rulebook for all content

Source Code

Browse the specs: GitHub Repository


💡 Key Takeaways

  1. Systematic > Ad-hoc: Following /sp.* workflow ensures consistency
  2. Documentation-First: Specs and plans force clear thinking
  3. Auditability: PHRs make process reproducible and transparent
  4. Decision Capture: ADRs preserve architectural reasoning
  5. Quality Gates: Constitution validates every deliverable
  6. AI Augmentation: Claude Code handles grunt work, humans guide strategy

This isn't just a textbook: it's a blueprint for AI-powered content creation.


🚀 Next Steps


Built using Spec-Kit Plus + Claude Code + Docusaurus