# How This Book Was Built: Spec-Driven Development

## The Challenge
Hackathon Goal: "Write a book using Docusaurus and deploy it to GitHub Pages using Spec-Kit Plus and Claude Code."
This textbook isn't just about robotics: it's a living demonstration of AI-powered, spec-driven content creation. Every chapter you read was generated with the same reproducible workflow.

## The Workflow
```mermaid
graph LR
    A["/sp.specify"] --> B[spec.md]
    B --> C["/sp.plan"]
    C --> D[plan.md + ADRs]
    D --> E["/sp.tasks"]
    E --> F[tasks.md]
    F --> G["/sp.implement"]
    G --> H[Content + PHRs]
    H --> I{Quality Check}
    I -->|Pass| J[Deploy]
    I -->|Fail| A
    style A fill:#4CAF50
    style C fill:#2196F3
    style E fill:#FF9800
    style G fill:#9C27B0
    style J fill:#4CAF50
```
Every chapter follows this four-step process:
### 1. Specification (`/sp.specify`)

- **Input:** Natural language feature description
- **Output:** `spec.md` with structured requirements
- **Example:** `/sp.specify "Create Chapter 2: ROS 2 Fundamentals covering pub/sub, services, actions, and transforms"`
What Happens:
- Analyzes feature description
- Generates user stories (e.g., US-P1: Pub/Sub Lab, US-P2: Services & Actions)
- Defines functional requirements (FR-001 to FR-008)
- Creates acceptance criteria for each requirement
- Outputs structured `specs/002-ros2-fundamentals/spec.md`
Example Output (View Chapter 2 Spec):
```md
## User Stories
- **US-P1**: As a student, I want hands-on pub/sub labs...
- **US-P2**: As a student, I want service/action examples...

## Functional Requirements
- **FR-001**: Provide learning objectives (5 measurable)
- **FR-002**: Explain ROS 2 graph architecture...
```
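To make the spec shape concrete, here is a minimal sketch of a generator that emits a `spec.md` skeleton in the format shown above. The function `make_spec` and its argument names are illustrative only, not the actual Spec-Kit Plus implementation.

```python
def make_spec(title: str, stories: list[str], requirements: list[str]) -> str:
    """Render a minimal spec.md skeleton (illustrative, not Spec-Kit Plus's code)."""
    lines = [f"# Spec: {title}", "", "## User Stories"]
    # User stories get US-P1, US-P2, ... identifiers.
    lines += [f"- **US-P{i}**: {s}" for i, s in enumerate(stories, 1)]
    lines += ["", "## Functional Requirements"]
    # Functional requirements get zero-padded FR-001, FR-002, ... identifiers.
    lines += [f"- **FR-{i:03d}**: {r}" for i, r in enumerate(requirements, 1)]
    return "\n".join(lines) + "\n"

spec = make_spec(
    "ROS 2 Fundamentals",
    ["As a student, I want hands-on pub/sub labs"],
    ["Provide 5 measurable learning objectives"],
)
print(spec)
```

In the real workflow the AI fills in the stories and requirements from the feature description; the point here is only that the output format is fully mechanical.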
### 2. Planning (`/sp.plan`)

- **Input:** `spec.md`
- **Output:** `plan.md` + Architecture Decision Records (ADRs)
- **Example:** `/sp.plan` (run in the feature branch after the spec is approved)
What Happens:
- Analyzes technical requirements
- Makes architectural decisions (e.g., "Use Black formatter for Python")
- Generates design artifacts (diagrams, data models)
- Creates ADRs for significant decisions
- Outputs `specs/002-ros2-fundamentals/plan.md`
ADR Suggestion (happens during planning):
> Architectural decision detected: ROS 2 Humble as primary distribution.
> Document reasoning and tradeoffs? Run `/sp.adr ros2-humble-selection`
Example Output (View Chapter 2 Plan):
```md
## Architecture Decisions
1. Use Gazebo Classic 11+ for simulation (stability over features)
2. Black formatter with line length 100 (ROS 2 community standard)
3. Google-style docstrings (clarity for students)
```
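An ADR itself is just a small markdown file. The sketch below renders one in the common Context/Decision/Consequences shape; that template, and the function name `adr_markdown`, are assumptions for illustration, not Spec-Kit Plus's actual format.

```python
from datetime import date

def adr_markdown(slug: str, context: str, decision: str, consequences: str) -> str:
    """Render a minimal ADR (Context/Decision/Consequences convention assumed)."""
    return "\n".join([
        f"# ADR: {slug}",
        f"Date: {date.today().isoformat()}",
        "",
        "## Context",
        context,
        "",
        "## Decision",
        decision,
        "",
        "## Consequences",
        consequences,
    ])

doc = adr_markdown(
    "ros2-humble-selection",
    "Labs target Ubuntu 22.04; a matching LTS ROS 2 release is needed.",
    "Use ROS 2 Humble as the primary distribution.",
    "Examples are tested on one distro; other distros are best-effort.",
)
print(doc)
```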
### 3. Task Generation (`/sp.tasks`)

- **Input:** `spec.md` + `plan.md`
- **Output:** `tasks.md` with dependency-ordered implementation tasks
- **Example:** `/sp.tasks` (run after the plan is reviewed and approved)
What Happens:
- Breaks down plan into actionable tasks
- Orders tasks by dependencies (Phase 1: Setup → Phase 2: Foundation → ...)
- Marks parallelizable tasks with `[P]`
- Includes file paths and acceptance criteria
- Outputs `specs/002-ros2-fundamentals/tasks.md` (65 tasks for Chapter 2!)
Example Output (View Chapter 2 Tasks):
```md
## Phase 1: Setup
- [ ] T001 Create directory: chapters/02-ros2-fundamentals/
- [ ] T002 Create subdirectories: assets/, assessments/, lab-01-pubsub/

## Phase 2: Foundation
- [ ] T005 Write learning objectives (5 from spec FR-001-008)
- [ ] T007 [P] Write conceptual overview: ROS 2 graph architecture

## Phase 3: Lab P1 - Pub/Sub
- [ ] T016 **publisher_node.py** (~50 lines, publishes IMU at 50Hz)
- [ ] T017 **subscriber_node.py** (~40 lines, logs messages)
```
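The `[P]` markers fall out naturally from a dependency graph: any tasks that become ready at the same time can run in parallel. A sketch using Python's standard `graphlib`; the task IDs and edges are invented for illustration, not taken from the real Chapter 2 task graph.

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: task id -> set of ids it depends on.
deps = {
    "T001": set(),
    "T002": {"T001"},
    "T005": {"T002"},
    "T007": {"T002"},          # independent of T005, so it earns a [P] marker
    "T016": {"T005", "T007"},
}

ts = TopologicalSorter(deps)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # everything in `ready` may run in parallel
    batches.append(ready)
    ts.done(*ready)

print(batches)  # → [['T001'], ['T002'], ['T005', 'T007'], ['T016']]
```

The third batch is exactly where a generator would emit `[P]` markers: T005 and T007 share all their prerequisites, so neither blocks the other.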
### 4. Implementation (`/sp.implement`)

- **Input:** `tasks.md`
- **Output:** Actual content (markdown, code, diagrams) + Prompt History Records (PHRs)
- **Example:** `/sp.implement` (executes tasks systematically)
What Happens:
- Processes tasks in dependency order
- Creates files (README.md, Python code, diagrams)
- Validates against acceptance criteria
- Generates PHRs documenting each step
- Marks tasks as complete in real-time
Prompt History Records (PHRs) are created automatically:
- Location: `history/prompts/<feature-name>/`
- Format: `<ID>-<slug>.<stage>.prompt.md`
- Contains: user input, AI response, files changed, outcome
Example PHRs (View All PHRs):
```
history/prompts/002-ros2-fundamentals/
├── 001-create-ros2-fundamentals-spec.spec.prompt.md
├── 002-create-ros2-fundamentals-plan.plan.prompt.md
├── 003-create-ros2-fundamentals-tasks.tasks.prompt.md
└── 004-implement-ros2-fundamentals-foundation.implement.prompt.md
```
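The `<ID>-<slug>.<stage>.prompt.md` convention is easy to consume mechanically. A small illustrative parser (the helper name and regex are mine, not part of Spec-Kit Plus):

```python
import re

# One regex for the <ID>-<slug>.<stage>.prompt.md naming convention.
PHR_NAME = re.compile(r"^(?P<id>\d+)-(?P<slug>[a-z0-9-]+)\.(?P<stage>\w+)\.prompt\.md$")

def parse_phr(filename: str) -> dict:
    """Split a PHR filename into its id, slug, and stage parts."""
    m = PHR_NAME.match(filename)
    if not m:
        raise ValueError(f"not a PHR filename: {filename!r}")
    return m.groupdict()

info = parse_phr("002-create-ros2-fundamentals-plan.plan.prompt.md")
print(info)  # → {'id': '002', 'slug': 'create-ros2-fundamentals-plan', 'stage': 'plan'}
```

A helper like this is how you would build the PHR Gallery index: walk `history/prompts/`, parse each filename, and group records by feature and stage.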
Example Implementation Output:
- `chapters/02-ros2-fundamentals/README.md`
- `chapters/02-ros2-fundamentals/assets/glossary.md`
- Diagrams, quizzes, troubleshooting guides
## Quality Control: The Constitution
Every piece of content is validated against the Constitution (View Full Constitution).
Key Principles:
- Embodiment-First: Every concept links to physical constraints
- Sim-to-Real Continuity: Simulation and deployment treated as one pipeline
- Systems Integration: Cross-cutting concerns addressed (perception → planning → control)
- Toolchain Transparency: All versions locked and documented
- Assessment by Action: Deliverable artifacts with acceptance tests
- Ethical Considerations: Safety, privacy, bias addressed where relevant
Pre-Publication Checklist (must pass ALL):
- Learning objectives are measurable
- Code examples tested on Ubuntu 22.04 + ROS 2 Humble
- Troubleshooting guide has ≥5 common errors
- Visual aids present (diagrams, code blocks)
- Reading level appropriate (technical but accessible)
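Parts of a checklist like this can be automated. The sketch below uses crude string heuristics as stand-ins for the real Constitution gates; the function name and the specific checks are hypothetical, chosen only to show the shape of a pre-publication gate.

```python
FENCE = "`" * 3  # markdown code-fence delimiter

def failed_checks(chapter_md: str) -> list[str]:
    """Return the pre-publication checks a chapter draft fails.

    These heuristics are simplified stand-ins for the real
    Constitution gates, for illustration only.
    """
    failures = []
    if "## Learning Objectives" not in chapter_md:
        failures.append("learning objectives missing")
    if chapter_md.count(FENCE) < 2:  # fewer than one complete fenced code block
        failures.append("no code examples")
    if chapter_md.lower().count("error") < 5:
        failures.append("troubleshooting guide lists fewer than 5 common errors")
    return failures

draft = (
    "## Learning Objectives\n"
    + FENCE + "python\nprint('hello')\n" + FENCE + "\n"
    + "Error: missing package\n" * 5
)
print(failed_checks(draft))  # → []
```

A real gate would parse the markdown properly and test the code blocks, but even string checks like these catch missing sections before review.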
## Why This Matters

### Traditional Book Writing

```
Idea → Draft → Edit → Review → Publish
  ↑_________________________________|
     (repeat until "good enough")
```
Problems:
- No audit trail (why was decision X made?)
- Hard to collaborate (who did what?)
- Inconsistent quality (different chapters, different standards)
- Not reproducible (can't recreate the process)
### Spec-Driven Book Creation

```
Specify → Plan → Tasks → Implement → Validate
   ↓        ↓       ↓         ↓          ↓
spec.md  plan.md  tasks.md  content     PHRs
   ↓        ↓                 ↓
   └────────┴───── ADRs ──────┘
```
Advantages:
- Reproducible: follow the same `/sp.*` commands for new chapters
- Auditable: PHRs capture every decision and change
- Collaborative: contributors follow the same systematic workflow
- Quality-controlled: the Constitution enforces consistency
- Transparent: ADRs explain architectural choices
- Efficient: AI handles the grunt work; humans provide direction
## Real Example: Chapter 2 Timeline

- **Total time:** ~6 hours (would take weeks manually)
- **Tasks:** 65 (14 complete, 51 remaining)
- **PHRs:** 3 (one per stage)
- **Content:** ~22 KB README, glossary, diagrams, quiz

### Stage Breakdown
| Stage | Command | Time | Output |
|---|---|---|---|
| Spec | `/sp.specify "Chapter 2..."` | 30 min | `spec.md` (4 user stories, 8 functional reqs) |
| Plan | `/sp.plan` | 45 min | `plan.md` (architecture, design artifacts) |
| Tasks | `/sp.tasks` | 20 min | `tasks.md` (65 dependency-ordered tasks) |
| Implement (Phases 1-2) | `/sp.implement` | 4 hours | Foundation content (objectives, overview, diagrams, quiz) |
Remaining: Labs (P1-P4), assessments, polish
## Try It Yourself

### Prerequisites

- Install Claude Code
- Clone Spec-Kit Plus
- Initialize in your repository: `sp-init`
### Create Your Own Chapter

```bash
# 1. Specify your feature
/sp.specify "Create Chapter 11: Swarm Robotics covering multi-agent coordination, consensus algorithms, and decentralized control"

# 2. Review the generated spec.md, then plan
/sp.plan

# 3. Claude suggests ADRs for significant decisions:
#    "Architectural decision detected: ROS 2 DDS vs custom protocol"
/sp.adr dds-vs-custom-protocol  # create the ADR

# 4. Generate tasks
/sp.tasks

# 5. Implement
/sp.implement

# 6. PHRs are created automatically at each step!
```
Result:
- `specs/011-swarm-robotics/spec.md`
- `specs/011-swarm-robotics/plan.md`
- `specs/011-swarm-robotics/tasks.md`
- `chapters/11-swarm-robotics/README.md` (+ labs, diagrams)
- `history/prompts/011-swarm-robotics/*.prompt.md` (PHRs)
- `history/adr/*.md` (ADRs for significant decisions)
## Case Studies

### Chapter 1: Introduction to Physical AI

- **Status:** Complete (100%)
- **PHRs:** 9 (View All)

Highlights:
- Systematic /sp.* workflow from spec to polish
- 8 implementation phases (setup → foundation → diagrams → assessments → polish)
- Constitution-validated quality
- Result: 1975-line comprehensive introduction
View Chapter 1 Complete Workflow →
### Chapter 2: ROS 2 Fundamentals

- **Status:** Partial (22%: foundation complete, labs pending)
- **PHRs:** 3 (View All)

Highlights:
- 65 tasks generated (14 complete)
- Modular design (4 independent labs: P1-P4)
- Demonstrates parallel task execution with `[P]` markers
- TODO tracking for transparency
### Chapter 3: Simulation Environments

- **Status:** Planned (spec + plan + tasks complete, awaiting implementation)
- **PHRs:** 3 (View All)
- **Next:** `/sp.implement` when ready
## Explore the Artifacts

### Prompt History Records (PHRs)
See every decision: PHR Gallery - 16 records and counting
### Architecture Decision Records (ADRs)
Understand the "why": ADR Index - 2 major decisions documented
### Quality Standards
See the guardrails: Constitution - The rulebook for all content
### Source Code
Browse the specs: GitHub Repository
## Key Takeaways
- Systematic > Ad-hoc: Following the `/sp.*` workflow ensures consistency
- Documentation-First: Specs and plans force clear thinking
- Auditability: PHRs make process reproducible and transparent
- Decision Capture: ADRs preserve architectural reasoning
- Quality Gates: Constitution validates every deliverable
- AI Augmentation: Claude Code handles grunt work, humans guide strategy
This isn't just a textbook: it's a blueprint for AI-powered content creation.

## Next Steps
- Learn More: View PHR Gallery to see the complete history
- Understand Decisions: Browse ADRs for architectural choices
- See Quality Standards: Read Constitution for content rules
- Start Learning: Begin Chapter 1 →
Built using Spec-Kit Plus + Claude Code + Docusaurus