Background & Context
Spec Driven Development (SDD), as implemented by Spec Kit, is a new system for developing with AI (released on 9/13/2025), created at GitHub & Microsoft by Den Delimarsky and John Lam. The approach guides the user through creating detailed specifications before writing code, in contrast with traditional development workflows that often flow back and forth between design docs and coding. SDD aims to create a clear, shared understanding of requirements, which can be beneficial when collaborating with AI coding assistants like GitHub Copilot.
I chose to use SDD in VSCode to build my personal website, brentrossen.me, because it is a manageable project with well-defined features and gives me a platform to experience the Spec Kit process. The site is built using Astro (see blog post), TypeScript, and Tailwind CSS, technologies I was not familiar with but wanted to learn. This experiment was intended to help me evaluate whether SDD is an effective development process for creating quality code with AI tools.
About Spec Driven Development:
Spec Driven Development focuses on leveraging AI to create a detailed specification that becomes the source of truth for AI coding agents. The Spec Kit multi-phase process includes:
- Constitution: Define core principles and standards for the project.
- Specify: Create initial feature specifications.
- Clarify: Refine specifications through feedback and iteration.
- Plan: Define tech stack and architecture choices.
- Tasks: Break down features into actionable tasks.
- Analyze: Check the spec, plan, and tasks for consistency and coverage.
- Implement: Execute development and validate against specifications.
The Spec Kit toolkit provides commands and templates to facilitate this process. It may appear that you waterfall down through the steps, but in practice the Constitution may change during specification, the spec may change during planning, the plan may change when creating tasks, and so on. The point is for the documents to contain a complete plan for the feature before coding begins, so that the AI has all the context it needs to generate code that meets the requirements. These documents can also be modified manually and should be reviewed by the developer at each step. This process is followed both for initial project creation and every time a new feature is added.
Why I Decided to Try It
From 2006 to 2012 I had a personal site built on Google Sites containing Reviews, My Research Papers, and Quotes. When Google converted from classic to new Google Sites, I lost much of my content and never rebuilt it. So, I decided to pick up the habit again by building my site from scratch using modern tools and practices.
Target Features for SDD Implementation
- Personal Site Features:
  - Blog Posts
  - Reviews
  - Quotes
  - Experiments (later removed)
- Tech Stack:
  - Astro
  - TypeScript
  - Tailwind CSS
Setting Up Spec Kit
Getting started was straightforward. I installed the Spec Kit CLI and initialized it in my project (be sure to check the source for the latest instructions, as they may change):
uvx --from git+https://github.com/github/spec-kit.git specify init --here --ai copilot
This created a .specify folder with templates and scripts and added new slash commands for GitHub Copilot: /specify, /clarify, /plan, /tasks, /analyze, and /implement.
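From memory, the generated layout looked roughly like this (approximate, not an exact listing; the toolkit's structure may have changed since, so check the Spec Kit repo):

```text
.specify/
  memory/       # long-lived project context, including the constitution
  scripts/      # helper scripts the slash commands invoke
  templates/    # templates for the spec, plan, and tasks documents
.github/
  prompts/      # the Copilot slash command prompt files
```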
My First Feature: The whole kit n kaboodle (oops)
In my first /specify, I described all aspects of my personal site, including blog posts, reviews, quotes, and experiments… That was a mistake. I should have started with a minimal feature set, maybe just blog posts, and then added reviews and quotes later. But I wanted to see how well the process worked end-to-end. So, my first piece of advice is to start as small as possible and get to a functional minimum viable product.
A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system. (John Gall, Systemantics)
The Specification Phase
Using the /specify command, I started by describing what I wanted to build from a user perspective:
I wrote in natural language that I wanted to create a personal website with sections for blog posts, reviews, quotes, and experiments, and I described the user experience I wanted: easy navigation, a clean layout, and a responsive design. I removed experiments in a second spec pass, and it took the AI quite a while to remove all of the related content and tests and get the test suite passing again.
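The input was plain prose rather than anything formal; it looked something like this (paraphrased, not the exact prompt):

```text
/specify Build a personal website with sections for blog posts, reviews,
quotes, and experiments. Content is authored in Markdown. The site should
have easy navigation, a clean layout, and a responsive design that works
well on mobile.
```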
I used both GPT 5 and Claude Sonnet 4 in the creation of this project. Both worked fine, but I enjoyed the chattier Claude Sonnet 4 more. I liked seeing what it was thinking to itself as it worked through the steps. GPT 5 would go silent for several minutes, and then spit out a complete solution.
Planning the Implementation
Next came /plan where I defined my technical approach:
I outlined the tech stack (Astro, TypeScript, Tailwind CSS), described that I wanted to author content in Markdown, and specified that I wanted to use Astro's content collections to manage posts, reviews, and quotes.
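I won't reproduce the generated code here, but a minimal sketch of an Astro content collection config for these sections looks something like this (the schema fields and collection choices are illustrative, not what the AI actually produced):

```ts
// src/content/config.ts
import { defineCollection, z } from 'astro:content';

// Each collection maps to a folder of entries under src/content/
const blog = defineCollection({
  type: 'content', // Markdown files with frontmatter
  schema: z.object({
    title: z.string(),
    description: z.string().optional(),
    pubDate: z.coerce.date(),
    tags: z.array(z.string()).default([]),
  }),
});

const reviews = defineCollection({
  type: 'content',
  schema: z.object({
    title: z.string(),
    rating: z.number().min(1).max(5),
    pubDate: z.coerce.date(),
  }),
});

const quotes = defineCollection({
  type: 'data', // quotes are short, so JSON/YAML data entries fit better than Markdown
  schema: z.object({
    text: z.string(),
    author: z.string(),
  }),
});

export const collections = { blog, reviews, quotes };
```

Pages can then query these collections with getCollection() from astro:content and render each section.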
Breaking Down into Tasks
The /tasks command generated a list of actionable items, with each task broken down into sub-tasks. Here are the tasks generated for the original feature:
- Phase 3.1: Bootstrap & Governance (T001-T008)
  - Initialize project, dependencies, tooling, and directory structure
- Phase 3.2: Design Artifact Completion (Phase 1 Complete) (T009-T014)
  - Create specification documents (data-model, quickstart, contracts)
- Phase 3.3: Tests First (TDD) – Contract, Integration, Visual & Performance Tests (T015-T034)
  - Write all tests before implementation (TDD approach)
- Phase 3.4: Core Implementation (Make Tests Pass) (T035-T054)
  - Implement features to make the tests pass
- Phase 3.5: Integration & Polish (T055-T064)
  - Add CI/CD, optimize, document, and finalize
These were big, chunky phases. I should have guided the AI to break them down into smaller, more manageable phases with clear deliverables. The AI was also overambitious in the number of tests it created; it tested for things we never ended up implementing, leaving us with a bunch of failing tests to remove. I'm not sure TDD is the way to go for this type of project. I would have preferred to write the code and then add tests for the critical paths later.
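To give a sense of the shape of those tests, here is a minimal sketch of a frontmatter contract test using Vitest and zod (a reconstruction for illustration, not the code the AI generated):

```ts
// tests/contract/blog-frontmatter.test.ts (illustrative reconstruction)
import { describe, expect, it } from 'vitest';
import { z } from 'zod';

// Mirror of the blog collection schema from src/content/config.ts
const blogSchema = z.object({
  title: z.string(),
  description: z.string().optional(),
  pubDate: z.coerce.date(),
  tags: z.array(z.string()).default([]),
});

describe('blog frontmatter contract', () => {
  it('accepts a well-formed post', () => {
    const result = blogSchema.safeParse({
      title: 'Trying Spec Driven Development',
      pubDate: '2025-09-20',
      tags: ['ai', 'sdd'],
    });
    expect(result.success).toBe(true);
  });

  it('rejects a post with no title', () => {
    const result = blogSchema.safeParse({ pubDate: '2025-09-20' });
    expect(result.success).toBe(false);
  });
});
```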
Implementation
Finally, /implement to actually build the feature:
When I started implementation, I just said “/implement” and let it run through as many phases as it could. This meant that if there was a problem in one phase, it cascaded down into the next phases. I should have taken it one phase at a time, verifying each before moving on, which is what I did when I implemented my second feature.
By giving the high-level “/implement” command, it felt like I wasn’t involved in driving the process. All I was doing was giving it permission to access resources; I didn’t feel like a developer anymore, I felt like a non-technical project manager, and that created a distinct lack of control. I was standing too far from the project to get it right or feel a sense of accomplishment. Further, standing back left the AI to go off the rails multiple times, leaving me to unwind the problems rather than getting the results right the first time. Building something new is fun; de-tangling a mess is not.
The second feature I built had 14 phases, and I took it one phase at a time. I felt much more in control of the process, and each phase resulted in a functional system with passing tests. It took longer, but I got a better result and felt more like a developer. In the second pass I ended up cleaning up a lot of what went wrong in the first pass. I didn’t enjoy the process the first time, but I greatly enjoyed it the second time.
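Constraining each run was as simple as saying so in the prompt, something along these lines (paraphrased, not my exact wording):

```text
/implement Execute Phase 1 only. Stop when the phase is complete and its
tests pass, and wait for my review before starting the next phase.
```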
What I Learned
The Good
Spec Driven Development has some clear benefits:
- Leverage AI to create detailed specifications
- Simple description of the feature starts the process
- Repeated interaction with the AI allows it to clarify requirements
- Structured documents guide implementation by the AI
- The developer can focus on high-level design and verification of features
What I haven’t gotten the benefit of yet, but I think I will:
- Specifications become part of the project documentation, allowing co-workers to review and understand the feature plan
- Can use the specification to have AI write an Executive Summary for high-level stakeholders
The Challenging
- The process is new and unfamiliar, requiring a sometimes uncomfortable mindset shift
- The AI can go off the rails, requiring careful review and intervention
  - This requires discipline to dive into AI outputs and fix problems
- The process can feel cumbersome at times; it can be a lot of detail to wade through
- The AI can be overambitious, creating tests and even features that aren’t needed
- This is a new way of developing and the process is not fully established, requiring experimentation and adaptation
The Surprising
- Deeply driving the steps does lead to a sense of ownership and accomplishment
- AI can be surprisingly good at generating detailed specifications from a simple description
- It still takes a long time, most of which is waiting for the AI to take the actions
This last point is the biggest surprise, and maybe the most important finding. I was developing the second feature across a full work day. My engagement time was probably 1 hour total, but it occurred in 1-15 minute increments between meetings. So, while the total elapsed time was long, my actual engagement time was short. This is a different way of working than I am used to. I felt like I was developing all day long, which was both good and bad. It was fun to think that something was getting accomplished while I was otherwise engaged. But it also creates more randomization across my day. I needed to frequently go back to VSCode to check if the AI was ready for more input. There’s a certain low-level stress to that.
Comparing to My Usual Approach
- I accomplished something I otherwise would not have had time to do
- I was able to leverage technologies that I do not have time to learn
- It is an evolving process, and I already got better at it the second time; I expect to continue improving with practice
- In the long run, I expect I’ll have multiple features going at once across multiple projects
I expect the big challenge in the future will be context switching. There’s a lot of mental load in getting a project into your head well enough to make effective decisions. Running multiple projects at once will increase that load. It will be a learning experience.
Would I Use This Again?
I will absolutely use this approach again. The benefits I’ve seen so far outweigh the challenges, and I believe it will only get better with practice. It even inspired some big ideas for leveraging AI for team management and project management. I think the structured approach of SDD can be beneficial in many contexts. If we can get a detailed spec for every work item, we can have AI help with project planning, resource allocation, and progress tracking. The possibilities are exciting.
Tips for Others Trying SDD
Based on my experience, here’s what I’d recommend:
- Start small: Pick a simple feature to get familiar with the process
- Take it one phase at a time: Don’t rush through the steps, verify each before moving on
- Review AI outputs carefully: The AI makes mistakes and misinterprets requirements, so be vigilant and embrace the discipline of diving into the details
- Be patient: The AI takes time to process each step, so embrace the context switching and plan your day accordingly
- Iterate: The first time through may not be perfect, but you’ll get better with practice
The Bottom Line
Spec Driven Development isn’t magic, but it did change how I think about building features. The upfront investment in specifications paid off in a much greater ability to leverage AI. Developing with AI became noticeably more effective when the tools had clear context about what I was trying to achieve. I expect that going forward I will be using this approach for many of my development projects, and that will require a shift in how I think about and plan my work day.
If you’re curious about SDD, I’d suggest trying it on a single feature. The methodology shines when the feature is contained and can be well specified.
You can find the Spec Kit on GitHub and read more about the methodology in GitHub’s blog post.