When I first started using AI, I was focused on moving faster: write a spec, generate the code, and ship the feature. Then I realized something: AI doesn’t just help me move faster. It helps me think deeper. It removes the friction from the rigorous, documented analysis I always wanted to do but rarely had time for.
Possibly the biggest AI opportunity is to use it to make our thinking more rigorous. AI can help us walk through the deep thinking steps we might otherwise skip. It can make it easier to run comprehensive analyses on whether something will succeed before we deploy it, and to validate that it succeeded after we deploy it. And it can provide us with documentation proving we’ve done our due diligence.
This shift from AI as a speed tool to AI as a rigor tool changes how we build software.
Responsibility to yourself means refusing to let others do your thinking, talking, and naming for you; it means learning to respect and use your own brains and instincts; hence, grappling with hard work.
From Dynamic Analysis to Systematic Rigor
I’ve always validated assumptions as I implemented solutions and tested edge cases with targeted queries. But I did this dynamically, as I built, rather than upfront.
This approach worked. But the rigor lived in my mental model and my interactions with the data. It often wasn’t documented, making it hard to review or repeat.
AI has changed what’s practical. Instead of validating in my head, I can create validation notebooks that document the thinking, build comprehensive test frameworks that prove correctness, and create before-and-after analyses that stakeholders can review.
AI has made it practical to externalize my mental processes, making it easier to document and share them.
AI as a Rigor Multiplier
Here’s the shift: instead of asking AI to think for us, we should use AI to inspire our own thinking more thoroughly than we could alone.
What AI actually enables:
- Explore options thoroughly - AI can help you evaluate multiple approaches in minutes instead of hours or days
- Validate systematically - AI can help you build test suites, validation frameworks, and analysis pipelines that you might not have time to build manually
- Document as you go - AI can help you create detailed records of your decision-making process while you work instead of after the fact
When I wrote about AI as another layer of abstraction, I focused on how AI lets us work at higher levels. But there’s another dimension: AI lets us work more rigorously at every level. It removes the friction from doing the thorough analysis that we know we should do but may skip because it’s tedious or time-consuming, or because we need to meet a deadline.
The question isn’t “Can AI do this for me?” The question is “How can AI help me think through this more completely?”
A Concrete Example: Data Pipeline Validation
Here’s an example of what this looks like in practice with a recent data engineering project.
I needed to make changes to a production data pipeline, the kind of change where getting it wrong means bad data in dashboards, confused stakeholders, and potential rework. Before AI, I would have validated my approach with targeted queries, tested the logic mentally and manually, made the change, and then presented the dashboard update to stakeholders in a meeting.
With AI, I took a different approach:
Before the change, I used AI to help me build a research notebook that validated my entire approach on sample data. Not just “does this query run?” but “does this transformation produce the expected results across different edge cases?” The notebook documented my thinking, showed my validation, and gave me confidence the approach was sound.
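To make that concrete, here’s a minimal sketch of what one of those notebook checks might look like. The column names and rules are hypothetical placeholders; the real checks come from the pipeline’s actual invariants:

```python
import pandas as pd

def validate_transformation(df: pd.DataFrame) -> list[str]:
    """Run the assumptions behind the transformation as explicit checks.

    Returns human-readable failure descriptions; an empty list means
    every check passed.
    """
    failures = []

    # Assumption (hypothetical): every output row keeps a non-null order_id.
    if df["order_id"].isna().any():
        failures.append("null order_id in output")

    # Assumption (hypothetical): the transformation never duplicates orders.
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id in output")

    # Edge case (hypothetical): refunds make amount negative,
    # but never below an agreed floor.
    if (df["amount"] < -10_000).any():
        failures.append("amount below expected floor")

    return failures

# Run the checks against hand-picked sample data covering the edge cases.
sample = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, -5.0, 99.9]})
assert validate_transformation(sample) == []
```

Each failed assumption becomes a documented finding in the notebook rather than a fleeting observation in my head.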
During implementation, I had that validation logic ready to reuse. Instead of re-testing manually, I re-ran the same validation against the running test pipeline. The research notebook became a systematic test framework.
After deployment, I used AI to create a before-and-after analysis comparing the old and new pipeline outputs, both for me and for stakeholders: clear documentation showing what changed, why it changed, and proof that it worked as intended.
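The comparison itself doesn’t need to be elaborate. Here’s a sketch, assuming both outputs share a key column (all names hypothetical):

```python
import pandas as pd

def before_after_report(old: pd.DataFrame, new: pd.DataFrame, key: str) -> None:
    """Summarize how the new pipeline's output differs from the old one's."""
    merged = old.merge(new, on=key, how="outer",
                       suffixes=("_old", "_new"), indicator=True)

    # Rows that exist in only one of the two outputs are the headline numbers.
    print("rows dropped:", int((merged["_merge"] == "left_only").sum()))
    print("rows added:  ", int((merged["_merge"] == "right_only").sum()))

    # For rows present in both, count changed values per column.
    # (Note: NaN-vs-NaN counts as a change here; fine for a first pass.)
    both = merged[merged["_merge"] == "both"]
    for col in old.columns:
        if col == key:
            continue
        changed = int((both[f"{col}_old"] != both[f"{col}_new"]).sum())
        print(f"{col}: {changed} changed values")
```

The dropped and added row counts become the overview; the per-column diffs are the drill-down reviewers can follow when they want to verify a specific change.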
The first time I shared this analysis for code review, I got feedback that it was too much to review. So I iterated. I used AI to rewrite the overview sections, making them succinct guides that got quickly to the point while still letting reviewers drill into whatever they wanted to verify.
This level of preparation, validation, and communication wasn’t feasible before. The time cost was too high. AI has made it practical to be this rigorous on every pipeline change, not just the critical ones.
The Three Pillars of AI-Enhanced Rigor
That data pipeline example illustrates three places where AI supports rigorous thinking: before we start, while we build, and after we deploy.
Planning Before Execution
The first opportunity is using AI to deepen our understanding before we write a single line of code. When I plan a new feature or change, I don’t ask AI to write a spec for me. I ask AI to help me think through it step by step. What are the edge cases? What are the dependencies? What could go wrong? AI helps me ask questions I might not have considered and explore alternatives I might have dismissed too quickly.
For work item planning, AI helps me decompose large tasks into smaller ones with more precision. Instead of creating three vague work items that seem reasonable, AI helps me create a user story and eight specific tasks that fully cover the work. Reading through and correcting the AI’s suggestions forces me to be explicit about assumptions and uncovers complexity I might have underestimated.
In the data pipeline example, the planning included the research notebook that validated the approach on sample data before testing a running pipeline.
The goal isn’t to have AI do the planning. The goal is to use AI to make my planning more thorough, explicit, and complete.
Building with Validation
The second opportunity is during development. In the data pipeline example, this was reusing the validation logic from the research notebook as a test framework for the actual implementation.
This step is at the heart of Spec Driven Development: we define explicit specifications of what the code should do, and AI helps build implementations that satisfy those contracts. More importantly, AI helps build complete test suites that validate those contracts, thinking through edge cases and error conditions I might otherwise have skipped.
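As a sketch of the idea with pytest: the spec becomes executable checks that the implementation must satisfy. The function and its rules below are hypothetical stand-ins, not code from the actual project:

```python
import pytest

def normalize_email(raw: str) -> str:
    """Spec (hypothetical): lowercase, strip surrounding whitespace,
    and reject anything without exactly one '@'."""
    cleaned = raw.strip().lower()
    if cleaned.count("@") != 1:
        raise ValueError(f"invalid email: {raw!r}")
    return cleaned

# The happy-path half of the contract.
@pytest.mark.parametrize("raw, expected", [
    ("  Alice@Example.COM ", "alice@example.com"),  # whitespace and casing
    ("bob@example.com", "bob@example.com"),         # already normalized
])
def test_satisfies_spec(raw, expected):
    assert normalize_email(raw) == expected

# The error-condition half, the part that's easy to skip when testing manually.
@pytest.mark.parametrize("raw", ["no-at-sign", "two@@signs", ""])
def test_rejects_invalid_input(raw):
    with pytest.raises(ValueError):
        normalize_email(raw)
```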
In data engineering, research notebooks are my primary validation tool. Instead of writing a complex data transformation and manually testing that it works, I use AI to help explore the data, test assumptions, and validate logic on sample datasets first. The notebook becomes proof that I’ve thought through the approach, both for myself and for reviewers.
What makes this practical is that AI removes the friction. Building test suites used to feel like extra work. Creating validation notebooks used to compete with “getting it done.” Now, creating the validation framework can be faster than implementing without it.
The common thread is deep thought and then systematic validation. AI makes it easier to be rigorous because it removes the friction from building the scaffolding: the tests, the sample analyses, and the validation structure that prove our work is correct.
Validating After Deployment
The third opportunity is post-deployment analysis. In the data pipeline example, this is the before-and-after analysis that proved the change worked and communicated results to stakeholders.
After deploying a change, I can reuse the validation logic from my research notebook to verify the production pipeline produces expected results. Not just “did it deploy without errors?” but “does it deliver the expected outcomes across all the covered scenarios?”
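In code terms this step is deliberately boring: the same checks from the research-notebook sketch earlier, pointed at the production output instead of sample data (the path is a hypothetical placeholder):

```python
import pandas as pd

# Reuse validate_transformation from the research-notebook sketch above,
# now against the deployed pipeline's output table.
prod = pd.read_parquet("s3://warehouse/orders_transformed/")  # hypothetical path
failures = validate_transformation(prod)
assert not failures, f"production validation failed: {failures}"
```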
More importantly, AI helps me produce stakeholder-ready analysis: documentation showing what changed, why it changed, and evidence that it’s working correctly. This isn’t just courtesy; it’s proof of due diligence.
The results can be quickly refined as well. When reviewers told me my first analysis was too much to review, AI helped me rewrite the overview to get quickly to the point. The overview became a succinct guide that lets reviewers verify what they care about without wading through everything. This kind of iterative refinement would have been tedious manually, but with AI it has become fast.
This creates a feedback loop: the rigor we apply in planning and building gets validated in deployment, and the deployment analysis informs better planning for the next iteration. The research notebook that validated the approach before implementation becomes the test framework during development, the validation suite after deployment, and the documentation throughout.
Summary: The Three Pillars
To summarize, AI-enhanced rigor operates on three levels:
- Planning Rigor - Use AI to deepen analysis of features, user stories, and tasks before execution begins
- Building Rigor - Use AI to create thorough validation through Spec Driven Development, research notebooks, and test frameworks during development
- Validation Rigor - Use AI to validate success and document due diligence after deployment
These three pillars work together. Planning rigor makes building more focused. Building rigor makes deployment safer. Validation rigor makes planning more informed.
Why Rigor Matters Now More Than Ever
We’re entering an era where AI will increasingly write production code. Not as a novelty, but as standard practice.
When humans write code, we can review it and ask “why did you do it this way?” We can judge the developer’s understanding based on the code they produce. But when AI writes code, we can’t see the thinking because there is no thinking. There’s pattern matching and statistical inference, but not reasoning.
This means the thinking has to come from the human in the loop. The AI can prompt the human to think critically. Then the human has to validate that the AI’s output meets rigorous standards.
If we don’t build rigorous processes now, while we’re learning how to use these tools, we’ll accumulate technical debt: code that seems to work but that nobody understands, systems that function but can’t be modified safely, and features that ship without thorough validation.
The teams that will succeed in the AI era aren’t the teams that use AI to move fastest. They’re the teams that use AI to think most deeply, the teams that build systematic approaches to validation and create processes where AI supports their judgment rather than replacing it.
This isn’t about being slow or cautious. It’s about being thorough. It’s about using AI to make analysis quick and keeping a human behind the wheel of development. With AI, we can be both fast and rigorous.
Diligence is the mother of good fortune.
Related posts you might find interesting:
- AI is Another Layer of Abstraction - How AI fits into the evolution of software development abstraction
- When AI Makes It Too Easy to Build the Wrong Thing - My experience building something that worked perfectly but solved the wrong problem