The Problem I Was Solving

In my day job, I use Kusto (Azure Data Explorer) constantly for data analysis. I work in Kusto Notebooks and Jupyter Notebooks in VS Code, and I ran into what seemed like an obvious limitation: I could get data from Kusto to Python, but I couldn’t easily pass parameters from Python to Kusto.

This meant my Kusto queries were littered with repeated parameter values:

StormEvents
| where State == "TEXAS"  // Hardcoded
| where StartTime >= datetime(2007-08-29)  // Also hardcoded
| take 10  // You guessed it

If I wanted to change the state, date range, or limit, I had to find and update every occurrence, which was annoying and error-prone.

I thought: “There should be a way to define these as Python variables and reference them in my Kusto queries.”

Spoiler alert: There is. I just didn’t find it until after I’d already started to build it myself.

How AI Search Failed Me (Multiple Times)

I did what any reasonable developer would do: I searched. “How to pass a python variable to kusto with kqlmagic”

Both Bing and Google’s AI summaries confidently told me to try:

  • %kqlvar variable=value cell magic
  • {python_variable} inline in queries
  • {{python_variable}} double-brace syntax

None of these features exist.

I tried to verify these by searching the documentation, hitting Ctrl+F for each suggestion on every page the AIs linked to. The AI-suggested features simply weren’t there.

Even better, when I told the AI “I can’t find documentation for those features on the pages you shared,” it insisted they existed and doubled down by confidently telling me I needed to update my kqlmagic package.

This should have been my first warning sign.

The Real Solution Was Hiding in Plain Sight

What I was searching for: Magic syntax that didn’t exist.

What actually exists and has existed since 2019:

# Cell 1: define the parameters as ordinary Python variables
from datetime import datetime

my_state = "TEXAS"
my_date = datetime(2007, 8, 29)
my_limit = 10

%%kql
// Cell 2: kqlmagic resolves the Python variables referenced in these let statements
let _state_ = my_state;
let _date_ = my_date;
let _limit_ = my_limit;
StormEvents
| where State == _state_
| where StartTime >= _date_
| take _limit_
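A couple of surrounding pieces make this work in a notebook. This is a minimal sketch under my setup assumptions: the connection string below is one common form that targets the public help cluster hosting StormEvents, and the exact connection and auth details vary by environment.

# Load kqlmagic and connect (assumes the Kqlmagic package is already installed;
# adjust the connection string for your own cluster and auth method)
%reload_ext Kqlmagic
%kql azureDataExplorer://code;cluster='help';database='Samples'

# After the parameterized %%kql cell runs, pull the results back into Python
df = _kql_raw_result_.to_dataframe()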

It’s right there in the Microsoft documentation: Parameterize a query with Python

And it’s demonstrated in the kqlmagic repository: ParametrizeYourQuery.ipynb

I missed it because I was:

  1. Searching for the wrong things (the syntax AI suggested)
  2. Using Ctrl+F for specific code patterns that didn’t exist
  3. Trusting the AI search summaries instead of reading the actual documentation

Why I Built It Anyway: The AI Development Trap

In the past, if I thought a feature should exist but couldn’t find it after 10 minutes of searching, I’d probably search for another 20 minutes before considering building it myself.

Why? Because building a VS Code extension feature would take me:

  • 40+ hours of development time
  • Learning a new tech stack
  • Ongoing maintenance burden

That’s a big commitment, and it would have pushed me to make absolutely sure the feature didn’t exist before starting.

But with AI-assisted development and Spec Driven Development, the equation changed:

  • Write a spec: 15 minutes (with AI help)
  • Get working code: ~1 hour of my time, ~5 hours of AI time
  • My total investment: 1 hour spread over a day

That’s low enough cost that I thought “I’ll just build it.”

And that’s the problem.

The Hidden Cost of “Just 1 Hour”

That one hour of my time wasn’t a concentrated hour of focused development. It was spread across the day in 1-10 minute increments, checking if the AI did what I expected, giving it permission to run commands, verifying outputs, approving next steps.

This fragmented time is actually more expensive than a concentrated hour.

Each context switch pulls me away from other work. There’s mental overhead in:

  • Stopping what I’m doing to check AI progress
  • Understanding what the AI just did
  • Deciding if it’s on the right track
  • Resuming my previous task

If this feature had required 40 hours of concentrated development from me, I’d have been hyper-protective of that time investment. But “just occasional check-ins throughout the day”? That feels cheap, but it’s not.

The perceived low cost is an illusion. The interruption cost, the context-switching tax, the distraction from other work — these add up to more than the raw time spent.
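To put rough numbers on it (purely illustrative assumptions, not measurements):

check_ins = 12          # assumed number of times I stopped to check on the AI
review_minutes = 5      # per check-in: reading what it did, approving the next step
refocus_minutes = 5     # per check-in: getting my head back into the interrupted task

nominal_cost = check_ins * review_minutes                    # 60 minutes: the "1 hour"
real_cost = check_ins * (review_minutes + refocus_minutes)   # 120 minutes
print(nominal_cost, real_cost)

Even with generous assumptions, the interruptions roughly double the real bill.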

The Development Process: Three Stages of Building the Wrong Thing

Stage 1: From Initial Optimism to Architectural Reality

I wrote a spec with clear requirements and user scenarios, and the AI generated an initial implementation aimed at a smooth user experience. Then we hit a wall: VS Code extensions can’t execute code in another extension’s kernel. I was building an update to the Kusto Notebooks extension, but the Python cells were managed by the Jupyter extension, and the two run in separate sandboxes.

Stage 2: From Creative Workarounds to Mounting Code Smells

Next attempt: a user-executed “export cell” approach, where users run a special cell once to export their variables so the extension can read the output (sketched below). That breaks the user experience by making users manually execute what should be auto-generated code.
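For a sense of what that workaround looked like, here’s a minimal sketch of the idea; the function name, file path, and JSON format are hypothetical, not the extension’s actual API:

# Hypothetical "export cell" the user would have to run manually
import json
import tempfile
from pathlib import Path

PARAMS_PATH = Path(tempfile.gettempdir()) / "kusto_notebook_params.json"

def export_kusto_params(**params):
    # Serialize the chosen Python variables so the extension can read them back
    PARAMS_PATH.write_text(json.dumps(params, default=str))
    print(f"Exported {len(params)} parameter(s) to {PARAMS_PATH}")

export_kusto_params(my_state="TEXAS", my_date="2007-08-29", my_limit=10)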

That approach is a code smell, and not how I wanted to use the extension myself. By this point we had ~1,800 lines of TypeScript implementation and ~620 lines of test code, and it would have taken quite a few more hours to rewrite it into something actually usable.

Stage 3: From Code Smells to Reality Check

The mounting complexity was bothering me, and the AI had started to get squirrely, reimplementing approaches we’d already established wouldn’t work.

That moment got me to stop and do one more thorough search. This time: I actually read the documentation instead of searching for specific syntax patterns.

Result: Found the existing solution that’s been there for 7 years.

The Multi-Layered AI Failure

Let me count the ways AI contributed to my mistake:

1. AI Search Gave Confidently Wrong Answers

The search AI summaries didn’t say “I don’t know” or “I couldn’t find that feature.” They gave specific syntax examples that sounded plausible but didn’t exist. This sent me down the wrong research path.

2. AI Made Writing the New Feature Seem Easy

If this feature had taken me 40 hours to build manually, I would have spent at least an hour making absolutely sure it didn’t exist first, and if it really didn’t, I would have looked into why. But with AI doing most of the work? The perceived barrier to “just try it” dropped low enough that I skimped on research.

3. AI Helped Me Build the Wrong Thing Well

The AI wrote clean code, handled edge cases, created good documentation, and added debugging tools. On the surface it was a technically solid solution, even if it never reached the point of a smooth user experience. The big issue: the AI never questioned whether we should build it; it just helped me build it efficiently.

By the end, we had:

  • ~1,800 lines of TypeScript implementation
  • ~620 lines of test code
  • ~1,400 lines of documentation (specs, guides, troubleshooting)

Total: ~3,820 lines of well-structured, thoroughly documented code… for a feature that already existed.

4. AI Didn’t Know the Ecosystem

An experienced Kusto/Jupyter developer would have immediately said “check kqlmagic’s parameter support.” But the AI (and I, not being deeply familiar with kqlmagic or VS Code extensions) didn’t have that contextual knowledge. We started from scratch because we didn’t know what already existed.

The Lesson: If It Should Exist, It Probably Does

Before AI-assisted development: High development cost naturally forced thorough research.

With AI-assisted development: Low development cost can encourage you to skip research and “just build it.”

The new discipline required: force yourself to research thoroughly before you build, even when building feels cheap.

What I Should Have Done (And What I’ll Do Next Time)

Instead of 6 hours building a feature (1 hour of my fragmented time, ~5 hours of AI time), I should have invested 30 minutes in thorough research:

1. Read documentation thoroughly, don’t just search it

  • Read the overview and capabilities sections
  • Look at example notebooks and tutorials
  • Don’t just Ctrl+F for specific syntax patterns
  • In this case: actually reading kqlmagic’s documentation instead of searching for %kqlvar syntax

2. Question AI search results aggressively

  • If AI suggests specific syntax, verify it exists in real documentation
  • Don’t trust AI confidence, trust verifiable sources
  • If you can’t find an AI-suggested feature documented, it probably doesn’t exist as described

3. Check how existing users solve this problem

  • GitHub issues and discussions
  • User forums
  • Read the actual source code (if available)

4. Ask “Why doesn’t this exist already?”

  • If it seems obvious, someone probably built it
  • If you can’t find it, you might be searching for the wrong thing
  • If it’s really necessary, there might be some other way to accomplish it than your first idea

5. Only build when you’re sure there’s a gap

  • Even with AI, building creates maintenance burden
  • Every feature is a long-term commitment
  • Reusing existing solutions is almost always better

That 30 minutes of research would have saved an hour of my fragmented time (and the mental overhead of constant context switching).

The Broader Implication for AI Development

This experience revealed something important about AI-assisted development: AI drastically lowers the cost of building, but there’s still a cost to building the wrong thing.

In a world where AI can generate thousands of lines of code in minutes, the most valuable skill isn’t coding, it’s knowing what to code and whether to code it at all.

The old bottleneck was implementation. The new bottleneck is understanding the problem space well enough to know if you’re solving the right problem.

Conclusion: The best code is sometimes the code you don’t write

When you make a mistake, there are only three things you should ever do about it: admit it, learn from it, and don't repeat it.

Paul “Bear” Bryant (various sources; possibly mentioned in Bear: The Hard Life & Good Times of Alabama’s Coach Bryant)

I’m glad I made this mistake relatively early in my AI-assisted development journey. The cost was low (one day of small chunks of my time), and the lesson is valuable.

With AI making it easier than ever to build features, a critical skill is knowing when not to build. Lower cost means you need to be consciously disciplined to still do thorough research.

Before you ask AI to build that feature that “should obviously exist”:

  1. Spend time researching if it already does
  2. Read actual documentation and code, not just AI summaries
  3. Only build when you’re certain you’re solving a real gap

Because if it should exist… it probably does. And somewhere, there may be a 7-year-old Jupyter notebook example that’s been patiently waiting for you to find it.


Have you ever built something only to discover it already existed? How do you balance research vs. “just build it” in the age of AI assistance?
