AI Is the Next Programming Language (And We're All Beginners Again)

Programming languages have always been about abstraction, each generation letting humans express intent more naturally. AI is the next rung on that ladder. But this time, the skill that matters isn't syntax. It's knowing what to build.

I still remember the moment Python clicked for me. Not the syntax per se, that came quickly enough, but the moment I realized I could stop thinking about memory allocation and just think about the problem. It felt like someone had removed a wall between my brain and the machine.

That same feeling hit me again about six months ago, except this time it wasn't a language. It was a conversation. I was building AviatorFlow, a flight school management system, over Thanksgiving break, talking to an AI agent the way I'd talk to a contractor. "Build me a FastAPI backend with these models. Use Alembic for migrations. Here's the schema." And it just... did it.

Five days later I had a pseudo-production system. Every line of code, frontend, backend, infrastructure, was AI-written. I made the architectural decisions, picked the stack, defined the requirements. But I didn't write a single function by hand.

That's when it hit me: AI isn't just a tool. It's the next programming language, one rung up the abstraction ladder.

The Abstraction Ladder

If you zoom out far enough, the entire history of programming is a story about abstraction.

Machine code forced you to think in binary. Literal ones and zeros. You weren't solving problems. You were negotiating with hardware.

Assembly gave you mnemonics. MOV, ADD, JMP. Still brutal, but at least you could read it without a decoder ring.

C introduced structured programming. Functions, loops, variables with names. You could finally think about logic instead of registers.

Python, JavaScript, Ruby, the high-level languages, abstracted away memory management, type declarations, and boilerplate. You could express a complex idea in a few lines.

Each generation did the same thing: traded control for expressiveness. You gave up the ability to micromanage the machine in exchange for the ability to think at a higher level. And every time, the people who resisted the new abstraction eventually got left behind. Okay, maybe that's harsh; they didn't all get left behind, but it sounded dramatic enough.

In any case, does this sound familiar?

What Makes AI Different From Every Other Abstraction

Here's the thing: every previous programming language still required you to think in the machine's terms. Python is friendlier than C, sure, but you're still writing loops, defining functions, managing state. You're still translating your intent into a structure the computer can execute.

AI flips that relationship. You express intent in natural language, your terms, and the machine figures out the implementation. You don't write a sorting algorithm. You say "sort these users by signup date" and it produces the code. Or better yet, it produces the feature, not just the function.

That's not an incremental improvement. That's a category shift. It's like going from horse-drawn carriages to combustion engines. Sure, both get you from A to B, but the underlying mechanism is fundamentally different.

But here's the catch that most people gloss over: ambiguity is now a bug, not just bad style.

In Python, if you write unclear code, the interpreter will either run it or throw an error. There's no gray area. With AI, if you express a vague intent, you get a vague implementation. The "compiler" (a.k.a. the LLM) makes its best guess, and sometimes that guess is remarkably good. Other times it's confidently wrong in ways that are hard to catch.
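To make that gray area concrete, take the earlier sorting request. "Sort these users by signup date" leaves real decisions unstated, and two different guesses are both defensible. A minimal sketch (the records and field names are hypothetical, not from AviatorFlow):

```python
from datetime import date

# Hypothetical user records. Note the unstated edge case: a missing date.
users = [
    {"name": "Ava", "signup_date": date(2024, 3, 1)},
    {"name": "Ben", "signup_date": None},  # signup never completed
    {"name": "Cy", "signup_date": date(2023, 11, 15)},
]

# Guess 1: oldest first -- and it crashes the moment a date is missing,
# because None does not compare with date objects:
# sorted(users, key=lambda u: u["signup_date"])  # raises TypeError

# Guess 2: newest first, users without a date pushed to the end.
dated = [u for u in users if u["signup_date"] is not None]
undated = [u for u in users if u["signup_date"] is None]
newest_first = sorted(dated, key=lambda u: u["signup_date"], reverse=True) + undated
```

Both are plausible readings of the same sentence. Only one matches what you actually meant, and nothing in the "language" forces you to say which.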

This is the part nobody puts in the headline. AI as a programming language has no type checker. No linter. No compiler errors. When it fails, it fails silently, with working code that does the wrong thing.

Experience Is the New Syntax

When I built AviatorFlow, something became painfully obvious: the backend took maybe 20% of my total time. The frontend took the rest.

Not because the backend was simpler. If anything, data modeling, API design, caching layers, authentication flows, that's where the real complexity lives. But I've been building backend systems for years. I know what good looks like. I know when to use Redis between the frontend and the database. I know which endpoints need a TTL and which need fresh data every time. I know the ratio of GETs to POSTs that determines your caching strategy.
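That kind of knowledge is easy to state precisely. A minimal sketch of the TTL decision, with an in-memory dict standing in for Redis (the endpoint paths and TTL values are illustrative, not AviatorFlow's actual routes):

```python
import time

# Which endpoints tolerate staleness, and for how long.
CACHE_TTLS = {
    "/schedule/today": 60,   # read-heavy, a minute of staleness is fine
    "/aircraft/status": 0,   # 0 means: always fetch fresh
}

_cache: dict[str, tuple[float, object]] = {}

def cached_get(path: str, fetch):
    """Serve from cache while the entry is younger than the path's TTL."""
    ttl = CACHE_TTLS.get(path, 0)
    now = time.monotonic()
    if ttl > 0 and path in _cache:
        stored_at, value = _cache[path]
        if now - stored_at < ttl:
            return value  # cache hit
    value = fetch()       # cache miss or uncacheable: go to the source
    if ttl > 0:
        _cache[path] = (now, value)
    return value
```

Telling the AI "cache the schedule endpoint for 60 seconds, never cache aircraft status" is a one-sentence spec. Knowing that's the right split is the part experience buys you.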

So when I described what I wanted to the AI, I described it precisely. And it built exactly that.

Frontend was a different story. I don't have deep frontend experience. My descriptions were vaguer. "Make it look clean." "Add a dashboard with scheduling." The AI did its best, but the feedback loop was brutal. Significantly more back-and-forth, more tokens consumed, more iterations to get something that felt right.

The lesson? In this new "language," your vocabulary isn't syntax. It's domain expertise. Your ability to be specific, to know what good architecture looks like, to catch when the AI takes a wrong turn, that's what separates a productive session from an exercise in frustration.

The "compiler" is an LLM. Your "type system" is experience. And if your types are weak, you're going to get a lot of runtime errors.

The Skill Shift Nobody's Talking About

For decades, we've valued engineers who can write elegant code. Clean abstractions, efficient algorithms, minimal dependencies. And those skills still matter, but their weight in the equation is shifting dramatically.

Think about what actually happens when you work with AI effectively:

  1. You define requirements. What exactly should this system do? What are the edge cases? What's the data model?
  2. You make architectural decisions. Monolith or microservices? SQL or NoSQL? Server-rendered or SPA? What's the deployment target?
  3. You evaluate output. Does this code actually do what I asked? Is it secure? Will it scale? Are there hidden assumptions?
  4. You course-correct. "No, don't use a global state for that. Use a context provider." "That migration will lock the table for too long, batch it."
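That last step, the course-correction, is where experience shows. A sketch of what "batch it" means in practice, using SQLite for illustration (the table, column, and batch size are hypothetical):

```python
import sqlite3

def backfill_in_batches(conn, batch_size=1000):
    """Backfill a column in small chunks so no single UPDATE holds locks
    for long, instead of one table-wide statement."""
    while True:
        cur = conn.execute(
            """
            UPDATE users SET status = 'active'
            WHERE id IN (
                SELECT id FROM users WHERE status IS NULL LIMIT ?
            )
            """,
            (batch_size,),
        )
        conn.commit()  # commit per batch, releasing locks between chunks
        if cur.rowcount == 0:
            break  # nothing left to backfill
```

The AI will happily generate the single-statement version; knowing why that locks a large table, and saying so, is the reviewer's job.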

Notice what's missing? Writing the code. The actual typing of functions and classes is increasingly the least important part of the job.

The best "AI programmers" I've encountered aren't the fastest typists or the ones who've memorized the most API docs. They're the ones with the deepest systems thinking, people who can hold the whole architecture in their head and spot when one piece doesn't fit.

Requirements engineering, once the boring cousin of "real" programming, is becoming the core skill. The person who can write a precise, complete specification is now more valuable than the person who can implement one. Because the implementation is increasingly commoditized. The specification is not.

Where This "Language" Still Breaks Down

Let's be real: AI as a programming language is still in its assembly era. It works, but it's rough around the edges in ways that genuinely matter.

Hallucinations are the segfaults of this new paradigm. Except worse, because a segfault crashes loudly. A hallucination compiles, runs, passes a superficial review, and then fails in production at 2 AM. The AI will confidently reference an API that doesn't exist, use a library method that was deprecated three versions ago, or implement a "standard" pattern that it invented on the spot.

Context limits are the memory constraints. Try describing a system with 50 microservices, shared databases, event-driven communication, and complex deployment dependencies to an AI. It simply cannot hold all of that in working memory simultaneously. You end up re-explaining context constantly, like working with a brilliant contractor who has amnesia.

There's no debugger yet. When AI-generated code goes wrong, you're reading someone else's code that nobody actually wrote. There's no commit history of thought process. No PR review trail. No "I chose this approach because..." comments. You're reverse-engineering intent from implementation, which is exactly the problem we've been trying to solve for 50 years.

Reproducibility is inconsistent. Give the same prompt to the same model twice and you might get meaningfully different code. In traditional programming, the compiler is deterministic. In AI programming, the compiler is stochastic. That's a genuinely hard problem for teams that need consistency across a codebase.

None of these are dealbreakers. But pretending they don't exist is how you end up with a production system built on vibes and prayers.

What This Means for Engineers (Honestly)

I've seen two camps form around this topic, and they're both wrong.

Camp 1: "AI will replace all programmers." No. It won't. Not anytime soon. AI dramatically amplifies what a skilled engineer can do, but it doesn't replace the judgment, context, and taste that come from years of building and breaking systems. Someone still needs to know why you'd choose PostgreSQL over MongoDB for this particular workload. Someone still needs to recognize that the AI's elegant solution has a subtle race condition.

Camp 2: "AI is just autocomplete, it won't change anything fundamental." Also wrong. If you're still writing every line by hand and treating AI as a glorified Stack Overflow, you're bringing a horse to a Formula 1 race (I really wanted "knife to a gunfight," but it didn't quite fit). The engineers who learn to work with AI, to specify precisely, review critically, and iterate rapidly, will operate at a completely different velocity.

Here's what I genuinely believe: the fundamentals matter more now, not less.

Data structures, systems design, networking, security, debugging instincts. These are the things that make your AI-assisted work actually good. Without them, you're just generating plausible-looking code that might work. With them, you're building real systems at speeds that would have been impossible two years ago.

At Cargill, I scaled a NetDevOps team from one engineer to ten, managing over 90,000 networking devices globally. That experience (understanding network automation at scale, knowing where things break, knowing what "good" looks like for infrastructure code) is exactly what makes AI-assisted work effective. It's not typing speed. It's pattern recognition. It's taste.

The engineers who will thrive aren't the ones who can write the most code. They're the ones who can think the most clearly about what needs to be built and why.

We're All Beginners Again

Here's the humbling part.

When Python showed up, C programmers had a head start. They understood programming concepts and could transfer them. But they still had to learn a new way of thinking. The ones who insisted on writing C-style Python missed the point entirely.

The same thing is happening now. Senior engineers have a massive advantage because they understand architecture, tradeoffs, and failure modes. But they still need to learn this new "language": how to specify intent clearly, how to review AI output critically, how to manage the feedback loop efficiently.

And junior engineers? They're closer to the starting line than anyone wants to admit. They may lack the deep experience that makes AI-assisted development precise, but they also don't carry the baggage of "this isn't how real programming works." Some of the most creative uses of AI I've seen come from people who never learned to do things the old way.

We're all figuring this out. The senior engineer who's been writing C++ for fifteen years and the bootcamp grad who just shipped their first app, they're both learning a new way to work. The gap in effectiveness comes from domain knowledge and systems thinking, not from who can write a tighter for-loop.

That's simultaneously terrifying and exciting. The playing field hasn't been this unsettled since the internet went mainstream.

I don't know exactly where this leads. Nobody does. But I know that sitting it out isn't an option. The engineers who learn to speak this new language, who combine deep technical fundamentals with the ability to direct AI precisely, will build things the rest of us can barely imagine.

And the ones who refuse? They'll be writing the equivalent of assembly while everyone else has moved on.

The best time to start was six months ago. The second best time is now.