What AI Changes About Software Engineering (And What It Doesn’t)
There is a lot of noise right now about AI and software engineering.
Depending on who you ask, AI is either a hoax, a toy for vibe coding, or a near-total replacement for engineers. Some companies claim most of their code is already written by AI. Others argue that nothing fundamentally changes.
All of these positions miss something important.
Software engineering was never primarily about writing code. Writing code is an artifact of the job, not the job itself. The real work is understanding systems, cutting through ambiguity, making tradeoffs, and building confidence that what we ship actually works.
AI does not remove this work. It changes the speed at which we can do it.
The engineering loop has not changed
No matter the language, stack, or seniority level, engineering still follows the same loop:
Research
Plan
Implement
Verify
Every task fits into this shape. A one-line config change and a kernel-mode refactor both follow the same loop. The difference is how much time and rigor each stage requires.
AI does not collapse these stages into a single step. It does not magically turn engineering into a one click operation.
What it does is compress the feedback loop inside each stage.
You can research faster.
You can explore plans faster.
You can implement faster.
You can verify faster.
The loop stays intact. The latency between questions and answers shrinks.
That distinction matters. Most failed attempts at using AI in real codebases come from trying to skip stages rather than accelerate them.
The real productivity unlock is feedback speed
Traditional engineering is slow because of waiting.
Waiting to understand unfamiliar code.
Waiting to find the right documentation.
Waiting to validate assumptions.
Waiting to discover that a mental model was wrong.
AI is extremely good at turning “I wonder how this works” into a first draft explanation. Not a perfect one. Not always correct. But fast enough that humans can react, correct, and iterate.
This is the core leverage:
AI lets us run more high quality loops per unit time.
If you use AI primarily to generate code, you are optimizing the least interesting part of the process. And yes, AI is also extremely fast at generating code. That can be a real bottleneck remover. The danger is treating that speed as a shortcut around research, planning, and verification.
The three building blocks that actually matter
If you want a system you can trust, there are three things you have to be deliberate about:
Context (knowledge the system needs).
Intent (what you want to achieve).
Mode (how you want it to behave right now).
These are often conflated. Keeping them separate is the difference between a reliable workflow and constant prompt fiddling.
1) Context
Context is the knowledge substrate: your codebase structure, the relevant files, architecture notes, and domain-specific knowledge.
In practice, you build context by doing things like:
• Checking in markdown files that describe architecture and invariants
• Running sub-agents that explore the codebase and summarize execution flow into a compact brief
• Building specialized RAG systems for deep domain expertise
The goal is simple. Give the system enough truth to work with, without dragging in noise.
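As a rough illustration, here is a minimal sketch of the sub-agent idea in plain Python: it reads a checked-in architecture note plus a handful of relevant files and assembles them into a compact context brief. The file paths and the build_context_brief helper are hypothetical; the point is that context gets gathered deliberately rather than pasted in ad hoc.

```python
from pathlib import Path

# Hypothetical inputs: a checked-in architecture note plus the files
# relevant to the task at hand. Adjust paths to your own repository.
ARCHITECTURE_NOTE = Path("docs/architecture.md")
RELEVANT_FILES = [Path("src/billing/invoice.py"), Path("src/billing/tax.py")]

MAX_CHARS_PER_FILE = 4_000  # keep the brief compact: enough truth, minimal noise


def build_context_brief(note: Path, files: list[Path]) -> str:
    """Assemble architecture notes and relevant source into one compact brief."""
    sections = []
    if note.exists():
        sections.append(f"# Architecture notes\n{note.read_text()}")
    for f in files:
        if f.exists():
            body = f.read_text()[:MAX_CHARS_PER_FILE]
            sections.append(f"# File: {f}\n{body}")
    return "\n\n".join(sections)


if __name__ == "__main__":
    brief = build_context_brief(ARCHITECTURE_NOTE, RELEVANT_FILES)
    print(f"Context brief: {len(brief)} characters from {len(RELEVANT_FILES)} files")
```

A real sub-agent would layer summarization on top, but the shape is the same: a small, reviewable payload instead of the whole repository.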
2) Intent
Intent is the goal: what problem you are trying to solve, what constraints matter, and what success looks like.
Intent should be short and stable. If you find yourself repeating the same instructions over and over, it usually means those instructions belong somewhere else.
3) Mode
Mode is the operating stance: research, planning, review, implementation, and so on.
This is where behavior rules belong.
For example, “do not edit code yet” is not intent. It is a mode constraint. Intent is where you want to go. Mode is how you want the system to act right now.
In practice, you can build modes using predefined, specialized prompts. Sometimes that is as simple as an inline slash command in your editor. Sometimes it is implemented as a sub-agent invocation that runs in isolation and returns a structured artifact.
When context, intent, and mode are explicit, AI becomes predictable.
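To make the separation concrete, here is a minimal sketch, with entirely hypothetical names, that keeps the three pieces apart and only combines them when a request is actually sent. The MODES text stands in for whatever predefined prompts or slash commands your tooling uses.

```python
# Hypothetical sketch: context, intent, and mode live separately and are
# composed only at the moment a request goes to the model.

MODES = {
    "research": (
        "You are in research mode. Do not edit code. "
        "Explain how the system works today and list open questions."
    ),
    "plan": (
        "You are in planning mode. Do not edit code. "
        "Produce an implementation plan and a verification plan."
    ),
    "implement": "You are in implementation mode. Follow the approved plan exactly.",
}


def compose_prompt(context: str, intent: str, mode: str) -> str:
    """Combine the three building blocks into one explicit request."""
    return (
        f"{MODES[mode]}\n\n"
        f"## Context\n{context}\n\n"
        f"## Intent\n{intent}\n"
    )


# Example usage: same context and intent, different operating stance.
context = "Billing service; invoices are generated by src/billing/invoice.py ..."
intent = "Add per-country tax rounding without changing existing invoice totals."

print(compose_prompt(context, intent, "research"))
```

Notice that switching from research to implementation changes only the mode; the context and intent stay stable, which is exactly what makes the behavior predictable.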
Human in the loop
One mistake people make is treating “human in the loop” as a vague safety slogan. In practice, humans should be involved at very specific points:
At the end of each stage.
Research produces a Research Brief
A concise snapshot of:
• How the system works today
• Where the change lives
• What constraints apply
• What is still unknown
A human reviews this for correctness before anything else happens.
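One way to keep that review concrete is to give the brief a fixed shape. A hypothetical sketch, assuming you want the research stage to return a structured artifact rather than free-form prose:

```python
from dataclasses import dataclass, field

# Hypothetical shape for a Research Brief; the field names are illustrative.
@dataclass
class ResearchBrief:
    how_it_works_today: str                 # current behavior of the system
    where_the_change_lives: list[str]       # files, modules, or services involved
    constraints: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)

    def ready_for_review(self) -> bool:
        """A brief is reviewable once the system description and location exist."""
        return bool(self.how_it_works_today and self.where_the_change_lives)
```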
Planning produces an Implementation Plan and a Verification Plan
The Implementation Plan is the blueprint. It lays out the exact steps and the structure of the change, including what gets added, what gets modified, and how the pieces connect.
The Verification Plan describes how correctness will be proven.
Tests are not an afterthought. They are part of the plan.
Humans review this before code is written.
Implementation produces the Change Set
Code and tests that map directly to the plan.
Review here is less about discovering brand new ideas and more about confirming adherence to the plan and basic quality.
Verification produces Results
Verification should produce clear, observable evidence: test results, static analysis, linter output, relevant performance metrics, and any required security or safety checks.
Humans decide to ship based on data, not a feeling.
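As one hedged example, assuming a Python project that runs pytest for tests and ruff for linting, that evidence can be collected by a small script rather than remembered informally. The command choices are assumptions; swap in whatever your project actually runs.

```python
import subprocess

# Assumed toolchain: pytest for tests, ruff for linting. Replace with the
# commands your project actually uses; the point is recorded evidence.
CHECKS = {
    "tests": ["pytest", "-q"],
    "lint": ["ruff", "check", "."],
}


def run_checks() -> dict[str, dict]:
    """Run each check and capture its exit code and output as shippable evidence."""
    results = {}
    for name, cmd in CHECKS.items():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = {
            "command": " ".join(cmd),
            "passed": proc.returncode == 0,
            "output": proc.stdout + proc.stderr,
        }
    return results


if __name__ == "__main__":
    for name, result in run_checks().items():
        status = "PASS" if result["passed"] else "FAIL"
        print(f"{status}  {name}: {result['command']}")
```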
This structure scales naturally with task complexity. Simple tasks produce lightweight artifacts. High risk tasks produce deeper ones. The shape stays the same.
Closing thoughts
AI does not make engineering trivial.
It makes it faster.
That speed is a double edged sword. When iteration becomes cheap, it is easy to generate code faster than we understand it. Many people have already experienced shipping code they do not fully understand. Left unchecked, AI will preserve whatever complexity already exists, mistaking history for intent.
The solution is not to slow down or reject AI.
The solution is to anchor speed to understanding.
Research and planning artifacts create shared mental models. Verification plans turn confidence into something measurable. Human gates prevent conversational drift from turning into architectural drift.
The teams and individuals who benefit most will not be the ones chasing maximum code generation. They will be the ones who use AI to compress feedback loops while keeping the engineering loop intact.
Research.
Plan.
Implement.
Verify.
Run the loop faster.
Keep the human gates.
Ship with confidence.