Marian Sorca’s Take on AI in Software Companies: AI Coding Works Best When Experienced Engineers Stay in Control
By Marian Sorca - Full-Stack Software Engineer with 15 years of experience, using AI-assisted coding for the past 1.5 years
AI-assisted coding is one of the most powerful productivity tools I have seen in software development.
I use it myself, and I think it is genuinely valuable. It can speed up implementation, reduce repetitive work, help with boilerplate, improve momentum, and multiply output. Used well, it is a real advantage.
But my opinion is simple:
AI-assisted coding works best when you already know how to build that part by yourself.
That is where the real value is.
If you understand the architecture, the tradeoffs, the edge cases, and the likely failure points, then AI becomes a force multiplier. It helps you move faster. It helps you produce more. It helps you save time on the parts that do not need your full mental energy every single minute.
But when that understanding is missing, AI can become dangerous very quickly.
AI is best as a speed multiplier, not a replacement for engineering judgment
This is the part I think some companies still do not fully understand.
AI is extremely useful when it helps an experienced engineer go faster. In that case, the human is still doing the most important work: defining the system, understanding the constraints, reviewing the output, checking the architecture, and making the final decisions.
That is a very strong setup.
The engineer knows what “good” looks like. The engineer knows when the code is wrong, fragile, misleading, overcomplicated, insecure, or poorly aligned with the bigger system. The engineer can use AI for speed without giving up control.
That is the sweet spot.
AI writes faster.
The human thinks deeper.
Together, the result can be much better than either one alone.
The problem starts when companies try to let AI do too much
What worries me is not AI itself.
What worries me is when companies start acting as if AI can replace too much of the thinking, too much of the review, and too much of the engineering responsibility.
I do not know why some companies are so eager to let AI operate with too little human intervention, especially in production systems or critical business logic.
Software systems are not simple.
They are usually large, layered, interconnected structures with many moving parts: frontend, backend, databases, infrastructure, APIs, auth, payments, caching, queues, third-party integrations, deployment pipelines, logging, monitoring, error handling, permissions, performance, recovery paths, and more.
When you change one part, you often affect many others.
That is why software is not just about “making the code work.” It is about making the system work safely, predictably, and sustainably over time.
And that is exactly where too much blind trust in AI becomes risky.
Software is a multi-level system, not a pile of code snippets
This is one of the biggest misunderstandings in the current AI coding conversation.
Some people talk about software as if it is mostly a matter of generating code files quickly. It is not.
Real software is a multi-level system.
There is the visible code.
Then there is the structure underneath it.
Then there is business logic.
Then data integrity.
Then scaling behavior.
Then deployment behavior.
Then security.
Then unexpected user behavior.
Then failure recovery.
Then long-term maintainability.
A tool can help write a piece of code, but that does not mean it understands the full consequences of that code inside a real business system.
That is the difference between shipping something that looks functional and shipping something that is actually safe, reliable, and durable.
If you mess that up, the damage can be much larger than people think.
A weak change in the wrong place can take down features, break payments, expose data, create downtime, corrupt records, or hurt trust in the product.
And once that happens, the cost is not just technical. It becomes business damage.
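To make that concrete, here is one way fragility hides inside plausible-looking output: retry logic around a payment call. This is a hypothetical Python sketch (`charge_card`, `PaymentTimeout`, and the `idempotency_key` parameter are illustrative stand-ins, not any real provider's API). The first helper retries on timeout, which looks sensible but can double-charge when the request actually succeeded server-side and only the response was lost; the second reuses one idempotency key across attempts so the provider can deduplicate.

```python
# Hypothetical sketch: all names here are illustrative, not a real payment API.
import uuid


class PaymentTimeout(Exception):
    """Raised when the payment provider does not respond in time."""


def fragile_charge(charge_card, amount, retries=3):
    # Plausible-looking retry loop. But if an earlier attempt succeeded
    # server-side and only the response timed out, each retry charges again.
    for _ in range(retries):
        try:
            return charge_card(amount)
        except PaymentTimeout:
            continue
    raise PaymentTimeout("gave up after retries")


def safer_charge(charge_card, amount, retries=3):
    # Same retry loop, but every attempt reuses one idempotency key,
    # so the provider can recognize and deduplicate repeated requests.
    key = str(uuid.uuid4())
    for _ in range(retries):
        try:
            return charge_card(amount, idempotency_key=key)
        except PaymentTimeout:
            continue
    raise PaymentTimeout("gave up after retries")
```

Both versions pass a happy-path test. The difference only shows up under a lost response, which is exactly the kind of failure an experienced reviewer asks about and a quick glance at generated code does not.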
AI can generate code faster than it can understand systems
That is probably the shortest way to explain my view.
AI is often very good at generating output fast.
But software companies do not fail because they lacked generated code. They fail because systems were misunderstood, assumptions were wrong, dependencies were ignored, edge cases were missed, and fragile decisions reached production.
That is why I believe smart, experienced humans still need to stay in the loop.
You need people who can look beyond the output and ask:
Does this fit the system properly?
What breaks if this goes wrong?
Is the logic correct under pressure?
Will this scale?
Is this secure?
Is this maintainable six months from now?
Does this align with the real product and business needs?
Those questions are still very human questions.
And in serious software environments, they matter more than raw speed.
My view on “vibe coding”
I hinted before that some of the outages and software problems we have seen over the past year may have been caused by what people loosely call vibe coding.
By that, I mean building too much with AI-generated output, too quickly, with too little deep understanding, too little review, and too little system-level thinking.
That concern still feels valid to me.
I am not saying every outage was caused by AI coding. That would be too simplistic. Software failures have always existed. Teams have always made mistakes. Engineers have always shipped bad assumptions or weak implementations.
But I do think the current environment creates a new kind of risk:
teams moving faster than their actual understanding.
That is dangerous.
If AI allows people to create more code than they can properly reason about, then companies can end up with systems that look productive on the surface but become fragile underneath.
And that fragility may only show up later - during scale, during peak usage, during deployment, during refactors, or during incidents.
That is one of the biggest reasons I think the industry needs to be careful.
AI is cool - but cool is not the same as safe
I like AI. I use AI. I benefit from AI.
But there is a difference between a tool being impressive and a tool being safe to trust with too much autonomy inside important systems.
A lot of the excitement around AI-assisted coding is justified. It really can help strong engineers do more. It can save time. It can remove friction. It can make smaller teams more productive. It can help experienced developers stay in flow longer and focus more of their attention on higher-level work.
That is all real.
But none of that removes the need for judgment.
None of that removes the need for ownership.
None of that removes the need for experienced people who understand what they are building and what could go wrong.
In software companies, those things still matter a lot.
The best setup is not anti-AI - it is pro-responsibility
I do not think the answer is to reject AI.
That would be the wrong conclusion.
The right conclusion, in my opinion, is to use AI in a smarter way.
Use it to accelerate strong people.
Use it to reduce repetitive work.
Use it to explore options faster.
Use it to draft, refactor, summarize, and speed up execution.
But do not confuse faster output with deeper understanding.
And do not confuse AI assistance with engineering ownership.
The best setup is one where AI helps experienced engineers move faster without removing them from the loop.
That is how you get the upside without creating unnecessary system risk.
Why experienced humans still matter so much
This is really the core of my view.
Software companies still need smart, experienced humans in the loop because somebody has to understand the full shape of the system.
Somebody has to know the architecture.
Somebody has to understand the business impact of technical mistakes.
Somebody has to review changes with context.
Somebody has to notice when the AI output is technically plausible but strategically wrong.
Somebody has to make the call on what should never reach production.
That responsibility cannot just disappear because a model can write code quickly.
If anything, the need for strong engineers may become even more important in an AI-heavy workflow, because the volume of generated output increases and the cost of weak oversight grows with it.
Final verdict
My personal view is straightforward:
AI-assisted coding is most effective when you already know how to build that part yourself.
That is when AI becomes a multiplier instead of a liability.
It helps you go faster, ship more, and remove friction from your work. But it only works well when the human behind it understands the system deeply enough to guide it, review it, and own the result.
The real danger is not AI coding itself.
The danger is letting AI do too much without enough experienced human intervention.
Software systems are too complex, too layered, and too connected for that kind of blind trust.
That is why I still believe this:
AI is cool, but software companies still need smart, experienced humans in the loop.
That is not old thinking.
That is responsible engineering.