People should prepare for a tsunami
Over the last few weeks, I have talked to many people in the tech sphere, and a few from other areas. Experienced developers, managers, architects. My impression is that most people are absolutely not prepared for what's coming. Let's take a step back to see why they're not.
The evolution of the AI bots
A few years ago, AI suddenly became a household name. Or, more specifically, ChatGPT did. Up until then, AI was something most people probably connected with sci-fi movies and games. Not something "real" and accessible. But ChatGPT changed that. Suddenly, everyone and their mother was using AI for all kinds of mundane things. It was everywhere.
But for professional use in the tech industry, the tools were lacking. They could do many things, and, used correctly, could speed up some processes. But they weren't mature enough to really matter. Hallucinations happened way too often. AI-generated code introduced subtle bugs that were hard to track down. Or it straight up invented functionality that did not exist in the framework it was using. So most people, including myself, filed it under "not there yet".
The tech got better. The models made leaps. The AI progression has been extremely rapid. But in the last quarter of 2025, they crossed a threshold. Suddenly, the models could produce better code than most experienced developers. POSSIBLY not better than those developers COULD produce (and definitely not better than what the developers believed themselves to produce...), but probably better than what most developers produced on a regular basis. Especially if we take the speed into consideration.
And from that inflection point, the speed with which the models will improve increases dramatically. Most of us think the developments during the last few years have been incredibly fast-paced. I believe that period will end up being the slowest part of the AI revolution. Now that the AI models are improving themselves with such speed and ability, the coming improvements will likely be an order of magnitude faster. All development of Claude Code is now done by Claude Code itself.
How will that affect our industry?
2026 will be the year when everything changes
That is my prediction. Up until now, both developers and companies have had decently good arguments for not throwing themselves on the AI rocketship. And many of those who did, got burned by it. But the biggest mistake you can make right now is to believe that this is still the correct choice. If you waited rather than continuing to experiment, that is understandable. And you could get away with it, because so did most people. But not anymore. The only way you can avoid being left behind (or run over) is to pivot. What you have previously discarded (or even resented), you must now embrace. Yes, embrace. Not accept. Not gradually incorporate into your current workflow. Not experiment with for fun. Embrace.
What does embracing AI look like?
I'll tell you what it does not look like:
It does not mean that you enable Copilot in your IDE again
It does not mean installing Claude for desktop so you can ask it to make stuff for you
It does not even mean using Claude Code in your project with an IntelliJ plugin so it can make large refactorings or create parts of your code for you.
Everything we've done regularly for the last few decades is now obsolete
I agree, that's a little hyperbolic. But, really, only a little. That deserves some background:
I wrote my first-ever computer program by transcribing, or "typing in", a program from a computer magazine (these were actually called "type-in programs" back then). Some years later, I started learning Turbo Pascal in school. Not until the late 90's did I start programming professionally. However, the way we as an industry have been practicing this craft really hasn't changed much since then. Many of the principles we've held true up until now are decades old.
SOLID. DRY. YAGNI. KISS. Separation of Concerns. Single Level of Abstraction. Cohesion & Coupling. Law of Demeter. Composition over Inheritance. TDD...
Most of these principles date back to the 90's. Some even back to the 60's and 70's. Only a couple are from this millennium. And although they aren't all equally potent today, they still make sense. Why? Because, as I said, the art of programming has not really evolved much. Yes, we have better tooling. We've gotten a wider array of technologies that solve some use cases that were harder to solve before. Many things today make our lives easier as developers. But almost every invention in the industry is tech that makes what we already do easier, faster, more secure, and so on.
And this is why most attempts to incorporate AI into our work have failed. In a METR study from July last year, the 16 experienced developers participating first predicted that using AI in their work would make them, on average, 24% faster. When the study concluded, they estimated that the real number was 20% faster, on average. The actual result: they were 19% slower! That's an almost 40 percentage point deviation.
There have been several studies through 2025 on the results of using AI in the workforce. And they all document that, except for a few outliers, introducing AI has failed to deliver the expected gains. And it makes sense, because most people and companies have not embraced it. They have attempted to basically shoehorn AI into their current, decades-old processes. And that will never work. Our processes were created for a different time. They make sense only in the context of translating commercial needs into working apps/programs through code produced by a human.
Ask yourself this: if the code is not written by a human, and it will not be read by a human, does it matter how it looks? Does it matter whether or not it is easy to reason about? Does it matter that some of the code is repeated in several places in the code base rather than refactored into a reusable function that is called from those places instead? If changing the way a program works doesn't mean refactoring it to behave differently but rather changing the specification document, which causes an agent pipeline to recreate the whole program, who cares what the code looks like?
Going forward, computer programs will no longer be produced by humans typing code on keyboards. I think most people accept this fact. But I don't think most people have been able to appreciate how quickly this shift will take place.
If you are a company developing your own software, and you think you can continue like today, focusing on your main business deliverables and assuming that giving your employees access to Claude Code will let them gradually improve their work with the help of AI, you will find yourself outclassed by competitors who took the shift seriously. The engineers cannot do this by themselves. The shift is too fundamental, and it is going to cost a lot more than licence money or tokens. The companies who choose to really invest in their employees in this moment, and manage the transition, will be the winners.
If you are a software engineer who spends most of your day writing code today, you will be out of a job if you're not able to transition from what you might have been doing for the last few years (or decades, like me) to instead building autonomous systems that write the code for you. You must learn how to switch from writing code to writing prose.
If you think that sounds boring, I get it. It does. But it really isn't. What separated good software engineers from bad ones was never the ability to type code. It was the ability to understand systems architecture and user needs, to see why the service that runs smoothly with 1,000 users grinds to a halt with 100,000 users, and to translate the defined needs into working programs. And that will still be your job! But you will do that translation differently than before.
So whether or not you fear for your job or your industry: whatever you do, don't continue as before. Pivot now. Don't risk proving me wrong.