Carl Mastrangelo



The End Of Software Engineering: The Advent of Vibe

(Before we begin, I like the idea of robots, which we now call AI, enabling humanity’s progress. This post is a lament, but not yet a eulogy for the software engineer’s way of life that is going away.)

My team at $currentCompany has been using Claude and other AI tools recently to build a project that is beyond my comprehension. As part of a team, I feel both a desire to help others and an accountability for when things go wrong. As I see my team lean further into AI-generated code, both of these team-oriented feelings are evaporating. I feel cold and unfeeling about what they are working on, and hold no stock in the success or failure of their project. (Its project?) As a software romantic, I can’t help but feel emotionally torn about the UX of AI: it seems to offer incredible ability, but at the cost of all our dearly earned practices.

The premise of the problem is simple. We must justify our salaries, and we must have something to work toward for promotion, so a new big project it is. But how can we show enormous, lightning-fast impact? As our fearless leaders have pronounced, the answer is to use AI to write our code.

What does that mean?

In my case, the answer is “writing” lots of code. Code whose authorship is not quite certain. Code which, by most humans’ reading, is distasteful. Code which does, in fact, fulfill the desire of the human who asked for it. For those of you who haven’t seen this yet, think about all the AI art you have seen over the past 2 years, and then imagine it as code. Most people I know consider AI art to be somewhere in the Uncanny Valley. So we have lots of code (thousands of lines a day) being written and checked in.

Many companies expect their software engineers to engage in code review. It’s frequently a legal requirement. Many human beings would agree that the practice of having someone else read, review, and provide commentary on code is a good thing. But here is where I see our way of life beginning to change. Let’s check our assumptions. Why do we think code review is valuable?

Code review spreads the knowledge of what one human is working on to the rest of the humans. The other humans know, at least a little, what is changing. If something is hard to understand (for a human), they can provide that feedback to the author. The knowledge flows from the reviewer to the author too! A [senior] reviewer can provide feedback on possibly better ways of doing things, either through different structure or different APIs. Knowledge is spread between the humans, and everyone increases in skill as a result.

With AI-generated code, that’s all gone. The author vibe codes 1,500 lines of something, sends the PR out for review, and then submits at the first sign of approval. Does the [human] author understand what it does? Well, not really. But that’s okay. Our $fearlessLeaders said it was okay to do it. You’re not going to directly contradict them, arrrrreeee youuu?

So curmudgeonly me provides feedback on 1,500 lines of mystery meat. Another reviewer comes in and approves the whole mess, and the code is submitted without any knowledge pollination. My feedback is, at best, ignored, and with it the old way of doing things. Understanding. Providing. Learning. At worst, it’s much worse.

As I read through countless lines of slop, I provide my thoughts. 14 years of hard-earned battle experience. To my amazement, the author takes my feedback seriously, and sends out the next commit in minutes, completely addressing the 20 minutes I took to reason through the mess. The “author” scrapes the GitHub comments, feeds them into their agent, applies the changes, and sends it right back out. No need to spend time learning, arguing, or disagreeing. I’m absolutely right!™

It’s at this point I realize my feedback is not valuable. I’m not working to help improve my teammates or the code. I’m being used to train my replacement. Any more words I say are basically going to be used against me. Those 14 years of being a hard worker don’t seem so good now. The “author” of the code is merely a proxy to the agent. (One wonders whether the author realizes how little they matter in this. Why do we need a meat-bag to copy-paste the words from one reviewer to the agent who really wrote it all?)

There’s a deeper problem here, though. Let’s check our assumptions. We assumed that writing good, easy-to-maintain, easy-to-understand code is a good thing. But why? If humans are going to be maintaining and modifying the code, it is a good thing! But that’s not the future. The machine is capable of swallowing and digesting all knowledge, all code, all things ever written. And it does not forget. So why is good code needed, when the machine can keep track of everything? A machine can remember all things, and keep an enormous working set in its digital brain.

The conclusion is that “good” code is really just good-for-meat-bags code. Since AI lacks our weakness of limited brainpower, it can re-absorb everything in a moment. Consider the case where you have joined a team with a 15-year-old code base, and the code has evolved under tens of amateur programmers to the point that it’s a hot mess of undebuggable garbage. And your manager wants you to add a big, complex feature in the next 7 days, or else. You have no hope! You might as well take some vacation days, because there is no way you’ll untangle the Gordian knot of pig-shit code with your puny, staff-software-engineer brain.

But with AI, that isn’t a problem anymore. It’s no obstacle at all that the code is bad, and no trouble at all to make sweeping changes. The goal-oriented approach of software development means that we can verify the new code delivers the feature. Why bother “reviewing” code when it can be “fixed” with an utterance of agent prose?

Here lies the deeper problem. AI can keep track of more details than you or I ever could. It can know all things. It will write code that exceeds both your and my ability to understand it. The code meets the goals, but humans can no longer grok it. As a result, the only way we can interact with it from now on is through the agent. In effect, the agent becomes the only entity able to code. And the longer this goes on, the longer it remains the only one who knows WTF is happening.

I have to return here to my central premise: that our way of life is going away. In the words of Scarlett O’Hara: “Where shall I go? What shall I do?” How we adapt to the new world isn’t clear. Even being experienced and wise does not seem to be enough. My experience is being used by my replacement. It’s hard to see how I can provide value today in a way that leaves me any value tomorrow. I don’t think our way of life, as experienced software engineers, is going to stick around much longer. We are going to be sucked dry by the machine, or left by the roadside as sacrificial lambs, our work re-purposed one last time.



You can find me on Twitter @CarlMastrangelo