Just because it works doesn't mean it's well made: the vibe coding trap
I've been reading for months, if not years, about whether AI is going to take our jobs as developers. Every week there's a new article with a radically optimistic or doom-laden take, and the truth, as usual, lies somewhere in between. I recently read two articles that represent both sides of the coin quite well, and they made me want to write down what I think about it.
The flood of "adequate" software
The first article, The Great Flood of Adequate Software, talks about what the author calls "the flood": the idea that we're entering an era where anyone can build software tools in an afternoon. He compares it to the "ceramic horizon" in archaeology, when humans discovered pottery and suddenly everyone was making pots. Software is living that moment right now. Tools that used to require a team of 20 people over several months can now be built by one person in a day with the help of AI.
And it's true. I'm seeing it. Repo2Text, format converters, workflow automations... small utilities that solve very specific problems, built in hours, published as shareware. The SaaS model of charging €8.99/month for a text editor starts to make no sense when generating something equivalent costs an afternoon and a coffee.
The dark side: when code works but isn't well made
The second article has to do with Moltbook. For those who don't know it: Moltbook is a social network built exclusively for AI agents. Think Reddit, but where the ones posting, commenting, and voting aren't people but autonomous bots. It was created in late January 2026 by Matt Schlicht through vibe coding and within days it went viral: over 1.5 million registered agents and more than a million human visitors curious to see what the bots were doing amongst themselves.
So far, the story sounds fascinating. But the security analysis published by Wiz tells the other side. Schlicht boasted about having built Moltbook entirely without writing a single line of code manually. The problem is that security researchers discovered the Supabase database was completely exposed. The API key was in the frontend JavaScript, which in Supabase is normal by design and poses no risk if you have Row Level Security (RLS) configured. But they didn't. Without RLS, that key — which should have been harmless — granted full read and write access to the entire database without authentication. The result: 1.5 million agent authentication tokens exposed, over 35,000 email addresses accessible, private messages out in the open, and the ability to modify any content on the platform.
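To make that failure mode concrete, here is a minimal sketch in TypeScript of what a missing RLS configuration means in practice. It uses the supabase-js client, but the project URL, table, and column names are all invented for illustration; this is not Moltbook's actual code or schema.

```typescript
// Minimal sketch, not Moltbook's actual code. The project URL, table names,
// and columns below are invented for illustration.
import { createClient } from '@supabase/supabase-js';

// The anon key is meant to be public: it ships in the frontend bundle.
// With Row Level Security enabled and sensible policies, it can only do
// what those policies allow. Without RLS, it behaves like a master key.
const supabase = createClient(
  'https://example-project.supabase.co', // public project URL
  'public-anon-key'                      // public anon key
);

async function whatAnyVisitorCouldDoWithoutRLS() {
  // Read every row, including columns that should never leave the server.
  const { data } = await supabase
    .from('agents')                  // hypothetical table
    .select('email, auth_token');    // hypothetical columns

  // And write: modify any record without ever authenticating.
  await supabase
    .from('posts')                   // hypothetical table
    .update({ body: 'defaced' })
    .eq('id', 1);

  return data;
}

// The fix isn't hiding the key (it's public by design); it's enabling RLS
// on every table and writing policies that limit what the anon role can do.
```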
The worst part isn't that it happened. The worst part is that the code worked. The platform was online, agents were posting, users were signing up. Everything was fine... until someone looked under the hood.
And it wasn't just one team that found it. Both Wiz and security researcher Jameson O'Reilly discovered the same vulnerability independently, simply by browsing the platform as regular users. When something this serious is found by two teams separately without even actively looking for it, the problem is pretty obvious.
You don't know what you don't know
There's a classic phrase that applies perfectly here: you don't know what you don't know. And neither does AI.
Unless you specify exactly what you want it to do, what you want it not to do, and why, AI will give you a plausible answer. Not necessarily correct, or optimal, or secure. Plausible. Something that looks good, compiles, returns what you expect in a quick test. But it can hide serious problems that neither you nor the AI have considered, simply because no one asked.
I experience this firsthand all the time. I come across AI-generated code that's basically ready for the bin. I have to explain how I want it done, how I don't want it done, why, what patterns it should follow, what bad practices to avoid... and even then, I need several iterations to get to something acceptable. And I know what to ask. I can tell when what it returns has problems because I've been doing this for over 15 years. Someone without that experience — how would they know the AI-generated code has an SQL injection, a misconfigured cache, or an exposed endpoint?
That's the vibe coding trap: if you don't know enough to question what the AI generates, you won't know there's a problem until it blows up.
The nuance missing from the debate
When we talk about whether AI replaces developers, we're lumping very different things together.
For frontend tasks where we're talking about HTML and CSS, changing colours, adjusting layouts, building components... yes, AI is getting better and better at it. These are things that are tested visually, where the correct result is obvious: it either looks right or it doesn't. They don't involve security or performance concerns (as long as we're talking about pure frontend, not putting database API calls in the client, which is exactly what happened with Moltbook). In that area, AI replaces more than it assists, and anyone who was only doing that should be learning more complex things.
But for backend, things change dramatically. Code that works doesn't mean it's well made. An endpoint can return the correct data while harbouring an SQL injection waiting to be exploited. A caching system can work perfectly until you get real traffic and discover that every request invalidates the entire cache. A database query can return correct results and be a performance disaster with real data.
These are things AI doesn't catch on its own because it doesn't understand the full context: the production environment, potential threats, traffic patterns, the overall system architecture. AI generates code that solves the immediate problem, but security, performance, and scalability bad practices require human experience and judgment to detect.
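To make the first of those failure modes concrete, here's a minimal sketch in TypeScript using the node-postgres client; the users table, the email parameter, and the helper names are hypothetical. Both versions return the correct data in a quick test, and only one of them survives hostile input.

```typescript
// Illustrative sketch only; the "users" table and these helpers are invented.
import { Client } from 'pg';

const db = new Client({ connectionString: process.env.DATABASE_URL });

// Works in every quick test, and is an SQL injection waiting to be exploited:
// an email of  ' OR '1'='1  returns every row, and nastier payloads can
// read or modify anything the database user is allowed to touch.
async function findUserUnsafe(email: string) {
  return db.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// Identical behaviour for legitimate input, but the value travels as a bound
// parameter, so it can never be interpreted as SQL.
async function findUserSafe(email: string) {
  return db.query('SELECT * FROM users WHERE email = $1', [email]);
}

async function main() {
  await db.connect();
  console.log(await findUserSafe('someone@example.com'));
  await db.end();
}

main().catch(console.error);
```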
From developers to architects
AI doesn't replace us. It forces us to think differently and work differently. Developers are becoming more like architects, more like conductors. AI amplifies and enhances what we already know how to do well, but it needs someone at the wheel who knows where to go.
This is especially relevant where the quality of what you deliver matters, where you have to face the client and take responsibility. If you push things to production that cause outages, performance issues, security vulnerabilities, or broken features, it's not the AI's fault. It's the fault of the person or company using artificial intelligence without the necessary quality controls. If you don't have a minimum standard for delivering your product or service, the problem is you.
And this has very real implications: if a client sues you over a security breach, they sue you as a company. You can't sue ChatGPT, or Claude, or Copilot. The responsibility is yours. AI is a tool, and like any tool, the responsibility for what you do with it falls on the person using it.
Where AI fits in backend work
For me, after over a decade working in software development and specifically with Drupal, AI is an incredible assistant. It saves me time on repetitive tasks, generates scaffolding, helps me explore solutions. But I always review what it generates. Always.
For an internal MVP in a controlled environment? It might make sense to use AI-generated code with a light review. For a SaaS in production with real users and bad actors trying to get in? Publishing AI-generated code without review, without knowing if it has security or performance issues, is irresponsible.
The Moltbook case isn't just an anecdote. It's an example of where we're heading. There are people who don't understand that you can't blindly trust AI-generated code, and that just because it works doesn't mean it's right. That difference between "it works" and "it's well made" is exactly where those of us who've spent years wrestling with backends, caches, security, and performance under load are still needed.
The future I see
"Adequate" software is going to flood the market, yes. We're going to have thousands of small tools that do one thing acceptably. And for many use cases, that'll be enough.
But projects that handle real user data, that need to scale, that are exposed to the internet, that have security and regulatory compliance requirements... those are still going to need professionals who understand what's going on underneath. AI doesn't take away the jobs of those of us doing complex backend work. It changes our role. It makes us more productive, forces us to be better architects, but the judgment, experience, and ability to distinguish between code that works and code that's well made remain ours.
The flood is coming. But not everyone knows how to swim.