I've written a handful of articles about vibe coding from different angles. Why it's not the solution it looks like, why something working doesn't mean it's well built, how my hourly billing model is starting to fall apart. Rereading them, I realized I keep circling the same topic without putting a name to what's actually going on underneath.
The other day, talking to a friend who isn't a programmer and who was thrilled about a website he'd built on Lovable, two concepts I hadn't thought about in a long time came back to me. I think I picked them up in some economics or entrepreneurship podcast or YouTube video, I honestly don't remember where. I had them parked there, not connecting them to my day-to-day, until that conversation. Listening to him talk about his site with that proud look on his face, while I was thinking about everything underneath it that was going to break in three months, the pieces clicked all at once.
The first one: the IKEA effect
In 2011, researchers from Harvard, Tulane and Duke published a paper titled "The IKEA Effect: When Labor Leads to Love". The thesis is that people value things they've built themselves more highly, even when the result is objectively worse than what a professional would have made.
The data is interesting. Participants were willing to pay 63% more for furniture they had assembled themselves compared to the same furniture already built. In another experiment they had people fold origami, and the ones who folded their own cranes valued them about five times higher than outside observers did, even though the figures looked like a mess.
The case I like the most, which is probably where I first heard about all this, is from the 1950s. Instant cake mix brands weren't selling. They were too easy, you just had to add water. One of the brands removed the powdered egg and forced customers to add a real egg instead. Sales took off. That tiny bit of effort turned the product into "my cake", and people valued it much more.
That's exactly what's happening with Lovable. Someone writes four prompts, sees screens that work, and feels like they've built something. It doesn't matter that 99% of the code was written by the LLM. They don't experience it as buying a service, they experience it as making something of their own.
And that's why they're so happy. They don't care if the code is mediocre. They like it more than anything a third party could deliver, because they built it themselves.
The second one: the market for lemons
The other concept is from 1970. George Akerlof published a paper, "The Market for Lemons", which ended up being one of the works that earned him the Nobel Prize in Economics in 2001. He explained it using the used car market.
In that market there are good cars and defective ones. The good ones he called peaches, and the defective ones lemons. The issue is that the buyer can't tell which is which before buying, because the important parts are hidden and can't really be inspected. So the buyer's best guess is to assume the car is average quality and pay accordingly.
And that's where the problem starts. Someone with a good car, well maintained and never crashed, isn't going to sell it at the average price because they know it's worth more. They pull it from the market. With the peaches gone, the average quality of what's left drops, the buyer pays even less, and the owner of the next decent car doesn't sell either. In the end the market fills up with lemons, not because lemons won, but because the peaches left.
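The unraveling can be sketched in a few lines of code. This is a toy simulation, not anything from Akerlof's paper, and the quality numbers are made up: each round, buyers offer the average quality of what's left on the market, and every seller whose car is worth more than that offer pulls out.

```python
# Toy sketch of Akerlof's unraveling dynamic (illustrative numbers only):
# sellers know their car's true quality, buyers only see the average,
# so above-average cars keep leaving until mostly lemons remain.
def unravel(qualities, rounds=10):
    market = sorted(qualities)
    for _ in range(rounds):
        if not market:
            break
        offer = sum(market) / len(market)       # buyer pays the average
        stays = [q for q in market if q <= offer]  # the peaches exit
        if len(stays) == len(market):           # nobody left to leave
            break
        market = stays
    return market

# Cars with qualities 1..10: each round the best remaining cars are
# worth more than the average offer, so their owners withdraw.
print(unravel(list(range(1, 11))))  # only the worst lemon is left: [1]
```

Note that nothing in the loop punishes lemons directly; the market degrades purely because the good cars have better options elsewhere, which is exactly the point.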
The same thing happens with programmers, especially at the lower end of the market. The average client doesn't know whether the code they received is any good, and after getting burned by one or two mediocre freelancers their conclusion is that we're all the same and we all charge too much. Meanwhile the good ones stop working in those tiers and move to where the client actually knows how to tell the difference, or they build their own products, or they switch industries altogether.
What's new in 2026 is that there's now a third option in the mix that didn't exist before. Say the client has three choices:
One, pay 8,000 EUR to a freelancer or agency that works properly. Git, separate environments, deploy pipelines, tests, code reviews, best practices for whatever tech stack they're using. Everything that makes a project survive over time and stay touchable six months later without fear.
Two, pay 3,000 EUR to a freelancer or agency that doesn't work that way. Doesn't know the tech deeply, doesn't use Git, uploads files one by one over FTP, edits directly in production, learns about the project on the client's dime. Delivers something that works on day one and starts falling apart the moment you need to touch it.
Three, pay 20 EUR a month to Lovable, or just use the free version of ChatGPT, and do it themselves. No prior knowledge, trusting whatever the AI comes back with. And that's the trap most people haven't internalized yet: AI lies with confidence. It tells you things that aren't true in the same tone it uses for the things that are. If you can't tell the difference, it slips past you, and you pay for it later.
For a client with no technical judgment, the three options all look pretty similar. They can't see the difference between the first and the second, so they assume the second is "the same thing but cheaper". And when the third option shows up, two orders of magnitude cheaper and with the IKEA effect thrown in, the decision almost makes itself.
Vibe coding isn't eating the low end of the market on quality. It's eating it because it's made quality something the client no longer needs to evaluate. And the knock-on problem is that the freelancer or agency that does work properly ends up in the same bucket as the one that doesn't, for a client who doesn't know how to look under the hood.
The other side: same Thermomix, two cooks
Before I go on, an important clarification, because otherwise it sounds like I'm saying AI is the problem, and it's not. AI is a tool, and depending on who uses it, the result changes enormously.
Think of a Thermomix with its automatic recipe book. You throw in the ingredients it asks for, pick the program, and something decent comes out without knowing how to cook.
Someone who can't cook drops in the ingredients the screen tells them to, hits the button, and out comes an edible cake. To them it's a miracle, because they'd never made a cake before. They compare it to the sad cakes at the supermarket and yes, theirs is better. They're delighted. And they're right, within the world they know.
An experienced cook, with the same Thermomix, doesn't make a better cake than before. They do something else. They can have two or three Thermomixes running at the same time and, while one is making the cake, they're preparing the icing by hand, picking seasonal fruit, or prepping the ingredients for the next one. In the same time it used to take to bake one cake by hand, they're now putting out five with proper presentation. They also know what the other one doesn't: that the recipe built into the Thermomix isn't always the best one, that with this specific flour you need to lower the speed, that this sugar works better added later, or that the automatic program's timer is too short if the kitchen oven isn't quite doing its part. Adjustments the inexperienced person doesn't even know exist, because to them the recipe in the book is the recipe.
Both use the same machine. Both are happier than before. But what comes out of the machine is nothing alike.
The problem is that a client who's only tasted their own cake and the supermarket ones has no way of knowing there's a third level.
And in case it sounds contradictory, I use AI all day. It literally does 60% of my work, I wrote about that in another article. And I'm still here saying vibe coding is a problem. The tool is the same, what changes is the upfront planning and the review afterward that I put in, and that the person running Lovable doesn't even know they should be putting in.
So, what do you do with all this?
The interesting thing about Akerlof is that he didn't stop at the diagnosis, he also pointed at a way out: sending signals the buyer can verify without understanding the product from the inside, signals that bad sellers can't fake without it showing.
For me, as a freelance Drupal dev in 2026, that translates into a few very specific things.
The most basic is putting things in writing. Performance audits with before-and-after metrics, maintenance commitments with a set timeframe, technical guarantees. All the stuff a Lovable customer can't ask Lovable for, and that a freelancer working with FTP against production can't offer either.
Then there's drupal.org. Contributing patches to widely used modules, maintaining your own. The client doesn't understand the code, but there are thousands of developers who do, and the fact that that code has been running in other people's production projects for years without breaking says a fair bit.
Next come the rescues. Vibe-coded projects that have gone sideways and I've had to fix. This doesn't replace the previous stuff, it complements it, because building properly and fixing what others built badly are two different skills, and having both says something.
And the hardest one to fake is having your own products. A SaaS, a module with real users, something someone actually pays for. If you've built it end to end yourself and it's still running, that weighs more than any CV.
None of this is going to convince someone who pays 20 EUR a month for Lovable, and that's fine, they were never going to be my client anyway. But it does convince the agency that's been burned twice by vibe coding, the client with a serious project where real money is on the line if something fails, and the one who's already tried doing it themselves with AI and seen where it falls apart.
And here's the part I think isn't fully obvious yet. AI hasn't crushed the value of technical knowledge, it's crushed the floor, which isn't the same thing.
The basic tasks, the ones anyone can solve with four prompts, aren't worth what they used to be. That's real. But everything above that floor has become more valuable, not less. Knowing whether what the model gives you back is actually good, choosing the architecture before writing a single line, understanding the specific technology with its quirks and its traps. That AI doesn't have, and the person using AI without experience doesn't either.
All that said, I don't really know how this ends. In two or three years we'll find out whether the IKEA effect holds, whether clients are still happy with their Lovable projects once it's time to scale or maintain them, or whether lemons end up eating this part of the market too, the way they've eaten others. I have a hunch, not an answer. What I can do in the meantime is keep working the way I work, leave a trail of how I do it, and trust that the client looking for that will find me.