I Found Out AI Steals 60% of My Work
After working with artificial intelligence on my Drupal projects for over a year and refining my processes, I've arrived at what I believe are the percentages that define my current way of working: 20-60-20.
Twenty percent human planning, sixty percent AI execution, twenty percent final review. This has allowed me to cut my Drupal development time in half while maintaining, and in some cases improving, code quality. But let's be honest: these are my rough percentages based on what I'm seeing in my specific use case, the ones I use when I prioritize doing things right. If you prefer speed over quality, your numbers will be radically different.
How It Actually Works in Drupal
The first 20% is completely my own work. I sit down, think through the Drupal module architecture, decide which approach to take. I write a detailed specification: which hooks I need, which services, which plugins, inter-module dependencies, important edge cases, performance requirements, security considerations. This time is pure planning and specification, and it's 100% human.
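As an illustration, a specification for a hypothetical module might look like the sketch below. Every name, hook, and requirement in it is invented for the example, not taken from a real project:

```markdown
## Spec: event_registration module (hypothetical example)

- Hooks: hook_entity_presave() to validate capacity; hook_cron() to close expired events
- Services: event_registration.capacity_checker, injected via the container, no static calls
- Plugins: a "Remaining seats" block built on the Block plugin system
- Dependencies: core node module only; no contrib dependencies
- Edge cases: two users registering for the last seat at once; events deleted while registrations exist
- Performance: cache the remaining-seat count; invalidate on registration save
- Security: registration form only for authenticated users with a "register for events" permission
- Tests: kernel tests for the capacity service; one functional test for the full registration flow
```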
Then comes the 60%. And here's where it gets interesting, because this percentage isn't always handled the same way. I use two different modes depending on the complexity of the Drupal task at hand.
Supervised Mode: When Quality Comes First
When I'm more interested in code quality than implementation speed, I go through and provide guidance, hand-holding the AI so it doesn't go off on tangents or take paths I don't want in the Drupal ecosystem.
I use this approach mostly for very complex Drupal tasks where I have few examples to give the AI. For instance, when I'm building a very custom, very specific Drupal module unlike anything I've built before, so I have no examples from other client projects to base it on. Or when it's a complex performance analysis of custom Drupal code where I need to understand exactly what it's doing and where the problem lies, perhaps in a complex service implementation or a custom plugin dragging down the site's overall performance.
In short: this is for custom, non-standard, highly personalized Drupal code, not functionality I build repeatedly for different clients, but something specific to a single client where I may even need to research the best way to implement it according to Drupal best practices. In these cases, I prefer to work in phases: the AI generates a part, I review it, give feedback, adjust direction, and we continue. Here the 60% is still AI work, but under my constant, active supervision.
Loop Mode (Ralph): When Everything Is Specified in Drupal
When Drupal tasks are fairly simple or highly standardized, and I already have several examples and clear rules I can specify, that's when I run an automated Ralph-type loop.
The best example is Drupal migrations. If I have an established pattern for how I do migrations on that project, with examples of previous migrate plugins, documented common transformations, with tests validating the result... I can let the AI loose in loop mode and let it work alone.
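To make that concrete, here is a minimal sketch of what such a migration definition can look like, using core's embedded_data source plugin. Every ID, field name, and bundle below is illustrative, not from a real project; real migrations usually pull from CSV, JSON, or database sources instead:

```yaml
# Hypothetical migration definition; all names are illustrative.
id: example_articles
label: 'Import legacy articles'
source:
  plugin: embedded_data        # core source plugin with inline rows, handy for sketches
  data_rows:
    - legacy_id: 1
      legacy_title: 'First article'
  ids:
    legacy_id:
      type: integer
process:
  title: legacy_title          # straight field mapping
  status:
    plugin: default_value
    default_value: 1
destination:
  plugin: 'entity:node'
  default_bundle: article
```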
Another clear example is standard modules: creating a module with its info.yml, implementing basic hooks, generating forms with Form API, or creating blocks with Drupal's plugin system. When the structure is repetitive and known, loop mode works perfectly.
In those cases, I can happily let it run for an hour without hovering over it. I'm not supervising every step during that hour; the AI works completely autonomously. But when it finishes, I review everything before making any commits. Remember the rule: the AI never makes commits on its own; I always review first.
This is true unattended work. That 60% becomes real time I can dedicate to other Drupal projects or tasks I used to neglect.
The Last 20%: Where Most People Get It Wrong
The last 20% is completely mine again. And here's where I see many Drupal developers fail: this time is not optional if you want to maintain quality.
One rule I never break: the AI never makes commits on its own. Whether I'm in supervised mode reviewing constantly or I've left Ralph mode running for an hour, in no case do I allow the AI to execute git commit or git push. All commits are done by me personally, always after reviewing the code.
Why? Because before committing I review every file the AI has generated or modified. And here's an important nuance about what "review" means. For mundane changes, I skim: I check that the general structure makes sense and the code looks like something I would have written. But if I spot something problematic, or the code is more complex, that's when I stop to understand exactly what it's doing and how it's implementing it.
The golden rule is this: if you don't understand the code the AI generated for you, stop. Google it, research it, and understand exactly if what it's implementing is correct or not before moving forward.
This kind of review is how I've caught serious problems: spaghetti code that makes no sense, violations of Drupal best practices, implementations that work but don't scale or are a maintenance nightmare. I don't blindly trust what the AI does. It's a statistical machine that generates plausible code, not perfect code.
In summary: I do 40% of the work (20% before + 20% after), and approximately 60% is done by the AI. This is maintaining, or even improving, Drupal code quality compared to when I did everything manually.
My Ralph System with OpenCode for Drupal
The loop mode I just mentioned works thanks to an adaptation I've made of the Ralph pattern specifically for my Drupal workflow. For those unfamiliar with it, this pattern consists of running the AI in a continuous, autonomous loop. Instead of giving it a task and waiting for it to complete in one go, you run it in a loop where each iteration has fresh context. The AI picks the most important task, implements it, runs tests, and repeats. All without constant supervision.
But with one crucial difference from the original Ralph: in my system, the AI never commits or pushes. I used to let it make automatic commits, but I realized I sometimes didn't review that code properly and things went wrong as a result. Now it's a strict rule: all commits and pushes are done by me personally, always after reviewing the code.
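As a rough sketch, and assuming the agent is driven from a CLI, the loop logic looks something like this. The agent and test commands are stubbed out so the example is self-contained; the comments note what the real invocations would be, and those exact commands are assumptions, not prescriptions:

```shell
MAX_ITERATIONS=3
status="red"

agent_step() {
  # Real version: an OpenCode invocation that reads the task backlog from a
  # prompt file; the exact CLI syntax depends on your setup.
  echo "iteration $1: agent picks a task from the backlog and implements it"
}

run_tests() {
  # Real version: your Drupal test suite, e.g. ./vendor/bin/phpunit ...
  return 0  # stub: pretend the suite is green
}

for i in $(seq 1 "$MAX_ITERATIONS"); do
  agent_step "$i"             # fresh context on every iteration
  if run_tests; then
    status="green"
    echo "iteration $i: tests green"
  else
    status="red"              # the next iteration sees the failures and retries
  fi
done

# Crucially: no `git commit` or `git push` anywhere in the loop.
echo "loop done, status=$status; a human reviews the diff before committing"
```

The important design choice is what is missing: the loop ends by stopping, not by committing, so the working tree always waits for human review.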
I tried Claude Code, the tool that popularized this concept, early on, but I dropped it. For a while now I've been using OpenCode exclusively, and I've adapted the Ralph concept to my Drupal workflow with this tool.
The result is that I can leave processes running for hours while I work on another Drupal project or other tasks. I use it mostly for routine, well-defined tasks in Drupal:
Generate base migrations in Drupal. When I need to create the initial migrate plugins for a new project with patterns I already have established, define the custom process plugins I usually use, configure the source plugins for connectors to external APIs or databases.
Refactor simple Drupal modules. Reorganize code, improve service structure, add type hints, convert procedural functions into injectable services, all that mechanical work that takes time but doesn't require complex architectural decisions.
Add tests to existing Drupal modules. Generate unit or kernel test coverage that was never done at the time, run them, fix the ones that fail, iterate until everything passes. Drupal has a very powerful testing system and the AI can take advantage of it well when it knows what it's doing.
Create standard Drupal plugins. Blocks, Field Formatters, Field Widgets, Queue Workers, Cron hooks... all those plugins that follow very defined patterns in Drupal and that the AI can generate without problems once you give it the right context.
Fix PHPStan or PHPCS issues. Static-analysis cleanup is purely mechanical: the tool flags an error in the Drupal code, the AI fixes it following Drupal coding standards, re-runs the analysis, and continues until everything is clean.
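The same idea can be sketched as a tiny exit-criteria loop. Both the analyzer and the agent are stubbed here so the sketch runs on its own; in a real setup the analyzer would be something like `vendor/bin/phpstan analyse` or `vendor/bin/phpcs`, and the fix step would be an agent call (both assumptions about your toolchain):

```shell
errors=3   # stub: pretend the first analysis reports three findings

lint_project() {
  # Real version: vendor/bin/phpstan analyse web/modules/custom, parsed for a count
  echo "$errors"
}

agent_fix() {
  # Real version: hand the error report to the agent and let it patch the code
  errors=$((errors - 1))   # stub: each round resolves one finding
}

rounds=0
while [ "$(lint_project)" -gt 0 ]; do
  agent_fix
  rounds=$((rounds + 1))
done

echo "clean after $rounds rounds; now review the diff yourself"
```

The linter's exit state is the success criterion, which is exactly why this class of task suits the loop: progress is machine-checkable.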
In all these Drupal tasks, the loop works because they're well-defined problems with clear success criteria. Drupal tests or the linters themselves serve as guidance. The AI iterates, fails, fixes, re-tests, until everything passes.
This is probably the path toward that future 10-80-10 or 5-90-5 in Drupal development. When you can let the AI loose for hours with a well-configured loop and tools that automatically validate progress, the percentage of unattended time skyrockets.
But I insist: the final human review step remains crucial.
Why That First 20% Is So Critical in Drupal
The better you are at that first 20%, the more real that 60% of unattended work becomes. If you write a superficial specification for your Drupal module, the AI will need constant guidance and your 60% becomes 40%.
If you don't give it context about the Drupal project, it'll generate code that doesn't fit. If you don't mention Drupal-specific performance or security requirements, you'll have to redo it in the last 20%. For example, if you don't indicate that it should use Drupal's cache service instead of making direct database calls, it can generate code that works but scales poorly.
But if in that first 20% you leave everything clear, well specified, with sufficient context from the Drupal project, with examples from existing code in your codebase, with edge cases identified, with explicit Drupal performance and security requirements, and specifying that it generate tests and run them, the AI can work much more autonomously.
And that 60% savings becomes real time you can dedicate to other Drupal projects or aspects you used to neglect.
What Nobody Tells You: More Tests in Drupal, Better Code
Something curious I've discovered working this way with Drupal is that I now implement many more tests than before. And here's the interesting part: it's not that I'm writing more tests manually. It's that the AI generates them, and I review them more superficially than production code.
Before, writing tests in Drupal was the most tedious part of development. It always got relegated to the end, and was often done incompletely due to lack of time. Now, tests are generated almost automatically along with the code: unit tests for services, kernel tests for logic that depends on the database, functional tests for complete flows.
With tests I'm less strict. As long as they pass and I see they cover the main use cases, I don't stop line by line to read every assertion. I scan to check the code makes sense and the test scenarios are decent, but I don't analyze the test's internal code with the same detail as production code. In my case, tests add value through their existence and because they validate that the code works, not so much through how they're written internally.
But there's something even more important: tests are fundamental for the Ralph system to work with Drupal. The loop process I described earlier is only possible if you have tests. Without tests, the AI has no way to automatically validate if what it's doing works or not. With Drupal tests, it can iterate completely autonomously for hours, refining the code until everything works correctly.
It's a paradigm shift: tests go from being that boring task you do at the end to being the tool that allows the AI to work autonomously and reliably on your Drupal projects.
That's why one of my key recommendations is: if you're going to work with AI in Drupal, make sure your code has some type of tests. They can be unit tests with PHPUnit, kernel tests using the Drupal environment, functional tests with BrowserTestBase, whatever. But you need some automated way to validate that the code does what it should do.
The 5-95 Trap: Speed Without Quality in Drupal
Now, let's be clear about something: you can do 5-95 right now. You dedicate 5% to superficially explaining to the AI that you want a Drupal module, let it do 95% of the work, and push it to production without deep review.
Many Drupal agencies and freelancers are doing this. And it works... until it doesn't.
The problem is that if you can't detect whether the Drupal code the AI generated follows best practices or is spaghetti code, if you can't identify potential security issues (like incorrect permission validation or potential XSS), if you don't review query performance, if you don't validate that edge cases are covered... you're playing Russian roulette with your Drupal projects.
That 5-95 basically means: "I blindly trust that the AI did everything right." And the AI, remember, is a statistical machine. It gives you what's likely to work, not what definitely works well in Drupal's specific context.
So yes, you can reduce your time to 5% if you want. But don't be surprised when within a few months that Drupal module starts giving you performance problems, subtle bugs, or security vulnerabilities. The AI gives you plausible code, not perfect code for Drupal.
Where We're Headed: From 20-60-20 to 10-80-10 in Drupal
The reality is these percentages are changing fast. Six months ago I was more at 30-50-20. Every few months I get closer to that current 20-60-20.
And the trend is clear: we're heading toward 10-80-10, or even 5-90-5, in Drupal development.
Why? Because AI models are constantly improving and learning more and more about Drupal. They're smarter, can do more things, are more specialized. What six months ago required constant supervision, they now do autonomously. What three months ago generated Drupal code that needed heavy refactoring, now generates code that only needs minor adjustments.
If the trend continues at the current pace, in a year I'll probably be at a real 10-80-10 for most Drupal tasks. And in two years, maybe at 5-90-5 for many standard Drupal implementations like base modules, common integrations, or repetitive features.
The Real Value Is Knowing What NOT to Do in Drupal
The conclusion I've reached is paradoxical: the better a Drupal developer you are, the more work you can safely delegate. Not because the AI is better at programming than you, but because you know exactly what to specify, what to supervise superficially, and what to review in detail.
You know how to detect when the generated Drupal code is solid and when it's disguised spaghetti. You know which Drupal tests are necessary to validate that everything works correctly. You know how to recognize when a service is properly injected or when a hook might cause performance problems.
Drupal developers who complain that "the AI generates bad code" are usually the ones not investing in that first 20%. They ask for a Drupal module without context, without clear specification, without thinking about service architecture, without specifying to generate tests. And they get superficial code with no way to validate it works.
Those who say "the AI does perfect work for me" probably aren't doing that critical last 20% of review. And they're accumulating technical debt without knowing it.
My Percentages Aren't Yours
Let me be clear: these are my percentages because I prioritize doing things right in Drupal. If your priority is shipping projects fast and your client won't notice the difference between clean Drupal code and functional-but-messy code, your percentages will be different.
You can do 5-95 and get away with it.
But if you care about long-term maintainability of your Drupal projects, security, performance, scalability... that final 20% of review is where you demonstrate your real value as a Drupal developer. The AI saves you the tedious work, but you remain the one who guarantees the quality of the code you deliver.
In short, 20-60-20 isn't a universal formula. It's my current way of balancing speed with quality in Drupal development. And the clear trend is toward 10-80-10 as models improve. But that final percentage of human work, that final quality control, will remain necessary as long as we want Drupal code that doesn't just work, but works well.
And if there's one thing I want you to take away from this article, it's this: invest in tests for Drupal. Not just because they improve your code, but because they're the tool that allows the AI to work truly autonomously. Without tests, the AI is a helper that needs constant supervision. With tests properly configured for Drupal, it becomes an autonomous worker that can iterate and improve for hours without your intervention.
Have Any Project in Mind?
If you want to build something in Drupal, you can hire me for consulting, development, or maintenance of Drupal websites.