    Is AI Coding The False Messiah?

    I recently started using Claude Code for my development work, and I have to admit, it is without a doubt the most powerful AI coding assistant I have ever tried. It sets an entirely new precedent for what agentic AI can achieve. Its sheer competence caught me off-guard. You give it a prompt, point it at your project, and it effortlessly navigates through every file you give it access to, spitting out its logic in seconds.

    Watching tasks that would normally take hours get resolved in a matter of minutes leaves you constantly wanting more, pulling you into a different mindset where you realise that truly anything is now possible.

    This very power has triggered a profound doubt. When AI can handle the intricate architecture and the tedious implementation with little to no thought required on my part, one question emerges: what use is there for me as a developer?

    It reminds me of a scene from the very first episode of the popular 80s classic, Knight Rider. During one of his initial exchanges with Devon Miles after test-driving KITT, Michael shares his unease over the car's terrifying level of autonomy:

    Michael: Oh, great. You mean it can decide to take off and go for gas, or a car wash. Just like that? Well, that would be terrific if I happened to be working under it.
    Devon: It wouldn't do anything to harm you, I assure you.

    Like Michael, we are sitting behind the wheel of a machine that seems fully capable of driving itself. AI might not "harm" us in the physical sense, but if we aren't careful, letting it take over entirely can quietly crush the craft we have worked on for years.

    For the past couple of years, the developer world has been swept up in the era of vibecoding — the practice of letting Large Language Models (LLMs) do the heavy lifting while we sit back and play the role of high-level orchestrators. But as tools like Claude Code, Gemini and Copilot continue to push the boundaries of autonomy, I've come to a quiet realisation: AI is actively eroding our ability to code for ourselves.

    Getting High On AI

    The constant influx of new capabilities is not just exhausting to keep up with; it slowly chips away at the parts of the job we genuinely love. When the machine does all the thinking, we stop being developers and become prompt-aholics. Writing a story to get the perfect output becomes our only real skill, leaving the underlying code as a black box we no longer care to understand.

    I am concerned about the potential loss of the muscle memory we have acquired over the years: patiently reading through class libraries, tracing logic through multiple files, and meticulously debugging a stubborn issue. Like a drug addict waiting for the next hit, we are trading our hard-won problem-solving skills for a quick fix of instantly generated code.

    Delusions of Grandeur

    AI coding agents are remarkably good at writing units of code that look perfect in isolation. If you need a specific algorithm or a standard component, AI will provide a clean, consistent snippet that perfectly matches your prompt. But here is the danger: LLMs are the ultimate yes-men. They will rarely push back and tell you that the feature you are building is fundamentally flawed, or that the architecture you are releasing is absolute rubbish.

    This creates a dangerous divide. An already experienced developer can look at the generated code, question its validity, and make an informed judgement call on whether it is genuinely acceptable for the production environment. They have the hard-earned scars to know when a shortcut will result in technical debt.

    Conversely, novice developers or non-technical managers wielding these tools can quickly fall victim to delusions of grandeur. Because they can suddenly spin up a functioning web app or a complex API in an afternoon, they begin to believe they possess senior-level engineering prowess. However, software engineering is not just about stringing together functioning isolated components; it's about cohesive architecture, long-term maintainability, and understanding how a change in one area of an application affects the entire system.

    When you blindly stitch together AI-generated code for months on end without that seasoned oversight, the results aren't going to be pretty. To quote a fellow developer I know, it becomes "AI slop". You eventually wake up to a codebase filled with inefficiencies, repetitive patterns, and short-sighted design choices. AI was consistent with the immediate prompt, but it failed entirely to maintain the long-term context of the project's evolution.

    Conclusion

    So, what is the solution? It certainly isn't a total retreat back to the analogue days of manual coding. The precedent Claude Code has set proves that AI is far too valuable a tool to discard. The answer lies in finding a fine balance.

    We must stop treating AI as an outsourced developer that writes our code from start to finish, and start treating it as a brilliant, if occasionally short-sighted, peer reviewer. When AI offers a solution, we shouldn't just hit "accept" and move on. We need to dissect it. We must judge if its suggestion truly fits the broader architecture, learn from the new techniques it introduces, and actively verify its logic.

    This approach keeps you engaged in the "why" and "how" of the code rather than just the "what". AI cannot be allowed to act as a substitute for human reasoning. It is there to assist, not to take the steering wheel completely.

    We are the engineers; AI is the co-pilot. By finding this balance, we maintain what is required to actually learn and grow, ensuring we build software that stands the test of time without losing the joy of the craft itself.
