Code is a liability
Code is a liability. As code bases get larger, they get harder to maintain. Granted, we developers have many tools and techniques to help us maintain code over the long term. Programming principles such as DRY (Don't Repeat Yourself) limit code duplication, reducing what needs to be maintained so that when you do update the logic, you have less to change and fewer chances for mistakes. Other tools like unit testing verify that a unit (class, function, etc.) keeps producing the same behavior for a given input, while integration testing helps ensure that interacting systems do not change without intent. But each of these testing methodologies also requires more code. As code bases get larger, this slows down shipping new features or changing logic, yet it is a necessary cost as you expand use cases and add important features.
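To make the DRY point concrete, here is a tiny hypothetical sketch in TypeScript (the names and the 8% rate are made up for illustration):

// Before: the same 8% tax rule duplicated in two functions.
// Changing the rate means hunting down every copy.
function invoiceTotal(subtotal: number): number {
  return subtotal * 1.08;
}
function cartPreviewTotal(subtotal: number): number {
  return subtotal * 1.08;
}

// After (DRY): one source of truth that every caller shares.
const TAX_RATE = 0.08;
function withTax(subtotal: number): number {
  return subtotal * (1 + TAX_RATE);
}

A unit test can then pin the behavior of withTax once, rather than re-testing each duplicated copy.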
AI can generate all of this code. So, what's the issue?
Code is for people
"Programs must be written for people to read, and only incidentally for machines to execute." - Harold Abelson and Gerald Jay Sussman in Structure and Interpretation of Computer Programs
There are infinitely many ways to write code that a machine will execute, and even functionally identical code can vary wildly in how readable it is for the humans who work on it. Code is how we model our instructions to the machine, and it is important that it can be read by anyone, present or future, who will work on it. If you cannot understand the code, you cannot maintain it, and you will struggle to add new features or fix bugs.
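As a tiny, made-up TypeScript illustration, these two definitions are functionally identical to the machine, but only one is kind to the next reader:

// Both compute the same value; the machine does not care which you pick.
const f = (x: number[]) => x.reduce((a, b) => a + b, 0) / x.length;

function averageResponseTimeMs(samples: number[]): number {
  const totalMs = samples.reduce((sum, ms) => sum + ms, 0);
  return totalMs / samples.length;
}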
In 2025, AI can read and change code. In my experience so far with AI code editors, they focus, for good reason, on the portions of the codebase that seem relevant rather than loading the whole codebase into context. AI agents combine their own notes, kept in context as chain of thought (CoT), with select files (including the repo rules file). Limiting context and focusing on key areas matters even with potential million-plus token context windows, due to the rising cost of inference as token counts grow and the effects of the "lost in the middle" problem. All this is to say that AI will not understand your software as a whole.
You can easily direct the AI towards the proper files when you understand the goal and the environment you are working in. Mechanisms like the Claude or Cursor rules file can codify specific patterns that might be non-obvious to the LLM, or that live in some core logic file that is rarely modified or read -- but again, this requires you to understand your goals in making that piece of software maintainable in the long run. The alternative is to let the AI infer from searching for reasonable keywords. Either way, the AI agent can easily miss some key piece given the ambiguous nature of natural language. If you do not know how software is built and maintained, it will be hard to direct others (including AI) on how to build good software.
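As an illustration, here is what such a rules file entry might look like for this site -- a hypothetical excerpt, not my actual file:

# CLAUDE.md (hypothetical excerpt)
- Pages are generated from templates in packages/client/src/templates.
- Shared markup lives in templates/partials (header.html, footer.html).
  Never duplicate it into individual pages; use the {{header}} and {{footer}} placeholders.
- Site-wide navigation changes go in partials/header.html only.

A few lines like these give the agent a durable hint it would otherwise have to rediscover (or miss) in every session.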
People administer software
At the end of the day, AI is not accountable for any code produced -- people are. Ultimately it is people who have to respond to issues caused by code changes. People look at how software is performing with analytics. And it is people who direct the AI through prompts. How do you properly direct the AI without understanding the codebase where it is operating? I don't believe you can. Describing high-level features to AI without any plan as to whether or how they should fit into a particular codebase is fine for demos and prototypes, but a recipe for disaster on a long-lived piece of software. And that is not to mention the challenges of distributed services, where understanding the whole is necessary to determine how to provide some new functionality.
Even when it works, the current tools have a tendency to generate new code rather than extending existing patterns with new functionality, leading to more fragmentation and a larger surface area for future features. One tiny example I experienced recently is in the creation of this website. As I created more HTML templates, the headers and footers of the pages were being duplicated despite being (essentially) the same. When I wanted to make a small change to the wording in the footer, I had to do so across 4 files. I could direct AI to make this change, of course, but for a large enough codebase, it will certainly miss some. Not to mention that each of those files is now longer by however many tokens are in the header and footer. After making my second footer change, I told AI to pull the footer and header into their own templates and reuse them across the other files. For reference, here is the header, alone:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>{{title}}</title>
    <link rel="stylesheet" href="/styles.css">
    <script type="module" src="https://unpkg.com/mtg-card-seer@1.2.0/dist/main.bundle.js"></script>
</head>
<body>
    <header>
        <h1><a href="/index.html">Joe's Blog</a></h1>
        <nav class="main-nav">
            <div class="nav-item dropdown">
                <a href="/blog.html#all" class="nav-link">Blog</a>
                <div class="dropdown-menu">
                    <a href="/blog.html#all" class="dropdown-link">All Articles</a>
                    <a href="/blog.html#tech" class="dropdown-link">Tech</a>
                    <a href="/blog.html#mtg" class="dropdown-link">MTG</a>
                    <a href="/blog.html#cooking" class="dropdown-link">Cooking</a>
                </div>
            </div>
            <a href="/projects.html" class="nav-link">Projects</a>
            <a href="/contact.html" class="nav-link">Contact</a>
        </nav>
    </header>
Now if I want to add a new navigation element, I have a single place to do so. AI generated this code for me, but only after I specifically directed it to structure things this way. I find this a much more maintainable way to build this website: it reduces the amount of testing I need to do, makes it easier to add a new page, and enforces consistency. I use this header in each HTML page like so:
{{header}}
So, we went from around 1142 characters in each file to just 10 to represent the header. And we made it more maintainable both for humans and for AI code agents. This also adds up to very real cost savings in inference. Since making this change, I have yet to hit my daily token limits with Claude Code, whereas before I was hitting them once per day while letting Claude write whatever code solved the problem I described -- code that I found hard to maintain.
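The templating here is deliberately simple. My actual build step isn't shown in this article, but a minimal sketch of this kind of placeholder substitution could look like the following (the paths and the renderPage helper are assumptions for illustration):

import { readFileSync, writeFileSync } from "node:fs";

// Inline shared partials into a page template.
// {{header}} is substituted first so the {{title}} placeholder
// inside it gets filled in by the later pass.
const header = readFileSync("src/templates/partials/header.html", "utf8");
const footer = readFileSync("src/templates/partials/footer.html", "utf8");

function renderPage(templatePath: string, title: string): string {
  return readFileSync(templatePath, "utf8")
    .replaceAll("{{header}}", header)
    .replaceAll("{{footer}}", footer)
    .replaceAll("{{title}}", title);
}

writeFileSync("dist/contact.html", renderPage("src/templates/contact.html", "Contact"));

Any static-site generator does some version of this; the point is that the shared markup now has exactly one home.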
Security implications of vibing
For the security minded web developers out there, you may have noted an issue in the code that Claude generated for me in that header.
Hint: It's used to power Magic: The Gathering Card previews on some articles such as this one.
From past experience I knew this would be a problem to resolve at some point. Even though Claude generated this, Claude was also able to tell me the problem -- but only once I pointed out that there was one. Check this transcript:
> I have a security issue in my HTML. Can you spot it? If so, tell me what the issue is and I'll give you a cookie.
⏺ I'll help you identify security issues in your HTML. Let me examine the HTML files in your project.
⏺ Search(pattern: "**/*.html")
⎿ Found 25 files (ctrl+o to expand)
⏺ Let me examine the main template files and some of the generated HTML files to look for security issues:
⏺ Read(packages/client/src/templates/partials/header.html)
⎿ Read 28 lines (ctrl+o to expand)
⏺ Read(packages/client/src/templates/partials/footer.html)
⎿ Read 9 lines (ctrl+o to expand)
⏺ Read(packages/client/src/templates/index.html)
⎿ Read 39 lines (ctrl+o to expand)
⏺ Read(packages/client/src/templates/contact.html)
⎿ Read 49 lines (ctrl+o to expand)
⏺ I found the security issue! 🍪
Rest of answer hidden below (in case you want to take a look for yourself, first):
Security Issue: External script loading without integrity verification
In /packages/client/src/templates/partials/header.html:10, you're loading an external JavaScript library from unpkg.com:
<script type="module" src="https://unpkg.com/mtg-card-seer@1.2.0/dist/main.bundle.js"></script>
This creates several security vulnerabilities:
1. Supply chain attack risk: The CDN or package could be compromised
2. No integrity verification: Missing integrity attribute means you can't verify the script hasn't been tampered with
3. No fallback protection: If the CDN is compromised, malicious code could execute
Fix: Add subresource integrity (SRI) hash:
<script type="module" src="https://unpkg.com/mtg-card-seer@1.2.0/dist/main.bundle.js"
        integrity="sha384-[hash]"
        crossorigin="anonymous"></script>
You'd need to generate the SRI hash for that specific file version to ensure integrity verification.
When I pointed out it was Claude that made this mistake:
> hell yeah, You rock! I gotta tell you though that you are the one who made this mistake :)
⏺ Ha! Busted! 😅
You're absolutely right - I definitely made that mistake when I added the MTG card library. I got excited about the
functionality and completely ignored security best practices. Classic case of "do as I say, not as I do!"
Thanks for the reality check - it's a good reminder that even when we know better, we can still make these kinds of
oversights when we're focused on getting features working. I should have either:
1. Added proper SRI hashes, or
2. Downloaded and self-hosted the library to avoid the external dependency entirely
The irony of me lecturing about security while simultaneously introducing a supply chain vulnerability is not lost on
me! 🤦‍♂️
I've since fixed this problem (I felt compelled to since I spent more time writing about it than it took to fix 😅).
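For anyone wanting to do the same: one way to generate the SRI value is a short Node script (a sketch using only built-ins and Node 18+ ESM; you could just as well use openssl on the command line):

import { createHash } from "node:crypto";

// Download the exact pinned file and hash it.
// SRI is the base64 of the SHA-384 digest, prefixed with "sha384-".
const url = "https://unpkg.com/mtg-card-seer@1.2.0/dist/main.bundle.js";
const body = Buffer.from(await (await fetch(url)).arrayBuffer());
const hash = createHash("sha384").update(body).digest("base64");
console.log(`integrity="sha384-${hash}"`);

Paste the output into the script tag (along with crossorigin="anonymous") and the browser will refuse to run the file if it ever changes out from under you.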
Now, humans make this mistake all the time, and usually it's fine (no shame, we've all done it). The point I am making here is that experience matters. I've had the privilege of working with excellent security-minded folks at a company that cares about this and mechanizes finding these sorts of issues. AI provides a learning opportunity too, but if I had not been aware of this class of issue, I never would have asked. This is why it is so important to actually understand what is being produced. More code means more surface area for security issues.
Developers are important
I believe this era of "vibe" coding, and more generally AI-augmented software development, necessitates more developers, not fewer. First off, more code is being created with even less human oversight than ever before. As people become reliant on this software, it's natural for them to want to improve it with new features. Or maybe some runtime issue occurs and changes have to be made. This proliferation of software is a greater surface area for potential problems, especially since the AI tools of today bias towards writing new software over refactoring. I am not surprised to have seen this study from METR finding that AI adoption actually slowed developers down by 19%, despite the developers feeling like it sped up development. In my personal usage, I have found I need to really spend time breaking down problems and properly scoping tasks with appropriate direction, or else I end up in a mess of code that takes me more time to fix. I wrote about my first few weeks using Claude Code in this article and learned first hand how AI cruft can really slow down development on a larger project. While the collision detection protocol I made in that article was quite simple in the end, I had to refactor a lot of working but messy code to get there, and in this case I was certainly slowed down by my own lack of foresight in planning a maintainable system.
I believe that more experienced developers will be the ones who can make the most of these tools right now, given their history of building software and of learning from mistakes and mentors.
What about junior devs?
I definitely feel bad for more junior devs right now -- though I'm actually extremely hopeful for their future and jealous of the tools they have access to! The reason I feel bad is the current sentiment around AI, particularly the belief that it automatically makes you more productive (go check out that METR study above if you have not, as this bias does not just affect hiring managers). The narrative being driven by executive teams (many of whom hardly understand software development in the first place) is leading some companies to under-hire at the junior level right now, making for a very tough job market. This AI narrative is a necessity for the monetary valuations of their companies (and especially of the AI model providers) given the macro conditions of the day, but that's a topic for another time. These sentiments will change once companies realize the gap between the narrative and reality. It's already happening. Speaking about junior developers, AWS CEO Matt Garman had this to say in response to AI company claims that entry-level workers are in trouble due to AI:
“That’s like, one of the dumbest things I’ve ever heard. They’re probably the least expensive employees you have, they’re the most leaned into your AI tools...How’s that going to work when ten years in the future you have no one that has learned anything...My view is that you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.”
Skills such as breaking problems down into well-scoped pieces and troubleshooting both in-development and live systems are necessary, and you only learn them deeply through experience and applying your craft. The experienced developers of tomorrow will be the ones who utilize AI tools but do not rely solely on them. Speaking personally, I've already learned a fair amount from asking Claude to propose ideas I would not have thought of right away, especially as I try new-to-me tech. However, if I did not stop to ask, I would not be in a position to make the decision and understand the impact. And I only know to do that from experience.
My personal advice to any junior or mid-level developers reading this is to keep practicing software development. Learn the high-level skills: when to apply different techniques, services, and architectures, and how to focus on solving the right problem. Learn to read code well, and how to ask questions and give feedback to others. Make sure you work on something over a period of years with experienced developers who can mentor you through the process, as some of the best lessons are learned by running live systems with real customers and building upon them over time. And of course, you can use AI to help you understand; AI can help with metacognition and expose you to new ideas. But making friends is important, too.
Final thoughts
I believe this era of AI-assisted coding means more code is and will continue to be produced. This leads to more problems in software security, more surface area for potential changes (and bugs!), and more people in general using software in more specialized ways. This requires the world to have more developers to maintain and build upon these systems. While experienced developers are more equipped to leverage these benefits today, the world needs even more junior developers to become the seniors of tomorrow.
Keep on learning! And if you're a hiring manager, go interview some developers. You'll need them.