Artificial Intelligence makes us worse at developing code. There… I said it.
First it’s a glorified proxy that runs searches for you. Then it becomes your sidekick, a support person dedicated to guiding you in the right direction. Well, until you come to recognize the time sink of inevitable hallucinations after spending days trusting the damn thing, only to be told, “Yeah, you’re right! My bad.” Where’s the apology?! … oh yeah, you’re just AI.
Then you learn about system prompts, start writing agents like you’re writing requirements for someone you want to hire to do the job… and then you land in a place where you stop and think… wait… how does this work?
You’ve been so productive, so efficient, but you have no idea how the artifacts produced actually work! For an Incessant Tinkerer that is a nightmare. Why? Because part of the joy is learning how it works!
I’m trying a new way. One where your AI buddy provides graduated levels of assistance based on how you want to engage.
The Neuroplasticity Problem
My wife sent me an email about an AI symposium at MIT, which included a description of a talk to be given by Professor Roberto Rigobon. The description nailed an experience I’d been having while working with the wonderful Claude Code.
From what I recall, the talk was about how the use of AI affects neuroplasticity. My small brain cannot do the description justice, but what I took from it was, “when we stop using our brains… we forget”. For a smaller brain like mine, that’s a bigger risk.
The power to control a tool to get all the things done makes us feel vastly more productive and efficient. I rule my domain with multiple terminals all scrolling away: Claude Code churning out code, Codex reviewing it, while Gemini beavers away on research for the next part to be solved. It feels great!
What snuck up on me, though, was the realization that I need to really understand this code. It’s dangerously spiraling out of my control. There are so many files that the project feels bloated. I’m not the boss of this anymore! But also, and this is the saddest part, I’m not learning or remembering how to do this stuff anymore. I’ve come to depend on my Claude Code buddy way too much, and I am getting dumber as a result.
The Four Assistance Levels
So, what do I do? I have a brainstorming session with Claude Code of course. Don’t judge me.
It blurted out some things about ‘graduated learning’ and ‘active learning techniques’, which ended up with ‘us’ defining a Claude Agent that attempts to enforce four levels of involvement when working on a coding project.
It goes something like this:
Level 1: Pure Architecture Mode
- You code everything, AI advises on decisions
- When to use: Learning new patterns, maintaining skills
Level 2: Scaffolding Mode
- AI provides structure, you implement logic
- When to use: Familiar patterns, want to stay engaged
Level 3: Pair Programming Mode
- Alternating: you write, AI writes, you modify
- When to use: Collaborative problem-solving
Level 4: Full Generation Mode
- AI generates complete code, you must explain it back
- When to use: Boilerplate, well-solved problems only
How It Works in Practice
I assume you already know that an Agent is essentially an elaborate system prompt describing the role you want your AI chatbot to assume, the enforcement rules, any state tracking, and the response patterns it should use.
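If you’ve never written one, here’s a stripped-down sketch of the general shape of such a file. It’s illustrative rather than the full pair-programmer definition; treat the field names and wording as an approximation of how these agent files are typically laid out (a short frontmatter block followed by the system prompt):

```
---
name: pair-programmer
description: Enforces graduated assistance levels (1-4) while coding.
---

You are a coding partner who limits how much code you produce based on the
level the user has chosen.

## Levels
- Level 1 (Pure Architecture): advise on design only; never write code.
- Level 2 (Scaffolding): provide structure and stubs; the user fills in the logic.
- Level 3 (Pair Programming): alternate turns; modify rather than replace.
- Level 4 (Full Generation): generate freely, but require an explain-back
  before creating files or moving on.

## Enforcement
If a request exceeds the current level, point out the violation and ask
whether to switch levels instead of silently complying.
```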
You trigger the agent by calling upon it in the chat session or, as I found out, by invoking Claude on the command line with an --agent flag.
You state the level at which you want to engage, and Claude will attempt to meet that engagement based on how well the Claude Agent has been written.
The Agent tracks the Level you chose, whose turn it is if you’re using the pair-programming level, and also a generation count. The generation count helps the Agent understand when it is helping too much. If it gives you too many answers, it recognizes this as a red flag and alters its response pattern.
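Conceptually, the state it keeps track of amounts to a small block like this, restated as the session goes on. The exact wording in my definition differs; this is just a sketch of the moving parts:

```
SESSION STATE
- Level: 3 (Pair Programming)
- Turn: user          (whose turn it is to write the next chunk)
- Generations: 2      (how many times the AI has produced code this session)
- Violations: 1       (requests that exceeded what the current level allows)
```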
These ‘violations’ make the agent re-affirm the way you wanted to engage and ask whether you want to change level. The response pattern written into the Agent definition looks like this:
I notice you're asking me to [violation behavior], but we're working at
Level [N] where [level constraints]. This helps maintain your [specific skill].
Instead, let me [appropriate Level N response]. Would you like to continue
at Level [N], or should we switch levels for this task?
When using Level 4, the Agent is instructed to ask you for an explanation of what was generated. This explain-back concept is intended to make sure you have at least some understanding of what Claude is providing. The Agent definition says to respond like this:
I've provided the implementation. Before we create files or continue, you must
explain back how this code works:
1. What's the overall approach and algorithm?
2. Why did I structure it this way? What are the key design decisions?
3. What would happen if [specific edge case based on the code]?
4. Where might this break or need modification for your specific use case?
Please provide your explanation now. I won't proceed until you demonstrate
understanding of this code.
Why This Matters
Of course this seems much more long-winded than just getting the answer, but the point is to make sure that you, as a citizen concerned with learning, engage in sustainable productivity rather than just short-term speed.
Levels 1 through 3 focus on code completion rather than code generation, so you maintain your level of skill in a subject. If you don’t feel confident, you can pivot to a higher Level. Of course, the assumption here is that you are an experienced engineer and not a beginner; or at least that you have some cursory understanding of coding, if you’re to stand a chance of explaining things back at Level 4.
This model of engagement maintains some cognitive load. This is a feature, not a bug. The idea being that ‘if you don’t use it, you lose it’.
Design Decisions and Trade-offs
I started with just a pair-programming mode - hence the name that stuck. But talking it through with Claude, we kept adding variations. “What if you just want hints?” became Level 1. “What if you’re in a hurry?” became Level 4.
Four levels felt right. Fewer seemed too blunt. More would just be procrastination.
The tone of the responses is tricky. Too strict and it’s annoying. Too lenient and what’s the point? I’ve tweaked the wording a few times but I’m not sure I’ve nailed it yet.
One thing I did get right: letting you change levels mid-session. Sometimes you start at Level 2 and realize you’re in over your head. Sometimes you push a bug and just need the fix now. The agent doesn’t guilt you about it - it just asks if you want to switch.
State tracking lives in the conversation, not a database. Simpler. Hasn’t broken yet. We’ll see.
If You Want to Try It
I put the Claude Agent on GitHub. There’s an install script that copies it to ~/.claude/agents/, then you run:
claude --agent pair-programmer
Or just mention a level in your message (“Level 2: help me scaffold auth”) and it kicks in.
The agent definition is just a markdown file. If my Levels don’t match how you work, change them. That’s kind of the point.
Closing Thoughts
I’m still testing this. Has it changed the way I work with Claude Code? Not dramatically, because I haven’t played with this Agent for long enough. Right now it feels more psychological, because I’m more conscious of when I’m delegating and not learning. I guess that’s valuable?
I struggle with Level 1 unless I’m really comfortable with the language or framework I’m using. I wonder if those muscles have atrophied more than I want to admit.
What’s ironic is that I’m using AI to help me build a tool that limits AI. I brainstormed the levels with Claude, wrote the Agent definition with Claude, and tested it by having Claude refuse to help me. Am I kidding myself? Does this make me feel better about my AI dependency while the atrophy continues? Perhaps the real test is whether I can still code without AI when I need to. Check back in six months.