You’re probably treating your AI agent like a junior developer. I know I was.
It makes sense, right? The framing shows up in genuinely respected places — OpenAI’s own prompt engineering documentation advises treating GPT models like “a junior coworker” who needs explicit instructions to produce good output; engineering teams at major consultancies have characterized AI assistants as well-read but inexperienced, something you supervise rather than trust with judgment calls. I don’t think anyone writing that was trying to create a limiting mental model. But the shorthand is spreading, and the conclusion I keep hearing from developers goes further than any of those sources intended: AI isn’t capable of the hard work, so don’t ask. Agents make mistakes, need detailed instructions, require careful review. So you break tasks down into small pieces, spell out implementation details, and hand-hold through every decision. It feels responsible.
But I started noticing something strange. The more I treated my agent this way, the more junior-level work I got back. I was getting frustrated—asking for solutions to complex problems and receiving implementations that technically worked but weren’t maintainable. I’d end up rewriting significant portions or hand-holding through revisions. The collaboration I actually needed wasn’t happening, because my mental model was getting in the way.
Then something happened that made me question everything.
The Moment That Changed My Approach
I was deep in a refactoring project, working with the Strategy pattern across a system of polymorphic objects. The pattern was working beautifully—clean separation of concerns, easy to test, maintainable. Each object type had its own strategy, with all operations orchestrated cleanly.
Then we introduced a complication: the objects gained nested polymorphic properties with varying data structures. These nested structures had similar data types but needed fundamentally different operations performed on them. The Strategy pattern, which had been so elegant for the parent objects, suddenly felt forced and awkward when I tried to apply it to these nested structures.
I was stuck. Something wasn’t fitting right, but I couldn’t articulate what. I was about to do what I always did—break down the implementation into detailed steps and direct the agent through exactly what to build.
Instead, on a whim, I tried something different. I asked: “Why isn’t this fitting the strategy pattern cleanly, and how would you organize it for better maintainability?”
What Came Back Surprised Me
The agent’s response stopped me in my tracks.
It explained that the Strategy pattern wasn’t the problem—I was actually solving two different concerns. It suggested continuing to use Strategy for the parent object operations, but also introducing a Visitor pattern specifically for the varying operations needed on the nested data structures. It walked me through why this combination would handle both concerns elegantly, comparing the trade-offs of different approaches.
I’d struggled to understand where the Visitor pattern would be useful in production code for years. Textbook examples never clicked. But seeing it applied to solve a real architectural challenge I was facing? That was a revelation. I implemented the combination, and it worked beautifully.
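To make the combination concrete, here is a minimal sketch of what that architecture can look like. All the names here (`TableData`, `ListData`, `Validator`, `StrictStrategy`) are hypothetical stand-ins, not the actual code from my project: the Strategy handles per-object-type orchestration, while the Visitor carries the varying operations over the nested data structures.

```python
from abc import ABC, abstractmethod

# --- Visitor: varying operations over the nested polymorphic structures ---
class NestedVisitor(ABC):
    @abstractmethod
    def visit_table(self, table): ...
    @abstractmethod
    def visit_list(self, lst): ...

class TableData:
    def __init__(self, rows):
        self.rows = rows
    def accept(self, visitor):
        return visitor.visit_table(self)

class ListData:
    def __init__(self, items):
        self.items = items
    def accept(self, visitor):
        return visitor.visit_list(self)

class Validator(NestedVisitor):
    """One concrete operation. Adding a new operation means adding a
    new visitor class, not touching every data-structure class."""
    def visit_table(self, table):
        return all(len(row) == len(table.rows[0]) for row in table.rows)
    def visit_list(self, lst):
        return len(lst.items) > 0

# --- Strategy: per-object-type policy for the parent operations ---
class ProcessingStrategy(ABC):
    @abstractmethod
    def process(self, nested_items): ...

class StrictStrategy(ProcessingStrategy):
    """The parent-level policy stays a Strategy; it delegates the
    nested-structure work to whichever visitor it needs."""
    def process(self, nested_items):
        validator = Validator()
        return [item.accept(validator) for item in nested_items]

strategy = StrictStrategy()
print(strategy.process([TableData([[1, 2], [3, 4]]), ListData([])]))
# prints [True, False]
```

The key property is that the two axes of variation stay separate: new object types mean new strategies, new operations on nested data mean new visitors, and neither change forces edits to the other side.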
Here’s what hit me: No junior developer could have provided that kind of insight.
This was senior-level architectural guidance. The kind you’d expect from someone with deep pattern knowledge and the ability to see how different patterns complement each other. The kind of advice I’d seek from the most experienced engineers on the team.
So why was I treating the agent like a junior?
Testing a Different Hypothesis
I started experimenting with a new approach. Instead of treating the agent like someone who needed detailed implementation instructions, what if I treated it like an expert who needed context about my specific project?
The next opportunity came quickly. I needed to add versioning to a rule-based system—one set of rules needed to become multiple version-specific rule sets. This is exactly the kind of problem where I’d been getting unmaintainable solutions.
Before, I would have prompted something like: “Add versioning to this system so different versions use different rule sets.” One directive, straight to implementation.
This time, I tried: “Our system has a set of rules that need to be applied based on an input version number. Currently we only have a single set of rules and we need to integrate a versioning system such that an arbitrary version can be specified and a different set of rules will be used according to the specified version. What are some good ways we could organize this with design patterns that would result in testable, maintainable code? Don’t make any changes, just explain your ideas.”
The difference was remarkable.
From Directive to Design Conversation
The agent provided multiple solution approaches. It explained the pros and cons of full duplication versus shared rule sets across versions. It thought through testability and maintainability trade-offs. It presented options: a base rule set with version-specific overrides, a rule composition strategy, a versioned rule registry pattern. It invited my input on what would work best for my specific constraints.
We had a design conversation. I explained the requirements I hadn’t mentioned—backward compatibility needs, rules that must remain consistent across versions. We refined the approach together based on my domain knowledge. Then we implemented it with shared understanding.
The result was dramatically better than what I’d been getting before. Better architecture, better maintainability, better code.
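For readers who want to see the shape of one of those options, here is a minimal sketch of the versioned rule registry with base rules plus per-version overrides. The names (`RuleRegistry`, `register`, `rules_for`) are illustrative assumptions, not the real system’s API.

```python
# Sketch: versioned rule registry — shared base rules, per-version overrides.
from typing import Callable, Dict, Optional

Rule = Callable[[dict], bool]

class RuleRegistry:
    def __init__(self):
        self._base: Dict[str, Rule] = {}
        self._overrides: Dict[int, Dict[str, Rule]] = {}

    def register(self, name: str, rule: Rule, version: Optional[int] = None):
        if version is None:
            self._base[name] = rule  # applies to every version
        else:
            self._overrides.setdefault(version, {})[name] = rule

    def rules_for(self, version: int) -> Dict[str, Rule]:
        # Base rules stay consistent across versions; a version only
        # carries the deltas it actually needs.
        merged = dict(self._base)
        merged.update(self._overrides.get(version, {}))
        return merged

registry = RuleRegistry()
registry.register("has_name", lambda rec: bool(rec.get("name")))
registry.register("min_age", lambda rec: rec.get("age", 0) >= 18)
registry.register("min_age", lambda rec: rec.get("age", 0) >= 21, version=2)

record = {"name": "Ada", "age": 19}
v1_ok = all(rule(record) for rule in registry.rules_for(1).values())  # True
v2_ok = all(rule(record) for rule in registry.rules_for(2).values())  # False
```

This structure directly supports the requirements from the design conversation: rules that must stay consistent live in the base set once, and each version is easy to test in isolation by asserting on `rules_for(version)`.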
And I learned something about working with AI agents in the process.
The Pattern Behind Effective AI Agent Collaboration
This wasn’t a fluke. The more I experimented with this approach, the clearer the pattern became:
When I treated the agent like a junior who needed hand-holding, I got junior-level results. Workable implementations that took the first obvious path without considering maintainability or exploring alternatives.
When I treated the agent like an expert collaborator who needed context, I got expert-level results. Thoughtful architectural advice, understanding of patterns I could learn from, solutions that considered trade-offs I hadn’t thought about.
I can’t claim this holds universally, but the pattern has been holding consistently in my own work — and the mental model shift is worth testing.
I realized the “treat AI agents like junior developers” advice everyone gives is working against you. It’s not that the advice is meant to be harmful—it comes from a reasonable place. Agents do need review. They do make mistakes. But framing them as juniors trains you to engage with them in limiting ways.
The Mental Model That Changes Everything
Here’s what actually makes sense: Think of your AI agent as an incredibly experienced software engineer who just joined your team.
They have extensive understanding across languages, frameworks, and design patterns. They can use retrieval tooling to read documentation and become an expert on a new library in seconds. They understand not just what different patterns are, but the nuanced question of when and why to apply them.
What they don’t know: anything about your specific codebase, your domain requirements, your business constraints, or the architectural decisions your team made six months ago.
Unlike a human senior, your agent won’t proactively push back or surface ambiguities unless you invite it to — which is why the practices below matter.
When a brilliant senior engineer joins your team, you don’t hand-hold them through how to write a for loop. You give them context. You explain the problem domain. You discuss architectural approaches. You collaborate on design before diving into implementation.
That’s exactly how you should work with your AI agent.
The New Playbook
The agent’s capabilities hadn’t changed. My approach had.
The techniques below aren’t a formula for how to prompt correctly — there’s no single right way to engage. What they are is a set of examples of what naturally tends to emerge when you’ve made the mental model shift. Once you’re genuinely thinking of the agent as an expert collaborator who needs your context rather than a junior who needs your instructions, you’ll find yourself doing variations of these things without thinking about it. Different problems will pull you toward different approaches. That’s the point.
Here’s what changes when you adopt this mental model:
Ask for design advice before implementation
Instead of jumping straight to “build this feature,” start with design discussion. Ask: “What are some good ways to organize this? Don’t make changes, just explain your ideas.” Explore the problem space before committing to an approach. That’s exactly what happened in the versioning conversation — instead of “add versioning to this system,” I asked what good approaches existed and got a design dialogue in return.
Invite the agent to ask clarifying questions
When a senior engineer doesn’t have full context, they ask questions before diving in. Give your agent the same permission. Try adding: “Ask me clarifying questions until you’re at least 90% confident in how to proceed.” This creates a collaborative dialogue where the agent surfaces assumptions and fills knowledge gaps before committing to an approach.
Expand the solution space, don’t narrow it
Avoid over-specifying details the agent already knows. You don’t need to tell it to “use camel case” or “write clean code.” If you’re seeing the agent violate these standards, investigate what patterns in your existing codebase it’s following—you might discover pre-existing code quality issues. Instead, describe the larger problem and let the agent help brainstorm approaches.
Draw on their pattern expertise
Don’t dictate exactly which design pattern to use. Ask for advice on how to design for maintainability and testability. You might learn something. The agent can compare different patterns, explain trade-offs, and help you make informed decisions. The Visitor pattern I described earlier? I never asked for it specifically. I asked how to make something maintainable, and the agent saw a problem I hadn’t framed yet.
Define edge cases together
Rather than specifying every test case detail upfront, provide your requirements and known limitations. Collaborating with your agent to formulate edge case scenarios produces better coverage and surfaces cases you hadn’t considered.
When you see mistakes, investigate the root cause
If the agent produces junior-level code, resist the urge to blame the tool. Instead, investigate:
- What patterns in your codebase is it following? It might be mimicking existing problems.
- Did you treat the agent like a junior in your prompt? Did you jump straight to implementation with no design discussion—handing over the kind of underspecified task that would force even a junior to come back with clarifying questions?
- Did you give it enough context about your specific constraints and requirements?
In my experience, when I was getting poor results, the root cause was almost always the same thing: I was still treating the agent like a junior.
The underlying principle: The mental model does the work. When you genuinely think of the agent as an expert who needs your context, the right kind of tasking — architectural discussion, design exploration, collaborative problem-solving — follows naturally.
Your Turn
The next time you reach for your AI agent, pause before typing. Ask yourself: “Would I talk to a senior engineer this way?”
If you were working with a brilliant new hire who didn’t know your codebase yet, would you skip the design discussion and just tell them to start coding? Would you dictate basic code quality standards that they already know? Or would you engage them as a collaborator, explaining your constraints and asking for their expert perspective?
The answer changes how you prompt. It changes how you think about collaborating with your agent.
The agent stops being a tool you direct and starts being a collaborator you think with.
When you make this shift, something remarkable happens. In my experience, the quality of solutions improves. Your own understanding deepens. Problems get solved in more maintainable ways. You start using capabilities you didn’t realize were there.
In raw technical breadth — patterns, frameworks, approaches across the entire landscape of software development — you have access to something no single engineer on your team can match. The question is whether you’re working with AI agents at the right level.