AI tools for coding aren’t a nice-to-have anymore — they’re table stakes. At SEP, we recognized that early and gave engineering teams the green light to start using them, provided the tools met our security standards and our clients’.
From there, engineers across the organization were free to use AI in their work. The only stipulation: each tool had to fit the unique needs of its project while keeping our security and quality standards intact.
On an engagement I manage, our team selected an AI toolset best suited to what they were building. Rather than leaving installation to individuals, they took a structured, workshop-style approach: all engineers installed the tools together in a single session, ensuring no one ran into technical barriers from day one.
At that point, I expected that vision, permission, access, and a shared understanding of how to use the tools would remove enough barriers to get us going. Still, I didn't want to leave adoption to chance or simply hope things would work out on their own. I wanted us to move with purpose: not as individuals experimenting in isolation, but as a team developing the skills and confidence to use AI in ways that met our professional standards.
After a few weeks, I began to wonder if our initial rollout was enough. Were engineers using the tools in meaningful ways? Were they confident applying them to production code, or still hesitant? To find out, I designed a short survey to measure actual usage, comfort levels, reservations, and training needs.
The Survey — and What It Revealed
The survey asked each question in both personal and professional contexts. The differences between the two were telling.
The biggest barrier to AI adoption among experienced engineers isn’t resistance — it’s quality standards. Our survey found that not a single engineer was willing to use AI to generate production code without significant scrutiny.
Usage: Most engineers had "dabbled" with AI tools, but few described themselves as proficient, and none claimed expert-level capability. Personal use generally outpaced professional use, showing that curiosity outside of work hadn't yet carried over to their day-to-day project code.
Comfort: Personally, many were open to trying AI on projects that mattered to them. Professionally, hesitation was deliberate and thoughtful. Not a single engineer reported that they would use AI to generate production code without significant scrutiny — the work that carries their name and reputation. With many having one to two decades of experience, their professional identities are tied to producing code that is high-quality, correct, secure, and maintainable. That level of accountability shapes how they evaluate AI output, and that scrutiny is healthy, not an obstacle to progress.
Reservations: The top concern was quality. Others cited the learning curve, uncertainty about where to start, lack of time, or not knowing which tool to use. No one said they had zero reservations — a reflection of their high bar for correctness and professional responsibility.
What Would Help: Engineers said their top need was learning how to prompt AI effectively, followed by setting up AI agents and understanding realistic quality expectations for AI-generated output. They needed reps to build trust.
Preferred Learning Styles: Our software engineers prefer learning AI tools through pairing and mobbing — working alongside peers on real tasks — over workshops, videos, or written guides.
Why the Results Mattered
The results challenged my assumption that early vision, access, and shared context would be enough. They also underscored how important it is to treat AI adoption not as a technical rollout, but as a cultural shift.
I shared the results openly with other leaders and with my team because I didn’t want anyone to be left behind. While learning AI tools can initially slow someone down, the long-term cost of not learning them at all is much higher.
Right now, we have a unique opportunity: everyone is still new. That makes it easier — and more important — to learn together, building a shared vocabulary, collective expectations, and repeatable practices. Openly acknowledging hesitations removed the stigma and helped people see that taking it slow wasn't foot-dragging; it was being thoughtful.
From Data to Action
Transparency led to honest conversations about the path forward. The survey helped identify early adopters — engineers already experimenting with the tools — and made it clear who was willing to teach others. Those connections are now explicit. Engineers know exactly who to approach for pairing sessions, and they have permission to set aside time for collaborative learning. No one is left to “just figure it out” in isolation.
The Outcome — A Culture of Deliberate, Shared Learning
What followed was a healthy, organic exchange of knowledge: mobbing sessions, pairing sessions, recorded demos, article sharing, and ongoing lessons learned.
This had two benefits:
- It helped individual engineers level up their AI skills efficiently.
- It reinforced a culture of learning, teaching, and constructive scrutiny that will outlast any specific AI tool.
Because AI tools evolve rapidly, this culture matters more than today’s technical details. The willingness to evaluate, teach, and learn together — while upholding engineering quality standards — will be what keeps us relevant and effective.
Had we simply rolled out the tools with unclear expectations, we would almost certainly face costly catch-up efforts later, along with morale issues from engineers feeling conflicted or left behind. Instead, we’ve made adoption a collective, quality-focused, transparent process — one that honors both speed and craftsmanship.
Final Reflection — Better Questions, Better Outcomes
If there’s one thing this experience reinforced for me, it’s that success with AI is deeply tied to the quality of the questions we ask. Whether we’re prompting a tool or guiding a team, it is the right questions that move us closer to the outcomes we want.
Successfully engaging an engineering organization around AI is no different. As a manager, my job is to ask questions — regularly, thoughtfully, and with a willingness to learn. Some days, I’ll ask clumsy or “dumb” questions. Other times, I’ll ask sharper, more informed ones. And sometimes, the most important moments come when I ask vulnerable questions — the kind that admit it’s okay to not have all the answers.
One lesson AI use itself has taught me: keep prompting until you see the results you want.
If you manage software engineers or are responsible for solution delivery, you know that tools alone don't ensure success. My takeaways from the process:
- Humans are still the solution. Engage them well: respect their expertise, listen to their questions and concerns, and invite them into the problem-solving process. They will grow more comfortable integrating AI as a tool that supplements that expertise.
- Follow up regularly with your teams. The landscape is changing rapidly, and time spent celebrating progress and planning for adaptive growth is time well spent.
That, more than any single feature or algorithm, will determine how well we continue to integrate an ever-evolving AI toolset into our craft.