The Power of Speaking to Real Professional Engineers
At DevNexus Conference in Atlanta, I spent time speaking with enterprise developers, engineering managers, and QA practitioners about how AI is actually showing up in software delivery.
And I’m excited to share these insights from the ground, because they reflect how this technology is actually being experienced and integrated in real time. What’s working and what isn’t. And what to do about it. This is also a major area of focus for my professional work.
All of these engineers are using AI coding agents.
But they’re having serious challenges with how to preserve software quality, consistency, and accountability as agentic development becomes the standard.

Principal Architect David Parry teaching MCP and AI Agents at DevNexus
AI coding is now normal and expected, often without proper training. And with that comes the code governance question:
“How do engineering teams keep their codebases coherent and defensible when more code is being generated, reviewed, and shipped through a growing mix of tools and agents?”
This was the underlying pattern in all my conversations.
Some AI Dev Tools Are Chosen by Infrastructure, Not Preference
One thing that stood out to me is that enterprise engineers are not always choosing their AI tools from scratch. In many cases, access is shaped by the infrastructure and platforms their organization already supports. That means adoption often starts with what is already available, not necessarily with a deliberate decision about what is best for the team.
And once a tool is already embedded in the environment, it’s sticky. The standard often becomes “good enough to keep” rather than “best fit for how we want to build software.” That is a very specific kind of adoption dynamic.
AI Coding Feels Better Than AI Code Review
I also noticed a significant difference between how engineers talk about AI coding tools and how they talk about AI code review.
The sentiment around coding assistants was generally positive. Engineers see the productivity value. They are using the tools. They appreciate the acceleration. They like most of the UX.
The sentiment around AI code review was much more neutral.
Just… underwhelmed. This is partly because no engineer loves reviewing code. It’s deep work, it’s taxing, and it carries social dynamics beyond checking that the code is up to par. We won’t get into all those details, though.
The sentiment gap is interesting because, with the influx of AI-generated code, the code review phase becomes even more of a boundary where human judgment re-enters the process. It’s where teams reclaim accountability, surface risk, and decide what is actually safe and maintainable enough to ship, and whether intent matches implementation.
Code review is now widely described as a bottleneck, given the speed gains that come from AI code generation, and the insights from automated review analysis are often not compelling enough to notice, let alone act on urgently.
So when you pair default code review bots with engineers who didn’t love code review to begin with, you get a lackluster view of the experience altogether.
The Standards Drift Problem in Enterprise Codebases
And that leads to the third signal that stuck with me: standards drift.
One engineering manager from a massive telecommunications company described the growing difficulty of reviewing code when different engineers are using different coding agents, different configurations, and different sources of context.
Code was being written according to different versions of the team’s conventions: older patterns appearing in new code, newer standards applied inconsistently, implementation styles diverging until the codebase felt like it was being shaped by multiple eras of the same engineering organization at once.
That’s where the problem becomes deeper than code style preference.
This engineering manager described how difficult it has become to even interpret the codebase: harder to understand what the code is trying to do, why it was written that way, and whether it reflects the standards the team actually intends to uphold now.
This is one of the places where I think agentic engineering is forcing a new level of honesty.
We have spent years treating coding standards as something culture and documentation can handle. But AI increases code volume, abstraction, and inconsistency pressure. When that happens, culture doesn’t scale well enough. Engineers need stronger mechanisms for defining, updating, and enforcing what good implementation actually looks like in their environment.
Just hearing that gave me a bit of anxiety. Where do you even begin to fix that as a manager? When AI use is the one variable that’s new for everyone.
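One concrete starting point some teams take is codifying conventions as machine-checkable rules rather than tribal knowledge. Here is a minimal sketch in Python; the rule names and deprecated patterns are invented for illustration, not pulled from any team mentioned here:

```python
import re

# Hypothetical rules file: each deprecated pattern (regex) maps to the
# convention that replaced it. A real team would version this alongside
# the codebase so "current standards" are explicit, not remembered.
DEPRECATED_PATTERNS = {
    r"\brequests\.get\(": "use the shared http_client wrapper",
    r"\bprint\(": "use the structured logger instead of print",
}

def find_drift(source, rules):
    """Return (line_number, guidance) for each line matching a deprecated pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, guidance in rules.items():
            if re.search(pattern, line):
                findings.append((lineno, guidance))
    return findings
```

Run against changed files in CI, a check like this gives humans and coding agents the same written definition of the team’s current standards, which is exactly what drifting codebases lack.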
“We Need Code Controls”
This is why I think AI code governance is becoming one of the defining enterprise software conversations now.
Governance = engineering infrastructure for standards, controls, code review, verification, validation, and visibility that help teams preserve trust and software quality at scale.
Because code quality can’t scale solely through engineering culture. It also needs to scale through proper guardrails enforced at multiple checkpoints in the software development lifecycle. Some of that is automated, other parts manual (human), and other parts now AI.
Those were my biggest takeaways from DevNexus.
AI coding is definitely here. And enterprise engineering teams are trying to figure out how to make speed sustainable and software quality enforceable and teachable.
This is the first entry in Field Notes in Agentic Engineering, where I’ll be sharing a lot more of what I’m hearing from enterprise teams as AI changes how software gets built, reviewed, and governed.
If you’re seeing similar patterns in your organization or others, I’d love to hear what’s emerging in your environment! Please reply 😄



