Stop typing prompts. Start talking.
You think 4x faster than you type. So why are you typing prompts?
Wispr Flow turns your voice into ready-to-paste text inside any AI tool. Speak naturally - include "um"s, tangents, half-finished thoughts - and Flow cleans everything up. You get polished, detailed prompts without touching a keyboard.
Developers use Flow to give coding agents the context they actually need. Researchers use it to describe experiments in full detail. Everyone uses it to stop bottlenecking their AI workflows.
89% of messages sent with zero edits. Millions of users worldwide. Available on Mac, Windows, iPhone, and now Android (free and unlimited on Android during launch).
For those curious, please consider using Wispr Flow. I use it every single day: it’s the easiest, hands-off way to ramble into an AI app, a notes file, or a Google Doc, reply to people on LinkedIn, send any message I’d like, and more. I’m faster with it, and I can get all my thoughts out and strategize easily because I’m speaking instead of typing. It is part of my AI stack for 10x-ing myself! If your mind moves 1,000 mph and you need AI to keep up, this app is perfect to use on your laptop and phone. 😄
Not Every AI Problem Is a Tool Problem
When I speak with engineers at large orgs, they sometimes complain that “AI tools [are] not working the way we hoped.”
A lot of promises are made about the capabilities of LLMs and agent harnesses these days. There’s a sea of noise from tactical marketing schemes and “AI is going to take your job” quotes from frontier lab executives.
This post isn’t about the marketing, though. I aim to dig deeper into what engineers are really experiencing and dissect it surgically to uncover nuance, as there is always nuance at the intersection of technology and business.
Here’s a great way I usually begin peeling back the layers of AI adoption in engineering:
“If we turned the AI tools off tomorrow, what would still be hard about your workflow?”
When I ask this question, practitioners usually stop sharing nitpicks about features and UX of AI dev tools. They switch to speaking about on-prem setups, legacy systems, manual patching, unclear ownership, and codebases that already felt fragile pre-GenAI.
Peel back those layers and you find that the complaints have very little to do with the new, innovative tech itself! Those are the critical details that tell a much more intriguing story.
Fundamentally, any initiative involving humans can become complex (sorry, I have to be blunt here!).
So it is no surprise to me that the challenges of AI adoption are not always inherently a “tool” problem. Sometimes new tooling (or innovative tech) uncovers pre-existing pain in the people, systems, and processes it lands in.
Enterprise Diagnostics: Three Problem Buckets
Listening to these stories, I keep hearing frustrations that actually belong to three different buckets:
Tool problems - the AI behavior itself is off: irrelevant suggestions, noisy review, weak reasoning, poor integration.
People or process problems - unclear tickets, missing acceptance criteria, ad hoc review, inconsistent QA, no defined standards.
Codebase / architecture problems - tightly coupled systems, legacy patterns, no tests, complex on-prem constraints, brittle services.
The trouble is that in the middle of a busy sprint, all of these get collapsed into one sentence: “The AI isn’t working for us.”
But if you don’t separate those buckets, you can end up swapping tools when what you really need is process clarity. Or you end up blaming the process when the system is genuinely constrained by over-engineered or overly complex architecture, built from decisions made years ago that are difficult to move on from.
On-Prem, Patching, and “We Just Need to Go Faster”
Engineers talk about the friction of working in highly regulated verticals (fintech, health tech, defense, etc.), which tend to involve on-prem environments.
Typical systems and processes they have to manage:
older deployment pipelines,
manual patching and upgrades,
multiple environments with slightly different configurations,
and a lot of implicit knowledge about how “things actually work here.”
When you add AI tools into that mix, the friction compounds:
Integrations take longer.
You can’t use all the fancy new tools if they don’t support your configurations and restrictions (which can pull legal teams in if you’re working in a highly regulated environment).
Some repositories are off-limits or hard to access.
Set aside the clear, well-documented shortcomings of AI tools that do not fit the needs of engineering orgs operating in highly regulated industries. What often remains is an environment that has accumulated a lot of debt and complexity over time.
AI tools are just running into constraints that already made it hard for humans to work smoothly. These engineers would be voicing the same complaints even if AI dev tools had never popped up in the market.
By the way, I am not blaming engineers. And I am certainly not absolving AI tool companies from lacking proper integration support for complex deployment pipelines and regulated environments. I’m stating that the problems are complex. Many things can be true at once!
Get Clarity on a Tool Swap vs. Workflow Redesign
One pattern I keep seeing in these conversations is the instinct to “try a different tool” when AI adoption feels frustrating.
But tools magnify whatever structure you already have. AI amplifies that even further.
If the structure is strong - clear requirements, predictable processes, and quality boundaries - AI can help you move faster through a process that already works.
If the structure is weak, AI can make you move faster into confusion: more code, more disjointed changes, and more places for misalignment to hide until it’s so loud it can’t be ignored.
I wouldn’t be so quick to refer to that as a tooling problem. It could be. But it often sounds more like a workflow design problem. And it could be time to rethink the workflow design, regardless of AI adoption.
When Codebases Speak Up and Air Dirty Laundry
There’s an undercurrent of “we are nervous about touching this part of the system” in some of the stories I listen to. Engineers talk about:
legacy services that everyone is afraid to modify,
core modules with no tests and no clear owners,
copy-pasted patterns that nobody wants to be responsible for cleaning up.
In that context, AI code generation and AI review feel risky for very human reasons:
If nobody fully understands how a component behaves, AI-generated changes there feel like guessing.
If tests are thin or nonexistent, AI review has less to anchor to.
If ownership is unclear, mistakes are harder to catch early and fix safely.
It’s tempting (and much easier) to say, “We need a smarter AI to help us deal with this.”
But I’d argue that the first moves are:
make ownership explicit,
add the most critical missing tests,
document known constraints,
and start chipping away at the worst coupling.
Those are process and architecture moves. AI can assist them, but it can’t magically replace them. And it certainly won’t absolve leadership of its responsibility to align engineering teams on these cleanup goals.
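To make “add the most critical missing tests” concrete: one common first move is a characterization test, which pins down what a legacy component does today so that both humans and AI reviewers have something to anchor to. A minimal sketch in Python, where `compute_invoice_total` is a hypothetical stand-in for a years-old, untested function:

```python
# Characterization ("golden master") test: capture current behavior
# of a legacy function before any AI-assisted change touches it.
# `compute_invoice_total` is a hypothetical stand-in, not a real API.

def compute_invoice_total(items, tax_rate):
    # Imagine this is old logic nobody fully understands anymore.
    subtotal = sum(qty * price for qty, price in items)
    if subtotal > 100:          # undocumented bulk discount
        subtotal *= 0.95
    return round(subtotal * (1 + tax_rate), 2)

def test_current_behavior_is_pinned():
    # Expected values were captured by running the code as-is,
    # not derived from a spec. They pin today's behavior, including
    # the surprising discount branch, so regressions fail loudly.
    assert compute_invoice_total([(2, 10.0)], 0.08) == 21.6
    assert compute_invoice_total([(5, 30.0)], 0.08) == 153.9
```

The point is not that the pinned values are “correct” in any spec sense; they simply make the current behavior explicit, so any generated change that alters it gets caught in review instead of in production.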
Lessons from Field Notes in Agentic Engineering
Agentic engineering is powerful when agents are operating inside a well-understood system.
You know what “good” looks like in codebases and in the processes for working in them. And “good” is codified and accessible to all. (People need access, and their agents need access too, so they can align accordingly.)
You know which parts of the system and processes are fragile. Awareness is the first step.
You understand that introducing innovative technology requires groundwork: audit the current gaps in processes and systems, create a roadmap for improvement, and prepare to benefit from adoption on a feasible timeline. That groundwork reduces the painful rework that comes from diving in too deep unprepared.
When those foundations are missing, it becomes much harder to tell whether a bad outcome is the AI’s fault, the process’s fault, or the system’s fault.
Here is everything we want to avoid in these cases:
1) changing tools instead of fixing workflows
2) tightening process checks without addressing architecture
3) blaming process when the real issue is a codebase that has been under-invested in for years (tradeoff decisions that ultimately put evolutionary architecture on the back burner).
My takeaways from these conversations are simple:
If you want AI tools to work for you, you have to be honest about what in your system is ready for acceleration and what is not. Not every AI problem is a tool problem. Some are tool problems. Others are process and people problems. Some are codebase problems. Many tend to be a convoluted mixture.
To be an organization that is truly AI-native, you must move with clarity:
“This is a tooling gap.”
“This is a workflow gap.”
“This is an architecture gap.”
“This is a people gap.”
…and then design the right interventions for each. (Easier said than done, but as I said, awareness before introducing new tools is critical here, and responsibility from engineering leadership must follow.)
Agentic engineering is about building systems where both human and machine intelligence can see what is going on, make good decisions, and change things without breaking trust.
This is another entry in Field Notes in Agentic Engineering. If your team has run into “AI problems” that turned out to be something else entirely, I’d love to hear your thoughts on all of this. More stories to come!