For technology professionals and teams using agentic AI to improve performance and efficiency, this is playing out most clearly in architecture, engineering, and code development, where these tools are used most heavily.
I have been using several agentic AI solutions embedded in my IDEs extensively for months now, and I am frequently frustrated by the AI’s lack of understanding of my work style. Despite instructing it on how I prefer to work, and on the due diligence I expect before it asks repetitive or low-value questions, it continues to ask the same questions and fails to do the diligence the task requires.
If you have ever resorted to “colorful” language to clarify to an AI solution what you wanted, you are definitely not alone. That is simply an expression of your growing frustration with coaching the AI, and of the gap between what you expect of its performance and what it actually delivers.
That gap is starting to show up more publicly as well. The recent leak tied to Anthropic’s Claude Code offered a rare look at how these systems are actually guided behind the scenes and how heavily they depend on correctly interpreting instructions. The leak even exposed the branching logic used when dealing with “frustrated feedback”. More generally, it reinforced something many of us are already experiencing: these tools are powerful, but they still operate without a true understanding of context, intent, or how real engineering work gets done.
For me and my work style, time is of the essence. One of the greatest benefits of AI is the time it can save in your workflow. Having to repeat instructions, answer the same questions, and restate your workflow preferences does not save time.
I am sure I’m not alone in the frustration of having to constantly re-instruct the AI on my work style, preferences, and expectations for how it approaches and executes tasks.
For example, I have often seen it change requirements on the fly without confirmation from me, forcing me to respond, “Back that out. I didn’t ask you to do that.” I group this in the category of the AI assuming it knows better than I do, or worse, doing the “lazy” thing instead of the right thing.
This is one of the fundamental reasons I say AI should never be used by junior technologists: the assumption that the AI knows more than the user leads to inaccurate interpretation of requirements. The ability to articulate requirements is fundamental to the effective use of any AI, and especially of agentic AI, which interprets and acts on instructions with far more latitude than less capable solutions.
I should also point out that the learning process, at the time of this writing, is “solution specific”. What you teach the AI about your preferences and how you work is 1) retained only when you explicitly tell it to remember a preference, and 2) retained only within the solution you are working in; it does not carry over to other solutions (e.g., folders or repos). Each project gets its own separate memory folder. If you work on many solutions, as I do, this limitation can be frustrating. You will most likely have to adapt your work process to it, which is just the reality today.
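If you do work across many repos, one way to adapt is to keep a single canonical preferences file and copy it into each project yourself, so every solution’s per-project memory at least starts from the same instructions. Below is a minimal sketch of that idea in Python; the file names (ai-preferences.md, AGENTS.md) and the assumption that your tool reads a per-project instruction file on startup are illustrative, not any vendor’s documented behavior.

```python
from pathlib import Path
import shutil

# Illustrative names only: point these at wherever your tool actually looks.
PREFS_SOURCE = Path.home() / "ai-preferences.md"   # single canonical preferences file
PROJECTS_ROOT = Path.home() / "repos"              # parent folder of all your solutions
TARGET_NAME = "AGENTS.md"                          # assumed per-project instruction file

def sync_preferences() -> None:
    """Copy the canonical preferences into every project folder, so each
    solution's separate memory starts from the same instructions."""
    if not PREFS_SOURCE.exists():
        raise SystemExit(f"canonical preferences file not found: {PREFS_SOURCE}")
    for project in sorted(PROJECTS_ROOT.iterdir()):
        if project.is_dir():
            shutil.copy(PREFS_SOURCE, project / TARGET_NAME)
            print(f"synced -> {project / TARGET_NAME}")

if __name__ == "__main__":
    sync_preferences()
```

Rerun the script whenever you update the canonical file and every project picks up the change, so you only ever edit your preferences in one place.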
I will admit I’ve used several agentic AI solutions, and your mileage may vary. You may have best practices that have helped you train the AI in your work style, your process, and your expectations for how it executes the tasks you define, and you may consider those practices a competitive advantage, which is great for you. My point here is that however you are training AI to meet your needs, it is probably not what you would have expected from a supposedly intelligent solution.
In the end, this just means AI is in an ever-evolving state and is not the be-all, end-all “autopilot” that casual IT technologists might assume it to be. This is another reason I believe it is not a viable tool for junior technologists who have not yet learned the fundamental blocking and tackling required to be proficient at their jobs. If anything, use by underqualified technology staff can slow their development and lead them to think less critically about why they are doing what they are doing, which can never be good.