Changes introduced outside the agent's window create a state that diverges from the agent's own model of the world.
When commands are run or changes are made outside the agent's control, it will eventually notice that its world view has changed and recover, but doing so consumes precious context just to bring itself back up to date.
I have seen many cases of Claude ignoring extremely specific instructions, to the point that any further specificity would take more effort to express than just doing the work myself.
It's easy to get these models to introspect and give detailed, intelligent explanations of why they erred, and to work with them to craft better instructions for future agents to follow. That doesn't solve the steering problem, however, if they still don't follow those instructions.
I spend 8-20 hours a day coding nonstop with agentic models, and you can believe I have tuned my approach quite a lot. This isn't a case of inexperience or conflicting instructions: the RL that gives Opus its fantastic ability to just knock out features is the same RL that causes it to constantly accumulate tech debt through short-sighted decisions.
This prompt will work better across any and all models.