Capturing project knowledge to help coding agents
When I talk to engineers who are just starting to use coding agents, I often see the same pattern. They use the agent interactively: asking for a small refactor, a tweak to a component, or a new feature in a file they are working on. The agent makes a change, but not in the way they expected: it changed multiple files, implemented a different pattern, or put things in the wrong place. So they ask the agent to correct it, the agent tries again, and before long the whole thing starts to feel more frustrating than helpful.
What I have found is that the frustration often is not that the coding agent is bad at coding. It is that we leave too much project knowledge implicit. We know which logging approach the team uses, which packages are preferred, how components are usually structured, how testing is done, what internal standards apply, and which patterns are already considered normal in the repository. We rarely write all of that down when we prompt an agent. But if it is not captured somewhere, the agent still has to do something, so it fills in the gaps with guesses. The more the agent has to guess, the more inconsistent the results feel.
Over time, my workflow has changed. I started with the same prompt-by-prompt approach. Then I got better at describing the specific state change I wanted to make in the repository. But the biggest improvement came when I stopped repeating the same background and standards in every session, and instead started capturing that knowledge in markdown files inside the repository. That gave the agent a much better starting point. It did not need to keep rediscovering how the project works, and it did not need to reinvent patterns each time. The result was not perfection, but it was much more consistent.
That is probably the most useful advice I would give to anyone getting started with coding agents: capture the project knowledge that agents should not have to guess.
Where the frustration comes from
The request to the agent is often broad enough that the agent has to interpret what the user meant. It may choose a pattern the user did not want, change more files than expected, or implement the right idea in the wrong way. The user then has to clarify, correct, and redirect. After a few rounds, it can start to feel like you are spending more time correcting the agent than getting useful work done.
In an existing codebase, there are usually dozens of unwritten rules and norms. There may be a preferred logging library, a standard testing pattern, an expected way to structure components, internal security requirements, or assumptions about how services interact. Co-workers usually absorb these over time. Coding agents do not. They can only work from what they can infer from the repository and what you give them in context.
When that context is missing, the agent has to guess. Sometimes those guesses are reasonable. Sometimes they are not what you wanted. Either way, the more the agent has to infer from scattered clues, the more variable the results become.
How my workflow changed
Stage 1: Interactive prompting
I would ask the agent directly for a small change, look at the result, and then steer it with follow-up messages.
That works up to a point. But it puts a lot of weight on the agent interpreting a short request correctly while also piecing together how the repository already works.
Stage 2: Better-scoped state changes
The next shift for me was getting better at describing the intended state change in the repository.
Instead of vaguely asking for a refactor or a feature, I started being more explicit about what should change, what should stay the same, and how I would know the work was done correctly. This is where approaches like SDD (Spec Driven Development), RPI (research-plan-implement) loops, or tools like GitHub Spec Kit (which I covered in my previous post) can help. The useful part is not the label. It is that they force you to describe the change more clearly.
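To make that concrete, here is a hypothetical sketch of a better-scoped request (the paths and names are invented for illustration):

```text
Replace the ad-hoc fetch calls in src/orders/ with the shared apiClient wrapper.
Do not change any files outside src/orders/. Keep public function signatures the same.
Done when: every fetch call in src/orders/ goes through apiClient and the existing tests pass.
```

The shape matters more than the wording: what should change, what must stay the same, and how to tell the work is done.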
That helps a lot. But I found there was still a limit. I kept repeating the same background information, standards, and expectations across different pieces of work. And if I forgot to include one of them, the agent would often fall back to a different interpretation.
Stage 3: Central project knowledge
The biggest improvement came when I realised I could move a lot of that repeated information into a more central place in the repository.
Instead of cramming everything into a prompt or repeating it in every spec, I started capturing recurring knowledge in markdown files. Things like how logging is done, which packages are preferred, what testing patterns the project uses, what internal requirements apply, and even high-level background on what the system does and how the repository is organised.
That changed the role of the prompt. Instead of carrying all the context itself, the prompt could focus more on the specific state change I wanted, while the repository already held the background knowledge and standards the agent should follow.
Capture knowledge centrally
The practical lesson for me was: capture the things the agent should not have to guess. Save them as markdown files in your repository so the knowledge is explicit and the agent does not need to guess or assume each time.
These could include:
- logging and observability standards
- package and library choices
- testing patterns
- UI conventions
- coding standards and common patterns
- service boundaries and repository structure
- project background and system context
- secrets handling and security requirements
- internal company requirements (internal libraries, internal services that must be used, etc)
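An entry does not need to be long. As a purely hypothetical sketch (the tool names and rules here are invented for illustration), one of these files might contain something like:

```markdown
# Testing and quality

- Unit tests live in `tests/`, mirroring the source layout.
- Use pytest as the only test runner; do not introduce another.
- Every bug fix ships with a regression test.
- Keep tests fast; anything slow goes behind an integration marker.
```

A handful of explicit rules like this removes a whole class of guesses the agent would otherwise have to make.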
A lot of this is useful for people too, but it is especially useful for agents. It gives them a clearer baseline for how work is expected to be done in this repository.
It also reduces wasted effort. Instead of traversing the repository from scratch each time and trying to infer the local norms, the agent can start from a clearer understanding of the project.
Example: Logging
Across different changes, I would ask the agent to add a feature, and it would reimplement logging in a different way each time. The agent would see that it needed to log, so it would implement logging again from scratch, without realising that a shared approach was already available in other parts of the codebase. The agent was just trying to infer how logging should be done from the local context it had at that moment.
I captured the expected logging approach in markdown, the package we use, and a short code example. Afterwards the results became much more consistent. The agent no longer had to guess which pattern to follow. It had a clear reference point inside the repository.
That was the moment it really clicked for me. A lot of the problem was the missing context.
How to get started right now
Keep it lightweight.
You do not need a big methodology before this becomes useful. Just start by capturing the things your agent should know so it does not have to keep guessing. That could be logging, testing conventions, preferred packages, service boundaries, security expectations, or a high-level map of the repository.
```text
# example folder structure
docs/
  agent-context/
    architecture-overview.md
    repo-conventions.md
    logging.md
    observability.md
    security-practices.md
    testing-and-quality.md
```
Use the agent to help bootstrap the context
You can use an agent to inspect the repository and capture useful context such as where the main services live, what the standard code structure looks like, how logging is currently done, how secrets are handled, and which libraries are already used and preferred.
If your tooling supports AGENTS.md, you can point it to where the knowledge files live.
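For example, the pointer in `AGENTS.md` could be a short section like this (the file names follow the example folder structure above; adjust them to your own repository):

```markdown
## Project knowledge for agents

Repo-specific reusable context lives under `docs/agent-context/`.
Read the relevant file before making changes in that area:

- `logging.md` - logging approach and preferred package
- `testing-and-quality.md` - testing patterns and quality gates
- `repo-conventions.md` - repository structure and coding conventions
```

This keeps the discovery step cheap: the agent reads one small file and knows where the rest of the knowledge lives.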
Here are two example prompts you could try to get started quickly: one focused on a single cross-cutting concern, and the other a broader prompt to help bootstrap the entire repository context.
Example prompt - Logging standard
A focused example of how you could prompt your agent to generate a logging standard for you.
```text
Analyse the repository and determine how logging is implemented across the codebase.
Look for:
- logging frameworks and libraries in use
- log structure and field conventions
- correlation or trace ID handling
- severity levels and naming conventions
- where logging is expected in application flows
- error logging vs informational logging patterns
- any shared wrappers, helper utilities, or middleware
- any gaps, inconsistencies, or duplicate approaches
Capture the findings in `docs/agent-context/logging.md` as reusable guidance for future coding agents.
```
Example prompt - Knowledge bootstrap
A broader example that prompts the agent to pre-process the repository and analyse your implicit standards.
```text
Analyse the repository and prepare reusable context for future coding agents.
Your goal is to pre-process the codebase and capture the most important implementation knowledge into focused markdown files under `docs/agent-context/`.
Inspect the repository for:
- service and application boundaries
- major modules and responsibilities
- key architectural and integration patterns
- primary technologies, frameworks, and libraries
- security-related practices and constraints
- observability patterns such as logging, tracing, metrics, and error handling
- configuration and environment conventions
- testing approach and quality gates
- common coding patterns, abstractions, and shared utilities
- important repo-specific conventions that a new agent should follow
Create a small set of focused markdown files, for example:
- `docs/agent-context/architecture-overview.md`
- `docs/agent-context/service-boundaries.md`
- `docs/agent-context/technology-stack.md`
- `docs/agent-context/security-practices.md`
- `docs/agent-context/observability.md`
- `docs/agent-context/testing-and-quality.md`
- `docs/agent-context/repo-conventions.md`
For each file:
- capture what is actually true in this repository
- write concise guidance that another agent can use later when making changes
- include concrete examples or file references where helpful
- call out inconsistencies, partial migrations, or unclear areas
- avoid repeating the same content across files
Update or create the root `AGENTS.md` so future agents can discover this knowledge easily. Add a short section that:
- explains that repo-specific reusable context lives under `docs/agent-context/`
- lists the most important files
- tells agents to read the relevant files before making changes in those areas
Do not produce a vague documentation dump. Synthesize the repository into practical, repo-specific context that helps future agents make better implementation decisions.
```
Closing
The main idea is simple: if the project has standards, patterns, and background information that helps, capture them somewhere central. The less the agent has to infer from scratch, the more consistent its work will be.
If you are looking for a better way to work with coding agents, this is probably the first place I would start.