How to prompt effectively for coding

Rafael Fernández López / 2025-10-16

When I started working with Claude, I used to ask for very open-ended changes with minimal guidance. An example of such a prompt could be:

Create a new library to define the schema of status.json files and load them.

In a human-to-human relationship, this might be OK, because a conversation will follow: “What exactly do you mean by a library? A module? A class? A function? Are all fields of status.json mandatory? Are some optional? Can some be missing?”

Over time, I learned how to effectively use agents to successfully complete a task. But first, I had to understand how agents and the LLM behind them work.

Understanding agents and models

It’s crucial to understand that models are non-deterministic, and that the more room we leave for non-deterministic behavior, the more the result may diverge from what we expect. The request is simply too open.
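
To make this concrete, here is a toy sketch in TypeScript (not any real model API) of why the same prompt can yield different results: generation samples the next token from a probability distribution, one step at a time.

// Toy illustration, not a real model API: an LLM produces a probability
// distribution over the next token, and the runtime samples from it.
// Two runs of this snippet can print different tokens, just like a model.
function sampleToken(distribution: [token: string, probability: number][]): string {
  let r = Math.random();
  for (const [token, probability] of distribution) {
    r -= probability;
    if (r <= 0) return token;
  }
  return distribution[distribution.length - 1][0];
}

console.log(sampleToken([["class", 0.4], ["function", 0.35], ["module", 0.25]]));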

Models are what agents use underneath to implement their cycle: you prompt the agent with a task, and it will use the underlying model, plus a number of tools (e.g. reading or writing files), until it considers it has accomplished what the prompt is asking for.
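
As a rough mental model, that cycle could be sketched like this; every type and function below is hypothetical and real agents differ in the details, but the shape of the loop is what matters:

// Minimal sketch of the agent cycle described above. All names here
// are hypothetical; real agents are far more elaborate.
type ToolCall = { name: "read_file" | "write_file"; args: Record<string, string> };
type ModelReply = { text: string; toolCalls: ToolCall[]; done: boolean };

async function runTool(call: ToolCall): Promise<string> {
  // Stub: a real agent would actually read or write files here.
  return `[result of ${call.name}]`;
}

async function runAgent(
  prompt: string,
  model: (context: string[]) => Promise<ModelReply>,
): Promise<string> {
  const context: string[] = [prompt];
  while (true) {
    const reply = await model(context);
    context.push(reply.text);
    for (const call of reply.toolCalls) {
      // Tool outputs are fed back into the context for the next model call.
      context.push(await runTool(call));
    }
    if (reply.done) return reply.text; // the agent considers the task accomplished
  }
}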

A good prompt sets up the first interaction with the agent and the underlying model. It unleashes a number of intermediate steps that might lead the agent to one result or another, always keeping in mind that models themselves are not deterministic: in the general case, the very same prompt will not produce the same response, even with the same model.

Understanding at a high level what a model and an agent are is key to getting the most out of them.

Anatomy of a great prompt

Based on my experience, there are three key features a prompt must have:

  • Goal: what we want it to implement or do.
  • Context: what the model needs to know before it tries to implement or do it.
  • Plan: even if sketchy, which things are important to consider while implementing the solution.

Taking this into account, we might rewrite the previous prompt as something like this:

I'm working on extracting the agent workflow definition from the @packages/cli/
package into a more formal and separate library under @packages/agent/. I want
to develop the schema definition and the library to load it. For now, I just want
you to get familiar with the way the CLI sets and loads the different AI workflow
steps from @packages/cli/src/lib/prompts/ and @packages/cli/src/lib/setup.ts and
explain it to me in a summarized form.

The “explain it to me” part gives us a chance to check what the model has grasped from the code, and whether we are missing something important. At the same time, this response itself becomes the context that the model will use later on when we actually request the change; it’s already in the current session context.

[GIF: the previous prompt and the answer Claude gave]

Most developers know how to use search engines like Google effectively. Writing queries the right way gets you to the web page you were looking for, faster. The equivalent in the AI-driven world is being able to write good prompts so that our models do what we want them to do.

We’ll also want to keep an eye on token consumption. Our prompt itself is made of tokens, and it drives both input and output token usage; this is especially important if we are not interacting with the agent on a subscription plan and are being charged per token instead.
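
A common rule of thumb (an approximation only, not a real tokenizer) is that English text averages about four characters per token, which is enough to ballpark what a prompt costs:

// Crude heuristic, not a real tokenizer: ~4 characters per token on
// average for English text. Use a proper tokenizer for exact counts.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens("Read the models that can be found at @src/models."));
// ~13 tokens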

Split a prompt into separate steps

Sometimes, it’s better to drive the AI agent in a specific direction. With a single broad prompt, the agent picks a potential solution on its own, and you then have to steer it toward the correct one, which may differ from the one it started exploring. Instead, you can provide smaller, concise prompts to ensure the agent goes in the right direction.

As an example, in a project that implements some sort of social network, we might write:

Create a new model for relationships (family, significant other, friends, ...)
between people. The current person model can be found at @src/models/person.ts.
Do not implement tests of any kind for this model yet.

This sort of prompt drives the model to the right place from the beginning, without being too specific about how it should implement the new feature. If the session in this context is going to be a long one, it’s usually a good idea to first tell the agent to read the relevant files, and then request the different changes, such as:

Read the models that can be found at @src/models.

Now you can ask questions about what the model has brought into its context, or even ask it to help you get up to speed so that your following prompts are more effective. Then, you can request the changes you want:

Create a new model for relationships between people (family, significant other, friends, ...).
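
For illustration, the agent’s output for such a prompt might look like the following sketch; the file layout and the fields are assumptions, including that @src/models/person.ts exports a Person interface with an id field:

// Hypothetical src/models/relationship.ts, as the agent might produce it.
// Assumes src/models/person.ts exports a Person with a string `id`.
import type { Person } from "./person";

export type RelationshipKind = "family" | "significant-other" | "friend";

export interface Relationship {
  // Both people are referenced by their Person id.
  from: Person["id"];
  to: Person["id"];
  kind: RelationshipKind;
}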

Your mileage may vary depending on whether you are using a well-known web framework or a custom one, for example. If you have a Rails application, modern AI models will very commonly detect this and know where to look for Rails controllers or models. However, there are always certain practices that apply only to your project or application, and the model should know them beforehand. It might or might not detect them on its own, so it’s better to be explicit about them.

It’s always a good idea to give the model context about the project in order for it to be more effective.
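
For instance, many agents read a project context file on startup (e.g. CLAUDE.md for Claude Code, or AGENTS.md for several others); a few lines there, hypothetical ones below, can encode conventions the model would otherwise have to guess:

All models live under src/models/ as plain TypeScript interfaces.
Persistence goes through repositories in src/repositories/; never query the database from a model.
Run npm test before considering a change complete.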

Try it out on your own project, and check the difference it makes!

My top tips

What follows are the most important takeaways, in terms of what worked best for me during this period of experimentation and learning.

  • Add TODO tasks

    Even though some agents already do this on their own, explicitly ask the agent to add TODO tasks for the feature you want implemented. This way, you can check at a glance whether the plan looks correct or deviates from what you originally had in mind (see the example prompt after this list).

  • Ask for explanations

    Asking the agent for an explanation helps get the agent and yourself on the same page. The explanation becomes part of the session context, ensuring that the following tasks and prompts are focused and to the point.

  • Start from scratch

    When the agent starts to contradict itself and multiple contradictions enter the context window, the probability of success drops dramatically. When you notice that the agent is going in circles, it’s always better to start from scratch.
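
As an example of the first tip, an explicit TODO-style prompt could look like this (a hypothetical wording, adapt it to your own feature):

Before writing any code, create a TODO list with the steps you plan to take to implement the relationships model, and wait for my confirmation before you start coding.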

Different agents: different approaches

You can find a lot of different agents in the wild today. You name it: Claude, Gemini, Codex, Qwen, Copilot CLI…

We have described the main goal of an agent and its relationship with a model. While iterating over your prompts to implement a task, the agent will need to perform different actions, such as reading or writing files, accessing system configuration, or reading resources on the Internet. There are two important things that differentiate agents at this stage:

  • Which model it uses: depending on the task, one model might be much better suited than the others. Only agents that support that model will be a good choice.
  • Interaction with the system: agents come with different tools to interact with the system, and this can set one agent apart from another. Also, some agents are better at detecting which system they are running on, so they run commands they know will work instead of trying commands that fail and then retrying with variations.

[Illustration: an astronaut looking at tools labeled after agents: claude, codex, gemini, qwen]

This is why a tool that brings running different agents together into a solid, cohesive experience is a great advantage: it allows you to mix and match agents and models in the same place.

To make working with multiple agents easier, we recently introduced Rover, a manager for AI coding agents that allows you to run several of them in parallel. If you are curious about how it looks, you can check it out.

Conclusion

We have seen what models are and what their relationship with agents is. We have also described how choosing one agent over another might be better for certain tasks: agents constrain not only which models you can use, but also which tools are available and how well they detect what they can run on a given system.

Working with agents and LLMs is similar to being effective with search engines: better keywords and better search strategies yield better results. Giving the model a minimum amount of context is as important as not giving it too much, because an excess can confuse it. We have also seen that once the context window holds too much material that contradicts itself, the model gets confused, and at that point it’s usually better to restart the session with a clean context.

We would love to know what you are building with Rover! Let us know!
