Running the Agent and Reviewing Its Output
You have chosen a platform, connected your tools and written your instructions. Now it is time to actually run a task. This chapter covers what happens when you give the agent something to do, what good output looks like, and how to develop a reliable habit of reviewing results before acting on them.
Giving the Agent a Task
The way you phrase a task has a direct impact on the quality of the result. Agents respond well to requests that are specific about the goal, the format you want, and any constraints that matter.
A vague request produces a vague result. A specific request produces something you can actually use.
Prompt – the message or instruction you give to an agent to start a task. A well-written prompt includes the goal, relevant context, and the format you want the output in.
Compare these two versions of the same request:
Vague: Summarize these emails.
Specific: Read these five customer emails and produce a bullet-point summary of the main complaints. Group similar issues together and note how many emails mention each one.
The second version gives the agent a clear goal, a specific format and a useful instruction about how to organize the output. The result will be significantly more useful.
What to Look for When Reviewing Output
Every agent output deserves at least a quick review before you use it. This does not mean reading every word with a red pen – it means knowing what to check for.
The three most common issues are:
- Missing information – the agent summarized or paraphrased but left out something important;
- Confident errors – the agent stated something incorrect as if it were fact, without flagging uncertainty;
- Tone mismatch – the draft it produced does not match the voice or register you actually use.
Agents do not know what they do not know. If a piece of information was not in the material you gave them, they may fill the gap with something plausible-sounding but wrong. Always verify factual claims that matter.
Iterating on the Output
Reviewing output is not the end of the process – it is the middle. When something is not quite right, the fastest fix is to tell the agent specifically what to change rather than starting over.
Instead of rewriting your prompt from scratch, try:
- "The tone is too formal – rewrite the second paragraph to sound more conversational."
- "You missed the point about delivery timelines – add that to the summary."
- "This is too long – cut it to five bullet points maximum."
Each of these takes ten seconds to write and gets you to a usable result in one more exchange.
How many iterations is too many?
If you find yourself going back and forth with the agent more than three or four times on the same output, it usually means one of two things: either the original prompt was missing important context, or the task is genuinely too nuanced for the agent to handle well on its own.
In the first case, stop and rewrite the prompt with more detail. In the second case, consider taking over the task yourself and using the agent's output only as a starting point rather than a near-final draft.