The Monday morning KPI summary is the report that should be automated and almost never is. The data exists. The query exists. The template exists. What is missing is the automation, so somebody spends half an hour every Monday pulling, pasting, and writing the same paragraph that says "revenue is up 8%, signups are flat, we'll dig in this week".

An agent does that. The shape: pull from your stack, compare against last week and last month, deliver a one-page summary with three callouts. The trick is keeping it boring. Boring weekly reports get read. Clever weekly reports get ignored after week three.

What this agent does

Every Monday morning, the agent runs. It calls your data sources, computes each metric for last week, compares against the prior week and the prior month, and writes a one-page summary. The first paragraph is three callouts: the biggest delta up, the biggest delta down, and one anomaly. Then the table of metrics. Then a closing paragraph: any metric that is missing because the source was unavailable, plus the caveats.

The agent does not produce slides, executive summaries, or strategy. It produces the same one-pager every Monday. Consistency is the value.
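A minimal sketch of that run in Python. The fetcher names, the row shape, and the comparison windows are assumptions for illustration, not a prescribed implementation:

```python
# Sketch of the Monday run. Each KPI maps to one fetcher (a SQL query or API
# call behind a function); the names here are placeholders for your own stack.
from datetime import date, timedelta
from typing import Callable, Optional

Fetcher = Callable[[date, date], float]

FETCHERS: dict[str, Fetcher] = {
    # "revenue": fetch_revenue_from_stripe,
    # "signups": fetch_signups_from_db,
}

def fetch_or_missing(fetch: Fetcher, start: date, end: date) -> Optional[float]:
    try:
        return fetch(start, end)
    except Exception:
        return None  # a failed source becomes a missing value, never a guess

def weekly_rows(today: date) -> list[dict]:
    week = timedelta(days=7)
    rows = []
    for name, fetch in FETCHERS.items():
        rows.append({
            "metric": name,
            "last_week": fetch_or_missing(fetch, today - week, today),
            "prior_week": fetch_or_missing(fetch, today - 2 * week, today - week),
            "prior_month": fetch_or_missing(fetch, today - 5 * week, today - 4 * week),
        })
    return rows  # handed to the model to write the callouts, table, and caveats
```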

For the read-only-first-agent framing, see how to set up your first AI agent.

Which KPIs belong on it

Five to nine numbers. The discipline is harder than the technology. Pick the small set that, if all moved the wrong way, would tell you the business is in trouble. Resist the urge to add a tenth. Resist again at week six.

A starting list for a B2B SaaS:

  1. Revenue (MRR or weekly billed revenue)
  2. New signups
  3. Active users
  4. Conversion rate
  5. Retention (or churn)
  6. NPS or CSAT
  7. Support ticket volume

Tune for your model. A consumer app prioritises retention curves; an agency prioritises billable hours; a marketplace prioritises both sides separately. The list is the conversation; the report is the artifact.

Data sources and tool access

Each KPI gets a tool. The agent calls the tool, receives the number, computes the comparison. No memory, no recall, no estimation. If the tool fails, the metric appears as missing in the output, not as a guess.

Common sources:

  1. Stripe (or your billing provider) for revenue
  2. Your product database for signups and active users
  3. Google Analytics for traffic and conversion
  4. Your CRM for sales-side numbers
  5. Your support tool for ticket volume and CSAT

Define the SQL queries or API calls upfront and store them in the agent's tool definitions, not in the prompt. The agent calls each as a function. This separation matters: the prompt evolves; the queries stay stable. Updating a query (because the schema changed) does not require touching the prompt.
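One way to keep that separation, sketched in Python. The table names, column names, and the tool-schema shape (a generic function-calling layout) are assumptions, not tied to any particular framework:

```python
# Queries live in the tool definitions; the prompt only says which tools to call
# and how to write the summary. Table and column names are placeholders.
QUERIES = {
    "weekly_revenue": """
        SELECT SUM(amount) FROM payments
        WHERE paid_at >= %(start)s AND paid_at < %(end)s
    """,
    "weekly_signups": """
        SELECT COUNT(*) FROM users
        WHERE created_at >= %(start)s AND created_at < %(end)s
    """,
}

TOOLS = [
    {
        "name": name,
        "description": f"Returns {name.replace('_', ' ')} for a date range.",
        "parameters": {
            "type": "object",
            "properties": {
                "start": {"type": "string", "format": "date"},
                "end": {"type": "string", "format": "date"},
            },
            "required": ["start", "end"],
        },
    }
    for name in QUERIES
]
```

When the schema changes, you edit the entry in QUERIES; the prompt and the tool names stay put.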

For more on the prompt-vs-tool separation, see AI agent tool use explained.

The one-page format

One screen. The constraint forces the discipline. The structure that works:

  1. Three callouts at the top. One sentence each. Biggest mover up, biggest mover down, biggest anomaly.
  2. Metric table. Name, value last week, delta vs prior week, delta vs prior month, sparkline if your delivery target supports it.
  3. Caveats. Any metric that is missing or partial. Any data source that returned an error. Any known instrumentation change in the past two weeks.

Deliver to a Slack channel or as an email. Avoid attaching it as a PDF; nobody opens PDFs. Inline content gets read.

Example layout: "Up: Revenue +8% vs last week. Down: Signups -12% vs last week. Anomaly: Tickets +3x Tuesday spike." Then the metric table (5-9 rows: name · value · vs prior week · vs prior month · sparkline), with every number traced to a tool call and missing values shown as —. Then the caveats: missing data, instrumentation changes, source errors.
One-page format: callouts at top, metric table in the middle, caveats at the bottom.
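A sketch of the rendering and delivery step, in Python. It assumes the row shape from the run sketch above and a Slack incoming-webhook URL in SLACK_WEBHOOK_URL; the formatting helpers are illustrative:

```python
# Renders the one-pager as plain text and posts it inline to a Slack channel.
# SLACK_WEBHOOK_URL is assumed to be an incoming webhook you have configured.
import os
import requests

def fmt(value) -> str:
    return "—" if value is None else f"{value:,.0f}"  # missing is a dash, never zero

def render(callouts: list[str], rows: list[dict], caveats: list[str]) -> str:
    lines = [f"• {c}" for c in callouts]           # three callouts at the top
    lines.append("")
    for r in rows:                                 # the metric table
        lines.append(
            f"{r['metric']}: {fmt(r['last_week'])} "
            f"(prior week {fmt(r['prior_week'])}, prior month {fmt(r['prior_month'])})"
        )
    if caveats:                                    # caveats at the bottom
        lines.append("")
        lines.extend(f"Caveat: {c}" for c in caveats)
    return "\n".join(lines)

def deliver(text: str) -> None:
    requests.post(os.environ["SLACK_WEBHOOK_URL"], json={"text": text}, timeout=10)
```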

Preventing hallucinated numbers

The single biggest risk with a KPI agent is a number that looks right and is wrong. Three rules:

  1. Every figure traces to a tool call. No figure comes from the model's memory. The prompt must explicitly forbid this.
  2. Missing is not zero. If a tool failed, the metric is shown as missing or with an error annotation. Never as zero, never as "approximately last week's".
  3. Per-run audit. Each report includes a hidden footer with the SQL query or API call ID for every figure, retained for ninety days.

The audit footer is non-negotiable. Anybody who questions a number should be able to retrieve the exact query that produced it. Without the trace, the report is an opinion. With it, it is a fact.
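A sketch of what that trace can look like, assuming the audit records are stored alongside the archived report; the record fields are illustrative:

```python
# Each figure carries a trace: which query or API call produced it, with what
# parameters, at what time. Stored with the archived report for ninety days.
import json
from datetime import datetime, timezone

def audit_record(metric: str, query_name: str, params: dict, value) -> dict:
    return {
        "metric": metric,
        "query": query_name,   # key into QUERIES, or an API call ID
        "params": params,      # the exact date range that was queried
        "value": value,        # None if the source failed
        "fetched_at": datetime.now(timezone.utc).isoformat(),
    }

def audit_footer(records: list[dict]) -> str:
    # Appended to the stored copy of the report, one JSON line per figure.
    return "\n".join(json.dumps(r, default=str) for r in records)
```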

For the broader pattern around verification, see how to test an AI agent before you deploy it.

30-day reality check

After a month, expect three recurring problems:

The "we used to track that" problem. A metric you set up week one stops being interesting by week three. Drop it. The list of nine became seven; that is fine.

The instrumentation drift. A backend deploy changed a column's meaning and the agent did not notice. Add a sanity check: if a metric moves more than 50% week-over-week without a known cause, the agent should flag it as suspicious rather than report it confidently.
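A sketch of that check, reusing the row shape from the run sketch above; the 50% threshold comes from the paragraph, the rest is an assumption:

```python
# Flags any metric that moved more than 50% week-over-week so the report can
# mark it as suspicious instead of stating it confidently.
def suspicious_moves(rows: list[dict], threshold: float = 0.5) -> list[str]:
    flags = []
    for r in rows:
        current, previous = r.get("last_week"), r.get("prior_week")
        if current is None or previous in (None, 0):
            continue  # missing data is already handled as a caveat
        change = (current - previous) / abs(previous)
        if abs(change) > threshold:
            flags.append(
                f"{r['metric']} moved {change:+.0%} week-over-week; verify before trusting"
            )
    return flags
```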

The "what about" creep. Each Monday's standup produces a "what about X" question. Resist adding X to the report. Add X to a separate one-time query the agent runs on demand. Permanent additions accumulate; the report becomes a dashboard nobody reads.

When to override the format

The one-page Monday report is the default. There are two times to break it:

Major launches. The week of a product launch deserves an event-specific report: launch metrics, comparison to projections, qualitative signal from support and sales channels. Run it as a separate agent for the launch window (one week before, two weeks after) rather than overloading the regular Monday report.

Anomaly weeks. If something unusual happens (an outage, a viral mention, a competitor announcement), add a paragraph at the top of the regular report and link to a deeper one-time analysis. Do not redesign the Monday report. The regular report's value is consistency; consistency is what makes the deviation legible.

The rule of thumb: anything that shows up only once does not belong in the weekly format. Add a one-time agent run for it. Put the link in the regular Monday header so readers can find it.

Common mistakes

Frequently asked questions

What does an AI agent for weekly KPI reports do?

It pulls a defined list of numbers from your stack (Stripe, Google Analytics, your database, your CRM), compares each to last week and last month, and emits a one-page summary with three callouts. The agent does not interpret strategy. It surfaces what changed, what the delta means in plain language, and what the top three movements were.

How many KPIs should the agent report on?

Five to nine for a one-page report. More than that becomes a dashboard nobody reads. The right list is the small set of numbers that, if all moved the wrong way, would tell you the business is in trouble. Start with the obvious ones (revenue, active users, retention, conversion, NPS or CSAT), then prune.

Should the AI agent interpret the KPI numbers?

Light interpretation only. The agent should describe what changed (revenue up 8%, signups down 12%) and offer one neutral hypothesis if a strong correlation appears (signups dipped after the pricing-page change on Tuesday). It should not recommend strategy. Strategy is a human conversation; the agent's job is to put the right numbers in front of that conversation.

How do I prevent an AI KPI agent from hallucinating numbers?

Give the agent tool access to the data source rather than asking it to recall numbers from prior runs. Every figure in the report should trace to a SQL query, an API call, or a verified export. The prompt should explicitly forbid figures that come 'from memory' and instruct the agent to mark a metric as missing rather than guess.

Can I use an AI agent to replace my analytics dashboard?

Not replace, complement. Dashboards are good for live exploration. Weekly reports are good for the question 'what happened last week and why does it matter'. The agent reads from the same sources the dashboard does and produces a narrative that the dashboard does not. Both have a job; neither is a substitute for the other.

Three takeaways before you close this tab

Sources