Zapier was the right abstraction for the 2010s. Trigger, step, action, condition, branch. A multi-billion-dollar business on the premise that the user could design the path. The premise is no longer right. The intelligence has moved from the user to the system, and the workflow editor is the bottleneck left over from a world where the intelligence had to be the user's.

This is the thesis that came out of Vibe AI. Halfway through that build, the bigger problem stopped being any individual product and started being the abstraction the entire industry inherited from the 2010s. The case against it lives in this post; the framework that generated it lives in three startups, three shutdowns.

The 2010s abstraction

Workflow automation in the 2010s assumed a specific picture of the user: the user knows what they want, knows the steps to get there, knows the APIs of the tools involved, and just needs a low-friction way to wire those steps together. Zapier, IFTTT, Make, n8n, Pipedream, Activepieces, Workato, Tray, Relay: all variations on the same picture.

The picture was right. In 2014, when integrations were the hard problem and the LLM was not yet a user-facing technology, asking the user to design the path was a reasonable trade-off because the alternative was writing code. Workflow tools won by being not-code; the abstraction's value was the gap between "configure" and "code", not between "describe" and "configure".

That gap shrank fast in 2024 and faster in 2025. Foundation models became reliable enough to figure out the steps from a description, and the cost of inference dropped enough to make outcome-based agents commercially viable. The gap that workflow tools were sitting in started closing.

Why the editor is the bottleneck

The workflow editor has a fatal assumption: the user is willing to think like a programmer. Trigger-step-action is a developer mental model imposed on operators. Most operators do not want to think like programmers. They want the result.

When the underlying intelligence got smart enough to figure out the steps on its own, the workflow editor became the only bottleneck. The thing slowing AI adoption was not the model; it was the requirement to design the path. Every "if this then that" rule the user has to write is a moment where the user is doing work the system could be doing for them.

The work moves: from user to system.

Workflow tool:
  - User: design trigger
  - User: design steps
  - User: design conditions
  - User: handle exceptions
  - System: execute exactly what the user designed

Outcome agent:
  - User: describe outcome ("clear my inbox before 9am")
  - System: choose tools
  - System: design steps
  - System: handle conditions
  - System: handle exceptions
  - System: execute

In the workflow tool the user does most of the design; in the outcome agent the system does. The user effort gap is the TAM gap: workflow tools narrow the user, outcome agents widen it.

Outcome description as the new primitive

The new primitive is the outcome sentence. "Clear my inbox before 9am, replying to anything that needs a reply, archiving newsletters, flagging anything that mentions Q2." No trigger to design. No step list to maintain. No exception branches. The user describes what should be true at the end; the agent figures out everything in between.

The shift is real, not cosmetic. With a workflow editor, the user authors the program. With outcome description, the user describes the desired post-condition. Those are different developer-tools concepts (declarative-by-result versus imperative-by-step), and the difference shows up in who has to know what.
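The contrast can be made concrete in a few lines. A toy sketch, not any real product's API: `workflow` stands in for an authored step program, `outcome` for a described post-condition, and `path_design_burden` is an illustrative helper that counts how much of the path the user had to specify.

```python
# Imperative-by-step: the user authors the program. Every trigger,
# step, and branch is the user's responsibility to write and maintain.
workflow = {
    "trigger": "new_email",
    "steps": [
        {"if": "is_newsletter", "then": "archive"},
        {"if": "mentions_q2", "then": "flag"},
        {"if": "needs_reply", "then": "draft_reply"},
    ],
}

# Declarative-by-result: the user states what should be true at the end.
# Choosing tools, ordering steps, and handling exceptions move to the system.
outcome = (
    "Clear my inbox before 9am, replying to anything that needs a reply, "
    "archiving newsletters, flagging anything that mentions Q2."
)

def path_design_burden(spec):
    """Illustrative: how many path-design decisions did the user supply?"""
    if isinstance(spec, dict):
        return len(spec["steps"])  # user designed each step explicitly
    return 0                       # user designed none of the path

print(path_design_burden(workflow))  # → 3
print(path_design_burden(outcome))   # → 0
```

The point of the sketch is only the asymmetry: every entry in `steps` is knowledge the user had to have; the outcome sentence carries none of it.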

Three properties follow from the outcome-description shift:

  1. The vocabulary changes. Users describe tasks in their own language, not in a tool vendor's vocabulary. They do not need to know what a "Zap" or a "node" is.
  2. The maintenance burden flips. When a downstream API changes its shape, the agent adapts; the user does not edit a flow.
  3. The audience expands. Anyone who can describe a recurring task can use the system. The skill ceiling that workflow tools required disappears.
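Property 2 can be illustrated with a toy schema-drift example. The field names and both functions here are hypothetical; in a real agent a model resolves the field by intent, for which a simple key-synonym fallback stands in below.

```python
# A workflow step hard-codes the API's current shape. When the provider
# renames "from" to "sender", the flow breaks and the user must edit it.
def workflow_step(message: dict) -> str:
    return message["from"]  # raises KeyError after the rename

# An agent-style extractor resolves the field by intent rather than by a
# single hard-coded key, so the same call survives the rename unchanged.
def agent_extract_sender(message: dict) -> str:
    for key in ("from", "sender", "from_address"):
        if key in message:
            return message[key]
    raise KeyError("no sender-like field found")

old_shape = {"from": "alice@example.com", "subject": "Q2 numbers"}
new_shape = {"sender": "alice@example.com", "subject": "Q2 numbers"}

print(agent_extract_sender(old_shape))  # works before the drift
print(agent_extract_sender(new_shape))  # still works after the drift
```

The maintenance burden flips exactly here: the drift that forces a user to reopen a flow editor is absorbed by the system instead.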

The TAM expansion

The most important consequence is the TAM expansion. Workflow tools narrowed the addressable market to the subset of operators willing to learn trigger-step-action thinking. That subset is non-trivial (Zapier proved tens of millions of people can learn it), but it's a small fraction of "people who have recurring tasks they would like to automate".

Outcome description does not require the same skill investment. The threshold is "can the user describe the task in a sentence?" That's a much lower bar than "can the user design the steps?", and it expands the TAM by an order of magnitude. The three checks framework calls this kind of expansion the scaling-potential check; outcome agents pass it cleanly, while workflow tools only partially do.

Objections and where they hold

Three objections deserve specific responses.

Objection 1: "Some workflows really do require exact steps." True, and those workflows remain a workflow-tool problem. The argument is not that workflow tools should not exist; it's that most automation does not structurally need a workflow editor. The set of tasks for which step-by-step design is the right abstraction is much smaller than Zapier's TAM suggests. Surgical, regulatory, and irreversible-action workflows belong in step-by-step tools. Inbox triage and lead follow-up do not.

Objection 2: "Agents are not reliable enough." Reliability is the question, not a refutation. The 80-test methodology in how we test AI agents exists because agent failure modes are different from workflow failure modes and need their own coverage. The numbers are not "100% reliable"; they're "reliable enough across the eight failure categories that the failure rate is below the human-error rate for the same task". For a lot of tasks, that bar is now achievable.

Objection 3: "But Zapier still works." Yes. Zapier still works for the user it was designed for: the operator willing to design the path. The argument is not that Zapier is broken; it's that Zapier's TAM has stopped being the limit. The new TAM is anyone who can describe an outcome.

What survives

What survives the shift: explicit-step tools for high-stakes, irreversible, regulated, or audit-heavy automation. What does not survive at the same scale: workflow tools for everyday operator automation. The latter category is the one outcome-description agents are growing into.

For Gravity, the thesis lives in product design. How Gravity works: the user types an outcome sentence, the agent deploys in 60 seconds, and the agent runs the task. No editor. No step list. No condition branches in a UI. The interface is the description box. That's the implementation of the abstraction shift.

If you're building outcome-based automation right now and want to compare notes, or you have a category where step-design is genuinely the right abstraction and want to argue with this thesis, my email is at the top of /contact.

Frequently asked questions

What is the difference between an AI agent and a workflow tool?

A workflow tool asks the user to design the path: trigger, step, action, condition, branch. An AI agent asks the user to describe the outcome and figures out the path on its own. The workflow tool's editor is the product; the agent's editor is the description box. The intelligence has moved from the user to the system.

What is the case against Zapier?

Zapier was the right abstraction for 2010s automation: connect two APIs, run a step on a trigger. Zapier built a multi-billion-dollar business on it. The case against it now is structural: workflow editors require the user to think like a programmer, and the underlying intelligence is good enough that the user no longer should. The editor became the bottleneck the moment AI could choose the path on its own.

Why does describing the outcome win?

Describing the outcome wins because it expands the addressable user. Workflow tools narrow the TAM to people willing to learn trigger-step-action thinking; outcome description opens the same automation to anyone who can describe a task in a sentence. The bottleneck on AI adoption was never the model; it was the requirement to design the path.

What about complex workflows that need exact steps?

Complex workflows that genuinely require exact steps remain a workflow-tool problem. The argument is not that workflow tools should not exist; it is that most automation is not structurally complex. Most automation is "when this happens, do this". The set of tasks for which step-by-step design is the right abstraction is much smaller than Zapier's TAM suggests.

Can an outcome-based agent fail?

Yes, and the failure modes look different. Workflow tools fail at the edge of their predefined paths; agents fail at the edge of their reasoning about which path to take. The 80-test methodology Gravity uses is built around the agent failure modes specifically: input variation, tool failure, partial results, hostile input, rate limits, schema drift, refusal correctness, idempotency.
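A sketch of what tests in two of those categories (tool failure, idempotency) might look like. The `triage_inbox` agent, its return shape, and its stubbed behavior are all hypothetical, not Gravity's actual harness:

```python
# Hypothetical agent entry point, stubbed so the tests below are runnable:
# returns a status plus the list of actions it decided to take.
def triage_inbox(messages, tool_available=True):
    if not tool_available:
        # Tool-failure category: degrade to an explicit partial result
        # instead of crashing mid-run.
        return {"status": "partial", "actions": []}
    actions = []
    for m in messages:
        if m.get("newsletter"):
            actions.append(("archive", m["id"]))
    return {"status": "ok", "actions": actions}

msgs = [{"id": 1, "newsletter": True}, {"id": 2, "newsletter": False}]

# Idempotency: running the agent twice over the same inbox must not
# produce a different (e.g. duplicated) set of side effects.
first = triage_inbox(msgs)
second = triage_inbox(msgs)
assert first["actions"] == second["actions"]

# Tool failure: the agent reports a partial result rather than raising.
degraded = triage_inbox(msgs, tool_available=False)
assert degraded["status"] == "partial"
```

The shape of the assertion is the point: workflow tests check that a predefined path executed; agent tests check properties of whatever path the agent chose.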
