Vibe AI was an AI friend product. From mid-2025 to early 2026, real users showed up, used it daily, and told me, in their own words, that they liked it. It also lost money on every active user every month. The full structural postmortem is at Vibe AI postmortem. This post is the product layer underneath that postmortem: the five rules I now hold as non-negotiable, every one of them learned the expensive way.
If you are mid-build on a consumer AI product right now and your retention curves are flattering, you are exactly the audience for this post. Flattering retention on a money-losing cohort is a worse signal than poor retention on a profitable one.
Rule 1: engagement is not viability
The most flattering metric Vibe AI ever produced was the DAU/WAU ratio: daily active users divided by weekly active users. The ratio was high. Session length was high. Return rate was high. The cost-per-active-user was also high, and the price was lower than the cost. Engagement was real. Viability was not.
The mistake is treating engagement as the proxy for "product is working". Engagement only proves the product is being used. Whether the product is being used profitably is a separate question with a separate answer. Founders who measure engagement without cost are measuring half the equation.
The Gravity rule that came out of this: every dashboard that shows engagement also shows cost-per-active-agent in the same view. The two metrics live next to each other. You cannot look at one without the other.
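As a sketch of what "same view" means in practice, here is a minimal version of that dashboard row. The `MonthlyMetrics` fields and all the numbers are invented for illustration, not Vibe AI's real figures:

```python
from dataclasses import dataclass

@dataclass
class MonthlyMetrics:
    dau: int             # daily actives (monthly average)
    wau: int             # weekly actives (monthly average)
    active_users: int    # users active at least once this month
    compute_cost: float  # total inference/serving cost for the month
    revenue: float       # total revenue for the month

def dashboard_row(m: MonthlyMetrics) -> dict:
    """Engagement and cost metrics in one row, never separately."""
    return {
        "dau_wau": round(m.dau / m.wau, 2),
        "cost_per_active_user": round(m.compute_cost / m.active_users, 2),
        "revenue_per_active_user": round(m.revenue / m.active_users, 2),
        "margin_per_active_user": round(
            (m.revenue - m.compute_cost) / m.active_users, 2
        ),
    }

row = dashboard_row(MonthlyMetrics(
    dau=4200, wau=6000, active_users=7000,
    compute_cost=21000.0, revenue=14000.0,
))
# dau_wau = 0.7 looks like a win; margin_per_active_user = -1.0 is the signal
```

The point of the single return value is structural: there is no way to render the engagement number without the margin number riding along.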
Rule 2: retention curves lie if you ignore cost
Vibe AI's week-four retention was strong. The cohort that stuck was small. The cohort that stuck was also the most expensive cohort to serve, because the people who stayed were the ones who used the product most heavily. Strong retention on an expensive cohort is a structural loss, not a structural win.
Retention curves are aggregate. The cost curve underneath them is per-user. The two curves can move in opposite directions for months before the founder notices. The Vibe AI lesson is to plot retention and cost-per-user on the same axis, by cohort, every month. If the retained cohort is also the expensive cohort, the curve is a warning, not a celebration.
Rule 3: scope discipline beats taste
I have decent product taste. So do most founders who have been doing this for a few years. Taste tells you what would be nice to add. Scope discipline tells you what to refuse. In Vibe AI, my taste regularly overrode my scope discipline; in Gravity, the order is reversed.
The version of this rule I now use: every feature request gets matched against the next-six-weeks roadmap. If the request does not intersect the roadmap, it gets refused with a calendar reason, not a quality reason. "We are not building this in the next six weeks because we are building X" is a defensible refusal. "It is not on brand" is not.
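The rule is mechanical enough to write down as code. The roadmap themes and dates below are hypothetical placeholders:

```python
from datetime import date, timedelta

# Hypothetical six-week roadmap: theme -> ship-by date.
ROADMAP = {
    "capability pricing": date(2026, 6, 15),
    "cost dashboard": date(2026, 6, 30),
}

def triage(request_theme: str, today: date) -> str:
    """Return a calendar reason, never a quality or taste reason."""
    horizon = today + timedelta(weeks=6)
    ship_by = ROADMAP.get(request_theme)
    if ship_by is not None and ship_by <= horizon:
        return f"yes: on the roadmap, ships by {ship_by.isoformat()}"
    themes = ", ".join(ROADMAP)
    return (f"no: not building this before {horizon.isoformat()} "
            f"because we are building {themes}")
```

So `triage("dark mode", date(2026, 5, 20))` yields a dated refusal that names what is being built instead, which is the defensible form of no.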
This is a constant theme across all three shutdowns. The checks themselves are documented in the three checks I missed; scope refusal is a tool for protecting all three.
Rule 4: capability pricing or nothing
Vibe AI was priced as a flat subscription. Compute cost scaled with conversation length, emotional context retrieval depth, and memory hydration frequency. Price did not scale with any of those things. The structural mismatch was visible in month one and unfixable by month four because by then the price was on the live site.
Capability-based pricing aligned to cost-of-inference would have made the unit economics legible from week one. Each capability has its own cost shape; the price for that capability has to follow the same shape. Flat-rate pricing on a usage-priced underlying is a pricing model that hides the loss until the loss is too big to absorb.
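"Price follows the same shape as cost" can be made concrete. A sketch, with invented capability names, unit costs, and margin target:

```python
# Price each capability as a fixed gross margin over its own inference
# cost, so the bill scales with the same variables the cost scales with.
INFERENCE_COST = {              # cost per unit of usage, in dollars
    "chat_turn": 0.004,         # per conversation turn
    "memory_hydration": 0.010,  # per memory retrieval
}
TARGET_MARGIN = 0.40            # 40% gross margin target

def capability_price(capability: str) -> float:
    """Unit price that preserves the margin however usage grows."""
    cost = INFERENCE_COST[capability]
    return round(cost / (1 - TARGET_MARGIN), 4)

def invoice(usage: dict) -> float:
    """Bill a month of usage: heavy users pay more, margin stays fixed."""
    return round(sum(capability_price(c) * n for c, n in usage.items()), 2)
```

Under this scheme a heavy user is a bigger invoice, not a bigger loss; under a flat subscription the same usage growth only widens the gap between cost and price.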
Gravity's pricing is capability-based from day one. The economics are detailed in economics of bootstrapped AI agents.
Rule 5: refusal is a feature
The fifth rule is the hardest. In Vibe AI, every feature request that came from a vocal user got listened to. Listening is good. Listening is not the same as building. Building every requested feature dilutes the product, increases the cost-per-active-user, and makes the kill threshold harder to hit in time.
Refusal is a product feature when the underlying economics are tight. The act of saying no, in writing, with a calendar reason, protects the product from feature creep that would otherwise compound monthly. The Vibe AI roadmap carried roughly 30 user-requested features that I tried to build; 25 of them did not move the unit-economics needle. In Gravity, the same list would get one yes and 29 documented refusals.
How these rules shape Gravity
Five rules in, five visible product behaviours out:
- Engagement and cost live on the same dashboard. No metric stands alone.
- Cohort retention plotted against cohort cost monthly. Both directions, both axes, every month.
- Refused features logged publicly with calendar reasons. Build-in-public extends to refusals.
- Capability-based pricing from the first paying agent. Each agent has its own cost-of-inference price line.
- Six-week roadmap as the refusal filter. Anything outside it gets a polite no.
The framework that ties all of this together is in three startups, three shutdowns. The bootstrap context that makes these rules cheap to enforce is in bootstrapping an AI agent platform in 2026.
Frequently asked questions
What is the most important product lesson from Vibe AI?
Engagement is not viability. Vibe AI users loved the product. The product also lost money on every active user. A loved product with negative unit economics is a charity, not a business. The lesson is to set a cost-per-active-user kill threshold and check it monthly, separate from any engagement metric.
Why did the retention curve mislead you?
Retention was strong in week one and stronger by week four for the cohort that stuck. The cohort that stuck was small and expensive. Strong retention on a money-losing cohort is a structural loss, not a structural win. The curve looked like product-market fit; the cost curve underneath was not survivable.
How do you decide what to refuse in a product?
Anything that does not move the cost-per-active-user toward sustainable margin gets refused. Features that increase engagement without changing margin make the loss bigger, not smaller. Refusal is a product feature when the underlying economics are tight. The list of refused requests is now part of the Gravity product log.
Did flat-rate pricing kill Vibe AI?
Flat-rate pricing was the wrong instrument for compute-heavy conversational behaviour. The cost curve scaled with conversation length and emotional context retrieval. The price did not. Capability-based pricing tied to cost-of-inference would have made the unit economics legible from week one. Flat-rate hid the loss.
What would you build differently if you started Vibe AI today?
Capability-based pricing from day one, a kill threshold on cost-per-active-user checked monthly, ruthless scope refusal in the first six months, and a product framing built around an outcome that the user can describe in one sentence. The same five rules now shape Gravity from the first commit.
Three takeaways before you close this tab
- Loved and money-losing is the most dangerous combination. It feels like progress and ends like a shutdown.
- Cohort retention without cohort cost is half the picture. Plot both, every month, by cohort.
- Refusal protects the product more than any feature ever will. Every yes outside the roadmap is a tax on every user.