From October 2022 to October 2023, I built MindWave out of Pune: a mental-health platform for private sharing, support, and journaling, with group therapy on the roadmap and a free professional community alongside it. The product worked in the small ways it needed to work; it failed in the structural way that matters. The public postmortem is in the mental health platform postmortem. This post is the counterfactual: what MindWave should have been, by design, given what I now know.

Counterfactuals are not regrets. They are the cheapest form of learning available. The version of MindWave below is the one that would have passed the three checks I missed; it is also the one I would recommend a future founder build, if they wanted to rebuild this category carefully.

Why MindWave actually failed

Real users showed up. Real users used the product. The product was gentle and warm. None of that shipped a 10x improvement over the alternatives the user already had. The category is saturated: BetterHelp, Wysa, Calm, Headspace, plus dozens of niche apps. The user's question was not "is this exactly right?" It was "is this enough better than what I already use that I will switch?" The honest answer was no.

That is the structural failure. Not bad design. Not bad engineering. A marginal-improvement product in a saturated category will lose to the incumbents on distribution, even when the product is better in small ways. The framework that emerged from this is written up in three startups, three shutdowns: 10x value, scaling potential, sustainable margins, all three required.

The counterfactual product

If I rebuilt MindWave from a clean sheet under the rules I now hold, the product would look very different. Here is the spec, deliberately compact.

The original MindWave optimised for breadth. The counterfactual optimises for depth in a narrow place. Depth in a narrow place is how you produce a 10x improvement in a saturated category.

A narrower user, deliberately

The single change with the highest payoff is the user definition. "People who want to feel better" is not a user; it is a population. A user is a specific person with a specific problem and a specific timeline. Three example user definitions that would have worked:

- Founders carrying anxiety after a shutdown.
- Junior doctors decompressing after long shifts.
- Postpartum mothers in the first ninety days.

Any one of these is a defensible niche. The trap is treating the niche as a launch lane on the way to "everyone". The niche is the product. The defensibility comes from being undeniably the best for that niche, not from a generic positioning that one day expands.

Measurable improvement, not vibes

The original MindWave measured engagement: sessions, return rate, time spent. None of those metrics tell you whether the user is actually getting better. The counterfactual would measure outcome: a validated questionnaire (PHQ-9, GAD-7, or condition-specific) at intake and at exit, with the delta as the primary success metric.

Engagement vs outcome (mental-health programme, illustrative)

                      Engagement (sessions)   Outcome (PHQ-9 delta)
Original              high                    low
Counterfactual        moderate                high
Industry baseline     high                    low

Source: Aryan Agarwal, MindWave counterfactual sketch, drawing on PHQ-9 / GAD-7 norms.
Engagement was the wrong primary metric. Outcome delta is the metric a mental-health product should defend.
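A concrete way to make the outcome delta the primary metric is to compute it per user from the intake and exit scores and report the cohort average. A minimal sketch with hypothetical scores and field names (PHQ-9, where lower is better, so a positive delta means improvement):

```python
from statistics import mean

# Hypothetical intake/exit PHQ-9 scores per user (lower score = less severe).
scores = {
    "u1": {"intake": 16, "exit": 9},
    "u2": {"intake": 12, "exit": 11},
    "u3": {"intake": 19, "exit": 8},
}

def phq9_delta(record):
    # Positive delta = the score dropped, i.e. the user got better.
    return record["intake"] - record["exit"]

deltas = [phq9_delta(r) for r in scores.values()]
cohort_improvement = mean(deltas)  # the primary success metric
```

The point of the sketch is the shape of the metric, not the numbers: engagement counts sessions, while this counts how far the score moved between intake and exit.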

Outcome measurement also unlocks honest pricing. If you can prove the programme moves the score, you can charge for the programme. If you cannot, you should not.

Pricing that rewards the user getting better

The original MindWave was subscription-priced. Subscriptions reward continued engagement. In mental health, the goal is for the user to need the product less over time. Subscription pricing creates an incentive misalignment that quietly corrodes the product. The counterfactual would price by programme: a single fee for an eight-week intervention, with a guarantee tied to the outcome score delta.

This is similar in spirit to capability-based pricing for AI agents (covered in economics of bootstrapped AI agents): the price aligns to the cost and the value, not to a generic recurring template. It is harder to ship. It is also the only honest model.
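Programme pricing with an outcome guarantee reduces to a small rule: full fee if the score delta clears a threshold, a partial refund otherwise. A minimal sketch with hypothetical fee, threshold, and refund values (none of these numbers come from the original MindWave):

```python
PROGRAMME_FEE = 12_000       # hypothetical flat fee for the eight-week programme
GUARANTEE_THRESHOLD = 5      # hypothetical minimum PHQ-9 improvement to keep the full fee
REFUND_FRACTION = 0.5        # hypothetical refund if the guarantee is missed

def amount_due(intake_score: int, exit_score: int) -> float:
    """Charge the full fee only when the outcome delta clears the guarantee."""
    delta = intake_score - exit_score  # positive = improvement
    if delta >= GUARANTEE_THRESHOLD:
        return float(PROGRAMME_FEE)
    return PROGRAMME_FEE * (1 - REFUND_FRACTION)
```

The design choice to notice: the price is a function of the same outcome score the product defends, so the business only gets paid in full when the user measurably got better.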

The AI question, answered carefully

The 2026 default is to wrap mental health in AI chat. That is the wrong default. Yara AI shut down in November 2025 after the founders concluded AI chat for serious mental health was too dangerous (Fortune, November 2025). That is not an indictment of AI in this space; it is an indictment of putting AI at the front of the care channel without clinical oversight.

The counterfactual would use AI in three places only: triage to the right human, structured journaling prompts, and homework reminders. None of these put AI in front of a user in distress as the primary respondent. Care is human; AI is an assistive surface. The OWASP LLM Top 10 (OWASP, 2025) covers the failure modes that make this constraint non-negotiable.
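To make "triage to the right human" concrete, here is an intentionally crude routing sketch. The categories, keywords, and route names are all hypothetical, and keyword matching alone is nowhere near adequate for production; the structural point is that every path ends at a human, and distress escalates immediately rather than being answered by the model:

```python
# Illustrative triage only; a real system needs clinical review and escalation SLAs.
CRISIS_KEYWORDS = {"self-harm", "suicide", "crisis"}

def triage(message: str) -> str:
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return "escalate:on-call-clinician"   # a human, immediately, no AI reply
    if "sleep" in text:
        return "route:sleep-specialist"       # still a human, just the right one
    return "route:general-therapist"          # default destination is also human
```

Note what is absent: there is no branch where the model itself responds to the user. The AI's entire job here is to pick which human sees the message first.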

This careful framing is the same one that shapes how Gravity treats agent autonomy: capable, bounded, with refusal correctness as a primary metric. Read how we test AI agents for the methodology.

Frequently asked questions

Why did MindWave fail?

It failed the 10x value test. Real users showed up and used it, but the product was gentler and warmer than the alternatives, not measurably better. In a category as crowded as mental health apps, gentle is a feature, not a category-shifter. The structural failure was running a marginal-improvement product in a saturated category.

Who would the ideal MindWave user have been?

A specific narrow segment with a measurable problem the alternatives ignored. Founders carrying anxiety after shutdowns, junior doctors after long shifts, postpartum mothers in the first ninety days. A clear who creates a clear definition of better. The original product tried to serve every adult who wanted to feel better.

Should it have been an AI product from day one?

No. Yara AI shut down in November 2025 after concluding AI chat for serious mental health was too dangerous. AI in this category needs guardrails, escalation pathways, and clinical oversight. The version of MindWave that should have existed is human-supervised first, with AI as an assistive surface, never the primary care channel.

What pricing would have worked?

Outcome-based or programme-based pricing tied to a specific intervention with a measurable end state. Per-month subscription pricing rewards ongoing engagement, which is the wrong incentive in mental health where the goal is for the user to need the product less over time. The pricing should reward the user getting better.

Would you rebuild it now?

Not as a primary bet. The lessons are now general-purpose: 10x value test, narrow user definition, outcome pricing, refusal as a feature. Those lessons feed into Gravity. If a future founder wants to rebuild it from these constraints, the framework is here for them. Mental health needs better products built carefully.

Three takeaways before you close this tab

Sources