The 10x rule from Zero to One applies at the product level: the whole product needs to be 10x better than the closest alternative for users to switch (Volt Equity, Thiel 10x rule). It is well-known and widely cited. What is less well-known is the operational version inside a product, at the feature level. Inside a product the user has already chosen, every feature still has to clear a bar; that bar is roughly 3x.
The rule below is what I now use to decide which features ship and which do not. It is the operational sharp end of the 10x rule, scaled down to a quantitative threshold a single founder can apply weekly without overthinking.
The rule, stated plainly
Every feature must be at least 3x better than the closest alternative on a single measurable axis the user cares about. Anything less is incremental and gets perceived as marginal. Marginal features cost work to ship, add maintenance cost, and do not move user behaviour. They do not move retention either. They do not move revenue. They mostly add a line on a marketing page nobody reads.
The rule has three preconditions. First, you have to pick a single axis; "better in many ways" defaults to "not measurably better in any". Second, the axis must be quantifiable; "feels nicer" is a vibe, not a ratio. Third, the comparison must be against the actual alternative the user has, not against a strawman.
Why 1.3x always loses
Switching cost is the answer. Every alternative the user already has comes with embedded muscle memory, integrations, configured preferences, and accumulated content. The cognitive cost of switching to a new tool is non-zero even when the tool is free. A 1.3x improvement on a single axis is not enough to compensate for the switching cost; the user looks at the new feature, agrees that it is a small improvement, and continues using the existing alternative.
3x is the rough empirical threshold where the improvement starts to overwhelm switching cost for a substantial fraction of users. Below 3x, only the most novelty-curious users move; above 3x, mainstream users move. The exact number varies by category. The principle is consistent: there is a discontinuous threshold somewhere between "incremental" and "switch-worthy", and it is not 1.3x.
How to measure the 3x
Pick one axis. Time saved, errors avoided, cost reduced, output increased. Numbers, not words. Then run the comparison.
- Time saved: how long does the alternative take? How long does the new feature take? Ratio.
- Errors avoided: what is the error rate of the alternative? What is the error rate of the new feature? Ratio.
- Cost reduced: what does the alternative cost the user (money, energy, time)? What does the new feature cost? Ratio.
- Output increased: how much output does the alternative produce per unit of input? The new feature? Ratio.
If the ratio is below 3x, the feature does not ship. If above, the ratio becomes part of the marketing. "They take 3 hours; we take 10 minutes" is an 18x marketing line; "10% faster" is not.
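The gate fits in a dozen lines of executable form if you want one. A minimal sketch in Python; the `Comparison` name and the lower-is-better convention are mine for illustration, not from any product codebase:

```python
from dataclasses import dataclass

@dataclass
class Comparison:
    """One feature measured against its closest alternative on a single axis."""
    axis: str            # e.g. "time saved", "errors avoided", "cost reduced"
    alternative: float   # the alternative's measurement (minutes, error rate, dollars)
    new_feature: float   # the new feature's measurement, in the same unit

    def ratio(self) -> float:
        # For lower-is-better axes (time, errors, cost): alternative / new.
        # For an output-increased axis, invert the division.
        return self.alternative / self.new_feature

    def ships(self, bar: float = 3.0) -> bool:
        return self.ratio() >= bar

# "They take 3 hours; we take 10 minutes": 180 / 10 = 18x. Ships.
print(Comparison("time saved", 180, 10).ships())   # True

# "10% faster": 10 / 9 is roughly 1.1x. Does not ship.
print(Comparison("time saved", 10, 9).ships())     # False
```

The point of writing it down as code is the forced discipline: you cannot construct a `Comparison` without naming the axis and supplying both numbers in the same unit.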
The 3x rule as a refusal filter
The hidden value of the 3x rule is what it lets you refuse. Most feature requests cannot articulate a 3x improvement on a measurable axis; that means most feature requests get refused. Refusal is a feature when the underlying economics are tight (an argument I make in what Vibe AI taught me about product).
The mechanic is simple. A user requests a feature. The founder asks, "what is the closest alternative the user has, and what is the measurable axis of improvement?" If the answer cannot reach 3x, the request gets logged in the refusal column with the date and the reason. The user gets a polite, honest reply explaining why. The product stays compact; the maintenance cost stays bounded; the founder's time stays focused on the next 3x feature.
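A hypothetical sketch of that triage step, again in Python; the function name, log shape, and example request are invented for illustration:

```python
from datetime import date

REFUSAL_LOG: list[dict] = []  # stand-in for the refusal column

def triage(request: str, alternative: float, new_feature: float,
           axis: str = "time saved", bar: float = 3.0) -> bool:
    """Ship if the improvement ratio clears the bar; otherwise log a dated refusal."""
    ratio = alternative / new_feature  # lower-is-better axes; invert for output axes
    if ratio >= bar:
        return True
    REFUSAL_LOG.append({
        "date": date.today().isoformat(),
        "request": request,
        "axis": axis,
        "reason": f"only {ratio:.1f}x vs closest alternative",
    })
    return False

# A 10-minute task improved to 8 minutes is 1.25x: refused and logged.
print(triage("bulk export presets", alternative=10, new_feature=8))  # False
```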
Worked examples from three failures
Three concrete examples from past products to make the rule tangible.
MindWave at the product level. The product offered "gentler than the alternatives" against BetterHelp, Wysa, Calm. Gentle is a vibe, not a measurable axis. On any quantifiable axis (response time, evidence-based outcome measures, clinician-hours-per-user) MindWave was not 3x better. The product failed the rule at the product level. The full reasoning is in the mental health platform I wish I built differently.
Super AI at the feature level. The router selected better models for tasks, allegedly producing higher-quality output. Measured against using GPT-4 directly, the quality improvement was perhaps 1.2x; the latency was 1.5x worse. The feature did not pass the 3x bar on any single axis. It shipped anyway. The deeper postmortem is at the mistakes I made with Super AI.
Vibe AI at the capability level. Several capabilities offered marginal improvements over a generic chat product on emotional reflection depth. The improvements were real and small. They added compute cost in the wrong direction. The capabilities did not pass the 3x bar; they shipped on enthusiasm rather than measurement. The full structural picture is at Vibe AI postmortem.
How Gravity uses the 3x rule
Every capability in Gravity is measured against the closest alternative on time-saved-per-task. The benchmark is a human running the same task, manually, end-to-end. Capabilities that take a 30-minute human task and complete it in 60 seconds pass at 30x. Capabilities that improve 3 minutes to 1 minute pass at 3x. Capabilities that improve 10 minutes to 8 minutes do not pass and do not ship.
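The same arithmetic with the unit conversion made explicit; a sketch, not Gravity's actual tooling (`time_ratio` is a name I made up):

```python
def time_ratio(alternative_minutes: float, feature_seconds: float) -> float:
    """Time-saved ratio with both sides converted to seconds."""
    return (alternative_minutes * 60) / feature_seconds

print(time_ratio(30, 60))      # 30.0  -> passes at 30x
print(time_ratio(3, 60))       # 3.0   -> passes exactly at the bar
print(time_ratio(10, 8 * 60))  # 1.25  -> does not pass, does not ship
```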
The single axis is deliberate: time saved per outcome. Other axes (cost reduced, errors avoided) are tracked, but the gating bar is time. This keeps the rule simple enough for a single founder to apply weekly without ambiguity. Read more on the outcome framing in describe outcome, not workflow.
Frequently asked questions
What is the 3x rule for features?
Every feature must be at least 3x better than the closest alternative on a measurable axis the user cares about. Anything less is incremental and gets perceived as marginal. The rule is the operational version of Peter Thiel's 10x rule applied at the feature level inside an existing product, where the bar is lower but still strict.
Why 3x and not 10x?
The 10x rule applies at the product or category level: the whole product needs to be 10x better than the alternative the user is currently using. The 3x rule applies at the feature level inside a product the user already has reason to use. 3x is the threshold below which the user perceives the feature as not different enough to change behaviour.
How do you measure the 3x?
Pick one measurable axis: time saved, errors avoided, cost reduced, or output increased. Compare the new feature to the closest alternative on that axis with real numbers. If the ratio is below 3x, the feature does not ship. If it is above 3x, the feature ships and the ratio becomes part of the marketing.
What did 1.3x features cost in your previous companies?
MindWave was a 1.3x product against a saturated alternative set; that is why it failed the 10x value test. Super AI shipped many 1.3x features inside the all-in-one wrapper; they added complexity without changing user behaviour. Vibe AI had 1.3x features hidden in compute-heavy capabilities. The pattern is consistent: 1.3x costs work to ship and adds liabilities.
How does the 3x rule affect what gets refused?
Most feature requests do not pass the 3x bar; therefore most feature requests get refused. The rule is a refusal filter as much as a build filter. Founders who say yes to 1.3x features dilute the product and add maintenance cost. Refusal is a feature when the underlying improvement is not 3x.
Three takeaways before you close this tab
- Pick one axis. "Better in many ways" defaults to "not measurably better in any".
- Below 3x, switching is flat. Above 3x, mainstream users move.
- The rule is a refusal filter. Most requests do not pass; that is the point.
Sources
- Volt Equity, "Peter Thiel on Identifying Disruptive Companies (10x rule)", retrieved 2026-05-07, voltequity.com
- CB Insights, "Why Startups Fail: Top 9 Reasons", 2026 analysis, retrieved 2026-05-07, cbinsights.com
- TechStartups, "Top AI Startups That Shut Down in 2025: What Founders Can Learn", December 2025, retrieved 2026-05-07, techstartups.com