Introduction: Why the Plateau Is a Signal, Not a Wall
If you have been running product growth for a few years, you know the feeling. After a strong launch and a steady climb in user adoption, the numbers flatten. Daily active users stop moving. Conversion rates settle into a tight band. The team tries campaigns, feature tweaks, and pricing experiments, but the line stays stubbornly horizontal. The typical advice is to push harder—more optimization, more channels, more content. But that advice misses a crucial point: a plateau is not just a barrier to overcome; it is a dataset. Experienced practitioners at Sixpack have learned to read plateaus as compressed histories of what is about to happen next. The key is identifying traction milestones—specific, measurable events that, when reached, have historically preceded a new S-curve. This guide explains how to spot those milestones, how to distinguish them from mere noise, and how to use them to predict—not just track—the next growth inflection. We focus on methods that work for products that have already crossed the early adoption chasm and are facing the more complex dynamics of sustained scaling.
The Physics of Plateaus: Understanding the Underlying Mechanisms
Before we talk about prediction, we need to understand why plateaus form in the first place. A plateau is not a failure of effort; it is a structural shift in how users interact with your product. In the early stages, growth is often driven by novelty, founder-led sales, or a narrow use case. Users arrive with high intent but low expectations. As the product matures, the user base broadens, and the average user has less intrinsic motivation. The low-hanging fruit is gone. The plateau represents a period where the product is delivering value to its existing core, but that core is not expanding. The physics here is about energy transfer: the energy you put into acquisition and activation begins to dissipate because the system (your market, your product, your team) has reached a temporary equilibrium. The question is not how to break the equilibrium with a bigger hammer, but how to detect the hidden tensions that will, once released, create a new growth curve. This is where traction milestones come in. They are the early indicators that the equilibrium is about to shift, often before any visible change in top-line metrics.
Why Plateaus Are Predictive, Not Just Descriptive
Most teams treat plateaus as descriptive—they tell you where you are. But if you look closely at the internal dynamics during a plateau, you can see patterns that forecast what comes next. For example, a plateau that follows a period of rapid feature adoption often masks a shift from early adopters to the early majority. The early adopters drive initial velocity, but the early majority requires a different value proposition. The plateau is the time when that new value proposition is being validated or rejected. If you track the right milestones—like repeat usage from a specific demographic, or a decrease in support tickets for a particular workflow—you can see the early majority forming before it shows up in aggregate numbers. This is not theoretical; it is observable in cohort data and engagement curves. The challenge is knowing which metrics to watch and how to interpret them without confirmation bias. We will address that next.
How Sixpack Veterans Approach the Problem Differently
The difference between a novice and a veteran in this space is not the tools they use; it is the questions they ask. A novice asks: "How do we increase signups?" A veteran asks: "What has historically happened right before a new growth curve began, and is that pattern forming now?" This retrospective-forward mindset is the core of plateau physics. Veterans maintain a personal library of past plateau patterns, not as formal case studies, but as mental models. They know that every plateau is unique but that certain classes of plateaus—those driven by market saturation, those driven by product maturity, those driven by channel exhaustion—each have their own leading indicators. By classifying the plateau, they can predict which traction milestone is most likely to appear next. This guide will help you build that classification system for your own product.
Identifying Traction Milestones: What to Look For and Why
A traction milestone is a specific, measurable event that, when reached, has historically correlated with a subsequent acceleration in growth. It is not a vanity metric like total registered users; it is a leading indicator that captures a change in user behavior, market conditions, or product-market fit. For example, a traction milestone might be: "When weekly active users from a specific industry segment exceed 500, the overall conversion rate begins to climb within the next 30 days." The key is that the milestone precedes the S-curve inflection, often by weeks or months. To identify these milestones, you need to look backward at your own data or, if you are early in your journey, at analogous products in your space. The process involves three steps: segment your user base by behavior, identify events that consistently occur before growth accelerations, and validate those events by checking if they hold true across multiple time periods. This is not a one-time exercise; traction milestones shift as your product and market evolve. But once you have a reliable set, they become your early warning system.
The Difference Between Leading Indicators and Traction Milestones
Many teams use leading indicators—metrics that tend to change before the overall business metric. But not all leading indicators are traction milestones. A leading indicator might be something like "demo requests," which predict sales pipeline. A traction milestone is more specific: it marks a threshold where the underlying dynamics of growth change. For example, a leading indicator might be "number of users who complete onboarding." A traction milestone is "number of users who complete onboarding and then invite a colleague within 7 days." The second metric is a milestone because it signals that the product has crossed a tipping point in its viral coefficient. The difference is subtle but critical. Leading indicators tell you direction; traction milestones tell you that a new phase has begun. When you track milestones, you are not just predicting a number; you are predicting a structural shift in how your product grows.
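To make the distinction concrete, here is a minimal sketch of how that second metric might be computed from a raw event log. The event names ("onboarding_complete", "invite_sent") and the schema are illustrative assumptions, not a real tracking spec; adapt them to your own instrumentation.

```python
from datetime import timedelta

import pandas as pd

# Hypothetical event log with columns: user_id, event, timestamp.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "event": ["onboarding_complete", "invite_sent",
              "onboarding_complete", "invite_sent",
              "onboarding_complete"],
    "timestamp": pd.to_datetime([
        "2026-01-02", "2026-01-05",  # invited within 7 days: counts
        "2026-01-03", "2026-01-20",  # invited too late: excluded
        "2026-01-04",                # never invited: excluded
    ]),
})

onboarded = (events[events["event"] == "onboarding_complete"]
             .groupby("user_id")["timestamp"].min().rename("onboarded_at"))
invited = (events[events["event"] == "invite_sent"]
           .groupby("user_id")["timestamp"].min().rename("first_invite_at"))

joined = pd.concat([onboarded, invited], axis=1)
# True only when the first invite landed within 7 days of onboarding;
# users who never invited produce NaT, which compares as False.
hit = joined["first_invite_at"] <= joined["onboarded_at"] + timedelta(days=7)
print("Milestone count:", int(hit.sum()))  # -> 1
```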
Common Mistakes When Defining Milestones
One common mistake is setting milestones too early or too late. If you set a milestone that is reached too frequently, it loses predictive power because it does not signal a shift—it just signals normal operation. If you set a milestone that is almost never reached, it is not useful because you cannot act on it. The sweet spot is a milestone that is reached in about 10-20% of your observation windows (e.g., weeks or months). Another mistake is using aggregate metrics that mask underlying variance. For example, "total active users" can be flat while a specific cohort of power users is growing rapidly. The plateau might be masking the formation of a new growth curve. A veteran will segment by acquisition channel, by feature usage, by user role, and by time since signup. Only by drilling down can you see the milestone that matters. Finally, do not confuse correlation with causation. Just because a milestone preceded a growth spurt once does not mean it will again. You need to see the pattern repeat across at least three distinct periods before you can rely on it.
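As a quick sanity check on the 10-20% guideline, you can compute a candidate's trigger rate across your observation windows. The weekly values and threshold below are illustrative:

```python
# How often does the candidate fire across weekly observation windows?
weekly_metric = [0.21, 0.25, 0.31, 0.28, 0.44, 0.30, 0.27, 0.41,
                 0.29, 0.26, 0.33, 0.24]
THRESHOLD = 0.40  # assumed milestone threshold

trigger_rate = sum(v > THRESHOLD for v in weekly_metric) / len(weekly_metric)
if 0.10 <= trigger_rate <= 0.20:
    print(f"Trigger rate {trigger_rate:.0%}: in the sweet spot")
elif trigger_rate > 0.20:
    print(f"Trigger rate {trigger_rate:.0%}: fires too often, raise the bar")
else:
    print(f"Trigger rate {trigger_rate:.0%}: fires too rarely, lower the bar")
```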
Three Approaches to Predicting the Next S-Curve: A Comparison
There is no single method for identifying traction milestones that works for every product. Depending on your data maturity, team size, and risk tolerance, different approaches will be more appropriate. Below we compare three methods that Sixpack teams have used successfully. The first is retrospective analysis, which relies on historical data to find patterns. The second is leading indicator mapping, which uses a structured framework to hypothesize and test potential milestones. The third is cohort velocity tracking, which focuses on the speed at which specific user segments move through the funnel. Each approach has strengths and weaknesses, and many teams combine elements of all three. The table below summarizes the key differences.
| Method | Best For | Data Required | Time to First Result | Risk |
|---|---|---|---|---|
| Retrospective Analysis | Products with 12+ months of stable data | High: granular event logs, cohort tables | 2-4 weeks | Overfitting to historical patterns |
| Leading Indicator Mapping | Products with strong domain theory | Medium: some event data, strong hypotheses | 1-3 weeks | Confirmation bias in hypothesis selection |
| Cohort Velocity Tracking | Products with clear user journey stages | Medium: funnel data with timestamps | 1-2 weeks | Missing non-linear growth signals |
Retrospective Analysis: Mining Your History for Patterns
This is the most data-intensive approach, but it is also the most reliable if you have the data. The idea is to look back at your product's history—ideally covering at least two full growth cycles (plateau, S-curve, plateau, S-curve). You identify the points where growth accelerated and then look for common events that occurred in the weeks leading up to those accelerations. For example, one team found that every time their "power user" segment reached 20% of the total user base, overall growth doubled within 45 days. The milestone was clear and repeatable. The challenge is that you need clean, well-structured data and the ability to run queries across large event sets. You also need to be careful about survivorship bias: you are looking at the plateaus that led to growth, not the ones that led to decline. If you only study successes, you miss half the story. To mitigate this, include plateaus that did not resolve into growth and see if the milestone was absent. That negative test strengthens your signal.
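A minimal sketch of this mining step, assuming you can reduce your history to a monthly metric series plus a per-month log of candidate events. The numbers, event names, and the 15% acceleration cutoff are all illustrative choices to tune against your own data:

```python
import pandas as pd

# Illustrative monthly values for the primary growth metric.
growth = pd.Series(
    [100, 104, 105, 106, 140, 190, 195, 197, 200, 260, 340],
    index=pd.period_range("2025-01", periods=11, freq="M"),
)
# Hypothetical per-month log of candidate events observed in the data.
events = {
    pd.Period("2025-03", freq="M"): {"power_users_20pct"},
    pd.Period("2025-04", freq="M"): {"power_users_20pct", "new_pricing"},
    pd.Period("2025-08", freq="M"): {"power_users_20pct"},
}

# Treat a month as the *start* of an acceleration when month-over-month
# growth first jumps above 15%.
mom = growth.pct_change()
is_accel = mom > 0.15
starts = is_accel & ~is_accel.shift(1, fill_value=False)
accel_months = starts[starts].index

# Collect events seen in the 1-2 months before each acceleration start.
preceding = []
for m in accel_months:
    seen = set().union(*(events.get(m - lag, set()) for lag in (1, 2)))
    preceding.append(seen)

# Events present before *every* acceleration are milestone candidates.
candidates = set.intersection(*preceding) if preceding else set()
print("Candidate milestones:", candidates)  # -> {'power_users_20pct'}
```

Note how "new_pricing" drops out: it preceded only one of the two accelerations, which is exactly the kind of one-off correlation the negative test is meant to filter.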
Leading Indicator Mapping: A Structured Hypothesis Framework
If you do not have enough historical data, or if your product has changed significantly, you can use a hypothesis-driven approach. Start by listing all the possible leading indicators you can think of—things like referral rates, trial-to-paid conversion time, support ticket volume per feature, or NPS scores from a specific user segment. Then rank them by plausibility based on your domain knowledge. The key is to be specific: instead of "referral rate," use "referral rate among users who have completed at least three sessions in their first week." For each hypothesis, define a milestone threshold (e.g., "when the referral rate exceeds 0.3") and then track it forward. The advantage of this method is speed: you can start testing in days. The disadvantage is that your hypotheses may be wrong, and you might waste time tracking noise. To reduce that risk, involve multiple team members in hypothesis generation and use a structured scoring system to prioritize tests. A common framework is ICE (Impact, Confidence, Ease), but you can adapt it to your context.
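A small sketch of ICE-style prioritization, using the multiplicative variant of the score (some teams average the three factors instead); the hypotheses and ratings are made up for illustration:

```python
# Each tuple: (hypothesis, impact, confidence, ease), all rated 1-10.
hypotheses = [
    ("referral rate, users with 3+ first-week sessions", 8, 6, 7),
    ("trial-to-paid conversion time, enterprise segment", 7, 5, 4),
    ("support tickets per feature, top 3 features",       5, 7, 8),
]

scored = sorted(
    hypotheses,
    key=lambda h: h[1] * h[2] * h[3],  # ICE score = Impact * Confidence * Ease
    reverse=True,
)
for name, impact, confidence, ease in scored:
    print(f"{impact * confidence * ease:>4}  {name}")
```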
Cohort Velocity Tracking: Measuring the Speed of Change
This method focuses on the velocity of user movement through key funnel stages. Instead of looking at absolute numbers, you look at how fast users progress from one stage to the next. For example, if the time between first visit and first purchase has been stable for months, and then it suddenly drops by 20% for a specific cohort, that is a powerful signal. The milestone is the velocity change itself. This approach is particularly useful for products with a defined user journey, such as SaaS tools with a trial period or e-commerce sites with a shopping cart. The velocity metric smooths out noise from seasonality and marketing campaigns because it focuses on relative speed rather than volume. The challenge is that you need to define the right stages and ensure that the velocity change is not caused by a one-time event (like a major bug fix or a holiday). To use this method effectively, track velocity in a control chart and look for shifts that persist for at least two consecutive weeks before treating them as milestones.
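Here is a minimal control-chart sketch under those rules, assuming a weekly series of median days from first visit to first purchase and a two-sigma control limit; both the numbers and the choice of limit are illustrative:

```python
import statistics

# Median days from first visit to first purchase, per weekly cohort.
velocity_days = [14.2, 13.8, 14.5, 14.1, 13.9, 14.3, 11.1, 10.8, 10.9]

baseline = velocity_days[:6]            # assumed stable reference period
center = statistics.mean(baseline)
spread = statistics.stdev(baseline)
lower_limit = center - 2 * spread       # "users moving unusually fast"

# A shift counts as a milestone only when it persists for 2+ straight weeks.
streak = 0
for week, v in enumerate(velocity_days, start=1):
    streak = streak + 1 if v < lower_limit else 0
    if streak >= 2:
        print(f"Week {week}: velocity shift persisted 2 weeks -> milestone")
        break
```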
Step-by-Step Guide: Building Your Custom Milestone Set
Now that you understand the theory and the methods, here is a practical, step-by-step process for building your own set of traction milestones. This process is designed to be iterative: you start with a rough set, test it against real data, and refine over time. The goal is to have a dashboard of 3-5 milestones that you check weekly. Do not try to track 20 milestones; that leads to analysis paralysis. Focus on the few that have the strongest predictive relationship with your next S-curve.
Step 1: Define Your Growth History and Identify Plateaus
Start by charting your primary growth metric (e.g., monthly active users, revenue, or key action count) over the last 12-24 months. Mark the periods where the metric was essentially flat for at least 30 days. These are your plateaus. For each plateau, note the date it started and ended, and what happened next—did growth accelerate, stay flat, or decline? If you have multiple plateaus, you have a rich dataset. If you only have one, you can still proceed, but treat your findings as hypotheses rather than confirmed patterns. For this step, a simple spreadsheet with dates and notes is sufficient. Do not overcomplicate it.
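If you prefer to flag the flat stretches programmatically, a rough sketch follows. It marks months that sit within an assumed 3% band of their trailing three-month mean; the series and the band width are illustrative:

```python
import pandas as pd

# Illustrative monthly active user counts.
mau = pd.Series(
    [900, 1500, 2000, 2020, 1990, 2010, 2000, 2600, 3400],
    index=pd.period_range("2025-01", periods=9, freq="M"),
)

window = 3
rolling_mean = mau.rolling(window).mean()
band = (mau - rolling_mean).abs() / rolling_mean
flat = band < 0.03  # month sits within 3% of its trailing 3-month mean

for month, is_flat in flat.items():
    if is_flat:
        print(f"{month}: flat (candidate plateau month)")
# -> flags 2025-05 through 2025-07; the surrounding growth months do not fire
```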
Step 2: Generate a Long List of Candidate Milestones
For each plateau you identified, brainstorm events that occurred in the 2-4 weeks before growth accelerated. Use the three methods described above: look at historical data (retrospective), apply your domain knowledge (leading indicator mapping), and examine cohort velocity changes. Write down every candidate, no matter how obvious or obscure. Examples: "a 10% increase in repeat visits from users who signed up via organic search," "a drop in time-to-first-value for the enterprise segment," "a spike in API calls from a specific third-party integration." Aim for 10-20 candidates. You will narrow them down later.
Step 3: Validate Candidates Against the “Three-Event Rule”
A candidate milestone is only useful if it has appeared at least three times before a growth acceleration and has not appeared before plateaus that failed to resolve into growth. Go back to your data and check: for each candidate, did it occur before each growth acceleration? Did it fail to occur before plateaus that ended in decline? If a candidate appears before all growth events and never before declines, it is a strong signal. If it appears sporadically, it is noise. If it appears before declines too, it is not a leading indicator of growth—it might be a sign of something else. Apply this rule rigorously. It will eliminate 80% of your candidates, which is exactly what you want.
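The rule reduces to a simple check once you have recorded, for each historical plateau, whether the candidate fired beforehand and how the plateau resolved. A sketch with illustrative outcomes:

```python
# Each tuple: (did the candidate fire before this plateau's resolution?,
# how the plateau resolved). Values are illustrative.
history = [
    (True,  "growth"), (True,  "growth"), (True,  "growth"),
    (False, "decline"), (False, "decline"),
]

fired_before_growth = [f for f, outcome in history if outcome == "growth"]
fired_before_decline = [f for f, outcome in history if outcome == "decline"]

valid = (
    len(fired_before_growth) >= 3       # seen before 3+ growth events
    and all(fired_before_growth)        # present before every growth event
    and not any(fired_before_decline)   # absent before every decline
)
print("Candidate passes the three-event rule:", valid)  # -> True
```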
Step 4: Set the Threshold and Monitoring Cadence
For each validated candidate, define a precise threshold. Instead of "a lot of referrals," define "referral invitations per active user > 0.4 for two consecutive weeks." Also define how you will monitor it: weekly, daily, or in real time? Most milestones are best monitored weekly because daily fluctuations introduce noise. Set up a simple dashboard or spreadsheet that tracks each milestone and flags when the threshold is crossed. When a milestone triggers, it is time to prepare for the next S-curve: increase capacity, invest in the channel, or adjust the product roadmap. The milestone is not a command to act; it is a signal to investigate.
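A minimal dashboard sketch for this step, checking whether the last two weekly readings of each milestone clear its threshold. The milestone names, thresholds, and series are illustrative, not a recommended set:

```python
# Each entry: milestone name -> (threshold, weekly readings, oldest first).
milestones = {
    "referral invites per active user": (0.40, [0.31, 0.38, 0.42, 0.44]),
    "organic share of installs":        (0.15, [0.09, 0.11, 0.13, 0.14]),
}

for name, (threshold, series) in milestones.items():
    # Triggered only if the last two weeks both clear the threshold.
    triggered = all(v > threshold for v in series[-2:])
    status = "TRIGGERED -> investigate" if triggered else "watching"
    print(f"{name}: latest={series[-1]:.2f} "
          f"threshold={threshold:.2f} [{status}]")
```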
Step 5: Review and Adjust Quarterly
Markets change, products evolve, and milestones lose their predictive power. Set a quarterly review where you re-run the validation process. Are the milestones still holding? Are there new plateaus that reveal different patterns? Are there candidates you discarded earlier that now seem relevant? This review is also the time to add new milestones based on recent data. Treat your milestone set as a living system, not a static document. Teams that do this well find that their predictions become more accurate over time, not less.
Real-World Scenarios: How Milestones Predicted S-Curves in Practice
To illustrate how this works in practice, here are three anonymized scenarios drawn from composite experiences. These are not exact case studies but represent patterns we have observed across multiple products. Each scenario shows a different type of plateau and a different milestone that predicted the next curve.
Scenario A: The Feature Adoption Plateau
A B2B analytics platform had been flat for five months. The user base was stuck at 2,000 monthly active users, and the team was considering a major redesign. However, one team member noticed that users who had adopted a recently released dashboard customization feature were 3x more likely to invite colleagues. The milestone was set: "When the number of users who customize a dashboard and then invite someone reaches 200 in a month." That milestone was reached in month six. Over the next 60 days, total active users grew to 3,500, driven by the invitations. The redesign was shelved. The lesson: the plateau was masking a shift from individual use to team use. The milestone captured that shift before it showed up in aggregate numbers.
Scenario B: The Channel Saturation Plateau
A consumer app was dependent on paid ads for growth. After 18 months, cost per acquisition had risen steadily, and installs had plateaued. The team was considering a new ad platform. But the data showed something else: users who came from organic search had a 40% higher retention rate than paid users. The milestone was set: "When organic installs exceed 15% of total installs for two consecutive weeks." That milestone was reached in month 19, driven by a few viral blog posts. Within three months, organic installs grew to 35% of total, and the overall growth rate doubled. The plateau was not a sign of market saturation; it was a sign that the paid channel was maxed out and organic was about to take over. The milestone helped the team allocate resources to content instead of more ads.
Scenario C: The Product-Market Fit Re-Evaluation
A SaaS tool for small business owners had been flat for eight months. The team was debating whether to pivot to a different customer segment. A milestone was identified: "When the number of users who use the tool for inventory management exceeds those who use it for invoicing." That milestone was reached in month nine. The team realized that the product was shifting from an invoicing tool to an inventory tool, and that the new use case had stronger retention. They doubled down on inventory features, and growth accelerated by 150% over the next six months. The milestone not only predicted the curve but also pointed to the product direction.
Common Questions and Pitfalls: What to Watch Out For
Even with a solid framework, there are traps that can lead you astray. This section addresses the most common questions and mistakes we have seen, and how to avoid them.
How Do I Distinguish a Real Milestone from Random Noise?
This is the most common question. The answer lies in repetition and context. A real milestone repeats across multiple plateaus and is absent in plateaus that led to decline. It also has a plausible causal mechanism—you can explain why the milestone would lead to growth. Noise, on the other hand, is random. It might correlate once by chance. To test, use a simple rule of thumb: if the milestone has appeared before at least three growth events and never before a decline, it is likely real. If it appears before both growth and decline, it is noise. Also, look for the mechanism. If you cannot tell a story about why the milestone matters, it probably does not.
What If My Milestones Stop Working?
This happens. Markets shift, competitors enter, or your product changes. When a milestone stops predicting growth, do not panic. First, check if the underlying mechanism is still valid. For example, if your milestone was based on a specific feature, and that feature was deprecated, the milestone will stop working. Second, re-run the validation process with recent data. The plateau you are now in might be of a different type than previous ones. You may need to add new milestones or adjust thresholds. Third, consider that the milestone might still be valid but the threshold needs adjustment. A milestone that previously triggered at 200 users might now trigger at 500 because the user base has grown. Recalibration is normal.
Can I Use Milestones for Declining Products?
Yes, but with caution. The same framework can identify milestones that predict a downward S-curve—events that, when reached, signal that growth will slow or reverse. For example, a drop in repeat purchase rate among high-value customers might be a negative milestone. The process is the same: look for events that consistently precede declines. However, the emotional challenge is greater because the signal tells you to prepare for bad news. Use negative milestones to trigger contingency plans, such as cost cutting, feature prioritization, or market diversification. Do not ignore them; denial is a common pitfall in declining products.
Conclusion: From Tracking to Predicting
The shift from tracking growth to predicting it requires a change in mindset. Instead of asking "What happened?" you start asking "What is about to happen?" Traction milestones are the tool that enables that shift. They turn plateaus from obstacles into datasets, and they give you the confidence to act before the numbers confirm the trend. The three approaches—retrospective analysis, leading indicator mapping, and cohort velocity tracking—provide a toolkit that works across different data maturity levels. The step-by-step process gives you a repeatable way to build and maintain your milestone set. The most important takeaway is this: the next S-curve is not random. It is preceded by signals that are visible if you know where to look. This guide has given you the framework to find them. Start small, validate rigorously, and adjust as you go. Over time, you will develop an intuition for the physics of your own product's growth, and the plateaus will become less stressful and more informative.