What is the 70/20/10 rule in media buying?
The 70/20/10 rule in media buying is a simple, powerful way to split advertising budgets so teams preserve short-term revenue while still funding growth and exploration. At its core, the framework says: put roughly 70% of spend behind the reliable, repeatable channels that keep revenue steady; allocate 20% to scale proven ideas; and reserve 10% as a lab for experiments that prioritize learning over immediate returns. Use it as a practical budget triage, not a mandate.
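On a $100,000 monthly budget, for example, that works out to roughly $70,000 behind proven core channels, $20,000 to scale emerging winners, and $10,000 for the lab.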
This article unpacks exactly how the rule works in practice, how to measure success, how to run cleaner experiments, and when to bend the percentages for stage, seasonality or margin pressure. I’ll also share step-by-step actions to get started this week and include a short, handy playbook you can adapt.
Why this matters: ad spend is always a trade-off between keeping revenue steady now and discovering what will grow revenue tomorrow. The 70/20/10 rule in media buying helps you make that trade-off deliberate.
Why the 70/20/10 split helps teams balance risk and reward
Advertising is both an engine and a laboratory. If you only fund the engine you’ll run fast today but stall tomorrow. If you spend too much on the lab you might discover something brilliant but run out of fuel for the present. The 70/20/10 rule in media buying formalizes this tension so teams can invest with intention. It creates a pathway for experiments to graduate into growth and then into core performance – a healthy funnel for ideas as well as budgets. See our projects hub for examples of how ideas have moved from lab to core.
Rather than treating the split as a religious rule, think of it as a disciplined conversation starter that forces you to answer: which dollars buy today’s sales, which buy tomorrow’s growth, and which buy the knowledge to keep improving?
What each bucket really means
70% — The Core (Revenue engine)
Purpose: keep the business running. This is spend on channels and creatives that have stable, proven outcomes—search campaigns with predictable CPA, remarketing with known conversion rates, or display placements that consistently add incremental conversions. Measure this bucket primarily with direct response KPIs: cost per acquisition (CPA), return on ad spend (ROAS), and short-term revenue per dollar.
20% — Growth (Scale winners)
Purpose: amplify promising initiatives. These are programs that passed early validation and show replicable signals: a new creative series, an emerging audience segment, or a scaled test that maintains acceptable unit economics. Metrics expand here to include early retention, incremental revenue per user, and cost to acquire customers at scale.
10% — Lab (Experiments)
Purpose: learning. Try new channels, measurement techniques, creative formats, or audience ideas where you accept a higher failure rate because the goal is discovery. Expect lower short-term ROI and a higher tolerance for failed tests. When a lab test reveals a high-value mechanic, move it quickly to growth.
How to start: grounding the model in data
The biggest mistake is treating 70/20/10 as a magic formula. It works best when anchored to channel baselines. Start with a short audit:
1. Pull channel-level performance (3–6 months)
Collect CPA, ROAS, conversion rate, average order value, and any early retention metrics you track. Those numbers will tell you what truly belongs in the core.
2. Define thresholds
Decide what counts as “core” for your business in concrete terms (e.g., CPA under $30, ROAS above 3x). These thresholds are your north star when you map activities to buckets.
3. Map every spend to a bucket
Search and remarketing often live in the 70% for many businesses, while nascent channels or new creative formats usually start in the 10% lab. But your history and outcomes determine the mapping.
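To make that mapping concrete, here is a minimal sketch in Python. The channel names, metrics, history lengths, and the $30 CPA / 3x ROAS thresholds are illustrative assumptions borrowed from the example above, not benchmarks; substitute your own audit numbers.

```python
# Minimal sketch: label channels core, growth, or lab using illustrative
# thresholds. All figures below are hypothetical; use your own audit data.

CORE_CPA_MAX = 30.0   # assumed "core" threshold: CPA under $30
CORE_ROAS_MIN = 3.0   # assumed "core" threshold: ROAS above 3x

channels = [
    # (name, CPA in $, ROAS, months of stable history)
    ("branded_search", 18.0, 4.2, 12),
    ("remarketing", 24.0, 3.5, 9),
    ("new_social_format", 55.0, 1.4, 1),
    ("ctv_pilot", None, None, 0),  # no performance history yet
]

def bucket(cpa, roas, months_history):
    """Label a channel from its baseline metrics."""
    if cpa is None or roas is None or months_history < 1:
        return "lab"      # no track record: fund it from the 10% lab
    if cpa <= CORE_CPA_MAX and roas >= CORE_ROAS_MIN and months_history >= 6:
        return "core"     # stable, proven economics: part of the 70%
    return "growth"       # promising but unproven at scale: the 20%

for name, cpa, roas, months in channels:
    print(f"{name}: {bucket(cpa, roas, months)}")
```

The point is not the code but the discipline: every line item gets an explicit, threshold-based label rather than a gut-feel one.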
Fast audit checklist (use this week)
– Export spending and performance by channel for the last 3 months.
– Calculate simple LTV:CAC or use average purchase value if LTV data is immature (a quick sketch follows this list).
– Label each line item: core, growth, or lab.
– Identify 1–2 programs in the core where a small push (bid adjust, creative refresh) could unlock more volume without breaking unit economics.
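For the LTV:CAC item, a quick ratio is enough at audit stage. This sketch uses hypothetical numbers and falls back to average purchase value when LTV data is immature, as the checklist suggests:

```python
# Quick LTV:CAC check for the audit. All numbers are hypothetical.

def ltv_to_cac(cac, ltv=None, avg_purchase_value=None):
    """Use LTV when you have it; fall back to average purchase value."""
    value = ltv if ltv is not None else avg_purchase_value
    return value / cac

# Mature LTV data available:
print(ltv_to_cac(cac=40.0, ltv=150.0))                # 3.75
# LTV immature: average purchase value as a rough stand-in.
print(ltv_to_cac(cac=40.0, avg_purchase_value=60.0))  # 1.5
```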
If you want a quick template to run that audit and a one-page experiment playbook, try the Agency VISIBLE starter kit. It's a lightweight way to make the 70/20/10 rule in media buying operational faster.
Standardize experiments with a simple playbook
Experiments often fail to inform because they lack structure. A compact experiment playbook makes every test produce usable learning. Include:
– Hypothesis: what you expect and why.
– Primary metric: the business outcome the test aims to affect (revenue lift, incremental conversions, retention).
– Control/holdout: a 5–10% holdout group or geographic test to isolate incremental impact.
– Minimum detectable effect (MDE): the smallest meaningful lift you want to detect.
– Test window and sample size: pre-defined based on traffic needs.
– Decision rule: the exact conditions under which you scale, iterate or kill the test.
Keep the playbook short — a single page — and pin it somewhere visible.
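If it helps to keep those six fields consistent across tests, you could capture each playbook as a small structured record. This is a sketch only; the field names and example values are assumptions, not a prescribed schema.

```python
# A one-page experiment playbook captured as a structured record.
# Field names and values are illustrative, not a required format.
from dataclasses import dataclass

@dataclass
class ExperimentPlaybook:
    hypothesis: str        # what you expect and why
    primary_metric: str    # the business outcome the test targets
    holdout_pct: float     # 5-10% control group for incrementality
    mde: float             # minimum detectable effect (relative lift)
    test_weeks: int        # pre-defined window based on traffic needs
    decision_rule: str     # exact conditions to scale, iterate, or kill

pilot = ExperimentPlaybook(
    hypothesis="Short-form video lifts conversions in younger cohorts",
    primary_metric="incremental conversions",
    holdout_pct=0.05,
    mde=0.10,
    test_weeks=4,
    decision_rule="Scale if lift >= 10% at 95% confidence; iterate once, then kill",
)
```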
Design tips to keep learning clean
– Test one variable at a time. If you change creative, audience and bidding together you’ll never know what worked.
– Pre-register your metrics and decision rules.
– Don’t stop tests early for small wins. Patience reduces false positives.
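Pre-registering duration also means sizing the test up front. A standard two-proportion power calculation does the job; this sketch uses statsmodels, with the baseline conversion rate and MDE as assumptions you would replace with your own:

```python
# Sample size per arm for a conversion-rate test, given a baseline and an MDE.
# Baseline rate and MDE below are assumptions; substitute your own.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.04          # assumed 4% conversion rate
mde_relative = 0.10      # pre-registered MDE: a 10% relative lift
target = baseline * (1 + mde_relative)

effect = proportion_effectsize(target, baseline)
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_arm:,.0f} users per arm")  # roughly 20,000 with these inputs
```

Low-traffic accounts will often find the required sample sobering; that is exactly the information you want before launch, not after.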
Tie experiments to real business outcomes
Surface-level metrics (likes, CTR) are tempting but insufficient. Experiments must map to outcomes you care about: incremental revenue, repeat purchase rates, or retention. If a creative raises CTR but doesn’t increase conversions or LTV, it’s interesting but not necessarily valuable.
Use holdouts, geo-experiments, or time-based A/B designs to estimate incrementality; for a practical guide, see the incrementality tests resource in the references. When those designs aren't possible, use proxy metrics with clear caveats and link them to downstream behavior in your reports.
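As a minimal illustration of how a holdout yields an incrementality estimate, assuming you can split the audience randomly and count conversions per group (all counts hypothetical):

```python
# Minimal holdout incrementality estimate. Counts below are hypothetical.

treated_users, treated_conversions = 95_000, 4_100   # saw the new campaign
holdout_users, holdout_conversions = 5_000, 190      # 5% holdout, untreated

treated_rate = treated_conversions / treated_users   # 0.0432
holdout_rate = holdout_conversions / holdout_users   # 0.0380

# Conversions the campaign added beyond what the holdout predicts.
incremental = treated_conversions - treated_users * holdout_rate
relative_lift = treated_rate / holdout_rate - 1

print(f"Incremental conversions: {incremental:,.0f}")   # ~490
print(f"Relative lift: {relative_lift:.1%}")            # ~13.6%
```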
Quality trumps quantity: a handful of well-designed, adequately powered experiments with clear decision rules is more valuable than a dozen noisy, underpowered tests that never reach statistical clarity. If you must choose, prioritize tests that can show meaningful business outcomes and be scaled.
How to adapt the split: stage, margin and seasonality
The 70/20/10 split is a starting point. Adjust it based on business maturity, margin profile and seasonality.
Startups and product-market fit phase
Early-stage companies may run 50/30/20 or 40/40/20 while chasing learning – more growth and experimentation, less predictable core. This is intentional: the goal is to find repeatable channels that can become the core.
Mature businesses with thin margins
Brands with small margins and predictable demand may tilt toward 80/15/5 to protect short-term revenue. That’s okay — the rule is flexible.
Seasonal businesses
During peak seasons, shift more budget into the 70% to capture purchase intent; during quieter times, accelerate the lab. Quiet periods are ideal test windows because the opportunity cost is lower.
Measurement challenges and long-term value
Attributing long-term LTV to small experiments is hard. To capture delayed benefits:
– Track cohorts over longer windows (3–12 months) when possible.
– Build modelled LTV projections from early retention and repeat purchase behavior (a simple sketch follows this list).
– Use geographic or market holdouts to detect whether a channel truly adds conversions when cross-channel interaction clouds results.
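For the modelled-LTV item, one common simplification projects lifetime value from an observed repeat rate. This sketch assumes a constant geometric repeat pattern, which is an approximation you should sanity-check against real cohorts:

```python
# Modelled LTV projection from early retention, using a geometric
# repeat-purchase simplification. All inputs are hypothetical assumptions.

def projected_ltv(avg_order_value, gross_margin, repeat_rate):
    """Expected purchases per customer = 1 / (1 - repeat_rate)
    under a constant period-over-period repeat assumption."""
    expected_purchases = 1 / (1 - repeat_rate)
    return avg_order_value * gross_margin * expected_purchases

# A cohort with $60 AOV, 40% gross margin, 35% observed repeat rate:
print(f"${projected_ltv(60.0, 0.40, 0.35):.2f}")  # ~$36.92 projected margin LTV
```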
Cross-channel incrementality is particularly tricky. If search is running at full scale while you test a new social format, conversions may shift rather than add. Use coordinated holdouts where certain regions don’t see the new channel to observe real incremental growth.
Practical measurement recipes
– Geo-holdout: exclude a defined region from the new channel and compare growth to similar regions.
– Time-limited rollout: launch in waves and compare early and late cohorts.
– Holdout audiences: reserve a small percentage of your target audience as untreated control.
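For the geo-holdout recipe, a simple difference-in-differences comparison of growth rates is often a reasonable first pass. The regions and figures in this sketch are made up for illustration:

```python
# Sketch: difference-in-differences across a geo-holdout. Data is hypothetical.
# pre/post are weekly conversions before and after the new channel launched.
from statistics import mean

regions = {
    # region: (saw_new_channel, pre_launch, post_launch)
    "north": (True, 1_000, 1_180),
    "south": (True, 900, 1_050),
    "east": (False, 950, 1_000),    # holdout region
    "west": (False, 1_050, 1_100),  # holdout region
}

def growth(pre, post):
    return post / pre - 1

treated = [growth(pre, post) for t, pre, post in regions.values() if t]
holdout = [growth(pre, post) for t, pre, post in regions.values() if not t]

incremental_growth = mean(treated) - mean(holdout)
print(f"Incremental growth attributable to the channel: {incremental_growth:.1%}")
```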
Governance, cadence and decision rights
Healthy governance helps budgets move without bottlenecks. I recommend:
– Weekly: quick check on the 70% core—are we pacing and hitting CPA/ROAS targets?
– Monthly: growth deep-dive—scale tests that passed lab criteria and monitor unit economics.
– Quarterly: lab review—audit the experiment pipeline, review learnings, and promote winners.
Set clear decision rights: experiments that hit pre-registered thresholds can be promoted by the head of growth, and finance signs off on scale. A short sign-off chain prevents endless debate and keeps the path from learning to scale predictable.
Common pitfalls and how to avoid them
Pitfall: treating the lab as a dump for half-baked ideas.
Fix: require a hypothesis and a primary business metric for every experiment.
Pitfall: changing variables mid-test.
Fix: document the issue and start a new test rather than continuously fiddling.
Pitfall: confusing surface metrics with business impact.
Fix: tie every experiment to a business outcome and use holdouts or cohort tracking when possible.
Two short, anonymized examples that show the rule in action
Example 1 — Subscription brand
A subscription brand relied heavily on paid search and remarketing. Growth stalled. Under the 70/20/10 framework they freed up budget for a 10% lab test: a short-form video creative series aimed at younger cohorts. The test had a clear hypothesis and a 5% holdout. Results: higher engagement and comparable CPA among target cohorts. The team promoted the creative to 20% growth, scaled while tracking retention, and with iterative creative optimization, moved it into the core within six months.
Example 2 — Regional retailer
Thin margins forced a cautious approach, but the retailer used quiet months to run connected TV experiments in the 10% lab. Most tests failed or returned slowly, but one targeting approach showed improved repeat visit rates in a six-month cohort. That learning informed holiday media plans and reduced overreliance on a single channel the next season.
How to know when to move an experiment to growth
Have a checklist before you run tests. Move an experiment to growth when:
– It achieves your pre-defined statistical threshold (confidence level and effect size).
– Unit economics at projected scale meet acceptable CPA or ROAS ranges.
– Early retention or repeat behavior looks promising.
– You can reasonably forecast cost to scale without eroding economics.
Promotion checklist (quick)
– Statistical significance or credible effect (pre-registered MDE).
– Scaled CPA/ROAS modeled and acceptable.
– Retention preview is positive.
– No major operational friction to scale.
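One way to make that checklist mechanical is a small gate function that only returns true when every condition passes. The thresholds below are assumptions standing in for your own pre-registered values:

```python
# Promotion gate: all checklist conditions must pass. Thresholds are assumptions.

def ready_for_growth(hit_mde, scaled_cpa, scaled_roas, retention_ok, ops_ok,
                     cpa_max=30.0, roas_min=3.0):
    return all([
        hit_mde,                  # pre-registered MDE met at agreed confidence
        scaled_cpa <= cpa_max,    # modeled CPA at projected scale acceptable
        scaled_roas >= roas_min,  # modeled ROAS at projected scale acceptable
        retention_ok,             # early retention preview is positive
        ops_ok,                   # no major operational friction to scale
    ])

print(ready_for_growth(True, 27.5, 3.4, True, True))  # True: promote
print(ready_for_growth(True, 36.0, 3.4, True, True))  # False: CPA too high
```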
Practical next steps — what to do this week
Follow a short, structured plan to put 70/20/10 into practice:
Day 1–2: Audit
Pull 3–6 months of data by channel: CPA, ROAS, conversion rate, and any early retention signals. Map each activity to the three buckets.
Day 3: Pick your low-risk nudge
Identify one program in the 70% that can be optimized to free up a little budget—refresh creative, adjust bids, or reallocate placements.
Day 4: Create a one-page experiment playbook
Define holdout %, MDE, test window, and primary metric. Require a one-paragraph hypothesis for every 10% test.
Day 5: Run a pilot test
Launch one lab experiment with a holdout and pre-registered decision rules. Use the week following to monitor pacing and integrity — resist adjusting variables mid-test.
Get a Starter Kit to Run 70/20/10 Today
Ready to make your media buying work harder? If you’d prefer a guided start, Agency VISIBLE can help set up the audit, run the initial experiment playbook, and coach your team through the first promotions. Talk to Agency VISIBLE to get a compact starter kit and a 30-minute setup call that gets your 70/20/10 model running.
Tools and templates that help
Use simple, accessible tools to keep the process lean:
– Spreadsheets for the initial audit and LTV:CAC projections.
– A shared doc for the experiment playbook and sign-off chain.
– Lightweight dashboards that clearly show core pacing and growth unit economics.
– Experiment tracking sheet with hypothesis, primary metric, holdout, and decision status.
For a larger view of paid media frameworks, see this paid media strategy overview.
How to communicate the approach inside your organization
Clarity is everything. When you introduce the 70/20/10 rule internally:
– Use plain language: explain the purpose of each bucket in business terms (e.g., “keeps revenue steady” vs “finds new customers”).
– Share the playbook and the promotion checklist.
– Publish a simple cadence: weekly core check, monthly growth review, quarterly lab audit.
– Celebrate small wins when experiments graduate — that reinforces the system.
Common questions and concise answers
Is 70/20/10 a fixed rule? No — it’s a helpful default you adapt to stage, margin and seasonality.
How long should experiments run? Depends on traffic and your MDE. Low-traffic tests need longer windows; high-traffic tests can be decisive sooner. Pre-register duration in the playbook.
What if channels overlap? Use holdouts, geo-tests or cohort tracking to estimate incremental value. Be explicit about limitations if full attribution isn’t possible.
Quick glossary
MDE (Minimum Detectable Effect): smallest lift you want to measure.
Holdout: control group excluded from the test for incremental measurement.
Unit economics: direct cost and revenue measures (e.g., CPA, ROAS, gross margin impact).
Final practical tips
– Keep the experiment playbook visible and short.
– Use seasons to accelerate learning.
– Promote winners quickly but with modeled economics.
– Keep an eye on cross-channel effects and use simple holdouts when possible.
Conclusion
The 70/20/10 rule in media buying is a practical framework that helps teams intentionally balance present performance and future discovery. It forces clear conversations about risk, governance and measurement. Use data to ground your buckets, run disciplined experiments, and create a fast path for winners to scale.
If you start with a short audit and a one-page playbook, you’ll be able to use this framework to keep growth predictable while exploring what will make your advertising more effective next year.
References
- https://connectivewebdesign.com/blog/marketing-budget-allocation
- https://agencyvisible.com/projects/
- https://agencyvisible.com/contact/
- https://agencyvisible.com/
- https://www.playbookmedia.com/blog/run-these-incrementality-tests-to-free-up-wasted-marketing-budget/
- https://www.aumcore.com/blog/paid-media-strategy/