ROAS won’t tell you this...

How to analyze creative tests the right way

Hey friend,

I keep seeing brands make the same mistake when analyzing creative tests.

They obsess over ROAS and CPA - but those metrics only show you the scoreboard, not the game.

And when you only look at performance metrics, you miss the real insight:

→ Why did this ad win?
→ What part of the story actually resonated?
→ How can we replicate it again and again?

We’ve been digging deep into behavior data lately, and it’s completely changed the way we test and iterate. It’s like going from playing darts in the dark… to flipping the lights on.

This week, I’ll break down the exact framework we use to separate performance metrics from storytelling metrics, and how that shift turns random wins into a predictable creative process.

Let’s dive in.

1. Split Metrics Into Two Buckets

Most brands lump everything under “performance” and then wonder why creative feels unpredictable. You need to zoom in.

  • Primary metrics (performance): Spend, Purchases, CPA.
    These confirm if the ad worked financially.

  • Secondary metrics (storytelling): Scroll Stop Rate, Hold Rate, Outbound CTR.
    These reveal why the ad resonated (or didn’t).

Pro tip: Think of primary metrics as the scoreboard… and secondary metrics as the play-by-play commentary.

Why this matters:
Performance helps you scale what already works.
Storytelling metrics help you manufacture more winners on demand.
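If you pull your numbers into a spreadsheet or script, the two-bucket split is trivial to encode. A minimal Python sketch (all field names are illustrative placeholders, not tied to any specific ad platform's export):

```python
# Split one ad's metrics into the two buckets described above.
# Field names are illustrative, not from any specific ad platform.
PRIMARY = {"spend", "purchases", "cpa"}                        # performance: did it work financially?
SECONDARY = {"scroll_stop_rate", "hold_rate", "outbound_ctr"}  # storytelling: why did it resonate?

def split_metrics(ad_metrics: dict) -> tuple[dict, dict]:
    """Return (performance, storytelling) views of one ad's metrics."""
    performance = {k: v for k, v in ad_metrics.items() if k in PRIMARY}
    storytelling = {k: v for k, v in ad_metrics.items() if k in SECONDARY}
    return performance, storytelling

ad = {"spend": 1200, "purchases": 40, "cpa": 30,
      "scroll_stop_rate": 0.22, "hold_rate": 0.35, "outbound_ctr": 0.011}
performance, storytelling = split_metrics(ad)
```

The point isn't the code - it's the discipline of never reading the two buckets as one blended number.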

2. Use Behavior to Fix Underperformers

Most brands kill ads that “don’t work” without asking why.

That’s a waste. Instead, use behavioral data to surgically fix problems.

  • Low Scroll Stop Rate = Weak hook
    Your first 2–3 seconds didn’t earn attention.
    → Test bold claims, fast motion, curiosity gaps, or pattern interrupts.

  • Poor Hold Rate = Weak narrative
    People tapped, then bounced.
    → Improve pacing, remove fluff, add tension/resolution.

  • Low Outbound CTR = Weak CTA/offer
    They watched but didn’t click.
    → Strengthen the call-to-action, make the offer clearer, sharpen urgency.
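The three rules above amount to a lookup table, which you can run mechanically over every underperformer. A sketch in Python - the thresholds are made-up placeholders, so calibrate them against your own account averages before trusting the output:

```python
# Map each weak storytelling metric to the diagnosis described above.
# Thresholds are illustrative; calibrate against your own account averages.
THRESHOLDS = {"scroll_stop_rate": 0.20, "hold_rate": 0.30, "outbound_ctr": 0.01}

DIAGNOSIS = {
    "scroll_stop_rate": "Weak hook: test bold claims, fast motion, curiosity gaps",
    "hold_rate": "Weak narrative: improve pacing, remove fluff, add tension/resolution",
    "outbound_ctr": "Weak CTA/offer: strengthen the call-to-action, sharpen urgency",
}

def diagnose(ad_metrics: dict) -> list[str]:
    """Return the fix suggested by each below-threshold behavior metric."""
    return [DIAGNOSis_msg for metric, floor in THRESHOLDS.items()
            if ad_metrics.get(metric, 0) < floor
            for DIAGNOSis_msg in [DIAGNOSIS[metric]]]
```

An ad with a 15% scroll stop rate but healthy hold rate and CTR comes back with exactly one prescription: fix the hook, leave the rest alone.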

Why this works:
You’re no longer guessing what to fix - you’re running a diagnosis.
It’s the difference between throwing pills at symptoms versus running bloodwork and treating the root cause.

3. Find Patterns in Your Winners

One great ad doesn’t mean you cracked the code. What matters is spotting repeatable patterns.

Ask yourself:

  • Do my best hooks share a structure? (question, bold claim, transformation)

  • Do my winners follow a pacing rhythm? (fast open → problem → solution → proof)

  • Do they lean on a particular proof type? (testimonial, before/after, stat)

Document these answers in a Creative Optimization Library - a living playbook that grows with every test.

Example: You notice 7/10 of your top performers open with a testimonial. That’s not luck… that’s a pattern. Now you can double down with confidence.
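Spotting that kind of pattern doesn't need to be eyeballed. If you tag each winner with the hook type it opens with, a frequency count surfaces the pattern for you - a sketch with made-up tags:

```python
from collections import Counter

# Tag each of your top 10 winners with the hook type it opens with.
# These tags are illustrative sample data, not real results.
winner_hooks = ["testimonial", "bold_claim", "testimonial", "question",
                "testimonial", "testimonial", "before_after",
                "testimonial", "testimonial", "testimonial"]

top_hook, count = Counter(winner_hooks).most_common(1)[0]
print(f"{top_hook} opened {count}/{len(winner_hooks)} winners")
```

Log the answer in your Creative Optimization Library and re-run the count after every test cycle.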

Why this matters:
Patterns = predictability.
Predictability = scale.

4. Test With Purpose

Here’s where most brands go wrong: they test random variations with no clear objective. That burns cash.

Instead, every test should be designed to answer one specific question.

  • Weak Scroll Stop Rate? → “Can this hook style win attention?”

  • Weak Hold Rate? → “Does this edit sequence keep people engaged?”

  • Low CTR? → “Does reframing the offer improve clicks?”

This makes creative testing less like gambling… and more like running controlled experiments.

Why this works:
Purpose-driven tests stack learnings faster. Instead of 20 random variations, you run 5 intentional ones - and the insights compound.
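One way to enforce that discipline: don't launch a test until it's written down as a record with exactly one question and one target metric. A minimal sketch (the structure and field names are my own suggestion, not a prescribed template):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreativeTest:
    """One purpose-driven test: a single question, one metric it's meant to move."""
    question: str            # e.g. "Does reframing the offer improve clicks?"
    target_metric: str       # the one behavior metric this test is designed to move
    hypothesis: str
    result: Optional[float] = None  # filled in after the test runs

test = CreativeTest(
    question="Does reframing the offer improve clicks?",
    target_metric="outbound_ctr",
    hypothesis="Clearer offer framing lifts outbound CTR",
)
```

If you can't fill in `question` and `target_metric` in one line each, the test isn't designed yet - it's a gamble.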

What You Can Expect

If you shift from performance-only analysis to behavior-driven testing, you’ll see:

✅ Fewer failed creative tests (because you know what to fix)
✅ Faster identification of winners (because signals are clearer)
✅ A repeatable, documented creative system
✅ Higher creative win rate driven by smarter, not bigger, spend

The psychology:
Behavior metrics show you what your audience actually responds to.
Patterns reveal what consistently moves them.
Purpose-driven testing compounds learnings into a system.

Next Steps

Week 1 → Set up tracking for Scroll Stop Rate, Hold Rate, CTR
Week 2 → Analyze your last 10 winners for recurring patterns
Week 3 → Build your Creative Optimization Library
Week 4 → Test with purpose (diagnose → hypothesize → validate)

Be honest…

Are you iterating creatives based on data… or gut instinct?

Back in your inbox next week,

– Toby.

If you’re spending $50K+/month on ads and suspect you’re leaving growth on the table…

We’ll run a full acquisition audit of your ad account - breaking down what’s driving results, what’s wasting budget, and where scale is hiding.

You’ll walk away with:

→ The exact creative angles and formats you need
→ A clear roadmap to lower CAC and increase ROAS
→ A proven plan to scale efficiently

📅 Book your audit here → https://form.typeform.com/to/STTysERA


Our 30-Day Pilot lets you test our strategy, creative, and media buying… without locking into a long-term commitment.

We’ll plug into your account, audit what’s holding you back, and deploy new creative built to scale.

Clear improvements. Smarter testing. Zero fluff.

If we don’t move the needle, you walk away. No questions asked.

🤳🏼 We’re Hiring:
Senior eCommerce Creative Strategist

Brick is looking for a Creative Strategist who can turn deep customer research and data into scroll-stopping ads for our 7–8 figure brands.

You’ll:

  • Own creative strategy from insight → angles → briefs → tests

  • Partner with media buying to scale what works

  • Lead structured testing and iteration cycles that move revenue, not just clicks

You bring:

  • 3–5+ years in DTC creative strategy (Meta/TikTok)

  • A proven track record of turning insights into profitable ad campaigns at scale

  • Comfort mining reviews, mapping objections, and engineering hooks