More ads won't save you

The brands scaling fastest are producing less

Welcome back to the Brick by Brick Newsletter…

...where 7-8 figure brands learn how to scale efficiently.

If you're serious about growth and feel like you've hit a plateau, or you just want to put rocket fuel on your performance, » you can apply for your audit here « and we'll map out exactly what's capping your scale.

Today's issue isn't about producing more.

It's about why most brands are producing a lot... and learning almost nothing.

Creative volume has become the default strategy for scaling on Meta.

More hooks, more formats, more UGC, more iterations.

The Slack channel stays busy. The shared drive keeps filling up.

And yet the account stays flat.

This week I'm breaking down:

  • Why volume is a crutch, not a strategy

  • What an actual creative test looks like vs what most brands are running

  • Why intelligent creative programmes need fewer briefs, not more

  • The output that actually matters (hint: it's not the ads)

Steal this. 👇

What testing actually requires

A test is only a test if it's designed to answer a specific question.

Not "let's try a UGC version." Not "let's see how a lifestyle angle does." Those aren't tests; they're hunches dressed up as strategy.

A real creative test isolates one variable, runs with enough budget to reach statistical significance, and produces a finding that changes how you build the next thing. It has a hypothesis before launch, not an interpretation after.

Most creative programmes skip this entirely. They ship, observe loosely, and move on. The "learnings" document is a graveyard of opinions.
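For the quantitatively minded, "enough budget to reach statistical significance" has a concrete meaning. A minimal sketch, using only Python's standard library and entirely hypothetical conversion numbers, of the standard two-proportion z-test you'd run on a control vs a variant:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 120 conversions from 4,000 sessions for the
# control vs 165 from 4,000 for the variant with the new hook.
z = two_proportion_z(120, 4000, 165, 4000)
significant = abs(z) > 1.96  # ~95% confidence threshold
```

If `significant` is False, you don't have a finding yet; you have noise, and the honest move is to keep spending or call it inconclusive.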

Why volume became the default

Volume happened for a few reasons, none of them particularly good.

The first is that creative production is tangible. You can count it. 75 ads sounds better than 10, especially in an agency relationship or when reporting to a board. Volume creates the illusion of rigour.

The second is that Meta's auction genuinely rewards fresh creative, so the advice to "keep feeding the machine" got distorted into "produce as much as possible." The machine needs fuel, but it doesn't need waste.

The third is that thinking is harder than doing. Designing a proper test structure, isolating variables, building creative around a single question, and defining what a result actually means takes more time upfront than just briefing another batch of hooks.

So brands defaulted to output, and agencies defaulted to delivering it.

What creative intelligence looks like instead

It starts with fewer briefs, not more.

Before anything goes into production, there should be a question it's answering.

  • What do we not know about this audience?

  • What hook category haven't we properly stress-tested?

  • What's the gap between our best performer and everything else, and what does it tell us?

From that question, you build a creative that's designed to answer it, with as few moving parts as possible. One variable isolated. One clear hypothesis. A defined threshold for what a win or a loss actually means.
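That structure is simple enough to write down before launch. A sketch of what such a test record could look like, with hypothetical field names, thresholds, and CTR figures:

```python
from dataclasses import dataclass

@dataclass
class CreativeTest:
    question: str          # what we don't yet know
    variable: str          # the one thing that changes
    hypothesis: str        # expected outcome, stated before launch
    win_threshold: float   # relative lift that counts as a win

    def verdict(self, control_ctr: float, variant_ctr: float) -> str:
        """Read the result against the threshold defined up front."""
        lift = (variant_ctr - control_ctr) / control_ctr
        if lift >= self.win_threshold:
            return "win"
        if lift <= -self.win_threshold:
            return "loss"
        return "inconclusive"

# Hypothetical test: does a problem-first hook beat the control?
test = CreativeTest(
    question="Does a problem-first hook beat our product-first control?",
    variable="opening hook",
    hypothesis="problem-first lifts CTR by at least 10%",
    win_threshold=0.10,
)
result = test.verdict(control_ctr=0.020, variant_ctr=0.023)  # 15% lift → "win"
```

The point isn't the code; it's that every field is filled in before the ad ships, so the result can only be read one way.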

Then you ship it, read it properly, and let the findings change something.

That cycle, from question to hypothesis to test to finding to change, is what separates brands that compound their creative knowledge from brands that just produce.

The output that actually matters

The deliverable isn't ads. It's understanding.

Every creative test should leave you with something concrete: a hook angle that outperforms its category, a proof point that resonates more than expected, an audience segment that responds differently to the same message.

Something that makes the next brief smarter than the last one.

That's what a creative programme is actually for. Not to fill the auction with content. To systematically build a picture of what moves your customer, and then exploit that picture at scale.

Brands that get this right don't need 150 ads a month. They need 30 good ones, built from what they already know, designed to push the frontier of what they don't.

The ones still chasing volume alone are working harder and learning less.

TLDR

  • Most brands are producing creative volume, not running creative tests. There's a big difference.

  • A real test has a question before launch, one isolated variable, and a finding that changes the next brief.

  • Volume became the default because it's tangible, billable, and easier than thinking. That doesn't make it right.

  • Intelligent creative programmes start with fewer briefs, built around what you don't yet know.

  • The deliverable isn't ads. It's compounding knowledge about what moves your customer.

  • Ten smart ads built on real learning will outperform forty that aren't.

Ready to Break Your Growth Ceiling?

If you're doing $200K+ monthly and feel like you've hit a wall, this is exactly the type of work we do at Brick.

We don't just run ads. We diagnose constraints, build systematic testing frameworks, and execute strategies that actually scale.

If you want to explore what this could look like for your brand:

Until next week,

Toby.