Is statistical significance relevant for Meta ads?
Speed over certainty with Meta creative testing
Hi there,
When it comes to statistical significance testing, I usually do things by the book. That said, I’ll admit it’s tougher in the startup world, so I’ve learned to be flexible and bend the rules when needed.
But when we’re talking proper Meta ads testing, I still like to stick to the fundamentals—not dropping ads after a few clicks.
So when I interviewed Cedric Yarish, co-founder of AdManage.ai, about creative testing, it broke my heart to hear him say:
“Don’t wait for statistical significance, it’s going to limit how fast you can move.”
He said it so casually, as part of a broader conversation about common testing mistakes in Meta ad experiments.
I tried to collect myself as an interviewer while he continued:
“The top advertisers are launching hundreds of ads, so it’s worth giving up on significance and valuing the learnings overall, rather than trying to create a perfectly fair test setup. Even if you reupload an ad, [Meta] recognises what’s different and what’s not.”
It felt like the opposite of everything I’ve learned (and loved) about data. Statistical significance gives you confidence—the confidence to stand behind an experiment and trust its results.
But that isn’t where Cedric places his trust. He trusts Meta and the signals it provides.
And honestly? After hearing his reasoning—and seeing the data—I get it. Cedric has tested over 68,000 ads through AdManage.ai. He’s seen what works, when it works, and how to scale it.
Instead of obsessing over perfect splits or equal spend, Cedric focuses on relative performance. If a new ad comes in under your benchmark cost per acquisition (CPA)? That’s a win. If it’s way over? Kill it and move on.
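To make that concrete, here’s a rough sketch of what that relative-performance rule could look like as code. The benchmark, minimum spend, and kill multiplier are numbers I made up for illustration, not Cedric’s actual thresholds:

```python
# Toy keep/kill rule based on relative CPA, not statistical significance.
# BENCHMARK_CPA, MIN_SPEND, and KILL_MULTIPLIER are illustrative numbers,
# not thresholds from Cedric or Meta.

BENCHMARK_CPA = 25.00    # your target cost per acquisition
MIN_SPEND = 50.00        # don't judge an ad before it has spent this much
KILL_MULTIPLIER = 1.5    # "way over" benchmark = benchmark * this

def verdict(spend: float, purchases: int) -> str:
    """Classify an ad by its CPA relative to the benchmark."""
    if spend < MIN_SPEND:
        return "keep running"        # not enough signal yet
    if purchases == 0:
        return "kill"                # hit minimum spend with nothing to show
    cpa = spend / purchases
    if cpa <= BENCHMARK_CPA:
        return "winner, scale it"    # under benchmark is a win
    if cpa > BENCHMARK_CPA * KILL_MULTIPLIER:
        return "kill"                # way over benchmark, move on
    return "keep running"            # in between, let Meta keep optimising

# Hypothetical ads: (spend, purchases)
for name, (spend, purchases) in {
    "hook_a": (120.0, 6),
    "hook_b": (80.0, 1),
    "hook_c": (30.0, 0),
}.items():
    print(f"{name}: {verdict(spend, purchases)}")
```

The point isn’t the exact numbers; it’s that the decision depends only on how an ad stacks up against your benchmark, not on a significance test.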
The most interesting part of our chat? Cedric walked me through seven different testing frameworks he’s used and refined over the years.
Yes, seven. From beginner-friendly setups to high-volume creative pipelines, here’s a quick overview:
Framework 1: Simple Advantage+ campaign with 5 ads (great for scrappy early testing)
Framework 2: One ad per ad set (control freaks, this one’s for you)
Framework 3: Meta’s A/B testing feature (cleanest tests, but slow and manual)
Framework 4: New campaign per experiment (messy, but works well for ecom brands)
Framework 5: Themed ad sets with 5 creatives each (structured, scalable)
Framework 6: 50 ads per ad set (yes, 50, for true volume testers)
Framework 7: Flexible Ads (formerly Dynamic Creative, ideal for optimising small tweaks)
But what about eCommerce brands?
When Cedric broke down the frameworks, he was speaking primarily from an app testing perspective. But for my eCommerce readers, there are a few important nuances to keep in mind:
1. You can test inside Advantage+, but look at longer-term insights too
Cedric’s go-to setup for speed is an Advantage+ campaign with 5 ads. Meta auto-optimises, and you rotate creatives in and out weekly.
You can absolutely use this approach in eCommerce, but you must track performance outside Meta.
Advantage+ doesn’t give you clean breakdowns by ad, so pair it with tools like Triple Whale, Lifetimely, or Polar Analytics to assess post-click performance and deeper metrics.
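If you’d rather roll your own before committing to a tool, the idea is simple enough to sketch: join Meta’s ad-level spend export with your backend orders to get post-click CPA per ad. The file names and columns below are hypothetical, so adapt them to whatever your stack exports:

```python
import csv
from collections import defaultdict

# Join Meta's ad-level spend with backend order data to get post-click CPA.
# File names and column headers are hypothetical; adapt to whatever your
# analytics tool (Triple Whale, Lifetimely, Polar, ...) exports.

spend_by_ad = {}
with open("meta_ad_spend.csv") as f:       # columns: ad_name, spend
    for row in csv.DictReader(f):
        spend_by_ad[row["ad_name"]] = float(row["spend"])

orders_by_ad = defaultdict(int)
with open("backend_orders.csv") as f:      # columns: ad_name, order_id
    for row in csv.DictReader(f):
        orders_by_ad[row["ad_name"]] += 1

for ad, spend in sorted(spend_by_ad.items()):
    orders = orders_by_ad[ad]
    cpa = spend / orders if orders else float("inf")
    print(f"{ad}: spend {spend:.2f}, orders {orders}, post-click CPA {cpa:.2f}")
```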
2. Social proof is easier to keep in eCommerce
For apps, switching platforms (iOS to Android) often resets social proof—likes, comments, and shares disappear.
But in eCommerce, you can preserve it by keeping the post ID constant. This makes it easier to test quickly without sacrificing the momentum from a strong ad.
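If you’re doing this programmatically, the mechanism is the object_story_id field in Meta’s Marketing API: point a new ad creative at the existing post, and the new ad reuses it, social proof included. The sketch below is illustrative only; the IDs, token, and API version are placeholders, so check the current Marketing API docs before leaning on it:

```python
import json
import requests

# Sketch: reuse an existing post (and its likes/comments/shares) in a new
# ad via the Marketing API's object_story_id field. IDs, token, and API
# version are placeholders; verify against Meta's current docs.

API = "https://graph.facebook.com/v19.0"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
AD_ACCOUNT = "act_1234567890"
POST_ID = "PAGE_ID_POSTID"  # format: "<page_id>_<post_id>"

# 1) Create a creative that points at the existing post instead of new media.
creative = requests.post(
    f"{API}/{AD_ACCOUNT}/adcreatives",
    data={"object_story_id": POST_ID, "access_token": ACCESS_TOKEN},
).json()

# 2) Create the new ad (e.g. in a fresh test ad set) with that creative,
#    so the post's social proof carries over.
ad = requests.post(
    f"{API}/{AD_ACCOUNT}/ads",
    data={
        "name": "retest - same post id",
        "adset_id": "ADSET_ID",
        "creative": json.dumps({"creative_id": creative["id"]}),
        "status": "PAUSED",
        "access_token": ACCESS_TOKEN,
    },
).json()
print(ad)
```

(In Ads Manager, the manual equivalent is choosing “Use existing post” at the ad level.)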
3. Flexible Ads (formerly Dynamic Creative) are great for a later stage
Cedric calls Flexible Ads a “micro-testing lab,” great for minor optimisations once you’ve found a winner.
For eCommerce brands, I recommend using this later in your process to tweak hooks, visuals, and other elements to stretch the lifespan of high-performing creatives.
4. Advantage+ Shopping Campaigns (ASC) tend to be for scale
Some eCom brands use ASC for testing, but I generally recommend against it. If you go with Framework 4 (new campaign per experiment), use manual campaigns to test, and only move winners into ASC for scaling.
That said, I have seen brands succeed running multiple ASCs for testing and scaling. Like many things in eCommerce, your mileage may vary.
Fun tip for encouraging creativity in creatives
Cedric mentioned that they’ve sometimes added names to ads to show who suggested them. This can add a fun sense of competition and pride—who can rack up the most winning creatives?
If you try this, make sure to also reward the ‘learning ads.’
For example, you could recognise the person who generated the most insights or learnings from the ads they proposed.
Recommendation
Definitely check out the full interview if you haven’t already.
I also did a great interview with Marcus Burke a while back on Meta ad trends for 2025—it’s definitely worth reading. Especially as he said traditional UGC was dead 🙊.
If you’re struggling with high volumes of ads, also check out AdManage.ai, the platform Cedric built to test 50+ ads at scale without losing your mind.
I won’t completely let go of statistical significance and its importance—sorry, Ced—but I did learn a lot from our conversation.
At a certain point, when Meta has enough data, you have to trust its judgment when it isn’t giving an ad much attention or love. Sometimes, you just need to let it go and move on.
That said, even Cedric has his moments of doubt and occasionally retests ads, so there’s no need to let Meta run 100% wild either.
Till next week,
Daphne