# A Mathematical Exposé of AIGC

## We did the math so you don't have to

We get asked all the time: "So you guys are making AI-generated TikToks, right?" So, selfishly, we wrote this to drop in the chat and save our sanity each time the question pops up.

**Organic vs Paid**

Ultimately, the efficacy of organic content lies in its ability to get distribution. Paid content, on the other hand, is judged on its ability to turn X dollars of ad spend into X + something. Vaguely, this can be understood as ROAS (return on ad spend).

With AIGC for organic content, there's a good case for *why not*. The marginal opportunity cost is near zero (you can get around the cooldown periods by having multiple accounts, and creating new email accounts is free). If an AIGC video costs a buck to make, and the average video gets 1,000 views, then you're already beating the market, as the average paid CPM these days is ~$10. Theoretically, you can print free money with AI videos. This doesn't happen in practice, though. The biggest organic channels still rely on UGC for a reason, but that's a topic for another time.

The goal with paid content is maximizing efficacy. For any ad program running at scale, there will already be a "winner": a creative asset (or a few) that gets a good amount of spend and reliably turns X dollars into more than X dollars. For this essay, let's focus on the simplest efficacy metric, ROAS: revenue generated divided by dollars spent.

Let's model the efficacy of organic vs paid content. For organic, the formula looks like:

EV (per organic video) = (Views / 1,000) × RPM − Production cost

Where RPM is Revenue Per Mille (effectively, how many dollars you get from each incremental 1,000 views). Now, AIGC is pretty revolutionary here. The average UGC video costs ~$100; AIGC takes that down to ~$10, even accounting for some basic human edits. With no opportunity cost and free distribution, cranking out production and doing it cheaper is going to fundamentally change your organic marketing function. Even if we assume RPM is lower by, say, 50%, the equation works out. TL;DR: each new asset posted has a positive EV when your cost to distribute is near zero.
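The organic EV math is a one-liner. Here's a minimal sketch using the post's illustrative numbers ($1 AIGC video, 1,000 average views, and the ~$10 CPM used as a stand-in for RPM; the function name is ours):

```python
def organic_ev(views: float, rpm: float, production_cost: float) -> float:
    """Expected value of one organic video: revenue per mille minus production cost."""
    return (views / 1000) * rpm - production_cost

# Post's illustrative numbers: a $1 AIGC video averaging 1,000 views,
# valued at the ~$10 paid CPM. Positive EV per incremental post.
print(organic_ev(views=1000, rpm=10.0, production_cost=1.0))  # → 9.0
```

Every incremental video with positive EV is worth posting when distribution is free, which is exactly why volume works on organic.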

For paid, the formula looks very different:

ROAS = Revenue generated / (Ad spend + Creative cost + Testing cost)

There is no multiplier for the number of videos here. There's no reward for volume alone. Whether there is one ad or 100 ads running, the equation doesn't change. Each incremental ad costs money: not just to make, but also to test.

Just as your organic videos compete against each other, so do your paid ads. But instead of competing against millions of other organic videos from users globally, your ads are competing primarily against themselves (and then the rest of the world, but that part is often overstated). That incremental ad you just added? It's taking budget away from your winning assets.

Looking at our ads across millions of dollars in spend, 1 in 30 assets drives *half* the results in an account. In mature accounts, the number is closer to 1 in 50. Within those winners, there are winners of the winners, and the Pareto principle holds as we zoom in.

**Normal distributions and the doom of AIGC**

If you model any efficacy metric (CPC, ROAS, CPI, etc.), you see a pretty neat normal distribution shaping up. This will be critical to our analysis.

To simplify, let's define those "True Winners" as the 1-in-40 = top 2.5% of quality UGC, based on our empirical observations. The top 2.5% of a normal distribution sits two standard deviations above the mean, a neat number we'll reference later.

Assume the average quality UGC ad has a 1x ROAS and the standard deviation is roughly 0.5x. True Winners are then defined as ads with a 2x ROAS. This is based on our own empirical observations, and we'd like to think we make okay ads. *If your UGC ads are not good, this analysis fails.*

**Winning frequencies**

The frequency of a variable lying x or more standard deviations away from the mean of a normal distribution is:

P(|X − μ| ≥ xσ) = 1 − erf(x / √2)

Where erf is the error function, defined as:

erf(x) = (2 / √π) ∫₀ˣ e^(−t²) dt

But this takes into account low performers as well (i.e., ads with near-0x ROAS, the tail *below* the mean). So, you divide this number by two to get your true top-winner frequency. In the case of quality UGC, with x = 2:

P(True Winner) = (1 − erf(2 / √2)) / 2 ≈ 2.3%, i.e., roughly 1 in 40
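The one-tailed frequency above is easy to check with the Python standard library's `math.erf` (the function name below is ours, not from the post):

```python
from math import erf, sqrt

def winner_frequency(std_devs: float) -> float:
    """One-sided normal tail mass beyond `std_devs` sigmas: (1 - erf(x/√2)) / 2."""
    return (1 - erf(std_devs / sqrt(2))) / 2

f = winner_frequency(2.0)
# ≈ 0.0228, i.e., roughly 1 in 44; the post rounds this to 1 in 40.
print(f"{f:.4f} = 1 in {round(1 / f)}")
```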

#### The ultimate metric: cost per winner

Now we know roughly 1 in 40 UGC ads will become a "top winner." Let's say a UGC creative costs you $100 to produce. But to test the video, you need to allocate ad spend (remember, with paid, distribution isn't free). Let's say you're optimizing for a high-funnel event, say an install, to be lean with testing, and your cost per install (CPI) is $2. You need ~100 events to call the test statistically significant, so you're out $200 in spend per test. So we have:

Cost per winner (UGC) = ($100 creative + $200 test spend) × 40 ads per winner = $12,000

Now, let's look at AIGC. Let's say the average AI ad takes $10 to generate (accounting for some human edits, revisions, etc.). Conservatively, let's assume AIGC is 30% less effective as a baseline compared to other ads and follows the same normal distribution, as there is no reason to assume otherwise (i.e., assume the standard deviation is equivalent). So, to get a 2x ROAS ad, we need to be:

(2 − 0.7) / 0.5 = 2.6 standard deviations above the AIGC mean

Then the frequency of ads that hit a 2x ROAS becomes:

(1 − erf(2.6 / √2)) / 2 ≈ 0.47%, i.e., roughly 1 in 215

So, to get a winner with AIGC, you're looking at:

Cost per winner (AIGC) = ($10 creative + $200 test spend) × 215 ads per winner ≈ $45,150

A whole ~4x more expensive to get a winner vs. focusing on quality UGC. Paradoxically, AIGC ads cost more.

#### The inescapable mathematics

The numbers are pretty hard to escape. Even if we assume you have a low base install cost and don't need much spend to test (say $50/ad), the math becomes:

Cost per winner (UGC) = ($100 + $50) × 40 = $6,000

Cost per winner (AIGC) = ($10 + $50) × 215 ≈ $12,900

Given the aforementioned distribution of efficacy, your base efficacy matters a lot more than shots on target. Volume is cool, yes, but it means nothing if you're putting in trash.

To be a little more liberal here, let's say AI ads become really good: their base efficacy is only 15% lower than quality UGC, and you can get statistical confidence within $50 worth of ad spend per creative asset. Running the frequency equation again, to get a 2x ROAS we need to be (2 − 0.85) / 0.5 = 2.3 standard deviations out, which gives a frequency of:

(1 − erf(2.3 / √2)) / 2 ≈ 1.07%, i.e., roughly 1 in 93

Cost per winner (AIGC) = ($10 + $50) × 93 ≈ $5,580, which finally edges out UGC at the same $50 test budget (($100 + $50) × 40 = $6,000).
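The scenarios above can be swept in a few lines. Here's a sketch that plugs in the post's assumed numbers (all creative costs, test budgets, and effectiveness gaps are the post's modeling assumptions, not market data; `cost_per_winner` is our own helper):

```python
from math import erf, sqrt

def winner_frequency(std_devs: float) -> float:
    """One-sided normal tail mass beyond `std_devs` sigmas."""
    return (1 - erf(std_devs / sqrt(2))) / 2

def cost_per_winner(creative_cost: float, test_spend: float,
                    mean_roas: float, target_roas: float = 2.0,
                    sigma: float = 0.5) -> float:
    """Expected spend to find one ad at or above `target_roas`."""
    z = (target_roas - mean_roas) / sigma
    ads_per_winner = 1 / winner_frequency(z)
    return (creative_cost + test_spend) * ads_per_winner

scenarios = {
    "UGC, $200 test":        cost_per_winner(100, 200, mean_roas=1.0),
    "AIGC -30%, $200 test":  cost_per_winner(10, 200, mean_roas=0.7),
    "UGC, $50 test":         cost_per_winner(100, 50, mean_roas=1.0),
    "AIGC -15%, $50 test":   cost_per_winner(10, 50, mean_roas=0.85),
}
for name, cost in scenarios.items():
    print(f"{name}: ${cost:,.0f} per winner")
```

Without the post's rounding (1/40 instead of the exact 1/44, etc.), the exact figures shift slightly, but the conclusion holds: AIGC only wins in the last scenario, where its base efficacy is close to UGC and testing is cheap.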

Run the numbers a few times, and you quickly learn that AIGC *only* makes sense if your base efficacy is not far from UGC and you don't have to battle SKAN restrictions as hard, because you can get quick signal off small budgets. Our friends at Sociaaal do AIGC pretty well. Here's an example ad. It works because the UGC ads for entertainment apps wouldn't be far from the AIGC ones, and base install costs are rather low. They're the exception, not the rule.

For someone like Acorns, good luck getting AI ads nearly as effective as an ad like this. Plus, given the high base cost of actions in an industry like fintech, AIGC is mathematically doomed.

The math is stacked against low-efficacy content. With the pesky ATT/SKAN restrictions we mentioned, you need more and more data to be confident in efficacy. If content cost ~$0 to test, then AIGC may just be the future. But it costs to test. And, with a greedy market vying for every scintilla of attention with a war-chest of compelling ads, maximizing content *efficacy* should be priority numero uno. Trying to optimize the cost of content, in most cases, is being penny-wise and pound-foolish. Top ads can drive millions in revenue (we've made a few). So, focus on improving your base efficacy. Volume is a buzzword that doesn't matter if you're feeding in shitty ads.

# Appendix

Another thing to note: this testing budget isn't going down the drain (it's generating your results/revenue), but let's think about the base efficacy of content. At 1x ROAS base efficacy, your testing loss is basically just the cost of your creatives. But at 0.7x ROAS, you're actually losing money: not just the opportunity cost of not spending that incremental dollar on your top winner, but negative marketing/media efficiency. And let's not forget about brand. We're a performance agency at heart, but there's something to be said about brand dilution if you're just pumping out low-quality AIGC assets.