April 2026
I had a conversation this week with a team from our EF batch. They’re building AI-generated video - multi-scene, multi-character clips with production quality that would have required a studio two years ago. The product is genuinely impressive.
We sat across from each other describing our businesses and realized something interesting. We’re both trying to solve the same problem - how brands get video content at scale - but from completely opposite ends. They generate content with AI models. We distribute content through human creator networks.
What surprised both of us was the punchline: our human creators currently cost a fraction per video of what their AI-generated equivalents do - several times cheaper. And they convert better on organic channels.
That inversion isn’t a quirk of early-stage pricing. It reveals something structural about where value actually lives in the content stack. And the biggest story in AI this month - OpenAI shutting down Sora - makes the case even more clearly.
Content generation is being commoditized from every direction. The companies that win aren’t going to be the ones who make the cheapest video. They’re going to be the ones who own distribution.
On March 24, OpenAI shut down Sora. Not scaled it back. Killed it.
The numbers tell the story. Sora’s estimated inference cost was $15 million per day. Its total lifetime in-app revenue was $2.1 million. Downloads dropped 66% from their November 2025 peak within three months. Disney had signed a $1 billion licensing deal giving Sora access to over 200 copyrighted characters. No money ever changed hands. Sam Altman reportedly felt “terrible” telling Disney’s CEO - who found out less than an hour before the public announcement.
OpenAI is redirecting compute toward enterprise products and a coding model codenamed “Spud.” The message is clear: even the most well-funded company in AI couldn’t make consumer video generation economically viable.
This matters beyond OpenAI. For every company building the application layer on top of video models, one of the biggest players just exited. That means fewer providers at the model layer, less downward pressure on model pricing, and more regulatory uncertainty - not less.
But the deeper takeaway isn’t about Sora specifically. It’s about what Sora reveals about the content value chain. Making videos is a cost center. It always has been. The expensive part of content at scale was never production - it was coordination, distribution, and iteration. AI made production cheaper, but it didn’t touch the rest. And “the rest” is where the entire business model lives.
Here’s the thing about organic social in 2026: anyone can go viral. Zero followers, third video, 500K views. The algorithm doesn’t care who you are. It cares whether people watch, engage, and share.
That changes the game entirely. If virality is a function of volume and iteration rather than production value, then the competitive advantage isn’t making better videos. It’s testing more videos, faster, across more accounts, in more markets.
That’s an operations problem, not a generation problem.
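The volume-over-polish argument can be made concrete with a toy probability model. The numbers below are purely illustrative assumptions, not platform data: if each posted video independently has some small chance of breaking out, the odds of at least one hit grow much faster with volume than with per-video quality.

```python
# Hypothetical model: each posted video independently has a small chance p of
# breaking out. The probability of at least one hit across n videos is then
# 1 - (1 - p)^n, which rewards volume and iteration over per-video polish.
def p_at_least_one_hit(p_per_video: float, n_videos: int) -> float:
    """Probability that at least one of n independent videos breaks out."""
    return 1 - (1 - p_per_video) ** n_videos

# Illustrative numbers only: 3 polished clips with double the per-video hit
# rate vs. 100 quick creator posts at half that rate.
studio = p_at_least_one_hit(0.02, 3)     # ~6% chance of any hit
network = p_at_least_one_hit(0.01, 100)  # ~63% chance of any hit

print(f"3 polished videos:  {studio:.0%}")
print(f"100 creator posts: {network:.0%}")
```

Even with a lower hit rate per video, the high-volume strategy dominates - which is why the bottleneck is operational throughput, not generation quality.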
When we run a campaign, we’re not producing a handful of polished clips and hoping one hits. We’re deploying dozens of creators across multiple geographies, each posting multiple times per week, each on freshly warmed accounts. We do social listening to inform the briefs. We track performance daily. We cut underperformers weekly and scale winners programmatically. Then we take the organic content that converts best and repurpose it into paid - with real creator accounts behind it, which means whitelisting rights and higher trust signals.
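The weekly cut-and-scale loop described above can be sketched in a few lines. Everything here - the account names, the metrics, the quartile thresholds - is a hypothetical simplification for illustration, not our actual tooling.

```python
# Simplified sketch of a weekly creator-account review: rank accounts by
# conversion rate, cut the bottom quartile, flag the top quartile to scale.
# All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class CreatorAccount:
    handle: str
    weekly_views: int
    weekly_conversions: int

    @property
    def conversion_rate(self) -> float:
        return self.weekly_conversions / self.weekly_views if self.weekly_views else 0.0

def weekly_review(accounts, cut_fraction=0.25):
    """Return (cut, keep, scale) partitions ranked by conversion rate."""
    ranked = sorted(accounts, key=lambda a: a.conversion_rate)
    n_cut = int(len(ranked) * cut_fraction)
    cut, keep = ranked[:n_cut], ranked[n_cut:]
    scale = keep[-n_cut:] if n_cut else keep  # mirror the cut at the top
    return cut, keep, scale

accounts = [
    CreatorAccount("a1", 10_000, 40),
    CreatorAccount("a2", 50_000, 90),
    CreatorAccount("a3", 8_000, 60),
    CreatorAccount("a4", 120_000, 150),
]
cut, keep, scale = weekly_review(accounts)
print("cut:  ", [a.handle for a in cut])
print("scale:", [a.handle for a in scale])
```

The point of the sketch is the shape of the loop: performance data in, ruthless pruning out, winners fed back into paid - all of which sits downstream of how any individual video was made.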
The result: our organic content delivers CPMs at a third to a fifth of the cost of equivalent paid campaigns on Meta. Not because the videos are cinematic. Because they’re real, distributed through real accounts, and the platforms reward that.
One account manager now handles over 50 creator accounts. Five months ago that number was 15. That’s the kind of operational leverage that compounds - and it has nothing to do with how the video was made.
There’s a structural reason why distribution through human creators works better on organic channels, and it’s not going away.
Platforms are increasingly enforcing transparency around AI-generated content. TikTok removed over 51,000 synthetic media videos in the second half of 2025 - a 340% increase over the prior period. They now issue immediate strikes for unlabeled AI content. Meta introduced formal penalties for unoriginal content across Facebook and Instagram in mid-2025. And starting August 2026, the EU will mandate disclosure for all AI-generated content.
This isn’t a temporary crackdown. It reflects the platforms’ core incentive structure.
Think about it from TikTok’s perspective. If AI-generated content isn’t gated in some way, then in three years every video on the platform is synthetic. Every human creator disappears. That runs directly against the thesis of social media. Platforms need real people creating real content, because that’s what keeps other real people scrolling.
The data reflects this. According to Sprout Social’s research, 55% of consumers trust human-created content more than AI-generated content. Among Gen Z and Millennials, that number is two-thirds. The number one thing consumers said they want brands to prioritize in 2026? Human-generated content.
The data on paid campaigns tells a similar story. When users identify an ad as AI-generated, premium perception drops 17%, inspiration falls 19%, and purchase intent declines 14%. A 2024 Adobe Trust Report found that 64% of consumers said they would lose trust in a brand if they discovered its content was primarily AI-generated without disclosure. For organic social - where authenticity signals drive algorithmic distribution - the gap is wider still.
We’ve tested this ourselves. We ran AI-generated campaigns through our own accounts. Impressions were comparable. But click-through rates dropped meaningfully. The audience could tell, or the algorithm could tell, or both.
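A drop like that is easy to sanity-check statistically. A minimal version, using a standard two-proportion z-test - the click and impression counts below are hypothetical placeholders, not our campaign data:

```python
# Two-proportion z-test: is the CTR difference between two content variants
# larger than chance would explain? Counts below are hypothetical placeholders.
from math import sqrt, erf

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """z statistic and two-sided p-value for CTR(a) vs CTR(b)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Hypothetical: human-creator posts vs AI-generated posts, similar impressions.
z, p = two_proportion_z(clicks_a=900, imps_a=100_000,   # human: 0.90% CTR
                        clicks_b=650, imps_b=100_000)   # AI:    0.65% CTR
print(f"z = {z:.2f}, p = {p:.4f}")
```

With comparable impressions, even a fraction-of-a-percent CTR gap shows up as highly significant at this volume - which is what makes "impressions were comparable, clicks weren't" a meaningful finding rather than noise.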
Back to the conversation with our EF batchmates. The interesting thing is that we’re not really competing. We’re working at different layers of the same stack.
They’re building at the generation layer - making it possible to produce high-quality video without a camera or a studio. That has real applications. Product visualization. Internal corporate content. Niche creative work that traditional production would never justify. As AI models improve, those use cases will expand.
We’re building at the distribution and operations layer - making it possible to get content in front of millions of people through authentic channels at scale. That requires creator networks, account management, social listening, performance optimization, and geographic reach. None of which a generation model can provide on its own.
The question for anyone in content right now is: which layer are you building at, and which layer captures the most value?
I wrote in a previous post about how AI is collapsing the middle of the value chain across software. The same pattern applies here. Content generation is the middle layer. It’s getting cheaper from every direction simultaneously - AI models from above, human creator networks from below. When something gets commoditized from both sides, you don’t want to be the one selling it. You want to be selling what sits on top of it: the orchestration, the distribution, and the outcomes.
I don’t think AI video is going away. The technology is real, and it will get better. But I think the market has been asking the wrong question for the past year.
The question was never “can AI make a good video?” It can. The question is what happens after the video exists. Who posts it, where, through what account, in what market, with what iteration speed, and with what feedback loop connecting organic performance to paid strategy. That entire chain is where the value lives - and none of it gets solved by a better model.
The companies that figure out distribution at scale - real accounts, real humans, real geographic reach, real performance data - are going to be the ones that matter in content. Not because AI content fails, but because content itself is becoming abundant. When supply is infinite, the bottleneck shifts. It shifts to attention, trust, and reach. It shifts to the layer that connects what gets made with who actually sees it.
That’s the layer we’re building at. And honestly, it’s the layer I think most people are underestimating right now.