CapCut just shipped Dreamina Seedance 2.0 AI video generation — and most of you can’t use it.
ByteDance started rolling out Dreamina Seedance 2.0 inside CapCut on March 26, 2026. Text-to-video, image-to-video, reference-clip-to-video. Up to 15 seconds. Six aspect ratios. Built into an app that’s already installed on hundreds of millions of devices.
But here’s the catch — and it’s a big one. The initial rollout covers Brazil, Indonesia, Malaysia, Mexico, Philippines, Thailand, and Vietnam. That’s it. The US and EU are excluded. Not because of a slow rollout schedule. Because ByteDance is still navigating the regulatory mess around TikTok, and launching new AI capabilities in the US right now would be, let’s say, strategically unwise.
So we have a free AI video generator inside the world's most widely used editing app, available in seven countries, competing with paid tools that charge $12 to $35 per month. And the largest English-speaking creator markets can't touch it.
What You Need to Know
| Detail | Status |
|---|---|
| Feature | AI video generation (text, image, or reference clip input) |
| Model | Dreamina Seedance 2.0 (ByteDance) |
| Max length | 15 seconds per generation |
| Aspect ratios | 6 options (16:9, 9:16, 1:1, 4:3, 3:4, 21:9) |
| Price | Free (included in CapCut) |
| Available | Brazil, Indonesia, Malaysia, Mexico, Philippines, Thailand, Vietnam |
| Not available | US, EU, and everywhere else |
| Why | ByteDance regulatory situation with TikTok |

Who cares: creators in available regions who want free AI-generated B-roll, transitions, or concept clips. Everyone else is watching from the sidelines.
Three input modes. Give it a text prompt and it generates a video clip. Give it a still image and it animates the scene. Or give it a reference video clip and it generates something stylistically similar. All outputs max out at 15 seconds.
I haven’t been able to test it firsthand — I’m US-based, and VPN workarounds aren’t worth the hassle for a feature review. But based on demo footage coming out of Southeast Asian creator communities and ByteDance’s own Dreamina platform (where the model has been available independently since late 2025), here’s what we’re looking at.
The text-to-video output is comparable to what Runway Gen-3 was producing six months ago. Motion is mostly coherent. Human faces still hit uncanny valley at longer durations. Object persistence across the full 15 seconds gets wobbly. A coffee cup might shift position, a hand might gain a finger. Standard AI video limitations.
The image-to-video mode is where Seedance 2.0 seems strongest. Give it a product photo and it’ll create a slow camera orbit, add ambient movement, or animate elements within the scene. For short social content — a 5-second TikTok transition, an Instagram Story overlay — the quality is genuinely usable.
Reference clip mode is the most interesting and the least predictable. The idea: feed it a clip you like and it generates something with similar motion, pacing, and style. In practice, “similar” is doing heavy lifting. Sometimes it captures the essence of the reference. Sometimes it produces something adjacent but weird. The inconsistency is the kind of thing you’d notice after generating twenty clips and finding four you’d actually use.
The direct competitors charge real money.
| Tool | Price | Max Length | Input Types | Quality Level |
|---|---|---|---|---|
| CapCut (Seedance 2.0) | Free | 15 sec | Text, image, reference clip | Good for B-roll, inconsistent for hero content |
| Runway Gen-3 | $12-$35/mo | 10-16 sec | Text, image | Better coherence, especially faces |
| Adobe Firefly Video | Included with CC ($23-$60/mo) | 5 sec (current) | Text, image | More conservative but more controlled |
| Pika 2.1 | $8-$58/mo | 10 sec | Text, image | Strong stylization, weaker realism |
| Google Veo 2 | Via Vertex AI or YouTube tools | Variable | Text, image | Highest quality, limited access |
The gap between free and paid AI video has been narrowing for months. Seedance 2.0 inside CapCut is the first time a genuinely capable model has shipped at a zero-dollar price point inside an app people already have installed. That matters more than spec-sheet comparisons.
A creator in Jakarta who edits in CapCut can now generate AI B-roll without opening a browser tab, creating an account on another platform, or entering a credit card. That’s a different user experience than “sign up for Runway, pick a plan, learn a new interface, export, import into your editor.” The workflow integration is the product, not the model itself.
ByteDance can’t afford a new regulatory fire right now.
TikTok’s situation in the US has been an ongoing saga — potential bans, forced divestitures, congressional hearings. Launching a new AI capability that generates synthetic video content inside a ByteDance-owned app would hand critics fresh ammunition. “ByteDance is now generating deepfake-capable video content on American phones” writes itself as a headline.
So they launched in seven markets where ByteDance faces minimal regulatory friction and CapCut already has huge user bases. Smart business move. Frustrating if you’re a US-based creator who wanted free AI video generation inside your existing editor.
Will it come to the US? Probably, eventually, assuming ByteDance resolves the TikTok situation. But “eventually” could mean six months or two years. Nobody outside ByteDance’s legal team knows the timeline.
EU exclusion follows the same logic — the AI Act creates regulatory uncertainty around generative AI tools that ByteDance likely doesn’t want to navigate while also fighting fires in the US.
If you’re editing in CapCut in one of the seven launch countries, here’s what I’d actually try.
B-roll and transitions. This is the obvious use case. Need a 5-second establishing shot of a city skyline? A slow zoom into a product? An abstract transition between scenes? Generate it instead of hunting through stock footage libraries. At 15 seconds max, you’re not making hero content with this. You’re filling gaps.
Social content thumbnails-in-motion. Take your existing thumbnail or product photo and turn it into a short animated loop. Instagram Stories, TikTok opening hooks, YouTube Shorts intros. A static image that moves catches more attention than one that doesn’t. That’s just how the algorithm works.
Concept visualization. Trying to explain an abstract idea in a video essay? Generate a visual representation instead of spending an hour in After Effects building a motion graphic that’ll be on screen for four seconds.
What I wouldn’t use it for: anything where a human face needs to look right for more than a couple seconds. Anything that needs to cut together with real-world footage without looking off. Anything your audience will scrutinize. AI video is still obviously AI video when you push it.
I’ve written before about the free vs. paid tools calculation, and it keeps getting more complicated. CapCut adding AI video generation resets the math for a specific slice of creators.
If you’re paying $12/month for Runway primarily to generate short B-roll clips, and CapCut’s free output is 70-80% as good for that specific use case, the subscription is hard to justify. That’s $144 a year for a marginal quality improvement on clips that appear in your video for three seconds.
But if you’re doing serious AI video work — generating primary content, creating consistent characters across clips, building AI-assisted narratives — the paid tools still pull ahead on coherence, control, and consistency. Runway’s camera controls, Pika’s stylization options, and Adobe’s integration with the Creative Cloud ecosystem give you capabilities that “free model inside a mobile editor” can’t match.
The pattern is familiar. Free tools get you 80% of the way. The last 20% costs money. Whether that 20% matters depends entirely on what you’re making and who’s watching.
CapCut isn’t the only editor adding AI generation. The AI video tool space has been moving fast, and what you should use depends on your workflow.
If you’re a YouTube creator, Google’s own Veo tools are already rolling into YouTube Studio for Shorts. Those are free too, but limited to YouTube’s ecosystem.
If you’re doing long-form editing in a desktop NLE, Adobe’s Quick Cut and the Descript/Opus Clip comparison are more relevant to your workflow than anything happening in CapCut.
And if you’re building a full content pipeline as a solo creator, the question isn’t which AI video tool is best — it’s which one fits into your existing stack without adding friction.
CapCut’s advantage is that it’s already in the stack for millions of creators. No new app to learn. No new subscription to manage. The model just shows up where you’re already editing. That’s a distribution advantage, not a quality advantage. But distribution wins a lot of fights.
If you’re in the US or EU and wondering whether to hold off on a Runway or Pika subscription because CapCut’s free version is coming — don’t. There’s no announced timeline for US availability, and waiting months for a free tool that might be 80% as good as a $12/month tool you could use today is bad math if you need AI video now.
If you’re in one of the seven launch countries, try it. It’s free. It’s inside the editor you’re probably already using. Generate twenty clips, see how many are usable, and decide whether it replaces whatever stock footage or paid AI tool you’ve been using.
If you’re watching from a restricted market, the takeaway isn’t about CapCut specifically. It’s that free AI video generation has crossed the usability threshold. Seedance 2.0 inside CapCut is the first mass-market deployment, but it won’t be the last. The paid tools now have to justify their subscription against “free and decent” — and that pressure benefits everyone.
CapCut’s Dreamina Seedance 2.0 puts genuinely capable AI video generation inside a free app that hundreds of millions of people already use. The 15-second limit, seven-country rollout, and US exclusion are real constraints. The output quality is good enough for B-roll and social content, not good enough to replace dedicated AI video tools for serious production work.
The most important thing here isn’t the feature itself. It’s the price point. Free AI video generation inside a mainstream editor changes the floor for what creators expect to pay for synthetic content. Runway, Pika, and Adobe are now competing against zero dollars for the casual-use tier. Their response — whether that’s better quality, longer outputs, or more control — will shape what AI video tools look like for creators over the next year.
For now, most English-speaking creators are spectators. ByteDance’s regulatory situation turned what should have been a global feature launch into a regional one. When (if) it reaches the US, it’ll be worth testing immediately. Until then, the paid tools still own the market, and they know the clock is ticking.
ByteDance’s Dreamina Seedance 2.0 began rolling out inside CapCut on March 26, 2026. Initial availability: Brazil, Indonesia, Malaysia, Mexico, Philippines, Thailand, Vietnam. US and EU are excluded. Pricing, generation limits, and regional availability may change as the rollout expands.