The AI video generation space is heating up, and Alibaba is making a bold move. The Chinese tech giant has just open-sourced its latest text-to-video AI model, Wan 2.1, taking direct aim at OpenAI’s Sora—but with a major advantage: it’s free.
Why Open-Sourcing AI is a Game-Changer
One of the biggest reasons DeepSeek, the Chinese AI lab, went viral was its open-source approach. By letting anyone download its model weights and run them for free, DeepSeek quickly gained massive traction.
Now, Alibaba is following the same strategy with Wan 2.1, giving developers and creators free access to a high-performing AI video model that rivals some of the best in the world.

How Wan 2.1 Competes With Sora
Wan 2.1 isn’t just another AI video generator. It’s a powerful tool whose features match, and in some respects surpass, Sora’s. Here’s what makes it stand out:
- Multimodal Input – Users can create videos using text, images, and even existing videos as input prompts.
- Top Benchmark Performance – Wan 2.1 currently tops the VBench leaderboard, a public benchmark that scores text-to-video models across multiple quality dimensions.
- Cinematic Quality – The AI promises movie-like visuals, complete with stylized effects and realistic textures.
- Advanced Motion & Physics Simulation – It can render complex body movements, realistic object interactions, and smooth scene transitions.
- Text Support in Videos – Alibaba says Wan 2.1 is the first AI video model able to render both Chinese and English text within its generated videos.
- AI-Generated Sound & Music – The model can create matching sound effects and background music for its generated videos.
Incredible AI Video Examples
Alibaba’s demo clips prove just how powerful Wan 2.1 is. Some AI-generated examples include:
- A group of dogs riding bicycles
- Two cats engaged in a boxing match
- A team of dancers performing a choreographed routine
- A woman splashing out of the water
- An archer firing a bow
- A dog cutting tomatoes
These videos are so detailed and lifelike that some might even pass as real footage.
Easy Access for Everyone
Perhaps the biggest selling point? It’s free and open-source.
Unlike Sora, which is a closed model requiring access through OpenAI’s platform, Wan 2.1 is available for anyone to download and use. The flagship model, Wan2.1-T2V-14B, is the most powerful, but Alibaba also released a smaller variant, Wan2.1-T2V-1.3B, which requires just 8.19GB of VRAM. That means even users with a consumer-grade GPU can run it.
According to Alibaba, the smaller model can generate a 5-second 480p video on an RTX 4090 in about 4 minutes. That’s fast, even compared to some proprietary AI models.
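As a quick sanity check on that claim, here’s a back-of-envelope calculation of the implied per-frame compute cost. Note that the 16 fps output frame rate is my assumption for illustration; Alibaba’s announcement doesn’t specify it:

```python
# Rough per-frame cost implied by Alibaba's claim:
# a 5-second 480p clip in ~4 minutes on an RTX 4090.
clip_seconds = 5
fps = 16  # assumed output frame rate (not stated in the announcement)

total_frames = clip_seconds * fps       # 80 frames
generation_seconds = 4 * 60             # ~240 s of wall-clock time
seconds_per_frame = generation_seconds / total_frames

print(f"{total_frames} frames, ~{seconds_per_frame:.1f} s of compute per frame")
# → 80 frames, ~3.0 s of compute per frame
```

Roughly three seconds of compute per frame on a single consumer GPU is the kind of number that makes local experimentation practical rather than painful.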
Where to Get Wan 2.1
Developers and AI enthusiasts can access Wan 2.1 on:
- Hugging Face
- GitHub
If you know how to work with AI models, you can start generating your own AI videos today.
The Ethical Concerns of AI Video Generation
While this technology is impressive, it raises serious ethical concerns. AI-generated videos can be used for deepfakes, misinformation, and malicious content.
Unlike OpenAI, which has openly discussed its AI safety measures, Alibaba makes no mention of safety precautions on its website. It’s also unclear whether AI-generated videos from Wan will be watermarked or labeled to indicate they’re synthetic.
Alibaba vs. ByteDance: The Battle for AI Video Dominance
Alibaba isn’t the only Chinese company working on next-gen AI video tools. ByteDance, the parent company of TikTok, recently introduced OmniHuman-1, another impressive AI-powered video model.
With multiple Chinese companies pushing the boundaries of AI-generated video, OpenAI and Western competitors may have to adjust their pricing and access models to stay competitive.
Final Thoughts
Wan 2.1 is a massive leap forward for open-source AI video generation. With its ability to create cinematic-quality videos, support realistic physics and movement, and run on consumer hardware, it could disrupt the AI video industry in a big way.
But as with all powerful AI tools, responsible use will be crucial.
Would you try Wan 2.1? Let me know your thoughts in the comments!