Alibaba's Tongyi Lab has unveiled Wan2.1, an open-source video generation suite that sets a new standard in AI-driven video creation. Outperforming state-of-the-art models such as OpenAI's Sora, Wan2.1 delivers faster, higher-quality results while introducing capabilities not offered by earlier models.
The flagship model, Wan2.1-T2V-14B, has claimed the top spot on the VBench leaderboard, excelling in complex motion dynamics, real-world physics simulation, and text generation. Notably, Wan2.1 can render text in both English and Chinese within generated videos, a first for video generation models.
Wan2.1 generates videos at 2.5x the speed of competing models, making it a practical choice for professionals and hobbyists alike. It also avoids the artifacts and choppy motion common in AI-generated video, setting it apart from recent releases like Google's Veo 2.
Wan2.1 is a testament to Alibaba's commitment to open-source innovation. Following the success of Qwen, Alibaba continues to push the boundaries of AI, making cutting-edge technology accessible to all.
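Because the weights are released openly, anyone can pull them and experiment locally. The sketch below shows one plausible way to download the 14B text-to-video checkpoint with the Hugging Face Hub client; the repository id and local path are assumptions based on how such releases are typically published, not details confirmed in this article.

```python
# Minimal sketch: fetch the open-source Wan2.1-T2V-14B weights from Hugging Face.
# The repo_id and local_dir below are assumptions; check the official release page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Wan-AI/Wan2.1-T2V-14B",  # assumed repository id
    local_dir="./Wan2.1-T2V-14B",     # where to place the checkpoint locally
)
print(f"Wan2.1 weights downloaded to: {local_dir}")
```

From there, the checkpoint can be used with whatever inference scripts ship in the open-source repository.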
With Wan2.1, Alibaba is not just competing with industry leaders—it's redefining the future of video generation.