Diffusers · Safetensors · WanDMDPipeline
BrianChen1129 committed · Commit d6758c9 · verified · 1 Parent(s): c85a9be

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -24,9 +24,10 @@ base_model:
 
 ## Introduction
 
-This model is jointly finetuned with [DMD](https://arxiv.org/pdf/2405.14867) and [VSA](https://arxiv.org/pdf/2505.13389), based on [Wan-AI/Wan2.1-T2V-14B-Diffusers](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B-Diffusers). It supports efficient 3-step inference and generates high-quality videos at **61×448×832** resolution. We adopt the [FastVideo 480P Synthetic Wan dataset](https://huggingface.co/datasets/FastVideo/Wan-Syn_77x448x832_600k), consisting of 600k synthetic latents.
+We're excited to introduce the FastWan2.1 series, a new line of models finetuned with our novel **Sparse-distill** strategy. This approach jointly integrates DMD and VSA in a single training process, combining the benefits of **distillation** to shorten diffusion steps and **sparse attention** to reduce attention computation, enabling even faster video generation.
+
+FastWan2.1-T2V-14B-480P-Diffusers is built upon Wan-AI/Wan2.1-T2V-14B-Diffusers. It supports efficient **3-step inference** and produces high-quality videos at 61×448×832 resolution. For training, we use the FastVideo 480P Synthetic Wan dataset, which contains 600k synthetic latents.
 
----
 
 ## Model Overview
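
For readers of this commit, here is a minimal sketch of what the 3-step inference described above could look like with the `diffusers` library. It is an assumption-laden example, not the model card's official snippet: the repo id `FastVideo/FastWan2.1-T2V-14B-480P-Diffusers`, the use of the generic `WanPipeline` loader (the card lists a `WanDMDPipeline`), and the `guidance_scale` value are all guesses.

```python
# Hypothetical usage sketch for 3-step text-to-video inference.
# Assumptions: repo id, WanPipeline as the loader class, and guidance_scale=1.0.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

model_id = "FastVideo/FastWan2.1-T2V-14B-480P-Diffusers"  # assumed repo id
pipe = WanPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe.to("cuda")

frames = pipe(
    prompt="A curious raccoon exploring a neon-lit city street at night",
    height=448,             # matches the 61x448x832 output described above
    width=832,
    num_frames=61,
    num_inference_steps=3,  # the distilled model targets 3-step sampling
    guidance_scale=1.0,     # assumption: distilled models often skip CFG
).frames[0]

export_to_video(frames, "output.mp4", fps=16)
```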