---
license: cc-by-nc-4.0
task_categories:
  - text-to-video
size_categories:
  - n>1T
---

# Wan-synthesized Datasets for Video Diffusion Distillation

TurboDiffusion Paper | rCM Paper | Project Page | Code

This repository holds Wan-synthesized datasets used for training TurboDiffusion and rCM (Score-Regularized Continuous-Time Consistency Model).

These datasets consist of large-scale synthetic videos generated by Wan2.1 models (including the 1.3B and 14B variants) at various resolutions (480p and 720p). They are primarily used for diffusion-distillation tasks such as rCM training and Sparse-Linear Attention (SLA) alignment.

## Dataset Structure

The data is provided in webdataset format, consisting of sharded `.tar` files. A typical directory structure looks like:

```
Wan2.1_14B_480p_16:9_Euler-step100_shift-3.0_cfg-5.0_seed-0_250K/
  shard00000.tar
  shard00001.tar
  ...
```
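Following the webdataset convention, each sample inside a shard is stored as several files that share a key prefix (the basename up to the first dot); the exact field extensions in these shards are an assumption here. A minimal, library-free sketch of reading one shard with Python's standard `tarfile` module:

```python
import tarfile

def read_webdataset_samples(tar_path):
    """Group the members of a webdataset shard into samples,
    keyed by the basename up to the first dot."""
    samples = {}
    with tarfile.open(tar_path) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            # "000123.mp4" -> key "000123", field "mp4"
            key, _, field = member.name.partition(".")
            samples.setdefault(key, {})[field] = tar.extractfile(member).read()
    return samples
```

For example, `read_webdataset_samples("shard00000.tar")` returns a dict mapping each sample key to its raw per-field bytes; in practice a streaming loader (e.g. the `webdataset` library) is preferable for shards of this size.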

## Usage

### Downloading the data

You can download the dataset shards using `git lfs`:

```bash
# Make sure Git LFS is installed and initialized
git lfs install
git clone https://huggingface.co/datasets/worstcoder/Wan_datasets
```

### Loading in Training

In the TurboDiffusion/rCM training pipeline, the dataset can be accessed using a pattern-based loader:

```bash
# Example path pattern for training configuration
dataloader_train.tar_path_pattern="assets/datasets/Wan2.1_14B_480p_.../shard*.tar"
```
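Outside the training pipeline, the same glob pattern can be expanded with Python's standard `glob` module, e.g. to sanity-check that all shards of a download are present (the helper name below is ours, not part of the training code):

```python
import glob

def list_shards(pattern):
    """Expand a shard glob pattern (like tar_path_pattern above)
    into a sorted list of local shard paths."""
    return sorted(glob.glob(pattern))
```

Calling `list_shards(...)` on the dataset directory's `shard*.tar` pattern returns the shards in index order, so a quick `len()` check against the expected shard count catches incomplete downloads.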

## Citation

If you use this dataset, please cite the following works:

```bibtex
@article{zhang2025turbodiffusion,
  title={TurboDiffusion: Accelerating Video Diffusion Models by 100-200 Times},
  author={Zhang, Jintao and Zheng, Kaiwen and Jiang, Kai and Wang, Haoxu and Stoica, Ion and Gonzalez, Joseph E and Chen, Jianfei and Zhu, Jun},
  journal={arXiv preprint arXiv:2512.16093},
  year={2025}
}

@article{zheng2025rcm,
  title={Large Scale Diffusion Distillation via Score-Regularized Continuous-Time Consistency},
  author={Zheng, Kaiwen and Wang, Yuji and Ma, Qianli and Chen, Huayu and Zhang, Jintao and Balaji, Yogesh and Chen, Jianfei and Liu, Ming-Yu and Zhu, Jun and Zhang, Qinsheng},
  journal={arXiv preprint arXiv:2510.08431},
  year={2025}
}
```