Add metadata, paper link and citation
#1
by nielsr (HF Staff) - opened

README.md CHANGED

@@ -1,25 +1,47 @@
---
datasets:
- d3LLM/Ling-Coder-dParallel-merged-512-120k
tags:
- diffusion
- text-generation
- fast-inference
- d3llm
pipeline_tag: text-generation
---

# d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation

## Model Description

## Usage

For detailed usage instructions, evaluation scripts,

- Blog: **[https://hao-ai-lab.github.io/blogs/text-diffusion/](https://hao-ai-lab.github.io/blogs/text-diffusion/)**

---
datasets:
- d3LLM/Ling-Coder-dParallel-merged-512-120k
base_model: Dream-org/Dream-Coder-v0-Instruct-7B
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
tags:
- diffusion
- fast-inference
- d3llm
---

# d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation

**d3LLM-Dream-Coder** is an ultra-fast diffusion language model introduced in the paper [d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation](https://huggingface.co/papers/2601.07568). It is built on [Dream-org/Dream-Coder-v0-Instruct-7B](https://huggingface.co/Dream-org/Dream-Coder-v0-Instruct-7B).

## Model Description

d3LLM (pseuDo-Distilled Diffusion Large Language Model) is a framework designed to strike a balance between accuracy and parallelism in diffusion LLMs. It achieves up to 10× speedup over vanilla diffusion models like LLaDA/Dream and 5× speedup over autoregressive (AR) models.

The model utilizes two primary innovations:
- **Pseudo-Trajectory Distillation**: A training method that teaches the model which tokens can be decoded confidently at early steps.
- **Entropy-Based Multi-Block Decoding**: An inference strategy using a KV-cache refresh mechanism to maintain accuracy while maximizing parallelism (a simplified sketch of this idea follows the list).
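
To make the decoding side concrete, here is a minimal, illustrative sketch of entropy-thresholded parallel decoding. It is not the released implementation: the threshold value, the greedy fill, and the tensor shapes are assumptions for illustration, and the actual multi-block scheduling and KV-cache refresh logic live in the GitHub repository.

```python
# Illustrative sketch (not the official d3LLM code): commit, in one step, every masked
# position whose predictive entropy is below a threshold; the rest stay masked for the
# next refinement pass. Block scheduling and KV-cache refresh are intentionally omitted.
import torch
import torch.nn.functional as F

ENTROPY_THRESHOLD = 0.5  # assumed value; the real schedule comes from the repo's configs


def parallel_decode_step(logits: torch.Tensor, tokens: torch.Tensor, mask_id: int) -> torch.Tensor:
    """logits: [seq_len, vocab_size]; tokens: [seq_len] with mask_id at undecoded positions."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)  # per-position entropy
    masked = tokens.eq(mask_id)
    confident = masked & (entropy < ENTROPY_THRESHOLD)            # decode these in parallel
    if masked.any() and not confident.any():
        # Always commit at least the most confident masked token so decoding progresses.
        confident[entropy.masked_fill(~masked, float("inf")).argmin()] = True
    new_tokens = tokens.clone()
    new_tokens[confident] = probs.argmax(dim=-1)[confident]       # greedy fill of confident slots
    return new_tokens
```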
## Resources

- **Paper**: [d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation](https://huggingface.co/papers/2601.07568)
- **Repository**: [https://github.com/hao-ai-lab/d3LLM](https://github.com/hao-ai-lab/d3LLM)
- **Blog**: [https://hao-ai-lab.github.io/blogs/text-diffusion/](https://hao-ai-lab.github.io/blogs/text-diffusion/)
- **Demo**: [https://d3llm-team.github.io/](https://d3llm-team.github.io/)

## Usage

For detailed usage instructions, evaluation scripts, and training code, please refer to the official GitHub repository. Since the model uses a custom architecture, ensure you have `transformers==4.49.0` installed and use `trust_remote_code=True` when loading the model.
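
As a starting point, the snippet below shows the standard `transformers` pattern for loading a remote-code checkpoint. The repository id and the final generation call are assumptions for illustration; check the model card and the GitHub repository for the exact checkpoint name and the diffusion-specific generation API.

```python
# Minimal loading sketch, assuming transformers==4.49.0 and a CUDA GPU.
# The repo id below is a placeholder; see the model card / GitHub repo for the real one.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "d3LLM/d3LLM-Dream-Coder"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # bf16 keeps the 7B model within a single modern GPU
    trust_remote_code=True,       # required: the architecture is defined by the repo's custom code
).to("cuda").eval()

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# Generation goes through the diffusion-specific entry point shipped with the remote code;
# see the official repository for its name and for the parallel-decoding options.
```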
## Citation

```bibtex
@article{arxiv'26:d3llm,
  title   = {d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation},
  author  = {Yu-Yang Qian and Junda Su and Lanxiang Hu and Peiyuan Zhang and Zhijie Deng and Peng Zhao and Hao Zhang},
  journal = {ArXiv preprint},
  volume  = {arXiv:2601.07568},
  year    = {2026}
}
```