---
pipeline_tag: any-to-any
library_name: transformers
tags:
- text-to-image
- image-editing
- image-understanding
- vision-language
- multimodal
- unified-model
- teacher-model
- diffusion
license: mit
---

## 🌌 UniPic3-Teacher-Model
<div align="center">   
  <img src="logo.png" alt="Skywork Logo" width="500"> 
</div>

<p align="center">
  <a href="https://github.com/SkyworkAI/UniPic">
    <img src="https://img.shields.io/badge/GitHub-UniPic-blue?logo=github" alt="GitHub Repo">
  </a>
  <a href="https://github.com/SkyworkAI/UniPic/stargazers">
    <img src="https://img.shields.io/github/stars/SkyworkAI/UniPic?style=social" alt="GitHub Stars">
  </a>
  <a href="https://github.com/SkyworkAI/UniPic/network/members">
    <img src="https://img.shields.io/github/forks/SkyworkAI/UniPic?style=social" alt="GitHub Forks">
  </a>
</p>

## 📖 Introduction
<div align="center"> <img src="unipic3.png" alt="Model Teaser" width="720"> </div>

**UniPic3-Teacher-Model** is the **high-quality teacher diffusion model** used in the UniPic 3.0 framework.
It is trained with **full multi-step diffusion sampling** and optimized for **maximum perceptual quality, semantic consistency, and realism**.

This model serves as the **teacher backbone** for:
- **Distribution Matching Distillation (DMD)**
- **Consistency / trajectory distillation**
- **Few-step student model training**

Rather than being optimized for fast inference, the teacher model prioritizes **generation fidelity and stability**, providing a strong and reliable supervision signal for downstream distilled models.
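The intuition behind DMD-style supervision can be illustrated with a toy one-dimensional sketch: student samples are nudged along the difference between the teacher ("real") score and the student ("fake") score. This is purely conceptual, with Gaussians standing in for the actual diffusion models; it is not UniPic3 training code.

```python
import numpy as np

# Toy 1-D illustration of the Distribution Matching Distillation (DMD) idea:
# samples from the student distribution are pushed along s_real - s_fake,
# the gap between the teacher score and the current student score.
# Conceptual sketch only; Gaussians stand in for the real models.

def gaussian_score(x, mu, sigma):
    """Score function d/dx log p(x) of a Gaussian N(mu, sigma^2)."""
    return -(x - mu) / sigma**2

rng = np.random.default_rng(0)
teacher_mu, student_mu, sigma = 2.0, -1.0, 1.0
samples = rng.normal(student_mu, sigma, size=4096)

lr = 0.1
for _ in range(200):
    s_real = gaussian_score(samples, teacher_mu, sigma)
    # "Fake" score is estimated from the current student samples themselves.
    s_fake = gaussian_score(samples, samples.mean(), samples.std())
    samples += lr * (s_real - s_fake)

# The student's samples drift toward the teacher mean (about 2.0).
print(round(samples.mean(), 1))
```

In the actual DMD setup this score gap is backpropagated into the student generator's parameters rather than applied to samples directly, but the direction of the supervision signal is the same.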

---

## 🧠 Model Characteristics

- **Role**: Teacher model (not a distilled student)
- **Sampling**: Multi-step diffusion (high-fidelity)
- **Architecture**: Unified UniPic3 Transformer
- **Tasks Supported**:
  - Single-image editing
  - Multi-image composition (2–6 images)
  - Human–Object Interaction (HOI)
- **Resolution**: Flexible, within pixel budget constraints
- **Training Objective**:
  - Flow Matching / Diffusion loss
  - Used as teacher for DMD & consistency training
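The flow-matching objective listed above amounts to regressing a predicted velocity onto the interpolation velocity between noise and data. A minimal NumPy sketch of the loss computation (illustrative only; the placeholder `model` stands in for the UniPic3 transformer):

```python
import numpy as np

# Minimal sketch of the (rectified) flow-matching objective: sample a time t,
# linearly interpolate between noise x0 and data x1, and regress the model's
# predicted velocity onto the constant target velocity x1 - x0.

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 16))          # noise sample
x1 = rng.normal(size=(8, 16)) + 3.0    # "data" sample
t = rng.uniform(size=(8, 1))           # per-sample timestep in [0, 1]

x_t = (1.0 - t) * x0 + t * x1          # linear interpolant
v_target = x1 - x0                     # target velocity field

def model(x_t, t):
    # Placeholder network: a perfect predictor would return x1 - x0.
    # Returning zeros just demonstrates the loss computation.
    return np.zeros_like(x_t)

loss = np.mean((model(x_t, t) - v_target) ** 2)
print(loss > 0)  # True: the zero predictor pays the full flow-matching loss
```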

---

## 📊 Benchmarks
<div align="center"> <img src="unipic3_eval.png" alt="Benchmark Results" width="720"> </div>

This teacher model achieves **state-of-the-art performance** on:
- Image editing benchmarks
- Multi-image composition benchmarks

It provides **high-quality supervision targets** for distilled UniPic3 student models.

---

## ⚠️ Important Note

> **This repository hosts the teacher model.**  
> It is **not optimized for few-step inference**.

If you are looking for:
- **4–8 step fast inference**
- 🚀 **Deployment-friendly distilled models**

please refer to the **UniPic3-DMD / distilled checkpoints** instead.

---

## 🧠 Usage (Teacher Model)

### 1. Clone the Repository
```bash
git clone https://github.com/SkyworkAI/UniPic
cd UniPic-3
```

### 2. Set Up the Environment
```bash
conda create -n unipic3 python=3.10
conda activate unipic3
pip install -r requirements.txt
```


### 3. Batch Inference
```bash
TRANSFORMER_PATH="Skywork/Unipic3"

python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 --use_env \
    qwen_image_edit_fast/batch_inference.py \
    --jsonl_path data/val.jsonl \
    --output_dir work_dirs/output \
    --distributed \
    --num_inference_steps 50 \
    --true_cfg_scale 4.0 \
    --transformer "$TRANSFORMER_PATH" \
    --skip_existing
```
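The exact schema of `data/val.jsonl` is defined by `qwen_image_edit_fast/batch_inference.py` in the UniPic repository; the field names below (`prompt`, `images`) are hypothetical placeholders used only to illustrate the one-JSON-object-per-line format:

```python
import json

# Hypothetical example of building a val.jsonl file for batch inference.
# NOTE: the field names "prompt" and "images" are assumptions for
# illustration; consult batch_inference.py for the actual expected schema.

records = [
    {"prompt": "Replace the sky with a sunset", "images": ["inputs/001.png"]},
    {"prompt": "Compose the two subjects into one scene",
     "images": ["inputs/002_a.png", "inputs/002_b.png"]},
]

with open("val_example.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        # JSONL: one independent JSON object per line, one task per line.
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

with open("val_example.jsonl", encoding="utf-8") as f:
    lines = [json.loads(line) for line in f]
print(len(lines))  # 2
```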

## 📄 License
This model is released under the MIT License.

## Citation
If you use Skywork-UniPic in your research, please cite:
```bibtex
@article{wang2025skywork,
  title={Skywork unipic: Unified autoregressive modeling for visual understanding and generation},
  author={Wang, Peiyu and Peng, Yi and Gan, Yimeng and Hu, Liang and Xie, Tianyidan and Wang, Xiaokun and Wei, Yichen and Tang, Chuanxin and Zhu, Bo and Li, Changshi and others},
  journal={arXiv preprint arXiv:2508.03320},
  year={2025}
}
```

```bibtex
@article{wei2025skywork,
  title={Skywork unipic 2.0: Building kontext model with online rl for unified multimodal model},
  author={Wei, Hongyang and Xu, Baixin and Liu, Hongbo and Wu, Cyrus and Liu, Jie and Peng, Yi and Wang, Peiyu and Liu, Zexiang and He, Jingwen and Xietian, Yidan and others},
  journal={arXiv preprint arXiv:2509.04548},
  year={2025}
}
```

```bibtex
@article{wei2026skywork,
  title={Skywork UniPic 3.0: Unified Multi-Image Composition via Sequence Modeling},
  author={Wei, Hongyang and Liu, Hongbo and Wang, Zidong and Peng, Yi and Xu, Baixin and Wu, Size and Zhang, Xuying and He, Xianglong and Liu, Zexiang and Wang, Peiyu and others},
  journal={arXiv preprint arXiv:2601.15664},
  year={2026}
}
```