---
license: mit
---
# Unique3d-Normal-Diffuser Model Card
[🌟GitHub](https://github.com/TingtingLiao/unique3d_diffuser) | [🦸 Project Page](https://wukailu.github.io/Unique3D/) | [🔋MVImage Diffuser](https://huggingface.co/Luffuly/unique3d-mvimage-diffuser)

## Example
Note: the input image is expected to have a **white background**.

```python
import torch
from PIL import Image
from pipeline import Unique3dDiffusionPipeline

# Generation options
seed = -1
generator = torch.Generator(device="cuda").manual_seed(seed)
forward_args = dict(
    width=512,
    height=512,
    width_cond=512,
    height_cond=512,
    generator=generator,
    guidance_scale=1.5,
    num_inference_steps=30,
    num_images_per_prompt=1,
)

# Load the pipeline
pipe = Unique3dDiffusionPipeline.from_pretrained(
    "Luffuly/unique3d-normal-diffuser",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda")

# Load the input image (white background expected)
image = Image.open("image.png").convert("RGB")

# Run inference and save the first output image
out = pipe(image, **forward_args).images
out[0].save("out.png")
```
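If your source image has transparency rather than a white background, you can composite it onto white before passing it to the pipeline. A minimal sketch using Pillow (the `to_white_background` helper is not part of this repository, just an illustrative preprocessing step):

```python
from PIL import Image

def to_white_background(img: Image.Image) -> Image.Image:
    """Composite any transparency onto a white canvas and return an RGB image."""
    if img.mode in ("RGBA", "LA", "P"):
        rgba = img.convert("RGBA")
        canvas = Image.new("RGBA", rgba.size, (255, 255, 255, 255))
        rgba = Image.alpha_composite(canvas, rgba)
        return rgba.convert("RGB")
    return img.convert("RGB")

# Fully transparent pixels become white after compositing.
demo = Image.new("RGBA", (64, 64), (0, 0, 0, 0))
white = demo_rgb = to_white_background(demo)  # mode "RGB", pixels (255, 255, 255)
```

The resulting RGB image can then be fed to `pipe(image, **forward_args)` as in the example above.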
## Citation
```bibtex
@misc{wu2024unique3d,
  title={Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image},
  author={Kailu Wu and Fangfu Liu and Zhihan Cai and Runjie Yan and Hanyang Wang and Yating Hu and Yueqi Duan and Kaisheng Ma},
  year={2024},
  eprint={2405.20343},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```