Commit f579cef · Parent: 4ace6a2 · Update README.md
README.md CHANGED
@@ -46,10 +46,54 @@ One single flow of Versatile Diffusion contains a VAE, a diffuser, and a context
}
```

# Usage

You can use the model both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [SHI-Labs Versatile Diffusion codebase](https://github.com/SHI-Labs/Versatile-Diffusion).

## 🧨 Diffusers

🧨 Diffusers lets you use both a unified pipeline and more memory-efficient, task-specific pipelines.

## VersatileDiffusionPipeline

To use Versatile Diffusion for all tasks, it is recommended to use the [`VersatileDiffusionPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/versatile_diffusion#diffusers.VersatileDiffusionPipeline).

```py
from diffusers import VersatileDiffusionPipeline
import torch
import requests
from io import BytesIO
from PIL import Image

# let's download an initial image
url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg"

response = requests.get(url)
image = Image.open(BytesIO(response.content)).convert("RGB")

pipe = VersatileDiffusionPipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe.image_variation(image, generator=generator).images[0]
image.save("./car_variation.png")

# similarly with
# pipe.text_to_image(...)
# pipe.dual_guided(...)
```

### Task Specific

The task-specific pipelines load only the weights that are needed onto the GPU.
You can find all task-specific pipelines [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/versatile_diffusion#versatilediffusion).

You can use them as follows:

### Text to Image
```py
from diffusers import VersatileDiffusionTextToImagePipeline
import torch