Upload tmp_lora using SD-Hub
- .gitattributes +3 -0
- tmp_lora/Instant_Loss_2Koma__Bad_End_2Koma_PonyILSDSDXL.json +5 -0
- tmp_lora/Instant_Loss_2Koma__Bad_End_2Koma_PonyILSDSDXL.preview.png +3 -0
- tmp_lora/Instant_Loss_2Koma__Bad_End_2Koma_PonyILSDSDXL.safetensors +3 -0
- tmp_lora/WSSKX_WAI.json +1 -4
- tmp_lora/anatomy_helper.json +5 -0
- tmp_lora/anatomy_helper.preview.png +3 -0
- tmp_lora/anatomy_helper.safetensors +3 -0
- tmp_lora/highresbodyfix_v1.json +5 -0
- tmp_lora/highresbodyfix_v1.preview.png +3 -0
- tmp_lora/highresbodyfix_v1.safetensors +3 -0
- tmp_lora/sd_xl_dpo_lora_v1.json +1 -4
- tmp_lora/spo_sdxl_10ep_4k-data_lora_webui.json +1 -4
.gitattributes
CHANGED

@@ -356,3 +356,6 @@ Lora/omni-09.preview.png filter=lfs diff=lfs merge=lfs -text
 Lora/omniverse_gwen.preview.png filter=lfs diff=lfs merge=lfs -text
 Lora/ovgwen-10.preview.png filter=lfs diff=lfs merge=lfs -text
 Lora/sgb_ilxl_v1.preview.png filter=lfs diff=lfs merge=lfs -text
+tmp_lora/Instant_Loss_2Koma__Bad_End_2Koma_PonyILSDSDXL.preview.png filter=lfs diff=lfs merge=lfs -text
+tmp_lora/anatomy_helper.preview.png filter=lfs diff=lfs merge=lfs -text
+tmp_lora/highresbodyfix_v1.preview.png filter=lfs diff=lfs merge=lfs -text
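Each added `.gitattributes` line maps a path pattern to Git attributes (`filter=lfs diff=lfs merge=lfs` routes the file through Git LFS; `-text` unsets the text attribute). A minimal sketch of that syntax, using a hypothetical parser that is not part of this repository or of git-lfs:

```python
# Parse one .gitattributes line into (pattern, attributes).
# Hypothetical helper for illustration only.
def parse_gitattributes_line(line: str):
    pattern, *attrs = line.split()
    parsed = {}
    for attr in attrs:
        if "=" in attr:                 # key=value form, e.g. filter=lfs
            key, value = attr.split("=", 1)
            parsed[key] = value
        elif attr.startswith("-"):      # leading "-" unsets, e.g. -text
            parsed[attr[1:]] = False
        else:                           # bare name sets the attribute
            parsed[attr] = True
    return pattern, parsed

pattern, attrs = parse_gitattributes_line(
    "tmp_lora/anatomy_helper.preview.png filter=lfs diff=lfs merge=lfs -text"
)
```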
tmp_lora/Instant_Loss_2Koma__Bad_End_2Koma_PonyILSDSDXL.json
ADDED

@@ -0,0 +1,5 @@
+{
+    "sha256": "8485059B015201CA6DFB9CDB636B8F4423C652C0652C8A3CC974EFE4CFD01822",
+    "modelId": 1622099,
+    "modelVersionId": 1835826
+}
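Note that these SD-Hub sidecar JSONs store the digest uppercase, while the Git LFS pointers in this same commit record the identical digest lowercase, so any cross-check should compare case-insensitively. A hedged sketch (the helper name is invented for illustration):

```python
import hashlib
import json

def sha256_matches(metadata_json: str, file_bytes: bytes) -> bool:
    """Check that file_bytes hashes to the digest in the sidecar JSON.

    The JSON stores the sha256 uppercase; Git LFS pointers store it
    lowercase, so normalize before comparing.
    """
    expected = json.loads(metadata_json)["sha256"].lower()
    return hashlib.sha256(file_bytes).hexdigest() == expected
```

In practice you would hash a multi-hundred-megabyte `.safetensors` file in chunks rather than loading it whole.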
tmp_lora/Instant_Loss_2Koma__Bad_End_2Koma_PonyILSDSDXL.preview.png
ADDED (Git LFS)
tmp_lora/Instant_Loss_2Koma__Bad_End_2Koma_PonyILSDSDXL.safetensors
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8485059b015201ca6dfb9cdb636b8f4423c652c0652c8a3cc974efe4cfd01822
+size 228486148
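The three-line `.safetensors` entries above are Git LFS pointer files: each line is a space-separated key/value pair (`version`, `oid`, `size`) standing in for the real binary. A small sketch of reading one, assuming the spec-v1 layout shown in this diff:

```python
# Parse a Git LFS pointer file (spec v1) into a dict of its key/value lines.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # first space separates key from value
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:8485059b015201ca6dfb9cdb636b8f4423c652c0652c8a3cc974efe4cfd01822
size 228486148
"""
fields = parse_lfs_pointer(pointer)
```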
tmp_lora/WSSKX_WAI.json
CHANGED

@@ -1,8 +1,5 @@
 {
     "sha256": "E059009092DC8F24B19073E538D83090FC6E3D9FD9B0D6028814F6CBCD19837D",
     "modelId": 1116233,
-    "modelVersionId": 1261280,
-    "activation text": "",
-    "description": "!!! LoRA can be difficult to use !!!This is the WAI/ILLUSTRIOUS version of these loras https://civitai.com/models/1063903/hls-styles-noobai-xl.All information about each lora can be found in the \u201cAbout this version\u201d section.Many thanks to @wwHEY https://civitai.com/user/wwHEY \u2014 I couldn\u2019t have done it without her advice!But she's not responsible for the crappy quality of my LoRAs. x)",
-    "sd version": "Other"
+    "modelVersionId": 1261280
 }
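All three CHANGED JSON files receive the same treatment: everything except `sha256`, `modelId`, and `modelVersionId` is dropped. A sketch of that pruning as a function (hypothetical helper, assuming only the keys visible in this diff):

```python
import json

# Minimal identification keys retained by this commit.
KEEP = ("sha256", "modelId", "modelVersionId")

def prune_metadata(raw: str) -> str:
    """Drop extra fields ("activation text", "description", "sd version"),
    keeping only the keys in KEEP, as this commit does."""
    data = json.loads(raw)
    return json.dumps({k: data[k] for k in KEEP if k in data}, indent=4)
```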
tmp_lora/anatomy_helper.json
ADDED

@@ -0,0 +1,5 @@
+{
+    "sha256": "BF6A950036B7599212A2C68D65F3BA07B28689067E167915D2A0ECB2018C26CA",
+    "modelId": 1171869,
+    "modelVersionId": 1318504
+}
tmp_lora/anatomy_helper.preview.png
ADDED (Git LFS)
tmp_lora/anatomy_helper.safetensors
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf6a950036b7599212a2c68d65f3ba07b28689067e167915d2a0ecb2018c26ca
+size 228473940
tmp_lora/highresbodyfix_v1.json
ADDED

@@ -0,0 +1,5 @@
+{
+    "sha256": "9103B24124B9C359027A3EA53835DBB126EC2C2A41D81A116C45019D8F011564",
+    "modelId": 1082538,
+    "modelVersionId": 1215512
+}
tmp_lora/highresbodyfix_v1.preview.png
ADDED (Git LFS)
tmp_lora/highresbodyfix_v1.safetensors
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9103b24124b9c359027a3ea53835dbb126ec2c2a41d81a116c45019d8f011564
+size 109347768
tmp_lora/sd_xl_dpo_lora_v1.json
CHANGED

@@ -1,8 +1,5 @@
 {
     "sha256": "C100EC5708865A649C68912CE0E541FC69CB1973FE6543310B9B81A42E15ADA3",
     "modelId": 242825,
-    "modelVersionId": 273996,
-    "activation text": "",
-    "description": "What is DPO?DPO is Direct Preference Optimization, the name given to the process whereby a diffusion model is finetuned based on human-chosen images. Meihua Dang et. al. have trained Stable Diffusion 1.5 and Stable Diffusion XL using this method and the Pick-a-Pic v2 Dataset, which can be found at https://huggingface.co/datasets/yuvalkirstain/pickapic_v2, and wrote a paper about it at https://huggingface.co/papers/2311.12908.What does it Do?The trained DPO models have been observed to produce higher quality images than their untuned counterparts, with a significant emphasis on the adherence of the model to your prompt. These LoRA can bring that prompt adherence to other fine-tuned Stable Diffusion models.Who Trained This?These LoRA are based on the works of Meihua Dang (https://huggingface.co/mhdang) at https://huggingface.co/mhdang/dpo-sdxl-text2image-v1 and https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1, licensed under OpenRail++.How were these LoRA Made?They were created using Kohya SS by extracting them from other OpenRail++ licensed checkpoints on CivitAI and HuggingFace.1.5: https://civitai.com/models/240850/sd15-direct-preference-optimization-dpo extracted from https://huggingface.co/fp16-guy/Stable-Diffusion-v1-5_fp16_cleaned/blob/main/sd_1.5.safetensors.XL: https://civitai.com/models/238319/sd-xl-dpo-finetune-direct-preference-optimization extracted from https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensorsThese are also hosted on HuggingFace at https://huggingface.co/benjamin-paine/sd-dpo-offsets/",
-    "sd version": "SDXL"
+    "modelVersionId": 273996
 }
tmp_lora/spo_sdxl_10ep_4k-data_lora_webui.json
CHANGED

@@ -1,8 +1,5 @@
 {
     "sha256": "B6C2C16F3EF579885F10E94468D8F7196D09464002D116C115432207F4B1F8AB",
     "modelId": 510261,
-    "modelVersionId": 567119,
-    "activation text": "",
-    "description": "Aesthetic Post-Training Diffusion Models from Generic Preferences with Step-by-step PreferenceArxiv Paper https://arxiv.org/abs/2406.04314Github Code https://github.com/RockeyCoss/SPOProject Page https://rockeycoss.github.io/spo.github.io/AbstractGenerating visually appealing images is fundamental to modern text-to-image generation models. A potential solution to better aesthetics is direct preference optimization (DPO), which has been applied to diffusion models to improve general image quality including prompt alignment and aesthetics. Popular DPO methods propagate preference labels from clean image pairs to all the intermediate steps along the two generation trajectories. However, preference labels provided in existing datasets are blended with layout and aesthetic opinions, which would disagree with aesthetic preference. Even if aesthetic labels were provided (at substantial cost), it would be hard for the two-trajectory methods to capture nuanced visual differences at different steps.To improve aesthetics economically, this paper uses existing generic preference data and introduces step-by-step preference optimization (SPO) that discards the propagation strategy and allows fine-grained image details to be assessed. Specifically, at each denoising step, we 1) sample a pool of candidates by denoising from a shared noise latent, 2) use a step-aware preference model to find a suitable win-lose pair to supervise the diffusion model, and 3) randomly select one from the pool to initialize the next denoising step. This strategy ensures that diffusion models focus on the subtle, fine-grained visual differences instead of layout aspect. We find that aesthetic can be significantly enhanced by accumulating these improved minor differences.When fine-tuning Stable Diffusion v1.5 and SDXL, SPO yields significant improvements in aesthetics compared with existing DPO methods while not sacrificing image-text alignment compared with vanilla models. Moreover, SPO converges much faster than DPO methods due to the step-by-step alignment of fine-grained visual details. Code and model: https://rockeycoss.github.io/spo.github.io/Model DescriptionThis model is fine-tuned from stable-diffusion-xl-base-1.0 https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0. It has been trained on 4,000 prompts for 10 epochs. This checkpoint is a LoRA checkpoint. For more information, please visit here https://huggingface.co/SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep_LoRACitationIf you find our work useful, please consider giving us a star and citing our work.@article{liang2024step,\n title={Aesthetic Post-Training Diffusion Models from Generic Preferences with Step-by-step Preference Optimization},\n author={Liang, Zhanhao and Yuan, Yuhui and Gu, Shuyang and Chen, Bohan and Hang, Tiankai and Cheng, Mingxi and Li, Ji and Zheng, Liang},\n journal={arXiv preprint arXiv:2406.04314},\n year={2024}\n}",
-    "sd version": "SDXL"
+    "modelVersionId": 567119
 }