Model card auto-generated by SimpleTuner
README.md CHANGED
@@ -70,16 +70,16 @@ You may reuse the base model text encoder for inference.
 
 ## Training settings
 
-- Training epochs:
-- Training steps:
+- Training epochs: 2
+- Training steps: 50
 - Learning rate: 0.0001
 - Learning rate schedule: constant
 - Warmup steps: 0
 - Max grad value: 2.0
-- Effective batch size:
+- Effective batch size: 1
 - Micro-batch size: 1
 - Gradient accumulation steps: 1
-- Number of GPUs:
+- Number of GPUs: 1
 - Gradient checkpointing: True
 - Prediction type: epsilon (extra parameters=['training_scheduler_timestep_spacing=trailing', 'inference_scheduler_timestep_spacing=trailing'])
 - Optimizer: bnb-lion8bit
@@ -98,7 +98,7 @@ You may reuse the base model text encoder for inference.
 
 ### antelope-data
 - Repeats: 0
-- Total number of images:
+- Total number of images: 24
 - Total number of aspect buckets: 1
 - Resolution: 1.048576 megapixels
 - Cropped: True
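The batch-size lines filled in by this commit are mutually consistent: SimpleTuner's effective batch size is the micro-batch size multiplied by the gradient accumulation steps and the number of GPUs, which is why all of those entries read 1 here. A minimal sketch of that relationship in Python (the helper name is illustrative, not part of SimpleTuner):

```python
def effective_batch_size(micro_batch: int, grad_accum_steps: int, num_gpus: int) -> int:
    """Effective batch size as listed in the training settings above."""
    return micro_batch * grad_accum_steps * num_gpus

# Values from this commit: 1 micro-batch x 1 accumulation step x 1 GPU.
assert effective_batch_size(1, 1, 1) == 1
```

The `bnb-lion8bit` optimizer entry refers to the 8-bit Lion implementation shipped with bitsandbytes, run here with the constant learning rate of 0.0001 noted above.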
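On the dataset side, `Resolution: 1.048576 megapixels` is exactly 1,048,576 pixels, i.e. a 1024 x 1024 area; together with `Cropped: True` and a single aspect bucket this is consistent with square 1024 x 1024 training crops. A quick arithmetic check (plain Python, no SimpleTuner dependency assumed):

```python
import math

megapixels = 1.048576                   # value reported for antelope-data
pixels = round(megapixels * 1_000_000)  # 1_048_576 pixels
side = math.isqrt(pixels)               # 1024 for a square crop
assert side * side == pixels and side == 1024
```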