---
title: Controlnet Depth Generation
emoji: π
colorFrom: red
colorTo: purple
sdk: gradio
sdk_version: 5.38.0
app_file: app.py
pinned: false
license: mit
short_description: Interior design using controlnet depth model
---
# Stable Diffusion ControlNet Depth Demo

This Space demonstrates a Stable Diffusion model combined with a ControlNet model fine-tuned for depth, and includes automatic depth map estimation from your input image.
## How to use

1. **Upload an Input Image:** Provide any photo (e.g., of a room, an object, a scene). The app will automatically estimate its depth map.
2. **Enter a Text Prompt:** Describe the image you want to generate. The model will try to apply your prompt while respecting the structure derived from the depth map.
3. **Adjust Parameters:** Experiment with "Inference Steps" and "Guidance Scale" for different results.
4. Click **Submit** to generate the image.
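Under the hood, the estimated depth map has to be turned into the 3-channel conditioning image that ControlNet expects. A minimal sketch of that preprocessing step (the function name `depth_to_conditioning` is illustrative, not from the app's code):

```python
import numpy as np
from PIL import Image

def depth_to_conditioning(depth: np.ndarray) -> Image.Image:
    """Normalize a raw depth map to 0-255 and replicate it across
    3 channels, the format ControlNet takes as a conditioning image."""
    d = depth.astype(np.float32)
    rng = float(d.max() - d.min())
    # Scale to [0, 1]; a constant depth map becomes all zeros.
    d = (d - d.min()) / rng if rng > 0 else np.zeros_like(d)
    d = (d * 255.0).astype(np.uint8)
    return Image.fromarray(np.stack([d, d, d], axis=-1))  # H x W x 3 RGB

# Example with a synthetic depth ramp:
cond = depth_to_conditioning(np.linspace(0.0, 10.0, 64 * 64).reshape(64, 64))
```

The resulting image is passed to the ControlNet pipeline alongside your text prompt.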
## Model Details

- **Base Diffusion Model:** runwayml/stable-diffusion-v1-5 (downloaded from Hugging Face Hub)
- **ControlNet Model:** Fine-tuned for depth (uploaded as `./Output_ControlNet_Finetune`)
- **Depth Estimator:** Intel/dpt-hybrid-midas (downloaded from Hugging Face Hub)
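As a rough sketch, assembling these pieces with the `diffusers` library might look like this (the loading is wrapped in a function so nothing heavy runs on import; the exact dtypes and device placement in `app.py` may differ):

```python
# Model identifiers listed above; the local ControlNet path is the one
# uploaded with this Space.
BASE_MODEL = "runwayml/stable-diffusion-v1-5"
CONTROLNET_PATH = "./Output_ControlNet_Finetune"
DEPTH_MODEL = "Intel/dpt-hybrid-midas"

def load_pipeline():
    # Heavy imports kept inside the function so the sketch can be read
    # without torch/diffusers installed.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        CONTROLNET_PATH, torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        BASE_MODEL, controlnet=controlnet, torch_dtype=torch.float16
    )
    return pipe.to("cuda")
```

Calling `load_pipeline()` downloads the base model and depth estimator weights on first use, which is what makes the cold start slow.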
**Note:** These models are quite large, so the first generation after a "cold start" (when the Space wakes up) may take a few minutes while the models load. Subsequent generations will be faster.

Enjoy!
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference