# Fine-tuning for semantic segmentation using LoRA and 🤗 PEFT
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/peft/blob/main/examples/semantic_segmentation/semantic_segmentation_peft_lora.ipynb)
We provide a notebook (`semantic_segmentation_peft_lora.ipynb`) where we learn how to use [LoRA](https://huggingface.co/papers/2106.09685) from 🤗 PEFT to fine-tune a semantic segmentation model while ONLY using **14%** of the original trainable parameters of the model.
LoRA adds low-rank "update matrices" to certain blocks in the underlying model (in this case the attention blocks) and ONLY trains those matrices during fine-tuning. During inference, these update matrices are _merged_ with the original model parameters. For more details, check out the [original LoRA paper](https://huggingface.co/papers/2106.09685).
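
Below is a minimal sketch of this setup, not the notebook's exact code: it wraps a segmentation model with a LoRA adapter targeting the attention projections and then merges the updates back for inference. The checkpoint name (`nvidia/mit-b0`), label count, and the specific `LoraConfig` values are illustrative assumptions; see the notebook for the full training loop.

```python
from transformers import AutoModelForSemanticSegmentation
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint and label count; a new decode head is initialized.
model = AutoModelForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0", num_labels=150
)

# Inject low-rank update matrices into the attention projections only.
lora_config = LoraConfig(
    r=32,                               # rank of the update matrices
    lora_alpha=32,                      # scaling factor for the updates
    target_modules=["query", "value"],  # attention blocks get the adapters
    lora_dropout=0.1,
    modules_to_save=["decode_head"],    # keep the task head fully trainable
)
lora_model = get_peft_model(model, lora_config)
lora_model.print_trainable_parameters()  # shows the small trainable fraction

# ... fine-tune lora_model as usual ...

# After training, merge the LoRA updates into the base weights so
# inference incurs no extra latency from the adapter layers.
merged_model = lora_model.merge_and_unload()
```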