# torch.compile

In PEFT, [torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) works for some but not all features. It won't always work because PEFT is highly dynamic in certain places (loading and switching between multiple adapters, for instance), which can cause trouble for `torch.compile`. In other places, `torch.compile` may work, but won't be as fast as expected because of graph breaks.

If you don't see an error, it doesn't necessarily mean that `torch.compile` worked correctly: it might return an output, but the output may be incorrect. This guide describes what works with `torch.compile` and what doesn't. For your own testing, we recommend using the latest PyTorch version, as `torch.compile` is constantly being improved.

> [!TIP]
> Unless indicated otherwise, the default `torch.compile` settings were used.

## Training and inference with `torch.compile`

These features **work** with `torch.compile`. Everything listed below was tested with a causal LM (a minimal sketch is shown at the end of this guide):

- Training with `Trainer` from 🤗 transformers
- Training with a custom PyTorch loop
- Inference
- Generation

The following adapters were tested successfully:

- AdaLoRA
- BOFT
- Bone
- IA³
- Layer Norm Tuning
- LoHa
- LoKr
- LoRA
- LoRA + DoRA
- LoRA applied to embedding layers
- OFT
- VeRA
- HRA

## Advanced PEFT features with `torch.compile`

Below are some of the more advanced PEFT features that **work**. They were all tested with LoRA.

- `modules_to_save` (i.e. `config = LoraConfig(..., modules_to_save=...)`)
- Merging adapters (one or multiple)
- Merging multiple adapters into one adapter (i.e. calling `model.add_weighted_adapter(...)`)
- Using PEFT adapters with quantization (bitsandbytes)
- Disabling adapters (i.e. using `with model.disable_adapter()`)
- Unloading (i.e. calling `model.merge_and_unload()`)
- Mixed adapter batches (i.e. calling `model(batch, adapter_names=["__base__", "default", "other", ...])`); see the second sketch at the end of this guide
- Inference with multiple adapters (i.e. using `model.add_adapter` or `model.load_adapter` to load more than one adapter); for this, only call `torch.compile` _after_ loading all adapters, as shown in the sketch at the end of this guide

Generally, we can expect that if a feature works correctly with LoRA and is also supported by another adapter type, it should work for that adapter type as well.

## Test cases

All the use cases listed above are tested in [`peft/tests/test_torch_compile.py`](https://github.com/huggingface/peft/blob/main/tests/test_torch_compile.py). If you want to check in more detail how we tested a certain feature, please go to that file and look for the test that corresponds to your use case.

> [!TIP]
> If you have another use case where you know that `torch.compile` does or does not work with PEFT, please contribute by letting us know or by opening a PR to add this use case to the covered test cases.
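## Example sketches

For illustration, here is a minimal sketch of training and generating with a compiled LoRA model. The checkpoint name and hyperparameters are only examples; any causal LM supported by PEFT should work the same way.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "facebook/opt-125m"  # example checkpoint, any supported causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the base model with a LoRA adapter, then compile the whole PEFT model.
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM"))
model = torch.compile(model)

# A single training step with a custom PyTorch loop.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
batch = tokenizer("torch.compile is", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()

# Generation also works with the compiled model.
with torch.no_grad():
    output = model.generate(**batch, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```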
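When working with more than one adapter, the important detail is the ordering: load every adapter first and compile last. A minimal sketch, assuming two locally saved LoRA adapters (the paths and adapter names below are hypothetical placeholders):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Load the first adapter, then add further adapters to the same model.
model = PeftModel.from_pretrained(base, "path/to/adapter_a", adapter_name="adapter_a")
model.load_adapter("path/to/adapter_b", adapter_name="adapter_b")

# Only call torch.compile once all adapters have been loaded ...
model = torch.compile(model)

# ... then switch between them as usual.
model.set_adapter("adapter_b")
```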
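Mixed adapter batches follow the same pattern. Continuing from the previous sketch (so `model` is compiled and has `adapter_a` and `adapter_b` loaded), each sample in the batch is routed to the adapter named at its position, with `"__base__"` selecting the base model:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
inputs = tokenizer(
    ["first sample", "second sample", "third sample"],
    return_tensors="pt",
    padding=True,
)

# One adapter name per sample in the batch; mixed batches are an inference feature.
with torch.no_grad():
    output = model(**inputs, adapter_names=["__base__", "adapter_a", "adapter_b"])
```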