
# WaveFT: Wavelet Fine-Tuning

WaveFT is a novel parameter-efficient fine-tuning (PEFT) method that learns sparse updates in the wavelet domain of residual matrices. Unlike LoRA, whose trainable parameter count can only be adjusted in coarse, rank-sized increments, WaveFT gives fine-grained control over the number of trainable parameters by directly learning a sparse set of coefficients in the transformed space. These coefficients are then mapped back to the weight domain via the Inverse Discrete Wavelet Transform (IDWT), producing high-rank updates without incurring inference overhead.
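
To make the update mechanism concrete, the sketch below builds a toy update the way the paragraph describes: scatter a small budget of coefficients in an otherwise-zero wavelet-coefficient matrix, then apply a single-level 2D IDWT. This is a standalone illustration using PyWavelets, not the library's implementation; the matrix shape, the Haar (`db1`) wavelet, and the coefficient budget are all illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets

# Illustrative sizes: a 64x64 frozen weight matrix and a budget of 32
# trainable coefficients (both arbitrary choices for this sketch).
d_out, d_in = 64, 64
n_coeffs = 32

# Sparse coefficient matrix: zero everywhere except n_coeffs random entries.
rng = np.random.default_rng(0)
coeffs = np.zeros((d_out, d_in))
idx = rng.choice(d_out * d_in, size=n_coeffs, replace=False)
coeffs.flat[idx] = rng.normal(size=n_coeffs)

# Split into the four sub-bands a single-level 2D IDWT expects.
h, w = d_out // 2, d_in // 2
cA, cH = coeffs[:h, :w], coeffs[:h, w:]
cV, cD = coeffs[h:, :w], coeffs[h:, w:]

# Map back to the weight domain: this dense matrix is the update added to
# the frozen weight. Each 2D Haar basis function is an outer product, so
# the update's rank can grow up to n_coeffs, far beyond what LoRA could
# reach with only 32 parameters on a 64x64 matrix.
delta_w = pywt.idwt2((cA, (cH, cV, cD)), wavelet="db1")
print(delta_w.shape)                   # (64, 64)
print(np.linalg.matrix_rank(delta_w))  # typically close to n_coeffs
```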

WaveFT currently has the following constraint:

- Only `nn.Linear` layers are supported.

The abstract from the paper is:

*Efficiently adapting large foundation models is critical, especially with tight compute and memory budgets. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA offer limited granularity and effectiveness in few-parameter regimes. We propose Wavelet Fine-Tuning (WaveFT), a novel PEFT method that learns highly sparse updates in the wavelet domain of residual matrices. WaveFT allows precise control of trainable parameters, offering fine-grained capacity adjustment and excelling with remarkably low parameter count, potentially far fewer than LoRA’s minimum—ideal for extreme parameter-efficient scenarios. Evaluated on personalized text-to-image generation using Stable Diffusion XL as baseline, WaveFT significantly outperforms LoRA and other PEFT methods, especially at low parameter counts; achieving superior subject fidelity, prompt alignment, and image diversity.*
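
In practice, WaveFT plugs into the usual PEFT workflow: define a `WaveFTConfig` targeting the linear modules you want to adapt, then wrap the base model with `get_peft_model`. The minimal sketch below assumes this workflow; the `n_frequency` name for the coefficient budget is an assumption modeled on the similar FourierFT API, so consult the `WaveFTConfig` reference below for the exact arguments.

```python
# A minimal sketch of the standard PEFT workflow. The n_frequency argument
# name (the number of trainable wavelet coefficients) is an assumption based
# on the FourierFT-style API; check WaveFTConfig for the authoritative
# signature.
from transformers import AutoModelForCausalLM
from peft import WaveFTConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

config = WaveFTConfig(
    n_frequency=1000,                     # coefficient budget (assumed name)
    target_modules=["q_proj", "v_proj"],  # must be nn.Linear layers
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```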

## WaveFTConfig

[[autodoc]] tuners.waveft.config.WaveFTConfig

## WaveFTModel

[[autodoc]] tuners.waveft.model.WaveFTModel