---
license: mit
language:
- en
- zh
- ja
tags:
- speech
- singing
- singing voice
- audio
- music
- vocoder
- codec
- pytorch
---

## Aliasing-Free Neural Audio Synthesis

This is the official Hugging Face model repository for the paper **"[Aliasing-Free Neural Audio Synthesis](https://arxiv.org/abs/2512.20211)"**, the first work to achieve simple and efficient aliasing-free, upsampling-based neural audio generation among neural vocoders and codecs.

For more details, please visit our [GitHub Repository](https://github.com/sizigi/AliasingFreeNeuralAudioSynthesis).

## Model Checkpoints

This repository contains the following checkpoints:

| Model Name        | Directory                    | Description                                       |
| ----------------- | ---------------------------- | ------------------------------------------------- |
| **Pupu-Vocoder_Small**            | `./pupuvocoder/*`            | 14M parameter small version of Pupu-Vocoder.   |
| **Pupu-Vocoder_Large** | `./pupuvocoder_large/*` | 122M parameter large version of Pupu-Vocoder.   |
| **Pupu-Codec_Small**       | `./pupucodec/*`       | 32M parameter small version of Pupu-Codec. |
| **Pupu-Codec_Large**       | `./pupucodec_large/*`       | 119M parameter large version of Pupu-Codec. |
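
If you only need one of these checkpoints, a pattern-filtered download avoids pulling the whole repository. Below is a minimal sketch using `huggingface-cli download`; `<org>/<repo>` is a placeholder for this model repository's id on the Hugging Face Hub, and the target directory is arbitrary.

```bash
# Sketch: fetch only the small Pupu-Vocoder checkpoint from this repository.
# Replace <org>/<repo> with this model repo's id on the Hugging Face Hub.
huggingface-cli download <org>/<repo> \
  --include "pupuvocoder/*" \
  --local-dir ./checkpoints
```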

## How to use

Place the pretrained models in the following directory of our official repository:

```bash
AliasingFreeNeuralAudioSynthesis/experiments
```

Then follow the instructions in the repository to resume training, fine-tune, or run inference with our pretrained checkpoints.
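
As a concrete sketch (the checkpoint path below is an assumption; point it at wherever you downloaded this repository's files), the setup could look like:

```bash
# Sketch: clone the official code repository and copy the pretrained
# checkpoint directories into its experiments folder.
# /path/to/checkpoints is a placeholder for your local copy of this model repo.
git clone https://github.com/sizigi/AliasingFreeNeuralAudioSynthesis.git
mkdir -p AliasingFreeNeuralAudioSynthesis/experiments
cp -r /path/to/checkpoints/pupuvocoder \
      /path/to/checkpoints/pupuvocoder_large \
      /path/to/checkpoints/pupucodec \
      /path/to/checkpoints/pupucodec_large \
      AliasingFreeNeuralAudioSynthesis/experiments/
```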

## Citation

```bibtex
@article{afgen,
  title        = {Aliasing-Free Neural Audio Synthesis},
  author       = {Yicheng Gu and Junan Zhang and Chaoren Wang and Jerry Li and Zhizheng Wu and Lauri Juvela},
  year         = {2025},
  journal      = {arXiv:2512.20211},
}
```