# Hallo4: High-Fidelity Dynamic Portrait Animation via Direct Preference Optimization
## 📸 Showcase
## ⚙️ Installation
- System requirement: Ubuntu 20.04/Ubuntu 22.04, CUDA 12.1
- Tested GPUs: H100
Download the code:

```bash
git clone https://github.com/fudan-generative-vision/hallo4
cd hallo4
```
Create a conda environment:

```bash
conda create -n hallo python=3.10
conda activate hallo
```
Install packages with pip:

```bash
pip install -r requirements.txt
```
In addition, ffmpeg is required:

```bash
apt-get install ffmpeg
```
## 📥 Download Pretrained Models
You can easily get all pretrained models required by inference from our HuggingFace repo.
Use huggingface-cli to download the models:

```bash
cd $ProjectRootDir
pip install "huggingface_hub[cli]"
huggingface-cli download fudan-generative-ai/hallo4 --local-dir ./pretrained_models
```
Finally, these pretrained models should be organized as follows:
```text
./pretrained_models/
|-- hallo4/
|   `-- model_weight.pth
|-- Wan2.1_Encoders/
|   |-- Wan2.1_VAE.pth
|   `-- models_t5_umt5-xxl-enc-bf16.pth
|-- audio_separator/
|   |-- download_checks.json
|   |-- mdx_model_data.json
|   |-- vr_model_data.json
|   `-- Kim_Vocal_2.onnx
`-- wav2vec/
    `-- wav2vec2-base-960h/
        |-- config.json
        |-- feature_extractor_config.json
        |-- model.safetensors
        |-- preprocessor_config.json
        |-- special_tokens_map.json
        |-- tokenizer_config.json
        `-- vocab.json
```
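After downloading, a quick sanity check can confirm the key files are in place. The sketch below verifies a representative subset of the tree; the helper name and the file list are our own, not part of the Hallo4 codebase:

```python
from pathlib import Path

# A representative subset of the expected checkpoint files (not exhaustive).
EXPECTED = [
    "hallo4/model_weight.pth",
    "Wan2.1_Encoders/Wan2.1_VAE.pth",
    "Wan2.1_Encoders/models_t5_umt5-xxl-enc-bf16.pth",
    "audio_separator/Kim_Vocal_2.onnx",
    "wav2vec/wav2vec2-base-960h/model.safetensors",
]

def missing_models(root="./pretrained_models"):
    """Return the expected files that are missing under `root`."""
    base = Path(root)
    return [rel for rel in EXPECTED if not (base / rel).is_file()]

if __name__ == "__main__":
    missing = missing_models()
    print("OK" if not missing else "Missing:\n  " + "\n  ".join(missing))
```

If the script reports missing files, re-run the huggingface-cli download command above.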
## 🛠️ Prepare Inference Data
Hallo4 has some special requirements on inference data due to limitations of our training:
- The reference image should have an aspect ratio between 1:1 and 480:832.
- The driving audio must be in WAV format.
- The audio must be in English, since our training datasets contain only this language.
- Ensure the vocals in the audio are clear; background music is acceptable.
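The image and audio constraints above can be pre-checked before launching inference. A minimal sketch using only the Python standard library (the helper names are ours, not part of the Hallo4 API):

```python
import wave

# Aspect-ratio bounds from the list above: 480:832 (portrait) up to 1:1.
MIN_RATIO = 480 / 832
MAX_RATIO = 1.0

def aspect_ratio_ok(width, height):
    """True if width/height lies within the supported range."""
    ratio = width / height
    return MIN_RATIO <= ratio <= MAX_RATIO

def is_wav(path):
    """True if `path` opens as a valid WAV file (stdlib `wave` module)."""
    try:
        with wave.open(path, "rb") as f:
            return f.getnframes() >= 0
    except (wave.Error, EOFError, FileNotFoundError):
        return False
```

For example, a 512x512 reference image passes the ratio check, while an 832x480 landscape image does not. Vocal clarity is not checked here; the bundled audio_separator model handles vocal isolation at inference time.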
## 🎮 Run Inference
To run a simple demo, just use our provided shell script:

```bash
bash inf.sh
```
## ⚠️ Social Risks and Mitigations
The development of portrait image animation technologies driven by audio inputs poses social risks, such as the ethical implications of creating realistic portraits that could be misused for deepfakes. To mitigate these risks, it is crucial to establish ethical guidelines and responsible use practices. Privacy and consent concerns also arise from using individuals' images and voices. Addressing these involves transparent data usage policies, informed consent, and safeguarding privacy rights. By addressing these risks and implementing mitigations, the research aims to ensure the responsible and ethical development of this technology.
## 🤗 Acknowledgements
This model is a fine-tuned derivative of the WAN2.1-1.3B model. WAN is an open-source video generation model developed by the WAN team. Its original code and model parameters are governed by the WAN LICENSE.
As a derivative work of WAN, the use, distribution, and modification of this model must comply with the license terms of WAN.