
Hallo4: High-Fidelity Dynamic Portrait Animation via Direct Preference Optimization

Jiahao Cui1*  Baoyou Chen1*  Mingwang Xu1*  Hanlin Shang1  Yuxuan Chen1
Yun Zhan1  Zilong Dong5  Yao Yao4  Jingdong Wang2  Siyu Zhu1,3✉️
1Fudan University  2Baidu Inc  3Shanghai Innovative Institute
4Nanjing University  5Alibaba Group


📸 Showcase

⚙️ Installation

  • System requirement: Ubuntu 20.04/Ubuntu 22.04, CUDA 12.1 (you can verify your setup as shown below)
  • Tested GPUs: H100
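
To sanity-check the environment before installing (a minimal sketch; nvidia-smi and nvcc are the standard NVIDIA tools, though your install paths may differ):

  # GPU and driver visibility; the tested H100 should appear here
  nvidia-smi
  # CUDA toolkit version; expecting release 12.1
  nvcc --version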

Download the code:

  git clone https://github.com/fudan-generative-vision/hallo4
  cd hallo4

Create conda environment:

  conda create -n hallo python=3.10
  conda activate hallo

Install packages with pip:

  pip install -r requirements.txt

In addition, ffmpeg is needed:

  apt-get install ffmpeg

📥 Download Pretrained Models

You can easily get all pretrained models required for inference from our HuggingFace repo.

Use huggingface-cli to download the models:

  cd $ProjectRootDir
  pip install "huggingface_hub[cli]"
  huggingface-cli download fudan-generative-ai/hallo4 --local-dir ./pretrained_models

Finally, these pretrained models should be organized as follows:

./pretrained_models/
|-- hallo4/
|   `-- model_weight.pth
|-- Wan2.1_Encoders/
|   |-- Wan2.1_VAE.pth
|   `-- models_t5_umt5-xxl-enc-bf16.pth
|-- audio_separator/
|   |-- download_checks.json
|   |-- mdx_model_data.json
|   |-- vr_model_data.json
|   `-- Kim_Vocal_2.onnx
`-- wav2vec/
    `-- wav2vec2-base-960h/
        |-- config.json
        |-- feature_extractor_config.json
        |-- model.safetensors
        |-- preprocessor_config.json
        |-- special_tokens_map.json
        |-- tokenizer_config.json
        `-- vocab.json
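
To confirm the download matches this layout, you can list the tree (a minimal sketch using the standard find utility):

  # Show everything two levels deep under pretrained_models
  find ./pretrained_models -maxdepth 2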

๐Ÿ› ๏ธ Prepare Inference Data

Hallo4 has some special requirements on inference data due to the limitations of our training:

  1. The reference image should have an aspect ratio between 1:1 and 480:832.
  2. The driving audio must be in WAV format (see the conversion sketch after this list).
  3. The audio must be in English, since our training datasets contain only this language.
  4. Ensure the vocals in the audio are clear; background music is acceptable.
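
If your audio is in another format, ffmpeg can convert it to WAV (a minimal sketch; the 16 kHz mono setting is an assumption based on the bundled wav2vec2-base-960h, which expects 16 kHz input):

  # Convert any input (e.g. MP3) to 16 kHz mono WAV
  ffmpeg -i input.mp3 -ar 16000 -ac 1 driving_audio.wav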

🎮 Run Inference

To run a simple demo, just use the provided shell script:

  bash inf.sh
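
The script presumably wraps a Python entry point; a hypothetical direct invocation is sketched below (every flag name here is an assumption for illustration, not the repository's actual CLI; check inf.sh for the real entry point and arguments):

  # Hypothetical flags; inspect inf.sh for the actual interface
  python inference.py \
      --ref_image examples/ref.png \
      --audio examples/driving_audio.wav \
      --output output/result.mp4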

โš ๏ธ Social Risks and Mitigations

The development of portrait image animation technologies driven by audio inputs poses social risks, such as the ethical implications of creating realistic portraits that could be misused for deepfakes. To mitigate these risks, it is crucial to establish ethical guidelines and responsible use practices. Privacy and consent concerns also arise from using individuals' images and voices. Addressing these involves transparent data usage policies, informed consent, and safeguarding privacy rights. By addressing these risks and implementing mitigations, the research aims to ensure the responsible and ethical development of this technology.

🤗 Acknowledgements

This model is a fine-tuned derivative of the WAN2.1-1.3B model. WAN is an open-source video generation model developed by the WAN team. Its original code and model parameters are governed by the WAN LICENSE.

As this model is a derivative work of WAN, its use, distribution, and modification must comply with the WAN license terms.
