# ERNIE-4.5-VL-424B-A47B-Base

## ERNIE 4.5 Highlights

The advanced capabilities of the ERNIE 4.5 models, particularly the MoE-based A47B and A3B series, are underpinned by several key technical innovations:

- **Multimodal MoE Pretraining:** Our models are jointly trained on both textual and visual modalities to better capture the nuances of multimodal information and to improve performance on tasks involving text generation, image understanding, and cross-modal reasoning. To achieve this without one modality hindering the learning of the other, we designed a heterogeneous MoE structure, incorporated three-dimensional rotary embeddings, and employed a router orthogonal loss together with a multimodal token-balanced loss. These architectural choices ensure that both modalities are effectively represented, allowing for mutual reinforcement during training.

- **Scaling-Efficient Architecture and Infrastructure:** To train large multimodal MoE models efficiently, we introduce a novel heterogeneous hybrid parallelism and a multi-level load-balancing strategy. By combining on-device expert parallelism, memory-efficient pipeline scheduling, and FP8 mixed precision, we achieve near-ideal pre-training performance. For inference, we propose a quantization method with collaborative parallelism among multiple experts to achieve lossless quantization. Built on PaddlePaddle, ERNIE 4.5 delivers high-performance inference across a wide range of hardware platforms.

- **Modality-Specific Post-training:** To meet the diverse requirements of real-world applications, we fine-tuned variants of the pretrained model for specific modalities. Our LLMs are optimized for general-purpose language understanding and generation, while the VLMs focus on visual-language understanding and support both thinking and non-thinking modes.
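The exact formulation of the router orthogonal loss is given in the technical report; as a rough illustration only, an orthogonality penalty on per-expert router vectors can be sketched as below. All names and shapes here are hypothetical and do not reflect the actual ERNIE implementation:

```python
import numpy as np

def router_orthogonal_loss(router_weights: np.ndarray) -> float:
    """Penalize overlap between per-expert router vectors.

    router_weights: (num_experts, hidden_dim) matrix; each row maps a hidden
    state to one expert's routing logit. Encouraging the rows to be mutually
    orthogonal pushes experts toward specializing on distinct features.
    """
    # Normalize each expert's routing vector to unit length.
    w = router_weights / np.linalg.norm(router_weights, axis=1, keepdims=True)
    gram = w @ w.T                      # pairwise cosine similarities
    off_diag = gram - np.eye(len(w))    # ideal Gram matrix is the identity
    # Mean squared off-diagonal similarity.
    return float(np.sum(off_diag ** 2) / (len(w) * (len(w) - 1)))

# Mutually orthogonal routers incur zero penalty; identical (collapsed)
# routers incur the maximum penalty.
print(router_orthogonal_loss(np.eye(4, 16)))     # 0.0
print(router_orthogonal_loss(np.ones((4, 16))))  # 1.0
```

Minimizing such a term alongside the usual load-balancing objectives discourages routers from collapsing onto the same features, which is one way to keep experts differentiated during joint multimodal training.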
Each model was post-trained with a combination of Supervised Fine-tuning (SFT) and either Direct Preference Optimization (DPO) or a modified reinforcement learning method named Unified Preference Optimization (UPO), using targeted datasets aligned with its intended usage scenario. To ensure the stability of multimodal joint training, we adopt a staged training strategy. In the first and second stages, we train only the text-related parameters, enabling the model to develop strong fundamental language understanding as well as long-text processing capabilities. The final multimodal stage extends these capabilities to images and videos by introducing additional parameters: a ViT for image feature extraction, an adapter for feature transformation, and visual experts for multimodal understanding. At this stage, the text and visual modalities mutually enhance each other. After pretraining on trillions of tokens, we obtained ERNIE-4.5-VL-424B-A47B-Base.

## Model Overview

ERNIE-4.5-VL-424B-A47B-Base is a multimodal MoE base model with 424B total parameters, of which 47B are activated for each token.
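The gap between total and activated parameters comes from top-k expert routing: each token is dispatched to only a small subset of the experts in each MoE layer, so most expert weights stay idle for any given token. A toy routing sketch (sizes and names are illustrative, not the real ERNIE configuration):

```python
import numpy as np

def top_k_route(hidden: np.ndarray, router_w: np.ndarray, k: int = 8):
    """Pick the top-k experts for one token from routing logits.

    hidden: (hidden_dim,) token representation.
    router_w: (num_experts, hidden_dim) router weight matrix.
    Returns the chosen expert indices and their softmax gate weights.
    """
    logits = router_w @ hidden
    top = np.argsort(logits)[-k:]           # indices of the k largest logits
    gates = np.exp(logits[top] - logits[top].max())
    return top, gates / gates.sum()         # gates sum to 1 over chosen experts

rng = np.random.default_rng(0)
hidden_dim, num_experts, k = 32, 64, 8
experts, gates = top_k_route(rng.normal(size=hidden_dim),
                             rng.normal(size=(num_experts, hidden_dim)), k)
# Only k of num_experts expert FFNs run for this token, which is why the
# activated parameter count (47B) is far below the total (424B).
print(len(experts), round(gates.sum(), 6))  # 8 1.0
```

In the full model, the selected experts' outputs are combined with these gate weights; the unselected experts contribute no compute for that token.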
The following are the model configuration details:

| Key                                | Value         |
| ---------------------------------- | ------------- |
| Modality                           | Text & Vision |
| Training Stage                     | Pretraining   |
| Params (Total / Activated)         | 424B / 47B    |
| Layers                             | 54            |
| Heads (Q / KV)                     | 64 / 8        |
| Text Experts (Total / Activated)   | 64 / 8        |
| Vision Experts (Total / Activated) | 64 / 8        |
| Context Length                     | 131072        |

## Benchmark

| Capability        | Benchmark           | ERNIE-4.5-VL-424B-A47B-Base | GPT-4.1 |
| ----------------- | ------------------- | --------------------------- | ------- |
| Average           |                     |                             |         |
| Visual Perception | CVBench             |                             | 82.49   |
|                   | CountBench          |                             |         |
|                   | RealWorldQA         |                             | 77.25   |
|                   | VLMAreBlind         |                             |         |
| Knowledge         | CCBench             |                             | 78.65   |
| Chart&Doc&OCR     | OCRBench            |                             | 83.00   |
|                   | TableVQA            |                             | 72.13   |
|                   | ChartQA             |                             | 82.56   |
|                   | DocVQA(val)         |                             | 87.84   |
|                   | ChartXiv-Reasoning  |                             | 58.30   |
| Vision-Reasoning  | VisualPuzzle        |                             | 45.63   |
|                   | Logicvista          |                             |         |
| STEM              | OlympiadBench       |                             | 39.95   |
|                   | MathVista(testmini) |                             | 70.90   |
|                   | MathVerse           |                             | 62.46   |
|                   | MMMU(val)           |                             | 73.07   |
|                   | AI2D                |                             | 95.34   |
|                   | MathVision          |                             | 50.46   |
| Video             | MVBench             |                             | 64.15   |
|                   | VideoMME w/o subs   |                             | 74.49   |
|                   | VideoMME w/ subs    |                             | 78.90   |
|                   | MLVU                |                             | 73.33   |
|                   | LongVideoBench      |                             | 63.47   |

## Quickstart

### Using the `transformers` library

Here is an example of how to use the `transformers` library for inference:

```python
import torch
from transformers import AutoProcessor, AutoTokenizer, AutoModelForCausalLM

model_path = 'Baidu/ERNIE-4.5-VL-424B-A47B-Base'
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
processor.eval()
model.add_image_preprocess(processor)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the image."},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example1.jpg"
                },
            },
        ],
    },
]

texts, images, videos = processor.pre_process(messages)
inputs = processor(texts, images, videos)
device = next(model.parameters()).device
inputs = inputs.to(device)

generated_ids = model.generate(**inputs, max_new_tokens=128)
output_text = processor.decode(generated_ids[0])
print(output_text)
```

### vLLM inference

vLLM support is still being adapted; in the meantime, please use our fork repository [vllm](https://github.com/CSWYF3634076/vllm/tree/ernie):

```bash
# 16 x 80GB GPUs
vllm serve Baidu/ERNIE-4.5-VL-424B-A47B-Base --trust-remote-code
```

## License

The ERNIE 4.5 models are provided under the Apache License 2.0. This license permits commercial use, subject to its terms and conditions. Copyright © 2025 Baidu, Inc. All Rights Reserved.

## Citation

If you find ERNIE 4.5 useful or wish to use it in your projects, please cite our technical report:

```bibtex
@misc{ernie2025technicalreport,
  title={ERNIE 4.5 Technical Report},
  author={Baidu ERNIE Team},
  year={2025},
  eprint={},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={}
}
```