---
license: Apache License 2.0
language:
- Multilingual
- Chinese
- English
tasks:
- ERNIE Large Models
- Large Language Models
- Multimodal Models
- Image-Text-to-Text
model_features:
- 128k Context
training_framework: ERNIEKit
inference_framework: FastDeploy
base_model:
- PaddlePaddle/ERNIE-4.5-VL-28B-A3B-Base-Paddle
model_lineage: finetune
---

<div align="center" style="line-height: 1;">
  <a href="https://ernie.baidu.com/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/🤖_Chat-ERNIE_Bot-blue" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/baidu" target="_blank" style="margin: 2px;">
    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Baidu-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/PaddlePaddle/ERNIE" target="_blank" style="margin: 2px;">
    <img alt="Github" src="https://img.shields.io/badge/GitHub-ERNIE-000?logo=github&color=0000FF" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://ernie.baidu.com/blog/ernie4.5" target="_blank" style="margin: 2px;">
    <img alt="Blog" src="https://img.shields.io/badge/🖖_Blog-ERNIE4.5-A020A0" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="#license" style="margin: 2px;">
    <img alt="License" src="https://img.shields.io/badge/License-Apache2.0-A5de54" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

# ERNIE-4.5-VL-28B-A3B

> [!NOTE]
> "**-Paddle**" models use [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) weights, while "**-PT**" models use Transformer-style PyTorch weights.

## ERNIE 4.5 Highlights

The advanced capabilities of the ERNIE 4.5 models, particularly the MoE-based A47B and A3B series, are underpinned by several key technical innovations:

1. 
**Multimodal Heterogeneous MoE Pre-Training:** Our models are jointly trained on both textual and visual modalities to better capture the nuances of multimodal information and to improve performance on tasks involving text understanding and generation, image understanding, and cross-modal reasoning. To achieve this without one modality hindering the learning of the other, we designed a *heterogeneous MoE structure*, incorporated *modality-isolated routing*, and employed a *router orthogonal loss* and a *multimodal token-balanced loss*. These architectural choices ensure that both modalities are effectively represented, allowing for mutual reinforcement during training.

2. **Scaling-Efficient Infrastructure:** We propose a novel heterogeneous hybrid parallelism and hierarchical load-balancing strategy for efficient training of ERNIE 4.5 models. By combining intra-node expert parallelism, memory-efficient pipeline scheduling, FP8 mixed-precision training, and fine-grained recomputation methods, we achieve remarkable pre-training throughput. For inference, we propose a *multi-expert parallel collaboration* method and a *convolutional code quantization* algorithm to achieve lossless 4-bit/2-bit quantization. Furthermore, we introduce PD disaggregation with dynamic role switching for effective resource utilization to enhance inference performance for ERNIE 4.5 MoE models. Built on [PaddlePaddle](https://github.com/PaddlePaddle/Paddle), ERNIE 4.5 delivers high-performance inference across a wide range of hardware platforms.

3. **Modality-Specific Post-Training:** To meet the diverse requirements of real-world applications, we fine-tuned variants of the pre-trained model for specific modalities. Our LLMs are optimized for general-purpose language understanding and generation. The VLMs focus on vision-language understanding and support both thinking and non-thinking modes.
Each model employed a combination of *Supervised Fine-tuning (SFT)*, *Direct Preference Optimization (DPO)*, or a modified reinforcement learning method named *Unified Preference Optimization (UPO)* for post-training.

During the fine-tuning stage of a vision-language model, the deep integration between vision and language plays a decisive role in the model's performance on complex tasks such as understanding, reasoning, and generation. To enhance the model's generalization and adaptability on multimodal tasks, we focused on three core capabilities (image understanding, task-specific fine-tuning, and multimodal chain-of-thought reasoning) and carried out systematic data construction and training strategy optimization. Additionally, we use RLVR (Reinforcement Learning with Verifiable Rewards) to further improve alignment and performance. After the SFT and RL stages, we obtained ERNIE-4.5-VL-28B-A3B.

## Model Overview

ERNIE-4.5-VL-28B-A3B is a multimodal MoE chat model with 28B total parameters, of which 3B are activated for each token. The model configuration details are as follows:

| Key                                 | Value         |
| ----------------------------------- | ------------- |
| Modality                            | Text & Vision |
| Training Stage                      | Post-training |
| Params (Total / Activated)          | 28B / 3B      |
| Layers                              | 28            |
| Heads (Q/KV)                        | 20 / 4        |
| Text Experts (Total / Activated)    | 64 / 6        |
| Vision Experts (Total / Activated)  | 64 / 6       |
| Shared Experts                      | 2             |
| Context Length                      | 131072        |

## Quickstart

### FastDeploy Inference

Quickly deploy services using FastDeploy as shown below. 
For more detailed usage, refer to the [FastDeploy GitHub Repository](https://github.com/PaddlePaddle/FastDeploy).

**Note**: For single-card deployment, at least 80GB of GPU memory is required.

```bash
python -m fastdeploy.entrypoints.openai.api_server \
  --model baidu/ERNIE-4.5-VL-28B-A3B-Paddle \
  --port 8180 \
  --metrics-port 8181 \
  --engine-worker-queue-port 8182 \
  --max-model-len 32768 \
  --enable-mm \
  --reasoning-parser ernie-45-vl \
  --max-num-seqs 32
```

The ERNIE-4.5-VL model supports enabling or disabling thinking mode through request parameters.

#### Enable Thinking Mode

```bash
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
  "messages": [
    {"role": "user", "content": [
      {"type": "image_url", "image_url": {"url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example2.jpg"}},
      {"type": "text", "text": "Describe this image"}
    ]}
  ],
  "metadata": {"enable_thinking": true}
}'
```

#### Disable Thinking Mode

```bash
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
  "messages": [
    {"role": "user", "content": [
      {"type": "image_url", "image_url": {"url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example2.jpg"}},
      {"type": "text", "text": "Describe this image"}
    ]}
  ],
  "metadata": {"enable_thinking": false}
}'
```

## License

The ERNIE 4.5 models are provided under the Apache License 2.0. This license permits commercial use, subject to its terms and conditions. Copyright (c) 2025 Baidu, Inc. All Rights Reserved.

## Citation

If you find ERNIE 4.5 useful or wish to use it in your projects, please kindly cite our technical report:

```bibtex
@misc{ernie2025technicalreport,
  title={ERNIE 4.5 Technical Report},
  author={Baidu ERNIE Team},
  year={2025},
  eprint={},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={}
}
```
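As a usage note, the Quickstart's curl requests can also be issued from Python, since FastDeploy serves an OpenAI-compatible HTTP endpoint. The sketch below mirrors the same endpoint and payload using only the standard library; the helper names `build_payload` and `chat_with_image` are illustrative, not part of FastDeploy.

```python
import json
import urllib.request

# Matches the Quickstart server command (--port 8180).
API_URL = "http://0.0.0.0:8180/v1/chat/completions"

def build_payload(image_url: str, prompt: str, enable_thinking: bool) -> dict:
    """Assemble the request body expected by /v1/chat/completions."""
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": prompt},
            ],
        }],
        # Thinking mode is toggled per request via the metadata field,
        # exactly as in the curl examples.
        "metadata": {"enable_thinking": enable_thinking},
    }

def chat_with_image(image_url: str, prompt: str,
                    enable_thinking: bool = True) -> str:
    """POST a multimodal chat request and return the assistant's reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(image_url, prompt,
                                      enable_thinking)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Any OpenAI-compatible client can be substituted for the raw request above; only the base URL and the `metadata.enable_thinking` field are specific to this deployment.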