PyTorch
llama
yuhuili committed (verified)
Commit f020b64 · 1 Parent(s): 7694341

Upload 5 files

Files changed (6)
  1. .gitattributes +3 -0
  2. LICENSE +13 -0
  3. README.md +117 -3
  4. figs/e3.gif +3 -0
  5. figs/eagle3r.jpg +3 -0
  6. figs/logo.png +3 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ figs/e3.gif filter=lfs diff=lfs merge=lfs -text
+ figs/eagle3r.jpg filter=lfs diff=lfs merge=lfs -text
+ figs/logo.png filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,13 @@
+ Copyright 2025 SafeAI Lab (SAIL)
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
README.md CHANGED
@@ -1,3 +1,117 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ ---
+ <img src="figs/logo.png" alt="EAGLE" width="220" align="left"><div align="center"><h1>&nbsp;EAGLE</h1></div>
+
+ <p align="center">
+ | <a href="https://arxiv.org/pdf/2401.15077.pdf"><b>EAGLE</b></a> |
+ <a href="https://arxiv.org/pdf/2406.16858"><b>EAGLE-2</b></a> |
+ <a href="https://arxiv.org/pdf/2503.01840"><b>EAGLE-3</b></a> |
+ <a href="https://sites.google.com/view/eagle-llm"><b>Blog</b></a> |
+ </p>
+
+ <p align="center">
+ <a href="">
+ <img src="https://img.shields.io/badge/Version-v3.0.0-orange.svg" alt="Version">
+ </a>
+ <a href="https://opensource.org/licenses/Apache-2.0">
+ <img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License">
+ </a>
+ <a href="https://github.com/SafeAILab/EAGLE/issues">
+ <img src="https://img.shields.io/badge/Maintained%3F-yes-green.svg" alt="Maintenance">
+ </a>
+ <a href="https://github.com/SafeAILab/EAGLE/pulls">
+ <img src="https://img.shields.io/badge/Contributions-welcome-brightgreen.svg?style=flat" alt="Contributions welcome">
+ </a>
+ </p>
+
+ ##
+
+ <p align="center">
+ <img src="./figs/eagle3r.jpg" alt="benchmark" width="790">
+ </p>
+
+ EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency) is a new baseline for fast decoding of Large Language Models (LLMs) that provably preserves the output distribution. Rather than predicting tokens directly, it extrapolates the second-to-top-layer contextual feature vectors of the LLM, enabling a significant boost in generation efficiency.
+
+ - EAGLE is:
+   - certified by <a href="https://github.com/hemingkx/Spec-Bench/blob/main/Leaderboard.md"><b>third-party</b></a> evaluation as the **fastest** speculative method so far.
+   - achieving a **2x** speedup on <a href="https://github.com/pytorch-labs/gpt-fast"><b>gpt-fast</b></a>.
+   - **3x** faster than vanilla decoding (13B).
+   - **2x** faster than <a href="https://lmsys.org/blog/2023-11-21-lookahead-decoding/"><b>Lookahead</b></a> (13B).
+   - **1.6x** faster than <a href="https://sites.google.com/view/medusa-llm"><b>Medusa</b></a> (13B).
+   - provably consistent with vanilla decoding in the distribution of generated texts (see the sketch after this list).
+   - trainable (within 1-2 days) and testable on 8x RTX 3090 GPUs, so even the GPU-poor can afford it.
+   - combinable with other parallel techniques such as vLLM, DeepSpeed, Mamba, FlashAttention, quantization, and hardware optimization.
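+
+ As a rough illustration (a minimal sketch, not the repository's actual API: `draft_head`, the hidden size, and both function names below are hypothetical), the draft network regresses the *feature* of the next position rather than its token, and drafted tokens are then verified with the standard speculative-sampling test, which is what guarantees distribution-level consistency with vanilla decoding:
+
+ ```python
+ import torch
+
+ HIDDEN = 4096  # placeholder hidden size
+
+ # Hypothetical draft head: a small network that predicts the next
+ # second-to-top-layer feature from the current feature plus the
+ # embedding of the token just sampled.
+ draft_head = torch.nn.Linear(2 * HIDDEN, HIDDEN)
+
+ def extrapolate_feature(feature: torch.Tensor, token_emb: torch.Tensor) -> torch.Tensor:
+     # EAGLE regresses features, not tokens; the frozen LM head is then
+     # reused on the predicted feature to obtain draft logits.
+     return draft_head(torch.cat([feature, token_emb], dim=-1))
+
+ def accept(p: torch.Tensor, q: torch.Tensor, token: int) -> bool:
+     # Speculative-sampling test: accept the drafted token with
+     # probability min(1, p/q), where p is the target model's
+     # distribution and q is the draft's. This rule makes the output
+     # distribution provably match vanilla decoding.
+     return bool(torch.rand(()) < (p[token] / q[token]).clamp(max=1.0))
+ ```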
+
+ EAGLE-2 uses the confidence scores of the draft model to approximate acceptance rates, dynamically adjusting the draft-tree structure and further improving performance (sketched after the list below).
+
+ - EAGLE-2 is:
+   - **4x** faster than vanilla decoding (13B).
+   - **1.4x** faster than EAGLE-1 (13B).
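+
+ A minimal sketch of the dynamic draft tree (the names, the top-k interface of `draft_step`, and the beam/depth defaults are illustrative assumptions, not the repository's code): each node is scored by the product of draft-model confidences along its path, which approximates the acceptance rate of the whole branch, and only the highest-scoring nodes are expanded further:
+
+ ```python
+ import heapq
+ from dataclasses import dataclass, field
+
+ @dataclass(order=True)
+ class Node:
+     neg_score: float  # negative cumulative log-confidence (min-heap key)
+     tokens: list = field(compare=False, default_factory=list)
+
+ def expand_draft_tree(root_topk, draft_step, beam=8, depth=5, topk=4):
+     """Grow a draft tree by repeatedly expanding the most confident
+     nodes. `draft_step(tokens)` is a hypothetical callable returning
+     the draft model's top-k (logprob, token) continuations of a prefix."""
+     frontier = [Node(-lp, [tok]) for lp, tok in root_topk[:topk]]
+     heapq.heapify(frontier)
+     tree = []
+     for _ in range(depth):
+         best = [heapq.heappop(frontier) for _ in range(min(beam, len(frontier)))]
+         tree.extend(best)
+         for node in best:
+             for lp, tok in draft_step(node.tokens)[:topk]:
+                 # Low-confidence branches accumulate large neg_score and
+                 # are never expanded, so the tree adapts to the context.
+                 heapq.heappush(frontier, Node(node.neg_score - lp, node.tokens + [tok]))
+     return tree
+ ```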
+
+ EAGLE-3 removes EAGLE's feature-prediction constraint and instead simulates the multi-step drafting process during training ("training-time test"). Because top-layer features are specialized for next-token prediction, EAGLE-3 replaces them with a fusion of low-, mid-, and high-level semantic features (sketched after the list below).
+ EAGLE-3 further improves generation speed while remaining lossless.
+
+ - EAGLE-3 is:
+   - **5.6x** faster than vanilla decoding (13B).
+   - **1.8x** faster than EAGLE-1 (13B).
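+
+ A sketch of the feature-fusion step (the layer indices, dimensions, and module name are illustrative assumptions): hidden states from a low, a middle, and a high layer of the target model are concatenated and projected back to the model width before being fed to the draft network:
+
+ ```python
+ import torch
+
+ HIDDEN, LAYERS = 4096, 32  # placeholder model dimensions
+ fuse = torch.nn.Linear(3 * HIDDEN, HIDDEN)  # hypothetical fusion projection
+
+ def fused_feature(hidden_states: list) -> torch.Tensor:
+     # hidden_states: per-layer activations from the target model's
+     # forward pass. Top-layer features are specialized for next-token
+     # prediction, so EAGLE-3 mixes low-, mid-, and high-level semantics
+     # instead; the layer choice here is only an example.
+     low, mid, high = hidden_states[2], hidden_states[LAYERS // 2], hidden_states[-3]
+     return fuse(torch.cat([low, mid, high], dim=-1))
+ ```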
+
+ <p align="center">
+ <img src="./figs/e3.gif" alt="demogif" width="600">
+ </p>
+
+ _Inference is conducted on 2x RTX 3090 GPUs at fp16 precision using the Vicuna 13B model._
+
+ [//]: # ()
+ [//]: # ()
+ [//]: # (Using EAGLE-2, the inference speed on 2 RTX 3060 GPUs can be faster than vanilla autoregressive decoding on an A100 GPU.)
+
+ ## Support
+ EAGLE has been merged into the following mainstream LLM serving frameworks (listed in alphabetical order).
+
+ - <a href="https://rocm.docs.amd.com/en/latest/">AMD ROCm</a>
+ - <a href="https://angelslim.readthedocs.io/zh-cn/latest/features/speculative_decoding/eagle.html">AngelSlim</a>
+ - <a href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/feature-guide.html#eagle-speculative-decoding">AWS NeuronX Distributed Core</a>
+ - <a href="https://github.com/OpenBMB/CPM.cu">CPM.cu</a>
+ - <a href="https://github.com/intel/intel-extension-for-transformers/pull/1504">Intel® Extension for Transformers</a>
+ - <a href="https://github.com/intel-analytics/ipex-llm/pull/11104">Intel® LLM Library for PyTorch</a>
+ - <a href="https://llm.mlc.ai/docs/deploy/rest.html">MLC-LLM</a>
+ - <a href="https://docs.nvidia.com/nemo-framework/user-guide/latest/model-optimization/speculative/speculative.html">NVIDIA NeMo Framework</a>
+ - <a href="https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/eagle">NVIDIA TensorRT-LLM</a>
+ - <a href="https://nvidia.github.io/TensorRT-Model-Optimizer/guides/7_speculative_decoding.html">NVIDIA TensorRT Model Optimizer</a>
+ - <a href="https://paddlenlp.readthedocs.io/en/latest/llm/docs/predict/speculative_decoding.html">PaddleNLP</a>
+ - <a href="https://docs.sglang.ai/advanced_features/speculative_decoding.html">SGLang</a>
+ - <a href="https://github.com/sgl-project/SpecForge">SpecForge</a>
+ - <a href="https://github.com/vllm-project/vllm/pull/16937">vLLM</a>
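+
+ As one example, recent vLLM releases expose EAGLE through their speculative-decoding configuration. The snippet below is a hedged sketch: the option names have changed across vLLM versions and the model identifiers are placeholders, so consult the vLLM documentation for the authoritative interface.
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ # Assumed interface: newer vLLM takes a speculative_config dict, while
+ # older releases used separate speculative_model/num_speculative_tokens
+ # arguments. Both model names below are placeholders.
+ llm = LLM(
+     model="meta-llama/Meta-Llama-3-8B-Instruct",
+     speculative_config={
+         "method": "eagle",
+         "model": "yuhuili/EAGLE-LLaMA3-Instruct-8B",
+         "num_speculative_tokens": 5,
+     },
+ )
+ out = llm.generate(["Explain speculative decoding in one paragraph."],
+                    SamplingParams(temperature=0, max_tokens=128))
+ print(out[0].outputs[0].text)
+ ```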
+
+ ## Reference
+ For technical details and full experimental results, please check [the paper of EAGLE](https://arxiv.org/pdf/2401.15077.pdf), [the paper of EAGLE-2](https://arxiv.org/pdf/2406.16858), and [the paper of EAGLE-3](https://arxiv.org/pdf/2503.01840).
+ ```
+ @inproceedings{li2024eagle,
+   author    = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
+   title     = {{EAGLE}: Speculative Sampling Requires Rethinking Feature Uncertainty},
+   booktitle = {International Conference on Machine Learning},
+   year      = {2024}
+ }
+ @inproceedings{li2024eagle2,
+   author    = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
+   title     = {{EAGLE-2}: Faster Inference of Language Models with Dynamic Draft Trees},
+   booktitle = {Empirical Methods in Natural Language Processing},
+   year      = {2024}
+ }
+ @misc{li2025eagle3scalinginferenceacceleration,
+   title         = {{EAGLE-3}: Scaling up Inference Acceleration of Large Language Models via Training-Time Test},
+   author        = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},
+   year          = {2025},
+   eprint        = {2503.01840},
+   archivePrefix = {arXiv},
+   primaryClass  = {cs.CL},
+   url           = {https://arxiv.org/abs/2503.01840}
+ }
+ ```
figs/e3.gif ADDED

Git LFS Details

  • SHA256: ec19fcac60fdd37ca3de969b919d2e411fe782b084902e668f755c89855c14c6
  • Pointer size: 133 Bytes
  • Size of remote file: 14.2 MB
figs/eagle3r.jpg ADDED

Git LFS Details

  • SHA256: 5e404ac75809d8125e1c360c36054c2f90ebd72453a46a352609e538d91f6ba1
  • Pointer size: 131 Bytes
  • Size of remote file: 533 kB
figs/logo.png ADDED

Git LFS Details

  • SHA256: efb8aec4952905335983eaddb078036e1d286e0bf3a62ebae83eff81b27870bc
  • Pointer size: 131 Bytes
  • Size of remote file: 745 kB