**OpenResearcher** is a fully open agentic large language model (30B-A3B) designed for **long-horizon deep research** scenarios. It achieves an impressive **54.8%** accuracy on [BrowseComp-Plus](https://huggingface.co/spaces/Tevatron/BrowseComp-Plus), surpassing the performance of `GPT-4.1`, `Claude-Opus-4`, `Gemini-2.5-Pro`, `DeepSeek-R1`, and `Tongyi-DeepResearch`. It also demonstrates **leading performance** across a range of deep research benchmarks, including BrowseComp, GAIA, WebWalkerQA, and xbench-DeepSearch. We **fully open-source** the training and evaluation recipe, including the data, model, training methodology, and evaluation framework, so that everyone can advance deep research.

## OpenResearcher-30B-A3B-GGUF

**Note: For the best performance, we recommend using [OpenResearcher-30B-A3B](https://huggingface.co/OpenResearcher/OpenResearcher-30B-A3B).**

To support efficient deployment, we release several quantized versions of [OpenResearcher-30B-A3B](https://huggingface.co/OpenResearcher/OpenResearcher-30B-A3B), including `Q4_K_M`, `Q5_0`, `Q5_K_M`, `Q6_K`, and `Q8_0`.

| Quantization | File Size | Bits/Weight (BPW) | Perplexity (PPL) | PPL ± | Tokens/sec |
|---|---|---|---|---|---|
| BF16 | 58.84 GiB | 16.00 | 8.4522 | 0.06489 | 4,117.90 |
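As a rough sketch of how the BPW column relates to file size: the BF16 row above (58.84 GiB at 16.00 BPW) lets you back out an approximate weight count, which in turn predicts the size of the other quantizations. The `4.8` BPW figure used below for `Q4_K_M` is an assumption (a typical value for that scheme, not taken from this table), and real GGUF files carry extra metadata, so treat the results as estimates only.

```python
# Relate file size, bits-per-weight (BPW), and weight count.
# Assumption: weight count is inferred from the BF16 row of the table
# (58.84 GiB at 16.00 BPW); actual GGUF files include metadata overhead.

GIB = 1024 ** 3  # bytes per GiB

def params_from_size(size_gib: float, bpw: float) -> float:
    """Recover the approximate number of stored weights from size and BPW."""
    return size_gib * GIB * 8 / bpw

def size_from_bpw(n_params: float, bpw: float) -> float:
    """Predict file size in GiB for a quantization with the given BPW."""
    return n_params * bpw / 8 / GIB

n = params_from_size(58.84, 16.00)  # ~31.6e9 stored weights (incl. embeddings)
print(f"{n / 1e9:.1f}B weights")
print(f"Q4_K_M at an assumed ~4.8 BPW: ~{size_from_bpw(n, 4.8):.1f} GiB")
```

The recovered count comes out slightly above the nominal 30B because embeddings and other tensors are stored alongside the expert weights.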