---
license: apache-2.0
---

# Model Card for ZAYA1-reasoning-base

ZAYA1 is a Mixture-of-Experts (MoE) model with 800M active and 8.3B total parameters, and the first model trained entirely end-to-end on AMD’s hardware, software, and networking stack.

On benchmarks, our ZAYA1 base model is extremely competitive with the SoTA Qwen3 series of models of comparable scale, and it outperforms comparable Western open-source models such as SmolLM3 and Phi4. ZAYA1-Base excels especially at complex and challenging mathematical and STEM reasoning tasks: even prior to explicit post-training for reasoning, it nearly matches the performance of SoTA Qwen3 thinking models under high pass@k settings and exceeds other strong reasoning models such as Phi4-reasoning and DeepSeek-R1-Distill.

This version of the model has undergone an additional 1T tokens of reasoning-focused training.

## Model Details
ZAYA1's architecture incorporates several innovations developed at Zyphra. These include:

- **Compressed Convolutional Attention (CCA)**: [This novel attention](-/TODO) mechanism performs attention entirely in the latent space, enabling significant reductions in parameter count, prefill compute, and KV cache size compared to alternative attention mechanisms, while also achieving better loss per FLOP (an illustrative sketch follows this list).
- **ZAYA1 Router**: The ZAYA1 router makes fundamental improvements to the linear router used in almost all existing large-scale MoE models. It replaces the linear projection with a down-projection, followed by a depth-mixing EDA layer and then a three-layer MLP per expert, adding significant nonlinear expressivity to the router (sketched below).
- **Residual Scaling**: We add learnable scalar gates and biases to the residual stream and the outputs of each block. This provides a lightweight way for the model to control its own residual norm and degree of forgetting across depth (sketched below).
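As a purely illustrative sketch of the latent-space idea in the CCA bullet (the convolution placement, head layout, and all dimensions below are assumptions rather than the actual CCA design), attention can be computed entirely on down-projected latents so that the Q/K/V projections and the KV cache live in a dimension smaller than the model width:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentSpaceAttention(nn.Module):
    """Illustrative only: attention computed entirely in a compressed latent
    space, so the Q/K/V projections and the KV cache use d_latent < d_model.
    This is not the actual CCA formulation."""

    def __init__(self, d_model=2048, d_latent=512, n_heads=8, kernel_size=4):
        super().__init__()
        assert d_latent % n_heads == 0
        self.n_heads = n_heads
        self.down = nn.Linear(d_model, d_latent, bias=False)      # compress
        # hypothetical short causal depthwise convolution over the latent sequence
        self.conv = nn.Conv1d(d_latent, d_latent, kernel_size,
                              groups=d_latent, padding=kernel_size - 1)
        self.qkv = nn.Linear(d_latent, 3 * d_latent, bias=False)
        self.up = nn.Linear(d_latent, d_model, bias=False)        # decompress

    def forward(self, x):                                  # x: (batch, seq, d_model)
        b, s, _ = x.shape
        z = self.down(x)                                   # (b, s, d_latent)
        z = self.conv(z.transpose(1, 2))[..., :s].transpose(1, 2)  # causal conv
        q, k, v = self.qkv(z).chunk(3, dim=-1)

        def heads(t):                                      # (b, n_heads, s, head_dim)
            return t.reshape(b, s, self.n_heads, -1).transpose(1, 2)

        out = F.scaled_dot_product_attention(heads(q), heads(k), heads(v),
                                             is_causal=True)
        out = out.transpose(1, 2).reshape(b, s, -1)        # (b, s, d_latent)
        return self.up(out)                                # back to d_model
```

Because keys and values are cached at `d_latent` rather than `d_model`, the KV cache in this sketch shrinks by roughly a factor of `d_model / d_latent`.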
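The router bullet describes a down-projection, a depth-mixing EDA layer, and a three-layer MLP per expert. The exact EDA operation is not spelled out in this card, so the sketch below approximates depth mixing as a learned, per-dimension exponential moving average of the down-projected router inputs carried across layers; that approximation and every dimension here are assumptions:

```python
import torch
import torch.nn as nn

class NonlinearMoERouter(nn.Module):
    """Sketch of a ZAYA1-style router: down-projection -> depth mixing
    (approximated here as an EMA over layers) -> a three-layer MLP per expert
    producing that expert's routing logit. Not the actual implementation."""

    def __init__(self, d_model=2048, d_router=128, n_experts=32, d_hidden=64):
        super().__init__()
        self.down = nn.Linear(d_model, d_router, bias=False)
        self.alpha = nn.Parameter(torch.full((d_router,), 0.5))  # mixing coefficients
        # a small three-layer MLP per expert, stored as batched weight tensors
        self.w1 = nn.Parameter(torch.randn(n_experts, d_router, d_hidden) * 0.02)
        self.w2 = nn.Parameter(torch.randn(n_experts, d_hidden, d_hidden) * 0.02)
        self.w3 = nn.Parameter(torch.randn(n_experts, d_hidden, 1) * 0.02)

    def forward(self, x, prev_state=None):
        """x: (tokens, d_model); prev_state: depth-mixed router input carried
        from the previous layer's router (None at the first layer)."""
        z = self.down(x)                                    # (tokens, d_router)
        if prev_state is not None:                          # depth mixing (EMA)
            a = torch.sigmoid(self.alpha)
            z = a * z + (1.0 - a) * prev_state
        h = torch.einsum("td,edh->eth", z, self.w1).relu()  # per-expert MLP, layer 1
        h = torch.einsum("eth,ehk->etk", h, self.w2).relu() # layer 2
        logits = torch.einsum("etk,eko->eto", h, self.w3).squeeze(-1)  # layer 3
        return logits.transpose(0, 1), z                    # (tokens, n_experts), new state
```

The returned logits would feed the usual top-k expert selection of the surrounding MoE layer, with the second return value passed on as `prev_state` to the next layer's router.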
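Residual scaling as described in the last bullet can be sketched directly; the wrapper structure and initial values below are assumptions, but the gating itself follows the bullet:

```python
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """Residual scaling sketch: learnable scalar gates and biases applied to the
    residual stream and to the block output before the two are summed."""

    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block
        self.residual_gate = nn.Parameter(torch.ones(()))   # scalar gate on residual
        self.residual_bias = nn.Parameter(torch.zeros(()))  # scalar bias on residual
        self.output_gate = nn.Parameter(torch.ones(()))     # scalar gate on block output
        self.output_bias = nn.Parameter(torch.zeros(()))    # scalar bias on block output

    def forward(self, x):
        residual = self.residual_gate * x + self.residual_bias
        out = self.output_gate * self.block(x) + self.output_bias
        return residual + out

# e.g. wrapping an arbitrary sub-block:
# layer = GatedResidualBlock(nn.Sequential(nn.LayerNorm(512), nn.Linear(512, 512)))
```

This gives the model a cheap, per-layer way to damp or amplify both what it keeps from the residual stream and what each block writes into it.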
ZAYA1-reasoning-base uses the [Gemma3](https://ai.google.dev/gemma/terms) tokenizer.
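If the model is published on the Hugging Face Hub, the tokenizer can be loaded with `transformers` in the usual way; the repository id below is a placeholder and should be replaced with this model's actual repo:

```python
from transformers import AutoTokenizer

# placeholder repo id -- substitute the actual Hugging Face repository for this model
tokenizer = AutoTokenizer.from_pretrained("Zyphra/ZAYA1-reasoning-base")

enc = tokenizer("ZAYA1 uses the Gemma3 tokenizer.", return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist()))
```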
## Performance
ZAYA1 base performs extremely competitively against other base models of similar and even greater scale.

![image/png]()

![image/png]()

ZAYA1-reasoning-base also performs extremely strongly on many highly challenging mathematics and coding tasks that require long chain-of-thought reasoning, despite not yet having undergone full RL post-training, matching or exceeding the strong Qwen3-4B-thinking models at many of these tasks.

![image/png]()