Update README.md
README.md CHANGED
@@ -8,6 +8,9 @@ language:
 
 ## Mixtral Experts with DeepSeek-MoE Architecture
 
+[](https://discord.gg/cognitivecomputations)
+Discord: https://discord.gg/cognitivecomputations
+
 This is a direct extraction of the 8 experts from [Mixtral-8x7b-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), and a transfer of them into the DeepSeek-MoE Architecture.
 
 - **Expert Configuration:** 2 experts are routed per token.
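For readers of the updated README, the sketch below shows one way a DeepSeek-MoE-architecture checkpoint with 2-experts-per-token routing might be loaded and queried with `transformers`. The repo id is a hypothetical placeholder, and `trust_remote_code=True` plus the `num_experts_per_tok` field name are assumptions about the custom DeepSeek-MoE modeling code, not details confirmed by this diff.

```python
# Minimal sketch (not from the diff): loading a DeepSeek-MoE-style checkpoint
# that routes 2 experts per token. The repo id is a hypothetical placeholder.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/mixtral-experts-deepseek-moe"  # placeholder, not a real repo

# DeepSeek-MoE ships custom modeling code, so trust_remote_code is assumed here.
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
# The per-token routing width is commonly exposed as `num_experts_per_tok`;
# for this model it should read 2 (field name is an assumption).
print("experts per token:", getattr(config, "num_experts_per_tok", "unknown"))

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",  # requires the accelerate package
)

prompt = "Briefly explain mixture-of-experts routing."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```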