---
tags:
- text-generation-inference
- transformers
- unsloth
- olmo2
license: apache-2.0
language:
- en
datasets:
- Pinkstack/roblox-luau-corpus-text
- Roblox/luau_corpus
- boatbomber/roblox-info-dump
- wikimedia/wikipedia
pipeline_tag: text-generation
---

# print("Before we start")
We are not affiliated with Roblox in any way; any mention of Roblox is purely to help people understand what the model is about.
As per the [Roblox website](https://create.roblox.com/docs/assistant/guide), Roblox uses Meta's Llama 3 (we assume the 70B variant) for their AI assistant. Our model, while capable, cannot come close to the performance of a 70B model.
# print("Stages of pre-training")
This model was pre-trained in three stages:
- Stage 1: Pre-training on Pinkstack/roblox-luau-corpus-text and Roblox/luau_corpus at a context length of 4096 tokens (the maximum OLMo 2 can usually reach).
- Stage 2: Pre-training on boatbomber/roblox-info-dump with RoPE scaling set to 4, expanding the model's context to **16384** tokens.
Note: stage 3 and onwards used added layers. The model started with 16 layers; we then merged in another 20 to make the model bigger and deeper.
- Stage 3: Training on a mix of Pinkstack/roblox-luau-corpus-text, Roblox/luau_corpus, and wikimedia/wikipedia with RoPE scaling set to 8, i.e. **32768** tokens of context. We mixed in wikimedia/wikipedia to improve the model's general text quality and knowledge.
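The context extension in stages 2 and 3 can be sketched with linear RoPE scaling, where position indices are divided by the scale factor so that longer sequences map back into the range the model saw during 4096-token training. This is a toy illustration only (the head dimension and function here are made up for clarity, not taken from the model's actual implementation):

```python
# Minimal sketch of linear RoPE scaling for context extension (toy values).

def rope_angles(position: int, dim: int, base: float = 10000.0, scale: float = 1.0):
    """Rotary angles for one position; scale > 1 stretches the usable context."""
    pos = position / scale  # linear RoPE scaling: compress positions into the trained range
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# Effective context windows for the three pre-training stages described above.
stage_context = {1: 4096 * 1, 2: 4096 * 4, 3: 4096 * 8}
print(stage_context)  # {1: 4096, 2: 16384, 3: 32768}

# With scale=8, position 32768 yields the same angles as position 4096 did
# during the original 4096-token pre-training.
assert rope_angles(32768, 8, scale=8.0) == rope_angles(4096, 8, scale=1.0)
```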
This repo contains the stage 3 pre-trained (base) model.
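The layer-merging step described above (16 layers grown to 36) is a form of depth up-scaling. A toy sketch of the idea, reusing existing blocks to deepen the stack; the actual merge recipe for this model is not documented here:

```python
# Toy sketch of depth up-scaling: growing a 16-layer stack by copying blocks.
# Illustration of the concept only, not this model's exact merge recipe.

def grow_layers(layers: list, extra: int) -> list:
    """Append `extra` copies of existing layers, cycling from the start."""
    return layers + [layers[i % len(layers)] for i in range(extra)]

base = [f"block_{i}" for i in range(16)]   # model started with 16 layers
deep = grow_layers(base, 20)               # then another 20 were merged in
print(len(base), "->", len(deep))          # 16 -> 36
```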