Update config.json
Hi, first, thank you for making this model :)
When using LLM Compressor to quantize in FP8, I got the following error when first loading the model:
> model = AutoModelForCausalLM.from_pretrained(MODEL_ID, dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="cuda", local_files_only=True)
`rope_scaling`'s beta_fast field must be a float, got 32
`rope_scaling`'s beta_slow field must be a float, got 1
This might be specific to my local environment (my transformers version is 4.57.3), but I'm reporting it in case you observe the same.
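In the meantime, the same fix can be applied to a local copy of the model. Below is a minimal sketch (the config path is a placeholder for wherever your local snapshot lives) that coerces the two offending fields to floats:

```python
import json
from pathlib import Path

# Placeholder path: point this at the config.json of your local model snapshot.
config_path = Path("path/to/model/config.json")

config = json.loads(config_path.read_text())
rope = config.get("rope_scaling", {})
for key in ("beta_fast", "beta_slow"):
    if key in rope:
        rope[key] = float(rope[key])  # e.g. 32 -> 32.0, 1 -> 1.0

config_path.write_text(json.dumps(config, indent=2) + "\n")
```

With the patched file in place, the `from_pretrained` call above should no longer raise the complaint.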
config.json (+2 -2):

@@ -52,8 +52,8 @@
   "rms_norm_eps": 1e-06,
   "rope_scaling": {
     "attention_factor": 1.2079441541679836,
-    "beta_fast": 32,
-    "beta_slow": 1,
+    "beta_fast": 32.0,
+    "beta_slow": 1.0,
     "factor": 8.0,
     "original_max_position_embeddings": 8192,
     "rope_type": "yarn"