WinstonDeng committed · verified
Commit a9197e1 · 1 Parent(s): ec9ad06

Update README.md

Files changed (1): README.md +4 -3
README.md CHANGED
@@ -305,10 +305,11 @@ print(output_text)
  - Minimum VRAM: 120 GB (e.g., Mac studio, DGX-Spark, AMD Ryzen AI Max+ 395)
  - Recommended: 128GB unified memory
  #### Steps
- 1. Use llama.cpp:
+ 1. Use official llama.cpp:
+ > the folder `Step-3.5-Flash/tree/main/llama.cpp` is **obsolete**
  ```bash
- git clone git@github.com:stepfun-ai/Step-3.5-Flash.git
- cd Step-3.5-Flash/llama.cpp
+ git clone https://github.com/ggml-org/llama.cpp
+ cd llama.cpp
  ```
  2. Build llama.cpp on Mac:
  ```bash