Adarsh-Iyer committed
Commit bc6c040 · verified · 1 Parent(s): fe52af9

Update README.md

Files changed (1): README.md (+4 -6)
README.md CHANGED
@@ -10,14 +10,12 @@ This repo contains the model weights for **Instinct**, [Continue](https://contin

  ## Serving the model

- We've released a [Q4_K_M GGUF quantization of Instinct](https://huggingface.co/continuedev/instinct-GGUF), for efficient local inference.
+ **Ollama**: We've released a [Q4_K_M GGUF quantization of Instinct](https://huggingface.co/continuedev/instinct-GGUF) for efficient local inference. Try it with [Continue's Ollama integration](https://docs.continue.dev/guides/ollama-guide).

- [Ollama Instructions coming soon]
+ Besides Ollama, there are many ways to plug a local model into Continue; we internally used an endpoint served by [SGLang](https://github.com/sgl-project/sglang), which is one of the options below. Quantizing for faster inference is also an option that worked well for us. Serve the model using either of the below options, then [connect it with Continue](https://docs.continue.dev/guides/how-to-self-host-a-model).

- Besides Ollama, there are many ways to plug a local model into Continue; we internally used an endpoint served by [SGLang](https://github.com/sgl-project/sglang), which is one of the options below. Quantizing for faster inference is also an option that worked well for us.
-
- * SGLang: `python3 -m sglang.launch_server --model-path continuedev/instinct --load-format safetensors`
- * vLLM : `vllm serve continuedev/instinct --served-model-name instinct --load-format safetensors`
+ SGLang: `python3 -m sglang.launch_server --model-path continuedev/instinct --load-format safetensors`
+ <br>vLLM : `vllm serve continuedev/instinct --served-model-name instinct --load-format safetensors`

  ## Learn more
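
For the Ollama route, the GGUF repo can be pulled straight from the Hugging Face Hub. A minimal sketch, assuming Ollama is installed locally; the `hf.co/...` reference is Ollama's syntax for Hub-hosted GGUF models, and the `:Q4_K_M` tag assumes that quantization name in the repo:

```shell
# Run the GGUF build directly from the Hugging Face Hub via Ollama.
ollama run hf.co/continuedev/instinct-GGUF

# If the repo ships multiple quantizations, a tag can pin one explicitly.
ollama run hf.co/continuedev/instinct-GGUF:Q4_K_M
```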
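Both SGLang and vLLM expose an OpenAI-compatible HTTP API once launched, which is what Continue's self-hosted-model guide points at. A quick smoke test with curl, assuming vLLM's default port 8000 (SGLang defaults to 30000 unless `--port` is set) and the `instinct` served model name from the vLLM command above:

```shell
# Query the OpenAI-compatible completions endpoint to confirm the server is up.
# For the SGLang launch command above, use "continuedev/instinct" as the model
# name and port 30000 unless --port was overridden.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "instinct",
        "prompt": "def fibonacci(n):",
        "max_tokens": 64
      }'
```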