Update README.md
README.md CHANGED
@@ -71,6 +71,9 @@ This model is a fine-tuned 7B parameter LLM on the Intel Gaudi 2 processor from
  | Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people. |

  ## How To Use
+
+ Context length for this model: 8192 tokens (same as [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1))
+
  ### Reproduce the model

  Here is the sample code to reproduce the model: [GitHub sample code](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/finetuning/finetune_neuralchat_v3). Here is the documentation to reproduce building the model:
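The 8192-token context length added above is the budget shared by the prompt and the generated continuation. As a minimal sketch of what that means in practice (the function name, the placeholder token IDs, and the left-truncation policy are illustrative assumptions, not part of the model card or its tokenizer):

```python
# Minimal sketch: keeping a prompt inside a fixed context window.
# CONTEXT_LENGTH matches the 8192-token window noted above; the token IDs
# below are placeholder integers, not output of the model's real tokenizer.

CONTEXT_LENGTH = 8192

def fit_to_context(token_ids, max_new_tokens, context_length=CONTEXT_LENGTH):
    """Truncate prompt tokens from the left so the prompt plus the
    requested generation budget fits inside the context window."""
    budget = context_length - max_new_tokens
    if budget <= 0:
        raise ValueError("max_new_tokens exceeds the context window")
    return token_ids[-budget:]

prompt = list(range(10_000))              # pretend 10,000-token prompt
trimmed = fit_to_context(prompt, max_new_tokens=512)
print(len(trimmed))                       # 8192 - 512 = 7680 tokens kept
```

Left-truncation (dropping the oldest tokens) is one common policy for chat-style prompts; a real pipeline would apply it to the tokenizer's actual output before calling generation.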