---
base_model:
- mistralai/Mistral-7B-v0.1
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- NousResearch/Hermes-2-Pro-Mistral-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# ExperimentOne (Mistral-Hermes-Dolphin-7b)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model.

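For intuition, the sketch below outlines the Model Stock interpolation on a single weight tensor, following my reading of the per-tensor formulation in the paper. It is an illustrative approximation, not the code path mergekit actually executes; the function name and signature are invented for this example and assume at least two fine-tuned models.

```python
# Illustrative sketch of the Model Stock idea (arXiv:2403.19522);
# not mergekit's actual implementation.
import torch


def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Interpolate one weight tensor from N (>= 2) fine-tuned models toward the base."""
    n = len(finetuned)
    # Task vectors: each fine-tuned weight expressed relative to the pretrained base.
    deltas = [(w - base).flatten() for w in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity between task vectors.
    pair_cos = [
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(n) for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(pair_cos).mean()
    # Interpolation ratio from the paper: t = N*cos(theta) / (1 + (N-1)*cos(theta)).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)  # plain average of the fine-tuned weights
    return t * w_avg + (1 - t) * base
```

In effect, the fine-tuned weights are averaged and then pulled back toward the pretrained base, with the pull growing stronger as the task vectors disagree (lower cosine similarity means a smaller `t`).
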
### Models Merged

The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: NousResearch/Hermes-2-Pro-Mistral-7B
  - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
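
Once the merged weights are published to the Hugging Face Hub, they should load like any other Mistral-architecture checkpoint via `transformers`. The repository id below is a placeholder, not a confirmed location for this merge.

```python
# Hypothetical usage sketch; replace the repo id with wherever this merge is hosted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/ExperimentOne-Mistral-Hermes-Dolphin-7b"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```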