Update README.md
README.md CHANGED

```diff
@@ -8,7 +8,8 @@ tags: []
 ## Model Description
 
 FuseLLM is a project that aims to combine the knowledge and strengths of large language models with different transformer architectures into a single model.
-* https://github.com/fanqiwan/FuseLLM
+* The original idea was suggested by https://github.com/fanqiwan/FuseLLM
+
 The source models used for adapting this idea into Korean in this repository are Orion (Base), OPEN-SOLAR-KO-10.7B, and Yi-Ko-6B (Sources).
 
 ### Model Architecture
@@ -65,7 +66,7 @@ As with any language model, FuseLLM may exhibit biases present in the training data.
 
 ## Acknowledgments
 
-We would like to thank Sionic AI for providing the computational resources needed for training the FuseLLM model.
+We would like to thank Sionic AI (https://sionic.ai) for providing the A100 x8 computational resources needed for training the FuseLLM model.
 
 ## Contact
```
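The README describes fusing the knowledge of several source LLMs into a single target model. As a rough, hypothetical sketch of what a FuseLLM-style training objective can look like (this is not the repository's code; the function names, the per-sequence "pick the best teacher" selection, and the loss weight `lam` are all assumptions, and a real implementation must first align the source models' differing tokenizer vocabularies, which this sketch skips by assuming pre-aligned logits):

```python
import torch
import torch.nn.functional as F


def fuse_min_ce(teacher_logits, labels):
    """Fuse teacher distributions by picking, per sequence, the teacher
    whose distribution has the lowest cross-entropy against the gold tokens.

    teacher_logits: list of tensors, each [batch, seq, vocab] (assumed to be
                    already aligned to a common vocabulary)
    labels:         [batch, seq] gold token ids
    returns:        fused probability distribution, [batch, seq, vocab]
    """
    per_teacher_ce = []
    for logits in teacher_logits:
        # cross_entropy expects [batch, vocab, seq]; keep per-token losses,
        # then average over the sequence to score each teacher per example
        ce = F.cross_entropy(
            logits.transpose(1, 2), labels, reduction="none"
        ).mean(dim=1)                                   # [batch]
        per_teacher_ce.append(ce)
    best = torch.stack(per_teacher_ce).argmin(dim=0)    # [batch]
    probs = torch.stack(
        [F.softmax(l, dim=-1) for l in teacher_logits]  # [n_teachers, b, s, v]
    )
    # select the winning teacher's distribution for each example
    return probs[best, torch.arange(labels.size(0))]    # [batch, seq, vocab]


def fusion_training_loss(student_logits, fused_probs, labels, lam=0.9):
    """Combine the usual causal-LM loss with a divergence term that pulls
    the student toward the fused teacher distribution."""
    lm_loss = F.cross_entropy(student_logits.transpose(1, 2), labels)
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        fused_probs,
        reduction="batchmean",
    )
    return lam * lm_loss + (1.0 - lam) * kl
```

The design choice sketched here (selecting one teacher per sequence rather than averaging all of them) keeps the fused target a proper probability distribution and avoids averaging in a teacher that is confidently wrong on that example; weighted averaging of the teacher distributions is the other common variant.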