Update README.md
README.md CHANGED
@@ -5,14 +5,14 @@ license_link: https://huggingface.co/LiquidAI/LFM2-350M/blob/main/LICENSE
 datasets:
 - Archi-medes/LabGuide_Preview
 base_model:
-- LiquidAI/LFM2-
+- LiquidAI/LFM2-700M
 pipeline_tag: question-answering
 ---
 # LabGuide Preview Model
 
 ## Model Summary
 The **LabGuide Preview Model** is a demonstration release built entirely with **Madlab**, using its synthetic dataset generator and training workflow.
-It is based on [LiquidAI/LFM2-
+It is based on [LiquidAI/LFM2-700M](https://huggingface.co/LiquidAI/LFM2-700M), adapted to showcase Madlab’s end-to-end capabilities for dataset creation, model training, and assistant deployment.
 
 This model illustrates how applications can leverage Madlab to train their own assistants in a reproducible and accessible way.
 It is **not intended for production use**, but rather as a preview for contributors, collaborators, and community feedback.
@@ -28,7 +28,7 @@ It is **not intended for production use**, but rather as a preview for contribut
 
 ## Training Process
 - **Framework**: Madlab training pipeline.
-- **Base Model**: [LiquidAI/LFM2-
+- **Base Model**: [LiquidAI/LFM2-700M](https://huggingface.co/LiquidAI/LFM2-700M).
 - **Workflow**: Synthetic dataset generation → Madlab training loop → Magic Judge Evaluation → Preview model release.
 - **Objective**: Demonstrate Madlab’s integrated workflow for building application-specific assistants.
 
@@ -56,5 +56,5 @@ It is **not intended for production use**, but rather as a preview for contribut
 ---
 
 ## Acknowledgements
-- Base model: [LiquidAI/LFM2-
+- Base model: [LiquidAI/LFM2-700M](https://huggingface.co/LiquidAI/LFM2-700M).
 - Built and trained with [**Madlab**](https://github.com/archimedes1618/madlab).
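For quick experimentation, the sketch below shows one way to pull down the artifacts referenced in this card with the `datasets` and `transformers` libraries. It is a minimal illustration, not part of the card or of Madlab's tooling: the checkpoint id, split name, and prompt are assumptions, and LFM2 models require a reasonably recent `transformers` release.

```python
# Minimal sketch (not from the card): load the preview dataset and a model
# checkpoint with the Hugging Face libraries. Assumes `datasets` and a recent
# `transformers` build with LFM2 support are installed.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Synthetic training data referenced in the card's front matter.
# The "train" split name is an assumption; check the dataset page.
ds = load_dataset("Archi-medes/LabGuide_Preview", split="train")
print(ds[0])

# Placeholder: swap in the LabGuide Preview checkpoint id once published;
# the base model id below is the one named in the card.
model_id = "LiquidAI/LFM2-700M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Chat-style prompting; the template itself comes from the tokenizer config.
messages = [{"role": "user", "content": "How do I calibrate a pH meter?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Swap the prompt for a question from the LabGuide domain; as the card notes, the preview checkpoint is intended for feedback rather than production use, so treat the answers accordingly.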