---

base_model: HuggingFaceTB/SmolLM-360M-Instruct
language:
  - en
library_name: transformers
license: apache-2.0
datasets:
  - LSXPrime/ProseFlow-Actions-v1
tags:
  - text-generation
  - instruction
  - proseflow
  - unsloth
  - smollm
  - writing-assistant
---


# ProseFlow-v1-360M-Instruct

**ProseFlow-v1-360M-Instruct** is a lightweight, experimental instruction-tuned model built for the [ProseFlow desktop application](https://github.com/LSXPrime/ProseFlow). It is a fine-tune of HuggingFace's [**SmolLM-360M-Instruct**](https://huggingface.co/HuggingFaceTB/SmolLM-360M-Instruct), created to explore how well smaller language models handle a diverse set of text-processing tasks.

The model was fine-tuned on the [**ProseFlow-Actions-v1**](https://huggingface.co/datasets/LSXPrime/ProseFlow-Actions-v1) dataset.

**Note:** This model is provided for research and experimental purposes, and for use on low-resource devices. For the best user experience in the ProseFlow application, the larger and more capable [`ProseFlow-v1-1.5B-Instruct`](https://huggingface.co/LSXPrime/ProseFlow-v1-1.5B-Instruct) model is strongly recommended.

## Model Description

ProseFlow is a universal AI text processor that allows users to create and execute custom AI "Actions" on text in any application. This model was an experiment to see if a ~360M parameter model could reliably perform the wide range of tasks defined in the training dataset.

### Performance and Capabilities

Evaluations show that while this model is extremely fast and has very low resource requirements, its capabilities are limited.

#### Strengths:
*   **Extremely Lightweight:** Can run on devices with very limited RAM and computational power.
*   **Strict Formatting Adherence (sometimes):** When it understands the task, it can follow rigid formatting instructions (such as producing a bulleted list) more strictly than its larger counterpart.
*   **Simple Data Extraction:** It shows some capability in basic data extraction and formatting tasks, such as creating Markdown tables or extracting contact information.

#### Weaknesses & Limitations:
*   **Poor Reasoning:** The model struggles significantly with tasks that require logical reasoning, inference, or multi-step problem-solving. It often fails on word problems and logical puzzles.
*   **Limited Creativity:** It is not effective at creative writing tasks like continuing a story or generating novel content. Its outputs are often repetitive or nonsensical.
*   **Instructional Failures:** The model frequently violates the "no extra text" rule by adding conversational chatter. In many cases, it fails the task entirely and repeats the input verbatim.
*   **Hallucination:** On some tasks (e.g., `To Paragraph`), the model hallucinates content completely unrelated to the input.
*   **Unreliable for Complex Tasks:** It is not suitable for complex tasks like code refactoring, bug finding, or drafting professional business correspondence.

### Intended Use

This model is intended for **experimental use** and for users on **extremely resource-constrained systems** who are willing to accept a significant trade-off in performance and reliability. It may be suitable for a very limited subset of simple, repetitive text-formatting tasks.

It is designed to be used within the **ProseFlow desktop application**, but it is **not the recommended model for general use**.
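For quick experiments outside the application, the model can also be loaded like any `transformers` causal language model. The sketch below is illustrative only: it assumes the chat template inherited from the base SmolLM-360M-Instruct model and a ProseFlow-style prompt that combines an action instruction with the input text in a single user turn; it is not the application's exact prompt format.

```python
# Minimal inference sketch (prompt layout is an assumption, not ProseFlow's exact format).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LSXPrime/ProseFlow-v1-360M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A ProseFlow-style "Action": an instruction applied to a piece of input text.
messages = [
    {
        "role": "user",
        "content": "Fix the grammar in the following text and return only the corrected text:\n\n"
                   "this sentences are wrote bad.",
    },
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Given the limitations described above, keep prompts short and tasks simple; greedy decoding (`do_sample=False`) tends to be more stable for strict formatting tasks than sampling.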

## How to Use in ProseFlow

1.  [Download and install the ProseFlow application](https://github.com/LSXPrime/ProseFlow/releases).
2.  Navigate to the **Providers -> Local Provider** tab.
3.  Click "Manage Models..." and download `ProseFlow-v1-360M-Instruct` from the "Available for Download" list.
4.  Once downloaded, select it from the "My Models" list.
5.  Set your "Primary Service Type" in ProseFlow to **Local**.
6.  Be aware of the limitations described above when executing actions.

## Training Details

*   **Base Model:** [HuggingFaceTB/SmolLM-360M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-360M-Instruct)
*   **Dataset:** [LSXPrime/ProseFlow-Actions-v1](https://huggingface.co/datasets/LSXPrime/ProseFlow-Actions-v1)
*   **Fine-tuning Library:** [Unsloth](https://github.com/unslothai/unsloth)
*   **Fine-tuning Method:** Supervised fine-tuning on a dataset of structured instruction-input-output triplets.
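
The sketch below shows one plausible way such instruction-input-output triplets can be rendered into chat-formatted training text with the base model's tokenizer. The column names (`instruction`, `input`, `output`) are assumptions about the dataset schema, and the actual Unsloth training pipeline is not reproduced here.

```python
# Illustrative data-formatting sketch; column names are assumed, not taken from the dataset card.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-360M-Instruct")
dataset = load_dataset("LSXPrime/ProseFlow-Actions-v1", split="train")

def to_chat_text(example):
    # Combine the action instruction with its (optional) input text into one user turn.
    user_turn = example["instruction"]
    if example.get("input"):
        user_turn += "\n\n" + example["input"]
    messages = [
        {"role": "user", "content": user_turn},
        {"role": "assistant", "content": example["output"]},
    ]
    # tokenize=False returns the plain prompt string a supervised trainer would consume.
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

formatted = dataset.map(to_chat_text)
print(formatted[0]["text"])
```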

## License

This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).