---
license: agpl-3.0
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: predacons
tags:
- reasoning
- chain of thought
- problem solving
---

## Model Details

### Model Description

Predacon/Pico-R1-1.5b is a highly efficient and accurate language model fine-tuned on the deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B base model. Despite its compact size of just 0.99GB, it delivers exceptional performance, particularly in tasks requiring logical reasoning and structured thought processes.
- **Developed by:** [Shourya Shashank](https://huggingface.co/shouryashashank)
- **Model type:** Transformer-based Language Model
- **Language(s) (NLP):** English
- **License:** AGPL-3.0
- **Finetuned from model:** deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B

#### Key Features:
* **Compact Size**: At only 0.99GB, it is lightweight and easy to deploy, making it suitable for environments with limited computational resources.
* **High Accuracy**: Training on a specialized chain-of-thought and reasoning dataset enhances its ability to perform complex reasoning tasks with high precision.
* **Fine-Tuned on DeepSeek-R1-Distill-Qwen-1.5B**: Leveraging the robust foundation of the deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B model, it inherits strong language understanding and generation capabilities.

#### Applications:

* **Educational Tools**: Ideal for developing intelligent tutoring systems that require nuanced understanding and explanation of concepts.
* **Customer Support**: Enhances automated customer service systems by providing accurate and contextually relevant responses.
* **Research Assistance**: Assists researchers in generating hypotheses, summarizing findings, and exploring complex datasets.

## Uses

* **Lightweight**: Designed to be extremely lightweight, so it runs efficiently even on systems with limited resources.
* **Natural Language Understanding**: Ideal for applications requiring human-like text understanding and generation, such as chatbots, virtual assistants, and content generation tools.
* **Small Size**: At just 0.99GB, it is quick to download and easy to install.
* **High Reliability**: Reliability is significantly enhanced by the chain-of-thought approach integrated into its design, ensuring consistent and accurate performance.

### Direct Use

* **Problem Explanation**: Generates detailed descriptions and reasoning for various problems, useful in educational contexts, customer support, and automated troubleshooting.
* **Natural Language Understanding**: Well suited to applications requiring human-like text understanding and generation, such as chatbots, virtual assistants, and content generation tools.
* **Compact Deployment**: Suitable for environments with limited computational resources thanks to its small size and 4-bit quantization; see the loading sketch below.

### Downstream Use

* **Educational Tools**: Fine-tune the model on educational datasets to provide detailed explanations and reasoning for academic subjects; a parameter-efficient fine-tuning sketch follows this list.
* **Customer Support**: Fine-tune on customer service interactions to enhance automated support systems with accurate and context-aware responses.
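
For the educational-tools case above, one lightweight approach is LoRA fine-tuning with the `peft` and `transformers` libraries rather than updating all weights. The sketch below is a rough outline under stated assumptions: the dataset file `tutoring_examples.jsonl` (with a `text` field) is hypothetical, and the LoRA and training hyperparameters are illustrative.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "Predacon/Pico-R1-1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure padding works for batching
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach a small LoRA adapter instead of training all parameters (illustrative values)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical dataset: one worked explanation per record in a "text" field
dataset = load_dataset("json", data_files="tutoring_examples.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pico-r1-edu",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The resulting adapter can be kept separate or merged back into the base weights, so the deployed footprint stays close to the original checkpoint size.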

## Bias, Risks, and Limitations

### Limitations

**Predacon/Pico-R1-1.5b** is a compact model designed for efficiency, but it comes with certain limitations:

1. **Limited Context Understanding**:
   - With a smaller parameter count, the model may struggle to understand and generate contextually rich, nuanced responses compared to larger models.

2. **Bias and Fairness**:
   - Like all language models, Predacon/Pico-R1-1.5b may exhibit biases present in its training data. Users should be cautious of potential biases in the generated outputs.

3. **Resource Constraints**:
   - While the model is designed to be efficient, it still requires a GPU for optimal performance. Users with limited computational resources may experience slower inference times.

### Example Usage:
```python
import predacons

# Load the model and tokenizer
model_path = "Predacon/Pico-R1-1.5b"
model = predacons.load_model(model_path)
tokenizer = predacons.load_tokenizer(model_path)

# Build a chat-style prompt
chat = [
    {"role": "user", "content": "A train travelling at a speed of 60 km/hr is stopped in 15 seconds by applying the brakes. Determine its retardation."},
]

# Generate a reasoned, step-by-step response
res = predacons.chat_generate(
    model=model,
    sequence=chat,
    max_length=5000,
    tokenizer=tokenizer,
    trust_remote_code=True,
    do_sample=True,
)

print(res)
```

This example demonstrates how to load the `Predacon/Pico-R1-1.5b` model and use it to generate an explanation for a given query, keeping in mind the limitations mentioned above.

## Model Card Authors

[Shourya Shashank](https://huggingface.co/shouryashashank)