---
license: apache-2.0
tags:
- mistral
- Uncensored
- text-generation-inference
- transformers
- unsloth
- trl
- roleplay
- conversational
- rp
datasets:
- N-Bot-Int/Iris-Uncensored-R2
- N-Bot-Int/Millie-R1_DPO
- N-Bot-Int/Millia-R1_DPO
language:
- en
base_model:
- unsloth/mistral-7b-instruct-v0.3-bnb-4bit
pipeline_tag: text-generation
library_name: transformers
metrics:
- character
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/kR8JIdSZl9x9RCUeqg1ZW.png)

# Official Quants Are Uploaded by Us
- [MistThena7BV2-GGUF](https://huggingface.co/N-Bot-Int/MistThena7BV2-GGUF)

# Support Us on Ko-Fi!
- [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/J3J61D8NHV)

# MistThena7B - V2
- Introducing our mind-boggling MistThena7B **V2**! This version offers an upgraded RP experience beyond our other models, outcompeting our 3B and 1B models, as well as MythoMax, Deepseek, and Hermes, for roleplaying!
- **MistThena7B-V2** offers expanded roleplay capabilities: we used our EmojiEmulsifyer program to train MistThena7B to use emojis, expanding the immersiveness and the set of actions MistThena7B can perform!
- Activate MistThena's expanded actions by mirroring it (i.e., using emojis in your own prompts) to encourage MistThena's use of emojis and actions!
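The emoji-mirroring tip above can be sketched in code. A minimal example, assuming the standard Mistral v0.3 instruct prompt format (`[INST] … [/INST]`); the helper name `build_prompt` and the default emoji are ours for illustration, not part of the model:

```python
def build_prompt(user_msg: str, mirror_emoji: str = "🎭") -> str:
    """Wrap a user turn in the Mistral instruct template, appending an
    emoji so the model mirrors emoji/action use in its reply."""
    return f"<s>[INST] {user_msg} {mirror_emoji} [/INST]"

prompt = build_prompt("You enter the tavern and greet the knight.")
# The resulting string can be passed to any Mistral-compatible backend.
```

Any emoji that fits the scene works; the point is simply that the model picks up on emoji in your turns.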
- MistThena7B-V2 is also trained on 160K examples from our latest corpus, **IRIS_UNCENSORED_R2**! This shows that Iris_Uncensored_R2 has strong potential to produce good outputs, and reveals MistThena7B's solid roleplaying capabilities, obtained through rigorous training combined with preventive measures against overfitting.
- MistThena7B contains more fine-tuned data, so please report any issues you find, any overfitting, or suggestions for the future model **V3** to our email [nexus.networkinteractives@gmail.com](mailto:nexus.networkinteractives@gmail.com). Once again, feel free to modify the LoRA to your liking; however, please consider crediting this page, and if you extend its **dataset**, please handle it with care and ethical consideration.

- MistThena is
  - **Developed by:** N-Bot-Int
  - **License:** apache-2.0
  - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
  - **Sequentially trained from model:** N-Bot-Int/OpenElla3-Llama3.2A
  - **Dataset combined using:** Mosher-R1 (proprietary software)

- Comparison Metric Score

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/UV2z1eygZ38Pa710Bjbf3.png)

  - Metrics made by **ItsMeDevRoland**, comparing:
    - **MistThena7B-V1**
    - **MistThena7B-V2 (60-step version)**
  - All ranked with the same prompt, same temperature, and same hardware (Google Colab), to properly showcase the differences and strengths of the models.

---

# 🌀 MistThena-7B V2: Slower Beats, Stronger Bonds – A Roleplay Revival

> "She may not win the speed race, but when it comes to presence and performance, she owns the stage."

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/d27tdTbSyTopKL5pAn_bA.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/-IxmfIDUJkkNPhYNDl8xB.png)

---

# MistThena-7B V2 isn't just an upgrade; she's a reinvention.
Built on V1's storytelling roots, V2 shifts her focus inward:
- **longer scenes, deeper characters, and dialogue that breathes.**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/24kHzq3gksyL2Of0RuwgI.png)

- 💬 **Roleplay Evaluation**
  - ✍️ **Length Score**: 0.34 → **1.00** (🚀)
  - 🧠 **Character Consistency**: 0.20 → **0.53**
  - 🌌 **Immersion**: 0.00 → **0.47**
  - 🎭 **Overall RP Score**: 0.17 → **0.67**

> She no longer just responds; she *inhabits*. MistThena-7B V2 is the method actor of models, channeling roles with vivid coherence and creative depth.

---

# ⚙️ The Cost of Craft: Time for Thought
- 🕒 **Inference Time**: 114s → **179s** (↑)
- ⚡ **Tokens/sec**: 1.51 → **1.28** (↓)

> Yes, she's slower, but that's not a bug. That's intention. Every word is more considered, every output more deliberate.

---

# 📏 Traditional Metrics? A Trade-off
- 📘 **BLEU Score**: 0.43 → **0.18**
- 📕 **ROUGE-L**: 0.60 → **0.32**

> While V1 outperforms on surface-level matching, V2 is optimized for *experiential fidelity*, not rigid overlap.

---

# 🎯 Reimagined for Realness
MistThena-7B V2 isn't trying to mimic; she's trying to *immerse*. Designed to tell stories, embody roles, and hold character in long-form exchanges.

- 🧩 Tailored for:
  - Narrative-heavy use cases
  - Emotional continuity and consistency
  - Richer, longer interactions

---

> "MistThena-7B V2 trades benchmarks for believability. Less about matching, more about meaning. Less polished, more *present*."

# MistThena-7B V2 is where slower feels *stronger*.

---

- # Notice
- **For a good experience, please use:**
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/Md874IFaDkTJdykqpn-Jo.png)
- {FOR KOBOLDCPP USERS, PLEASE USE COHERENT CREATIVITY LEGACY}
- TL;DR: I'm still looking for ways to improve the settings; if you have any good settings, feel free to post them in the community tab! I'll credit ya!
- # Detail card:
  - Parameters:
    - 7 billion parameters
    - (Please check whether your GPU can run 7B models)
  - Training:
    - 250 steps on N-Bot-Int/Iris_Uncensored_R2
    - 60 steps on N-Bot-Int/Millie_DPO
  - Finetuning tool:
    - Unsloth AI
    - This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
  - Fine-tuned using:
    - Google Colab
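For the "can your GPU run a 7B model?" question above, a back-of-the-envelope VRAM estimate can help. This is our rough approximation, assuming 4-bit quantized weights (as in the bnb-4bit base model) plus a fixed overhead for the KV cache, activations, and runtime buffers; the overhead figure is an assumption, not a measurement:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int,
                     overhead_gb: float = 1.5) -> float:
    """Approximate VRAM needed: quantized weight size plus a fixed
    overhead (KV cache, activations, runtime buffers)."""
    weights_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits -> GB
    return weights_gb + overhead_gb

print(round(estimate_vram_gb(7, 4), 1))   # roughly 5.0 GB for 7B in 4-bit
print(round(estimate_vram_gb(7, 16), 1))  # roughly 15.5 GB in fp16
```

By this estimate, a GPU with 6 GB or more of VRAM should comfortably fit the 4-bit model; actual usage varies with context length and backend.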