---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- esper
- esper-3.1
- esper-3
- valiant
- valiant-labs
- mistral3
- mistral
- mistral-common
- ministral-3-8b
- ministral
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- ansible
- docker
- kubernetes
- helm
- grafana
- prometheus
- shell
- bash
- azure
- aws
- gcp
- cloud
- scripting
- powershell
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
base_model: mistralai/Ministral-3-8B-Reasoning-2512
datasets:
- sequelbox/Titanium3-DeepSeek-V3.1-Terminus
- sequelbox/Tachibana3-Part1-DeepSeek-V3.1-Terminus
- sequelbox/Tachibana3-Part2-DeepSeek-V3.2
- sequelbox/Mitakihara-DeepSeek-R1-0528
license: apache-2.0
---

**[Support our open-source dataset and model releases!](https://huggingface.co/spaces/sequelbox/SupportOpenSource)**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/qdicXwrO_XOKRTjOu2yBF.jpeg)

Esper 3.1: [Ministral-3-3B-Reasoning-2512](https://huggingface.co/ValiantLabs/Ministral-3-3B-Reasoning-2512-Esper3.1), [Qwen3-4B-Thinking-2507](https://huggingface.co/ValiantLabs/Qwen3-4B-Thinking-2507-Esper3.1), [Ministral-3-8B-Reasoning-2512](https://huggingface.co/ValiantLabs/Ministral-3-8B-Reasoning-2512-Esper3.1), [Ministral-3-14B-Reasoning-2512](https://huggingface.co/ValiantLabs/Ministral-3-14B-Reasoning-2512-Esper3.1), [gpt-oss-20b](https://huggingface.co/ValiantLabs/gpt-oss-20b-Esper3.1)

Esper 3.1 is a coding, architecture, and DevOps reasoning specialist, built here on Ministral-3-8B-Reasoning-2512.

- Your dedicated DevOps expert: Esper 3.1 maximizes DevOps and architecture helpfulness, powered by [high-difficulty DevOps and architecture data](https://huggingface.co/datasets/sequelbox/Titanium3-DeepSeek-V3.1-Terminus) generated with DeepSeek-V3.1-Terminus!
- Improved coding performance: challenging code-reasoning datasets, generated by stretching [DeepSeek-V3.1-Terminus](https://huggingface.co/datasets/sequelbox/Tachibana3-Part1-DeepSeek-V3.1-Terminus) and [DeepSeek-V3.2-Exp](https://huggingface.co/datasets/sequelbox/Tachibana3-Part2-DeepSeek-V3.2) to their limits, allow Esper 3.1 to tackle harder coding tasks!
- AI to build AI: our [high-difficulty AI expertise data](https://huggingface.co/datasets/sequelbox/Mitakihara-DeepSeek-R1-0528) boosts Esper 3.1's MLOps, AI architecture, AI research, and general reasoning skills.
- Small model sizes allow running on local desktop and mobile devices, plus super-fast server inference! (See the server inference sketch after the example script below.)

## Prompting Guide

Esper 3.1 uses the [Ministral-3-8B-Reasoning-2512](https://huggingface.co/mistralai/Ministral-3-8B-Reasoning-2512) prompt format.

Example inference script to get started:

```python
import torch
from transformers import Mistral3ForConditionalGeneration, MistralCommonBackend

model_id = "ValiantLabs/Ministral-3-8B-Reasoning-2512-Esper3.1"

# Load the mistral-common tokenizer backend and the model weights.
tokenizer = MistralCommonBackend.from_pretrained(model_id)
model = Mistral3ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

user_prompt = "In a master-detail interface, you have a list of customer names. To improve perceived performance, you want to prefetch a customer's detailed data when a user hovers their mouse over the customer's name in the list. Implement this behavior using React Query's queryClient.prefetchQuery method within an onMouseEnter event handler."
# Alternative example prompts:
# user_prompt = "The core learning mechanism in Soar, chunking, creates new production rules by compiling the results of successful subgoal resolution. Explain the precise mechanism by which the dependency graph of working memory elements that contributed to the subgoal's result determines the conditions of the new chunk. What are the implications of this mechanism for creating overly specific or overly general rules, and how can an architect guide the chunking process?"
# user_prompt = "Write a Haskell program that models a bank with multiple accounts. Use Haskell's Software Transactional Memory (STM) library to implement a thread-safe transfer function that moves funds from one account to another. The function must execute atomically, ensuring that the total amount of money in the system remains constant even when multiple transfers are attempted concurrently from different threads."

# Reasoning system prompt: the model drafts its thinking between
# [THINK] and [/THINK] tags, then writes the final response.
system_prompt = (
    "# HOW YOU SHOULD THINK AND ANSWER\n\n"
    "First draft your thinking process (inner monologue) until you arrive at a response. "
    "Format your response using Markdown, and use LaTeX for any mathematical equations. "
    "Write both your thoughts and the response in the same language as the input.\n\n"
    "Your thinking process must follow the template below:"
    "[THINK]Your thoughts or/and draft, like working through an exercise on scratch paper. "
    "Be as casual and as long as you want until you are confident to generate the response to the user.[/THINK]"
    "Here, provide a self-contained response."
)

messages = [
    {
        "role": "system",
        "content": system_prompt
    },
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": user_prompt,
            },
        ],
    },
]

# Apply the chat template, then move input tensors to the GPU.
tokenized = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True)
tokenized = {k: v.to("cuda") for k, v in tokenized.items() if hasattr(v, "to")}

output = model.generate(
    **tokenized,
    max_new_tokens=20000,
)[0]

# Decode only the newly generated tokens, skipping the prompt.
decoded_output = tokenizer.decode(output[len(tokenized["input_ids"][0]):])
print(decoded_output)
```
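The decoded output contains the model's reasoning between `[THINK]` and `[/THINK]` tags, followed by the final answer, per the system prompt template above. If you only want the response text, here is a minimal post-processing sketch (the `split_reasoning` helper is our own illustration, not part of the model's API):

```python
# Minimal sketch: separate reasoning from the final answer in decoded_output.
# Assumes the model followed the [THINK]...[/THINK] template from the system prompt.
def split_reasoning(decoded: str) -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is empty if no closing tag is found."""
    head, sep, tail = decoded.partition("[/THINK]")
    if not sep:
        # No closing tag: treat the whole output as the answer.
        return "", decoded.strip()
    reasoning = head.replace("[THINK]", "", 1).strip()
    return reasoning, tail.strip()

reasoning, answer = split_reasoning(decoded_output)
print(answer)
```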
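For the server inference mentioned above, any OpenAI-compatible endpoint works. The sketch below is an assumption-heavy example, not part of this release: it assumes a server (for example, vLLM) is already running at `http://localhost:8000/v1` and serving this model under its Hugging Face repo name; adjust the URL, API key, and prompt for your deployment.

```python
# Client sketch against an OpenAI-compatible server (e.g., vLLM).
# Assumptions: the server is already running at localhost:8000 and
# serves this model under its Hugging Face repo name; adjust as needed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="ValiantLabs/Ministral-3-8B-Reasoning-2512-Esper3.1",
    messages=[
        {"role": "user", "content": "Write an Ansible playbook that installs and starts nginx on Ubuntu hosts."},
    ],
    max_tokens=4096,
)
print(response.choices[0].message.content)
```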
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)

Esper 3.1 is created by [Valiant Labs.](http://valiantlabs.ca/)

[Check out our HuggingFace page to see all of our models!](https://huggingface.co/ValiantLabs)

We care about open source. For everyone to use.