We've added 14b and 3b as well; we'd specifically recommend the 14b for everyone to try: https://huggingface.co/ValiantLabs/Ministral-3-14B-Reasoning-2512-Esper3.1
t.d.a.g. (sequelbox)
AI & ML interests: open source, infinite games. (they/them)
Recent Activity
liked a model 1 day ago: mradermacher/Qwen3-14B-DAG-Reasoning-i1-GGUF
liked a model 1 day ago: mradermacher/Ministral-3-14B-Reasoning-2512-Esper3.1-i1-GGUF
replied to their post 4 days ago
reacted to danielhanchen's post with 🔥, 5 days ago:
Mistral's new Ministral 3 models can now be Run & Fine-tuned locally! (16GB RAM)
Ministral 3 models have vision support and best-in-class performance for their sizes.
14B Instruct GGUF: unsloth/Ministral-3-14B-Instruct-2512-GGUF
14B Reasoning GGUF: unsloth/Ministral-3-14B-Reasoning-2512-GGUF
Step-by-step Guide: https://docs.unsloth.ai/new/ministral-3
All GGUF, BnB, FP8 etc. variant uploads: https://huggingface.co/collections/unsloth/ministral-3
reacted to sergiopaniego's post, 5 days ago:
ICYMI, transformers v5 is out!
Grab a coffee ☕ and go read the announcement blog: https://huggingface.co/blog/transformers-v5
posted an update 5 days ago:
NEW RELEASE: Esper 3.1 for Ministral 3 14b, 8b, and 3b!
- Esper is our full-stack, full-cycle coding, DevOps, and architecture specialist!
- Our newest, best DeepSeek technical datasets emphasize more challenging queries and tough real-world coding tasks across a variety of programming languages and development paradigms:
- Titanium 3 for coding and reasoning in DevOps and architecture: sequelbox/Titanium3-DeepSeek-V3.1-Terminus
- Tachibana 3 for high-difficulty code production in a variety of topics and programming languages:
- sequelbox/Tachibana3-Part1-DeepSeek-V3.1-Terminus
- sequelbox/Tachibana3-Part2-DeepSeek-V3.2
- Mitakihara for MLOps, AI building, use, expertise, and research: sequelbox/Mitakihara-DeepSeek-R1-0528
Get Esper 3.1 now in all 3 Ministral 3 sizes! (We recommend 14b for general use.)
14b: ValiantLabs/Ministral-3-14B-Reasoning-2512-Esper3.1
8b: ValiantLabs/Ministral-3-8B-Reasoning-2512-Esper3.1
3b: ValiantLabs/Ministral-3-3B-Reasoning-2512-Esper3.1
We'll be bringing more models to Ministral soon, including Shining Valiant 3 :)
We're currently working hard on a big release in a new specialty - hoping to have that up on Valiant Labs before the end of the year! We'll keep pushing the boundaries of what personal-sized AI can do for you.
See our Experimental Reasoning models and open-source datasets: @sequelbox
Help us keep working for open source AI with a donation: sequelbox/SupportOpenSource
with love,
allegra
reacted to danielhanchen's post with ❤️, 9 days ago:
Qwen3-Next can now be Run locally! (30GB RAM)
The models come in Thinking and Instruct versions and use a new architecture, allowing ~10x faster inference than Qwen3-32B.
Thinking GGUF: unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF
Instruct GGUF: unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF
Step-by-step Guide: https://docs.unsloth.ai/models/qwen3-next
posted an update 11 days ago:
We strongly disagree with Hugging Face's decision to remove the Epstein files dataset. As an open source community, it is imperative that we support free access to important information.
Torrents remain available for those looking to use the information, but ease of access matters too. The datasets library provides legitimate value to users; it matters to be able to access content here.
We'd like to encourage everyone to retain local copies of anything on Hugging Face that's important to you.
posted an update 26 days ago:
NEW RELEASE: UML Generator is here!
- Our newest Experimental Reasoning release: create Unified Modeling Language diagrams to provide analysis and insight into your queries and situations!
- Multi-step reasoning reliably identifies the diagram structure before producing a response containing XMI 2.5.1 code for the UML diagram. Load the diagram into the UML tool of your choice!
- Trained in a variety of subjects for flexible analysis: software architecture, software development, business processes, systems engineering, data modeling, microservices, reverse engineering and more!
UML Generator is available for multiple sizes of gpt-oss and Qwen 3, to provide increased flexibility to the user:
gpt-oss-120b: sequelbox/gpt-oss-120b-UML-Generator
gpt-oss-20b: sequelbox/gpt-oss-20b-UML-Generator
Qwen3-14B: sequelbox/Qwen3-14B-UML-Generator
Qwen3-4B-Thinking-2507: sequelbox/Qwen3-4B-Thinking-2507-UML-Generator
You can also get the UML Generator dataset, to train your own models to use UML Generator Format: sequelbox/UML-Generator-Dataset-DeepSeek-V3.2
Support our experimental open-source research efforts, models and datasets: sequelbox/SupportOpenSource
See our other Experimental Reasoning models: https://huggingface.co/collections/sequelbox/experimental-reasoning-models
with love,
allegra
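For readers new to XMI, here is a minimal sketch of loading such output with the Python standard library. The XMI fragment and element names below are illustrative, simplified examples, not actual UML Generator output:

```python
import xml.etree.ElementTree as ET

# Minimal illustrative XMI 2.5.1-style fragment (simplified for this sketch;
# real exports carry considerably more metadata).
XMI = """<?xml version="1.0" encoding="UTF-8"?>
<xmi:XMI xmlns:xmi="http://www.omg.org/spec/XMI/20131001"
         xmlns:uml="http://www.omg.org/spec/UML/20131001">
  <uml:Model xmi:id="m1" name="PaymentModel">
    <packagedElement xmi:type="uml:Class" xmi:id="c1" name="PaymentProcessor"/>
    <packagedElement xmi:type="uml:Class" xmi:id="c2" name="Invoice"/>
    <packagedElement xmi:type="uml:Association" xmi:id="a1" name="processes"/>
  </uml:Model>
</xmi:XMI>"""

XMI_NS = "http://www.omg.org/spec/XMI/20131001"

def list_classes(xmi_text: str) -> list[str]:
    """Return the names of all elements typed uml:Class in an XMI document."""
    root = ET.fromstring(xmi_text)
    # xmi:type is a namespaced attribute, so look it up in Clark notation.
    return [elem.get("name") for elem in root.iter()
            if elem.get(f"{{{XMI_NS}}}type") == "uml:Class"]

print(list_classes(XMI))  # ['PaymentProcessor', 'Invoice']
```

Dedicated UML tools will do far more with the file; this only shows that the generated XMI is plain XML you can inspect programmatically.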
reacted to salma-remyx's post with 🔥, about 2 months ago:
We've built over 10K containerized reproductions of papers from arXiv!
Instead of spending all day trying to build an environment to test that new idea, just pull the Docker container from the Remyx registry.
And with Remyx, you can start experimenting faster by generating a test PR in your codebase based on the ideas found in your paper of choice.
Hub: https://hub.docker.com/u/remyxai
Remyx docs: https://docs.remyx.ai/resources/ideate
Coming soon, explore reproduced papers with AG2 + Remyx: https://github.com/ag2ai/ag2/pull/2141
replied to their post about 2 months ago
reacted to umarbutler's post, about 2 months ago:
I'm excited to announce the release of Kanon 2 Embedder, the world's best legal embedding model, ranked first on the Massive Legal Embedding Benchmark!
This model is the product of quite literally months of painstaking work alongside @abdurrahmanbutler collecting, cleaning, and processing terabytes of data as well as coming up with novel improvements to the standard embedder training recipe to push the limits of what's possible.
Kanon 2 Embedder is my most advanced model to date. On MLEB, it benchmarks as 9% more accurate than OpenAI's best embedding model and 30% faster.
Even when truncated from 1,792 to 768 dimensions, Kanon 2 Embedder continues to hold the number one spot on MLEB.
Importantly, Kanon 2 Embedder is also privacy and security friendly โ unlike Voyage, Cohere and Jina, none of your data is used to train our models by default.
Kanon 2 Embedder can also be self-hosted for enterprises with heightened security or reliability requirements.
You can read the full announcement on our blog to learn how we did it and how you can get started using Kanon 2 Embedder to embed your own legal documents: https://isaacus.com/blog/introducing-kanon-2-embedder
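Truncating an embedding to fewer dimensions, as described above, generally means keeping the leading components and re-normalizing to unit length. A generic sketch of that technique (an assumption about the standard approach, not Isaacus's implementation):

```python
import math

def truncate_embedding(vec: list[float], dim: int) -> list[float]:
    """Keep the first `dim` components and re-normalize to unit length,
    so cosine similarity remains meaningful on the shorter vectors."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0  # guard against all-zero
    return [x / norm for x in head]

print(truncate_embedding([3.0, 4.0, 12.0], 2))  # [0.6, 0.8]
```

Models trained with Matryoshka-style objectives are what make this kind of truncation retain most of the retrieval quality.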
reacted to tomaarsen's post, about 2 months ago:
🤗 Sentence Transformers is joining Hugging Face! 🤗 This formalizes the existing maintenance structure, as I've personally led the project for the past two years on behalf of Hugging Face! Details:
Today, the Ubiquitous Knowledge Processing (UKP) Lab is transferring the project to Hugging Face. Sentence Transformers will remain a community-driven, open-source project, with the same open-source license (Apache 2.0) as before. Contributions from researchers, developers, and enthusiasts are welcome and encouraged. The project will continue to prioritize transparency, collaboration, and broad accessibility.
Read our full announcement for more details and quotes from UKP and Hugging Face leadership: https://huggingface.co/blog/sentence-transformers-joins-hf
We see an increasing wish from companies to move from large LLM APIs to local models for better control and privacy, reflected in the library's growth: in just the last 30 days, Sentence Transformer models have been downloaded >270 million times, second only to transformers.
I would like to thank the UKP Lab, and especially Nils Reimers and Iryna Gurevych, both for their dedication to the project and for their trust in myself, both now and two years ago. Back then, neither of you knew me well, yet you trusted me to take the project to new heights. That choice ended up being very valuable for the embedding & Information Retrieval community, and I think this choice of granting Hugging Face stewardship will be similarly successful.
I'm very excited about the future of the project, and for the world of embeddings and retrieval at large!
reacted to AdinaY's post, about 2 months ago:
HunyuanWorld Mirror 🔥 a versatile feed-forward model for universal 3D world reconstruction by Tencent
tencent/HunyuanWorld-Mirror
✨ Any prior in → 3D world out
✨ Mix camera, intrinsics, depth as priors
✨ Predict point clouds, normals, Gaussians & more in one pass
✨ Unified architecture for all 3D tasks
reacted to paulml's post with 🔥, about 2 months ago:
Qwen3-VL-4B is incredibly easy to fine-tune!
We've trained the first DSE model based on this model, and it's already performing at the same level as Jina v4!
While Jina Embeddings v4 is built on Qwen2.5-VL-3B (which has a non-commercial license), our model is based on Qwen3-VL-4B and released under Apache 2.0, making it fully commercially permissive.
Check out our DSE model here:
racineai/QwenAmann-4B-dse
posted an update about 2 months ago:
NEW RELEASE: Esper 3.1 for gpt-oss-20b!
- Esper is our full-stack, full-cycle coding, DevOps, and architecture specialist!
- Our newest, best DeepSeek technical datasets emphasize more challenging queries and tough real-world coding tasks across a variety of programming languages and development paradigms:
- Titanium 3 for coding and reasoning in DevOps and architecture: sequelbox/Titanium3-DeepSeek-V3.1-Terminus
- Tachibana 3 for high-difficulty code production in a variety of topics and programming languages:
- sequelbox/Tachibana3-Part1-DeepSeek-V3.1-Terminus
- sequelbox/Tachibana3-Part2-DeepSeek-V3.2
- Mitakihara for MLOps, AI building, use, expertise, and research: sequelbox/Mitakihara-DeepSeek-R1-0528
GET IT NOW, FOR EVERYONE: ValiantLabs/gpt-oss-20b-Esper3.1
We'll have more releases of Esper coming up, plus more experimental open-source releases :) find open source datasets and experimental models at @sequelbox
Help us keep working for open source AI with a donation: sequelbox/SupportOpenSource
more to come soon!
allegra
posted an update 2 months ago:
NEW RELEASE: Esper 3.1!
- Esper is our full-stack, full-cycle coding, DevOps, and architecture specialist!
- Our newest, best DeepSeek technical datasets emphasize more challenging queries and tough real-world coding tasks across a variety of programming languages and development paradigms:
- Titanium 3 for coding and reasoning in DevOps and architecture: sequelbox/Titanium3-DeepSeek-V3.1-Terminus
- Tachibana 3 for high-difficulty code production in a variety of topics and programming languages:
- sequelbox/Tachibana3-Part1-DeepSeek-V3.1-Terminus
- sequelbox/Tachibana3-Part2-DeepSeek-V3.2
- Mitakihara for MLOps, AI building, use, expertise, and research: sequelbox/Mitakihara-DeepSeek-R1-0528
Our first release in the Esper 3.1 series is built on Qwen3-4B-Thinking-2507. GET IT NOW, FOR EVERYONE: ValiantLabs/Qwen3-4B-Thinking-2507-Esper3.1
We'll be bringing Esper 3.1 to more, larger models as soon as we can; you can help this happen faster with a donation: sequelbox/SupportOpenSource
We're really happy about this one; let us know how Esper 3.1 works for you!
Support open source. It's our only hope for an AI future you'll actually want to live in.
More to come soon!
with our love and appreciation,
allegra
posted an update 3 months ago:
NEW EXPERIMENTAL RELEASE: DES Reasoning is here!
- Our newest Experimental Reasoning Modality release: create Discrete Event Simulations using SimPy to provide analysis and insight into your queries and situations!
- Multi-step analysis identifies the structure of the situation and the goal of simulation before proceeding to analysis and creating SimPy simulation code and analysis chat.
- DES Reasoning Format provides clear Python code that is easy to read and modify; easy to use for running simulations, doing analysis, or further conversation with your assistant.
- Trained in a variety of subjects for flexible analysis: programming, science, business, economics, energy, finance, law, logistics, management, manufacturing, operations, supply chain and more!
DES Reasoning available for gpt-oss-20b and Qwen3-4B-Thinking-2507:
gpt-oss-20b: sequelbox/gpt-oss-20b-DES-Reasoning
Qwen3-4B-Thinking-2507: sequelbox/Qwen3-4B-Thinking-2507-DES-Reasoning
You can also get the DES Reasoning dataset, to train your own models to use DES Reasoning Format: sequelbox/DES-Reasoning-DeepSeek-V3.1
Support our experimental open-source research efforts, models and datasets: sequelbox/SupportOpenSource
with love,
allegra
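For a sense of what a discrete event simulation does under the hood, here is a minimal pure-stdlib sketch of a deterministic single-server queue. This illustrates the DES concept generically; it is not the SimPy code the models emit, and the queue parameters are invented for the example:

```python
import heapq
import itertools

def simulate(until=10.0, interarrival=2.0, service=3.0):
    """Deterministic single-server queue: one customer arrives every
    `interarrival` time units; serving each takes `service` time units.
    Returns the departure times observed up to `until`."""
    tie = itertools.count()                 # breaks ties between equal times
    events = [(0.0, next(tie), "arrival")]  # min-heap of (time, tie, kind)
    waiting, server_busy = 0, False
    departures = []
    while events:
        t, _, kind = heapq.heappop(events)
        if t > until:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + interarrival, next(tie), "arrival"))
            if server_busy:
                waiting += 1                # join the queue
            else:
                server_busy = True          # start service immediately
                heapq.heappush(events, (t + service, next(tie), "departure"))
        else:  # departure: start the next waiting customer, or go idle
            departures.append(t)
            if waiting:
                waiting -= 1
                heapq.heappush(events, (t + service, next(tie), "departure"))
            else:
                server_busy = False
    return departures

print(simulate())  # [3.0, 6.0, 9.0]
```

SimPy wraps this event-heap pattern in processes and an `Environment`, which is what makes the generated simulation code short and readable.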
reacted to codelion's post with 🔥, 3 months ago:
I recently worked on a LoRA that improves tool use in LLMs. Thought the approach might interest folks here.
The issue I have had when trying to use some of the local LLMs with coding agents is this:
Me: "Find all API endpoints with authentication in this codebase"
LLM: "You should look for @app .route decorators and check if they have auth middleware..."
But I often want it to actually search the files and show me, and the LLM doesn't trigger a tool-use call.
To fine-tune it for tool use I combined two data sources:
1. Magpie scenarios - 5000+ diverse tasks (bug hunting, refactoring, security audits)
2. Real execution - Ran these on actual repos (FastAPI, Django, React) to get authentic tool responses
This ensures the model learns both breadth (many scenarios) and depth (real tool behavior).
Tools We Taught:
- read_file - Actually read file contents
- search_files - Regex/pattern search across codebases
- find_definition - Locate classes/functions
- analyze_imports - Dependency tracking
- list_directory - Explore structure
- run_tests - Execute test suites
Improvements:
- Tool calling accuracy: 12% โ 80%
- Correct parameters: 8% โ 87%
- Multi-step tasks: 3% โ 78%
- End-to-end completion: 5% โ 80%
- Tools per task: 0.2 โ 3.8
The LoRA markedly improves intentional tool calling. As an example, consider the query: "Find ValueError in payment module"
The response proceeds as follows:
1. Calls search_files with pattern "ValueError"
2. Gets 4 matches across 3 files
3. Calls read_file on each match
4. Analyzes context
5. Reports: "Found 3 ValueError instances: payment/processor.py:47 for invalid amount, payment/validator.py:23 for unsupported currency..."
Resources:
- Colab notebook https://colab.research.google.com/github/codelion/ellora/blob/main/Ellora_Recipe_3_Enhanced_Tool_Calling_and_Code_Understanding.ipynb
- Model - codelion/Llama-3.2-1B-Instruct-tool-calling-lora
- GitHub - https://github.com/codelion/ellora
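A minimal sketch of how a runtime might dispatch such tool calls: only the tool names (search_files, read_file) come from the post; the JSON call schema and the implementations below are illustrative assumptions, not the trained format.

```python
import json
import pathlib
import re

# Toy versions of two of the tools named above.
def search_files(pattern: str, root: str = ".") -> list[str]:
    """Regex search across *.py files under `root`, returning path:lineno hits."""
    hits = []
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if re.search(pattern, line):
                hits.append(f"{path}:{lineno}")
    return hits

def read_file(path: str) -> str:
    """Return a file's contents."""
    return pathlib.Path(path).read_text(errors="ignore")

TOOLS = {"search_files": search_files, "read_file": read_file}

def dispatch(tool_call: str):
    """Route a model-emitted call such as
    {"name": "search_files", "arguments": {"pattern": "ValueError"}}
    to the matching registered tool."""
    call = json.loads(tool_call)
    return TOOLS[call["name"]](**call["arguments"])
```

The point of the fine-tune is getting the model to emit that structured call instead of prose advice; the dispatch side stays this simple.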
reacted to danielhanchen's post with ❤️, 4 months ago:
Run DeepSeek-V3.1 locally on 170GB RAM with Dynamic 1-bit GGUFs!
GGUFs: unsloth/DeepSeek-V3.1-GGUF
The 715GB model gets reduced to 170GB (-80% size) by smartly quantizing layers.
The 1-bit GGUF passes all our code tests & we fixed the chat template for llama.cpp supported backends.
Guide: https://docs.unsloth.ai/basics/deepseek-v3.1
reacted to Akhil-Theerthala's post with ❤️, 4 months ago:
I'm excited to announce that I've just released the newest versions of my Kuvera models and the expanded Personal Finance Reasoning dataset on Hugging Face!
What's new:
I've expanded the Personal Finance Reasoning Dataset, which now includes 18.9k samples of real-world financial questions paired with detailed, empathetic answers. The previous generation pipeline was also streamlined with better psychological context and response validations.
I've also released new Kuvera models trained on this improved dataset:
- Kuvera-4B & 8B: These are my upgraded non-reasoning models, fine-tuned to provide practical financial advice. I've specifically trained the 8B model to better understand the user's emotional context.
- Kuvera-12B: A first experimental reasoning model, focused on query resolution.
As the sole person working on this project, this release is a noticeable step forward from my previous work, offering more powerful and nuanced tools for financial AI.
I am actively looking to collaborate with others who are passionate about analyzing and improving the quality of personal finance advice generated by large language models. If this sounds like you, please reach out!
You can check these out on the following links:
Models:
- Akhil-Theerthala/Kuvera-8B-qwen3-v0.2.1
- Akhil-Theerthala/Kuvera-4B-unsloth-gemma3
- Akhil-Theerthala/kuvera-12B-v0.2.0-unsloth-gemma3
Dataset:
- Akhil-Theerthala/Kuvera-PersonalFinance-V2.1
P.S. The paper on the framework used to generate these models along with the detailed evaluation of the main 8B model's responses is going to be released soon!
posted an update 4 months ago:
We've brought DAG Reasoning to gpt-oss-20b and Qwen3-4B-Thinking-2507!
- DAG Reasoning is the first model in our Experimental Reasoning Modalities series: use it to create structured, analytical Directed Acyclic Graphs to provide insight into your queries and situations!
- Multi-step analysis identifies causal relationships, produces confidence measurements, and forms a single structured graph object.
- DAG Reasoning Format provides clear, readable JSON containing structured, useful information; easy to use for creating visualizations, doing analysis, or further conversation with your assistant.
- Trained in a variety of subjects for flexible analysis: programming, science, business, economics, finance, law, logistics, management, and more!
Our newest versions of DAG Reasoning are available now!
Get gpt-oss-20b: sequelbox/gpt-oss-20b-DAG-Reasoning
Get Qwen3-4B-Thinking-2507: sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning
You can also get the DAG Reasoning dataset, to train your own models to use DAG Reasoning Format: sequelbox/DAG-Reasoning-DeepSeek-R1-0528
Support our experimental open-source research efforts, models and datasets: sequelbox/SupportOpenSource
Our upcoming releases, coming soon with your support:
- bringing Shining Valiant 3 to the Qwen 3 2507 series!
- our next release in the Experimental Reasoning Modalities series - we're hard at work on this right now!
- we'll be upgrading the Esper line with Esper 3.1 - newer and better datasets, asking tougher and deeper coding, DevOps, and architecture questions, plus improvements to general chat!
with love and appreciation,
allegra
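A downstream consumer of such a graph will usually want to verify acyclicity before analysis. Here is a minimal sketch using Python's standard library; the JSON field names ("nodes", "edges", "from", "to", "confidence") are hypothetical illustrations, not the actual DAG Reasoning Format:

```python
import json
from graphlib import CycleError, TopologicalSorter

# Hypothetical example of a DAG-style JSON causal graph.
doc = json.loads("""{
  "nodes": ["rate_hike", "borrowing_cost", "housing_demand"],
  "edges": [
    {"from": "rate_hike", "to": "borrowing_cost", "confidence": 0.9},
    {"from": "borrowing_cost", "to": "housing_demand", "confidence": 0.7}
  ]
}""")

def is_acyclic(graph: dict) -> bool:
    """Check the graph has no causal cycles via a topological sort."""
    deps = {n: set() for n in graph["nodes"]}
    for edge in graph["edges"]:
        deps[edge["to"]].add(edge["from"])  # an effect depends on its causes
    try:
        list(TopologicalSorter(deps).static_order())
        return True
    except CycleError:
        return False

print(is_acyclic(doc))  # True
```

The same topological order is also a natural sequence for rendering the graph or walking causes before effects in an analysis.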