---
base_model:
- Qwen/Qwen3-30B-A3B-Thinking-2507
pipeline_tag: text-generation
tags:
- Thinking
- Coder
- Hugston
---

# MarsRL

Advancing Multi-Agent Reasoning System via Reinforcement Learning with Agentic Pipeline Parallelism

Paper | GitHub

Model repo: Trilogix1/Hugston-forestliutcMarsRL-f16


Original weights at: https://huggingface.co/forestliutc/MarsRL

This is a converted and quantized version created by the Hugston Team with Quanta (see the GitHub repo for more details). Quanta is a crude, proof-of-concept implementation for converting and quantizing a .safetensors LLM model to GGUF.


Quantization was performed using an automated, faster method, which shortens conversion time.
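A minimal sketch of an equivalent convert-and-quantize pipeline, using llama.cpp's stock tools rather than Quanta itself (paths, file names, and the Q8_0 quant type are assumptions):

```python
"""Sketch: safetensors -> GGUF conversion and quantization with llama.cpp tools.

Not Quanta's actual code (see the Hugston GitHub for that); this shows the
equivalent two steps. Run from a llama.cpp checkout with its binaries built.
"""
import subprocess

MODEL_DIR = "MarsRL"               # local HF checkout with .safetensors shards (assumed path)
F16_GGUF = "MarsRL-f16.gguf"       # intermediate full-precision GGUF
QUANT_GGUF = "MarsRL-Q8_0.gguf"    # final quantized file (assumed name)

# Step 1: convert the Hugging Face safetensors checkpoint to an f16 GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# Step 2: quantize the f16 GGUF to 8-bit (4/5/6-bit types work the same way).
subprocess.run(
    ["./llama-quantize", F16_GGUF, QUANT_GGUF, "Q8_0"],
    check=True,
)
```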

This model was made possible by: https://Hugston.com

You can use the model with HugstonOne Enterprise Edition.
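If you prefer to script against the GGUF file directly instead of going through the app, a minimal llama-cpp-python sketch looks like this (the file name and settings are assumptions; use whichever quant you downloaded):

```python
"""Sketch: load the quantized GGUF with llama-cpp-python and run a chat turn."""
from llama_cpp import Llama

llm = Llama(
    model_path="MarsRL-Q8_0.gguf",  # assumed local file name
    n_ctx=8192,                     # context window; raise it if you have memory to spare
    n_gpu_layers=-1,                # offload all layers to the GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```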

In testing we encountered small precision errors in coding tasks, but the model is quite impressive for its size. We consider it a good fit for non-precision tasks (such as game vibe-coding and general tasks).



Watch HugstonOne coding and preview in action:

https://vimeo.com/1121493834?share=copy&fl=sv&fe=ci

- Download the HugstonOne app at Hugston.com or at https://github.com/Mainframework
- Download the model from https://hugston.com/explore?folder=llm_models or from Hugging Face.
- If you have already downloaded the LLM model, choose it by clicking Pick Model in HugstonOne.
- Then click Load Model in CLI or Server mode.
- For multimodal use you need a VL/multimodal LLM model with its mmproj file in the same folder.
- Select the model, then select the mmproj (see the sketch after this list).
- Note: if an mmproj file sits in the same folder as non-multimodal models, those models will not load unless the mmproj is moved out of that folder.
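For reference, this is roughly how a VL model is paired with its mmproj file programmatically in llama-cpp-python (illustrative only: the MarsRL GGUF itself is text-only, both file names are hypothetical, and a LLaVA-style chat handler stands in for whatever handler matches your VL model):

```python
"""Sketch: pairing a multimodal GGUF with its mmproj in llama-cpp-python."""
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# The mmproj file holds the vision projector; it must match the VL model.
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")  # hypothetical file

llm = Llama(
    model_path="some-vl-model-Q8_0.gguf",  # hypothetical multimodal model
    chat_handler=chat_handler,
    n_ctx=4096,
)
```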

Model details:

- Format: GGUF
- Model size: 31B params
- Architecture: qwen3moe
- Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit
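To verify these details on a downloaded file, the `gguf` Python package (installable with `pip install gguf`) can read GGUF metadata; a small sketch with an assumed file name:

```python
"""Sketch: inspect GGUF metadata keys and tensor count."""
from gguf import GGUFReader

reader = GGUFReader("MarsRL-Q8_0.gguf")  # assumed local file name
print("tensor count:", len(reader.tensors))
for key in reader.fields:  # metadata keys, e.g. general.architecture
    print(key)
```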
