Error when converting Qwen3 LoRA

#6
by clecho52 - opened

Error converting to GGUF F16:

INFO:lora-to-gguf:Loading base model from Hugging Face: unsloth/qwen3-14b-unsloth-bnb-4bit
Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1034, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 736, in __getitem__
    raise KeyError(key)
KeyError: 'qwen3'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert_lora_to_gguf.py", line 334, in <module>
    hparams = load_hparams_from_hf(model_id)
  File "/home/user/app/llama.cpp/convert_lora_to_gguf.py", line 282, in load_hparams_from_hf
    config = AutoConfig.from_pretrained(hf_model_id)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1036, in from_pretrained
    raise ValueError(
ValueError: The checkpoint you are trying to load has model type qwen3 but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
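The root cause is in that final ValueError: the Transformers build in the conversion environment predates Qwen3 support (which, as far as I know, landed around transformers 4.51), so AutoConfig has no entry for the qwen3 model type. A minimal sketch to confirm this in whatever environment runs the conversion; it relies only on the same CONFIG_MAPPING lookup shown in the traceback:

```python
# Minimal diagnostic sketch: does the installed transformers build know the
# "qwen3" model type that convert_lora_to_gguf.py asks AutoConfig for?
import transformers
from transformers.models.auto.configuration_auto import CONFIG_MAPPING

print("transformers version:", transformers.__version__)
try:
    CONFIG_MAPPING["qwen3"]  # same lookup that raised KeyError in the traceback
    print("qwen3 is recognized -- the conversion should get past AutoConfig")
except KeyError:
    print("qwen3 is NOT recognized -- upgrade transformers (pip install -U transformers)")
```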

I'll pay $50 on PayPal to whoever can fix this 😢

If you duplicate this Space, it will use the latest Transformers version and will work with Qwen3. The maintainer here should just need to rebuild the container and it should work.

For example:
https://huggingface.co/rockerBOO/qwen3-4b-roleplay-lora-F16-GGUF
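
If anyone wants to convert locally instead of waiting for the Space to be rebuilt, the same fix applies there: upgrade transformers in the environment that runs llama.cpp's convert_lora_to_gguf.py, then re-run it. A rough sketch follows; the adapter path and output name are placeholders, and the exact flags should be double-checked against `python convert_lora_to_gguf.py --help` for your llama.cpp checkout:

```python
# Rough local-conversion sketch (paths are placeholders; flags may differ by llama.cpp version).
import subprocess, sys

# 1. Upgrade transformers so AutoConfig recognizes the "qwen3" model type.
subprocess.run([sys.executable, "-m", "pip", "install", "-U", "transformers"], check=True)

# 2. Re-run the converter on the LoRA adapter directory (adapter_config.json + weights).
#    With no --base given, the script falls back to the base model named in
#    adapter_config.json, as it did in the traceback above.
subprocess.run([
    sys.executable, "llama.cpp/convert_lora_to_gguf.py",
    "path/to/qwen3-lora",
    "--outtype", "f16",
    "--outfile", "qwen3-lora-F16.gguf",
], check=True)
```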
