File "/tmp/.cache/uv/environments-v2/d63d34ff64d6f2c0/lib/python3.13/site-packages/huggingface_hub/utils/_http.py", line 426, in hf_raise_for_status
raise _format(GatedRepoError, message, response) from e
huggingface_hub.errors.GatedRepoError: 403 Client Error. (Request ID: Root=1-6889b14c-48562b630e97e6b6468ad43c;9290a64d-9123-47c2-8ebd-1707b6e806ba)
Cannot access gated repo for url https://huggingface.co/arcee-ai/AFM-4.5B/resolve/main/config.json.
Access to model arcee-ai/AFM-4.5B is restricted and you are not in the authorized list. Visit https://huggingface.co/arcee-ai/AFM-4.5B to ask for access.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/arcee-ai_AFM-4.5B_1kBQ2OH.py", line 13, in <module>
tokenizer = AutoTokenizer.from_pretrained("arcee-ai/AFM-4.5B")
File "/tmp/.cache/uv/environments-v2/d63d34ff64d6f2c0/lib/python3.13/site-packages/transformers/models/auto/tokenization_auto.py", line 1067, in from_pretrained
config = AutoConfig.from_pretrained(
pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs
)
File "/tmp/.cache/uv/environments-v2/d63d34ff64d6f2c0/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1244, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/.cache/uv/environments-v2/d63d34ff64d6f2c0/lib/python3.13/site-packages/transformers/configuration_utils.py", line 649, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/.cache/uv/environments-v2/d63d34ff64d6f2c0/lib/python3.13/site-packages/transformers/configuration_utils.py", line 708, in _get_config_dict
resolved_config_file = cached_file(
pretrained_model_name_or_path,
...<10 lines>...
_commit_hash=commit_hash,
)
File "/tmp/.cache/uv/environments-v2/d63d34ff64d6f2c0/lib/python3.13/site-packages/transformers/utils/hub.py", line 318, in cached_file
file = cached_files(path_or_repo_id=path_or_repo_id, filenames=[filename], **kwargs)
File "/tmp/.cache/uv/environments-v2/d63d34ff64d6f2c0/lib/python3.13/site-packages/transformers/utils/hub.py", line 540, in cached_files
raise OSError(
...<2 lines>...
) from e
OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/arcee-ai/AFM-4.5B.
403 Client Error. (Request ID: Root=1-6889b14c-48562b630e97e6b6468ad43c;9290a64d-9123-47c2-8ebd-1707b6e806ba)
Cannot access gated repo for url https://huggingface.co/arcee-ai/AFM-4.5B/resolve/main/config.json.
Access to model arcee-ai/AFM-4.5B is restricted and you are not in the authorized list. Visit https://huggingface.co/arcee-ai/AFM-4.5B to ask for access.
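The 403 above is a gated-repo error, not a network failure: the repo exists, but the requesting account has not been granted access, so no amount of retrying will help. The usual remedy is to request access on the model page and then authenticate the client. A minimal pre-flight sketch, assuming access has already been approved; the helper name and environment-variable fallback order here are illustrative, not taken from the log:

```python
import os

def resolve_hf_token(env=os.environ):
    """Return a Hugging Face token from the common environment variables,
    or None so the caller can fail fast instead of hitting the Hub's 403."""
    return env.get("HF_TOKEN") or env.get("HUGGING_FACE_HUB_TOKEN")

token = resolve_hf_token()
if token is None:
    print("No HF token found; gated repos like arcee-ai/AFM-4.5B will 403.")
else:
    # Passing the token explicitly avoids relying on a cached CLI login:
    # AutoTokenizer.from_pretrained("arcee-ai/AFM-4.5B", token=token)
    pass
```

Note that a valid token only fixes the authentication half; if the account is not on the repo's authorized list, the Hub still returns the same 403.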
No suitable GPU found for baidu/ERNIE-4.5-21B-A3B-Thinking | 105.70 GB VRAM requirement
No suitable GPU found for baidu/ERNIE-4.5-VL-28B-A3B-PT | 142.38 GB VRAM requirement
No suitable GPU found for baidu/ERNIE-4.5-VL-424B-A47B-PT | 1025.54 GB VRAM requirement
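The exact formula behind these VRAM figures is not shown in the log, but 105.70 GB for a 21B-parameter model is consistent with fp32 weights (~84 GB) plus roughly 25% runtime overhead for activations and KV cache. A plausible reconstruction, with every constant illustrative rather than taken from the tool:

```python
def estimate_vram_gb(params_billion, bytes_per_param=4, overhead=1.25):
    """Rough VRAM estimate: parameter count * dtype width, scaled by a
    fudge factor for activations/KV cache. Constants are illustrative."""
    return params_billion * bytes_per_param * overhead

# A 21B model in fp32 lands near the ~105 GB figure in the log above;
# halving bytes_per_param to 2 (fp16/bf16) roughly halves the requirement.
```

Loading in bf16 or with quantization would cut these requirements substantially, which is presumably why the tool reports fp32-sized figures as hard failures.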
Traceback (most recent call last):
File "/tmp/baidu_Qianfan-VL-3B_08wyLXL.py", line 16, in <module>
pipe = pipeline("image-text-to-text", model="baidu/Qianfan-VL-3B", trust_remote_code=True)
File "/tmp/.cache/uv/environments-v2/81fc7cfba71ddfe6/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1028, in pipeline
framework, model = infer_framework_load_model(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
adapter_path if adapter_path is not None else model,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
**model_kwargs,
^^^^^^^^^^^^^^^
)
^
File "/tmp/.cache/uv/environments-v2/81fc7cfba71ddfe6/lib/python3.13/site-packages/transformers/pipelines/base.py", line 333, in infer_framework_load_model
raise ValueError(
f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
)
ValueError: Could not load model baidu/Qianfan-VL-3B with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForImageTextToText'>,). See the original errors:
while loading with AutoModelForImageTextToText, an error is thrown:
Traceback (most recent call last):
File "/tmp/.cache/uv/environments-v2/81fc7cfba71ddfe6/lib/python3.13/site-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/tmp/.cache/uv/environments-v2/81fc7cfba71ddfe6/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
raise ValueError(
...<2 lines>...
)
ValueError: Unrecognized configuration class <class 'transformers_modules.baidu.Qianfan-VL-3B.067c5a6df567f8420b34f10ca3aa3ae63dc26e1b.configuration_qianfanvl_chat.QianfanVLChatConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, Florence2Config, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, Glm4vMoeConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Kosmos2_5Config, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, Ovis2Config, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/.cache/uv/environments-v2/81fc7cfba71ddfe6/lib/python3.13/site-packages/transformers/pipelines/base.py", line 311, in infer_framework_load_model
model = model_class.from_pretrained(model, **fp32_kwargs)
File "/tmp/.cache/uv/environments-v2/81fc7cfba71ddfe6/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
raise ValueError(
...<2 lines>...
)
ValueError: Unrecognized configuration class <class 'transformers_modules.baidu.Qianfan-VL-3B.067c5a6df567f8420b34f10ca3aa3ae63dc26e1b.configuration_qianfanvl_chat.QianfanVLChatConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, Florence2Config, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, Glm4vMoeConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Kosmos2_5Config, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, Ovis2Config, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.
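The ValueError above means the model's custom `QianfanVLChatConfig` (shipped as remote code) is not in the `image-text-to-text` pipeline's Auto-class mapping, so the task pipeline cannot dispatch it. Models like this generally have to be loaded with `AutoModel.from_pretrained(..., trust_remote_code=True)` instead, which is exactly what the next traceback in this log attempts. A small sketch of that pre-flight decision; the helper and the example type set are illustrative, not part of transformers:

```python
def choose_loader(model_type, supported_types):
    """Pick a loading strategy: use the task pipeline when the config's
    model_type is in the Auto-class mapping, otherwise fall back to
    AutoModel with trust_remote_code for custom remote-code models."""
    if model_type in supported_types:
        return "pipeline"
    return "auto_model_trust_remote_code"

# Illustrative subset of the mapping listed in the error message above
supported = {"llava", "qwen2_vl", "blip-2", "paligemma"}
```

Falling back to `AutoModel` sidesteps the mapping check, but the loaded object then exposes only the remote code's own API, not the pipeline's uniform interface.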
Traceback (most recent call last):
File "/tmp/baidu_Qianfan-VL-3B_16a1gMk.py", line 15, in <module>
model = AutoModel.from_pretrained("baidu/Qianfan-VL-3B", trust_remote_code=True, torch_dtype="auto")
File "/tmp/.cache/uv/environments-v2/899a4ca09b916084/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 586, in from_pretrained
model_class = get_class_from_dynamic_module(
class_ref, pretrained_model_name_or_path, code_revision=code_revision, **hub_kwargs, **kwargs
)
File "/tmp/.cache/uv/environments-v2/899a4ca09b916084/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 569, in get_class_from_dynamic_module
final_module = get_cached_module_file(
repo_id,