File "/tmp/.cache/uv/environments-v2/c55f2438beac7672/lib/python3.13/site-packages/torch/nn/modules/module.py", line 928, in _apply
module._apply(fn)
~~~~~~~~~~~~~^^^^
[Previous line repeated 2 more times]
File "/tmp/.cache/uv/environments-v2/c55f2438beac7672/lib/python3.13/site-packages/torch/nn/modules/module.py", line 955, in _apply
param_applied = fn(param)
File "/tmp/.cache/uv/environments-v2/c55f2438beac7672/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1355, in convert
return t.to(
~~~~^
device,
^^^^^^^
dtype if t.is_floating_point() or t.is_complex() else None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
non_blocking,
^^^^^^^^^^^^^
)
^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 4.69 MiB is free. Process 26621 has 22.29 GiB memory in use. Of the allocated memory 22.05 GiB is allocated by PyTorch, and 1.86 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
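The allocator hint at the end of that message is the usual first mitigation. A minimal sketch of applying it, assuming the failing script is re-run from a fresh process (the variable must be in place before CUDA is initialized):

import os

# Set before torch touches CUDA, per the PYTORCH_CUDA_ALLOC_CONF hint in the
# OutOfMemoryError above; exporting it in the shell works equally well.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # imported only after the env var is set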
Everything was good in inclusionAI_Ring-mini-2.0_1.txt
No suitable GPU found for inclusionAI/Ring-mini-linear-2.0 | 79.54 GB VRAM requirement
No suitable GPU found for inclusionAI/Ring-mini-linear-2.0 | 79.54 GB VRAM requirement
No suitable GPU found for inclusionAI/Ring-mini-sparse-2.0-exp | 78.72 GB VRAM requirement
No suitable GPU found for inclusionAI/Ring-mini-sparse-2.0-exp | 78.72 GB VRAM requirement
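For scale on figures like the 79.54 GB above: a back-of-the-envelope sketch of estimating a VRAM requirement from parameter count and dtype width. The helper, the float32 assumption, and the 1.2 overhead factor are illustrative guesses, not the harness's actual formula:

def estimate_vram_gb(num_params: float, bytes_per_param: int = 4, overhead: float = 1.2) -> float:
    # weights * dtype width, padded for activations / KV cache
    return num_params * bytes_per_param * overhead / 1024**3

print(f"{estimate_vram_gb(16e9):.2f} GB")  # ~71.5 GB for a ~16B model in float32, same order as above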
Traceback (most recent call last):
File "/tmp/internlm_Intern-S1-mini_0NL30jY.py", line 13, in <module>
pipe = pipeline("image-text-to-text", model="internlm/Intern-S1-mini", trust_remote_code=True)
File "/tmp/.cache/uv/environments-v2/5f1fd6334c296f4f/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1028, in pipeline
framework, model = infer_framework_load_model(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
adapter_path if adapter_path is not None else model,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
**model_kwargs,
^^^^^^^^^^^^^^^
)
^
File "/tmp/.cache/uv/environments-v2/5f1fd6334c296f4f/lib/python3.13/site-packages/transformers/pipelines/base.py", line 333, in infer_framework_load_model
raise ValueError(
f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
)
ValueError: Could not load model internlm/Intern-S1-mini with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForImageTextToText'>,). See the original errors:
while loading with AutoModelForImageTextToText, an error is thrown:
Traceback (most recent call last):
File "/tmp/.cache/uv/environments-v2/5f1fd6334c296f4f/lib/python3.13/site-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/tmp/.cache/uv/environments-v2/5f1fd6334c296f4f/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
raise ValueError(
...<2 lines>...
)
ValueError: Unrecognized configuration class <class 'transformers_modules.internlm.Intern-S1-mini.206cd5f9c9f1b0ebcb31934be986416ab754c5da.configuration_interns1.InternS1Config'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, Florence2Config, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, Glm4vMoeConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Kosmos2_5Config, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, Ovis2Config, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.
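InternS1Config ships as remote code rather than one of the config classes in that list, so the image-text-to-text pipeline cannot dispatch it. A minimal workaround sketch, assuming (untested) that the repo's remote code registers its model class with the generic AutoModel mapping:

from transformers import AutoModel, AutoProcessor

# Skip the pipeline's AutoModelForImageTextToText dispatch and let the repo's
# own configuration_interns1 / modeling_interns1 code handle loading.
model = AutoModel.from_pretrained("internlm/Intern-S1-mini", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("internlm/Intern-S1-mini", trust_remote_code=True)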
Everything was good in internlm_Intern-S1-mini_1.txt
No suitable GPU found for internlm/Intern-S1 | 582.86 GB VRAM requirement
No suitable GPU found for internlm/Intern-S1 | 582.86 GB VRAM requirement
Traceback (most recent call last):
File "/tmp/jet-ai_Jet-Nemotron-4B_0ecxkup.py", line 16, in <module>
pipe = pipeline("text-generation", model="jet-ai/Jet-Nemotron-4B", trust_remote_code=True)
File "/tmp/.cache/uv/environments-v2/c006b618eef85cf6/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1027, in pipeline
framework, model = infer_framework_load_model(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
adapter_path if adapter_path is not None else model,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
**model_kwargs,
^^^^^^^^^^^^^^^
)
^
File "/tmp/.cache/uv/environments-v2/c006b618eef85cf6/lib/python3.13/site-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/tmp/.cache/uv/environments-v2/c006b618eef85cf6/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 586, in from_pretrained
model_class = get_class_from_dynamic_module(
class_ref, pretrained_model_name_or_path, code_revision=code_revision, **hub_kwargs, **kwargs
)
File "/tmp/.cache/uv/environments-v2/c006b618eef85cf6/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 604, in get_class_from_dynamic_module
final_module = get_cached_module_file(
repo_id,
...<8 lines>...
repo_type=repo_type,
)
File "/tmp/.cache/uv/environments-v2/c006b618eef85cf6/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 467, in get_cached_module_file
get_cached_module_file(
~~~~~~~~~~~~~~~~~~~~~~^
pretrained_model_name_or_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
_commit_hash=commit_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/tmp/.cache/uv/environments-v2/c006b618eef85cf6/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 427, in get_cached_module_file
modules_needed = check_imports(resolved_module_file)
File "/tmp/.cache/uv/environments-v2/c006b618eef85cf6/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 260, in check_imports
raise ImportError(
...<2 lines>...
)
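The ImportError body is cut off above, but check_imports fails when the downloaded remote code imports a package absent from the environment, and its message lists what to pip install. A simplified sketch of what that check amounts to (the real dynamic_module_utils.check_imports also handles relative imports and imports guarded by try/except):

import importlib.util
import re

def missing_imports(source: str) -> list[str]:
    # Collect top-level `import x` / `from x import y` targets, then report
    # any whose top-level package cannot be resolved locally.
    pkgs = {m.group(1).split(".")[0]
            for m in re.finditer(r"^\s*(?:from|import)\s+([\w.]+)", source, re.MULTILINE)}
    return sorted(p for p in pkgs if importlib.util.find_spec(p) is None)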