ariG23498 HF Staff committed
Commit ecaa48b · verified · 1 Parent(s): a3cb48e

Upload zai-org_GLM-ASR-Nano-2512_0.txt with huggingface_hub

Files changed (1)
  1. zai-org_GLM-ASR-Nano-2512_0.txt +16 -59
zai-org_GLM-ASR-Nano-2512_0.txt CHANGED
@@ -2,74 +2,31 @@
  # Use a pipeline as a high-level helper
  from transformers import pipeline
 
- pipe = pipeline("automatic-speech-recognition", model="zai-org/GLM-ASR-Nano-2512", trust_remote_code=True)
  ```
 
  ERROR:
  Traceback (most recent call last):
- File "/tmp/zai-org_GLM-ASR-Nano-2512_0HT3lEa.py", line 26, in <module>
- pipe = pipeline("automatic-speech-recognition", model="zai-org/GLM-ASR-Nano-2512", trust_remote_code=True)
- File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1027, in pipeline
- framework, model = infer_framework_load_model(
- ~~~~~~~~~~~~~~~~~~~~~~~~~~^
- adapter_path if adapter_path is not None else model,
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- ...<5 lines>...
- **model_kwargs,
- ^^^^^^^^^^^^^^^
- )
- ^
- File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/base.py", line 333, in infer_framework_load_model
- raise ValueError(
- f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
- )
- ValueError: Could not load model zai-org/GLM-ASR-Nano-2512 with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCTC'>, <class 'transformers.models.auto.modeling_auto.AutoModelForSpeechSeq2Seq'>). See the original errors:
-
- while loading with AutoModelForCTC, an error is thrown:
- Traceback (most recent call last):
- File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
- model = model_class.from_pretrained(model, **kwargs)
- File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
- raise ValueError(
- ...<2 lines>...
- )
- ValueError: Unrecognized configuration class <class 'transformers_modules.zai_hyphen_org.GLM_hyphen_ASR_hyphen_Nano_hyphen_2512.91967eab799804ab256a3819a085b92378906eb2.configuration_glmasr.GlmasrConfig'> for this kind of AutoModel: AutoModelForCTC.
- Model type should be one of Data2VecAudioConfig, HubertConfig, MCTCTConfig, ParakeetCTCConfig, SEWConfig, SEWDConfig, UniSpeechConfig, UniSpeechSatConfig, Wav2Vec2Config, Wav2Vec2BertConfig, Wav2Vec2ConformerConfig, WavLMConfig.
 
  During handling of the above exception, another exception occurred:
 
  Traceback (most recent call last):
- File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/base.py", line 311, in infer_framework_load_model
- model = model_class.from_pretrained(model, **fp32_kwargs)
- File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
- raise ValueError(
- ...<2 lines>...
- )
- ValueError: Unrecognized configuration class <class 'transformers_modules.zai_hyphen_org.GLM_hyphen_ASR_hyphen_Nano_hyphen_2512.91967eab799804ab256a3819a085b92378906eb2.configuration_glmasr.GlmasrConfig'> for this kind of AutoModel: AutoModelForCTC.
- Model type should be one of Data2VecAudioConfig, HubertConfig, MCTCTConfig, ParakeetCTCConfig, SEWConfig, SEWDConfig, UniSpeechConfig, UniSpeechSatConfig, Wav2Vec2Config, Wav2Vec2BertConfig, Wav2Vec2ConformerConfig, WavLMConfig.
-
- while loading with AutoModelForSpeechSeq2Seq, an error is thrown:
- Traceback (most recent call last):
- File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
- model = model_class.from_pretrained(model, **kwargs)
- File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
- raise ValueError(
- ...<2 lines>...
  )
- ValueError: Unrecognized configuration class <class 'transformers_modules.zai_hyphen_org.GLM_hyphen_ASR_hyphen_Nano_hyphen_2512.91967eab799804ab256a3819a085b92378906eb2.configuration_glmasr.GlmasrConfig'> for this kind of AutoModel: AutoModelForSpeechSeq2Seq.
- Model type should be one of DiaConfig, GraniteSpeechConfig, KyutaiSpeechToTextConfig, MoonshineConfig, Pop2PianoConfig, SeamlessM4TConfig, SeamlessM4Tv2Config, SpeechEncoderDecoderConfig, Speech2TextConfig, SpeechT5Config, WhisperConfig.
-
- During handling of the above exception, another exception occurred:
-
- Traceback (most recent call last):
- File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/base.py", line 311, in infer_framework_load_model
- model = model_class.from_pretrained(model, **fp32_kwargs)
- File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
  raise ValueError(
- ...<2 lines>...
  )
- ValueError: Unrecognized configuration class <class 'transformers_modules.zai_hyphen_org.GLM_hyphen_ASR_hyphen_Nano_hyphen_2512.91967eab799804ab256a3819a085b92378906eb2.configuration_glmasr.GlmasrConfig'> for this kind of AutoModel: AutoModelForSpeechSeq2Seq.
- Model type should be one of DiaConfig, GraniteSpeechConfig, KyutaiSpeechToTextConfig, MoonshineConfig, Pop2PianoConfig, SeamlessM4TConfig, SeamlessM4Tv2Config, SpeechEncoderDecoderConfig, Speech2TextConfig, SpeechT5Config, WhisperConfig.
-
-
 
 
 
  # Use a pipeline as a high-level helper
  from transformers import pipeline
 
+ pipe = pipeline("automatic-speech-recognition", model="zai-org/GLM-ASR-Nano-2512")
  ```
 
  ERROR:
  Traceback (most recent call last):
+ File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1360, in from_pretrained
+ config_class = CONFIG_MAPPING[config_dict["model_type"]]
+ ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1048, in __getitem__
+ raise KeyError(key)
+ KeyError: 'glmasr'
 
  During handling of the above exception, another exception occurred:
 
  Traceback (most recent call last):
+ File "/tmp/zai-org_GLM-ASR-Nano-2512_03Jrp1f.py", line 26, in <module>
+ pipe = pipeline("automatic-speech-recognition", model="zai-org/GLM-ASR-Nano-2512")
+ File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 922, in pipeline
+ config = AutoConfig.from_pretrained(
+ model, _from_pipeline=task, code_revision=code_revision, **hub_kwargs, **model_kwargs
  )
+ File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1362, in from_pretrained
  raise ValueError(
+ ...<8 lines>...
  )
+ ValueError: The checkpoint you are trying to load has model type `glmasr` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
 
+ You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
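For reference, the upgrade path the error message itself suggests can be run as below. The two `pip install` commands are quoted from the message; the final version check is an illustrative addition to confirm what actually got installed, not part of the original output:

```shell
# First try the latest released version of Transformers
pip install --upgrade transformers

# If the release still lacks the `glmasr` model type, install from source
pip install git+https://github.com/huggingface/transformers.git

# Illustrative check: print the installed Transformers version
python -c "import transformers; print(transformers.__version__)"
```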