ariG23498 (HF Staff) committed
Commit ce57399 · verified
Parent(s): ab1c551

Upload zai-org_GLM-ASR-Nano-2512_0.txt with huggingface_hub

zai-org_GLM-ASR-Nano-2512_0.txt ADDED
@@ -0,0 +1,75 @@
CODE:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="zai-org/GLM-ASR-Nano-2512", trust_remote_code=True)
```

ERROR:
```
Traceback (most recent call last):
  File "/tmp/zai-org_GLM-ASR-Nano-2512_0g6VREM.py", line 26, in <module>
    pipe = pipeline("automatic-speech-recognition", model="zai-org/GLM-ASR-Nano-2512", trust_remote_code=True)
  File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1027, in pipeline
    framework, model = infer_framework_load_model(
                       ~~~~~~~~~~~~~~~~~~~~~~~~~~^
        adapter_path if adapter_path is not None else model,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        ...<5 lines>...
        **model_kwargs,
        ^^^^^^^^^^^^^^^
    )
    ^
  File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/base.py", line 333, in infer_framework_load_model
    raise ValueError(
        f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
    )
ValueError: Could not load model zai-org/GLM-ASR-Nano-2512 with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCTC'>, <class 'transformers.models.auto.modeling_auto.AutoModelForSpeechSeq2Seq'>). See the original errors:

while loading with AutoModelForCTC, an error is thrown:
Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.zai_hyphen_org.GLM_hyphen_ASR_hyphen_Nano_hyphen_2512.fdc39709f86b00cdce879c04d967c2146ce4053c.configuration_glmasr.GlmasrConfig'> for this kind of AutoModel: AutoModelForCTC.
Model type should be one of Data2VecAudioConfig, HubertConfig, MCTCTConfig, ParakeetCTCConfig, SEWConfig, SEWDConfig, UniSpeechConfig, UniSpeechSatConfig, Wav2Vec2Config, Wav2Vec2BertConfig, Wav2Vec2ConformerConfig, WavLMConfig.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/base.py", line 311, in infer_framework_load_model
    model = model_class.from_pretrained(model, **fp32_kwargs)
  File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.zai_hyphen_org.GLM_hyphen_ASR_hyphen_Nano_hyphen_2512.fdc39709f86b00cdce879c04d967c2146ce4053c.configuration_glmasr.GlmasrConfig'> for this kind of AutoModel: AutoModelForCTC.
Model type should be one of Data2VecAudioConfig, HubertConfig, MCTCTConfig, ParakeetCTCConfig, SEWConfig, SEWDConfig, UniSpeechConfig, UniSpeechSatConfig, Wav2Vec2Config, Wav2Vec2BertConfig, Wav2Vec2ConformerConfig, WavLMConfig.

while loading with AutoModelForSpeechSeq2Seq, an error is thrown:
Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.zai_hyphen_org.GLM_hyphen_ASR_hyphen_Nano_hyphen_2512.fdc39709f86b00cdce879c04d967c2146ce4053c.configuration_glmasr.GlmasrConfig'> for this kind of AutoModel: AutoModelForSpeechSeq2Seq.
Model type should be one of DiaConfig, GraniteSpeechConfig, KyutaiSpeechToTextConfig, MoonshineConfig, Pop2PianoConfig, SeamlessM4TConfig, SeamlessM4Tv2Config, SpeechEncoderDecoderConfig, Speech2TextConfig, SpeechT5Config, WhisperConfig.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/pipelines/base.py", line 311, in infer_framework_load_model
    model = model_class.from_pretrained(model, **fp32_kwargs)
  File "/tmp/.cache/uv/environments-v2/e265c82f8ee277ed/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.zai_hyphen_org.GLM_hyphen_ASR_hyphen_Nano_hyphen_2512.fdc39709f86b00cdce879c04d967c2146ce4053c.configuration_glmasr.GlmasrConfig'> for this kind of AutoModel: AutoModelForSpeechSeq2Seq.
Model type should be one of DiaConfig, GraniteSpeechConfig, KyutaiSpeechToTextConfig, MoonshineConfig, Pop2PianoConfig, SeamlessM4TConfig, SeamlessM4Tv2Config, SpeechEncoderDecoderConfig, Speech2TextConfig, SpeechT5Config, WhisperConfig.
```
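The failure above happens inside the pipeline's model-class resolution: the repo ships a custom `GlmasrConfig` via remote code, and that config is not registered with either Auto class the ASR pipeline tries (`AutoModelForCTC`, `AutoModelForSpeechSeq2Seq`). A possible workaround — a sketch only, not verified against this model's card, so the exact loading classes are an assumption — is to bypass the pipeline and load the repo's custom classes directly:

```python
# Hypothetical workaround (unverified for this repo): skip the pipeline's
# Auto-class guessing and let trust_remote_code import the repo's own
# configuration/modeling files instead of a built-in architecture.
from transformers import AutoModel, AutoProcessor

model_id = "zai-org/GLM-ASR-Nano-2512"

# Both calls download and execute the repo's custom code, so they only
# succeed if the repo defines auto_map entries for these Auto classes.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
```

If `AutoModel`/`AutoProcessor` are not mapped in the repo's `config.json` `auto_map` either, the remaining option is to follow the usage snippet on the model card itself rather than the generic pipeline helper.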