runtime error
Exit code: 1. Reason:

tokenizer_config.json: 100%|██████████| 18.5k/18.5k [00:00<00:00, 49.7MB/s]
vocab.txt: 100%|██████████| 259k/259k [00:00<00:00, 19.9MB/s]
tokenizer.json: 100%|██████████| 1.09M/1.09M [00:00<00:00, 34.7MB/s]
added_tokens.json: 100%|██████████| 1.66k/1.66k [00:00<00:00, 10.1MB/s]
special_tokens_map.json: 100%|██████████| 971/971 [00:00<00:00, 6.26MB/s]
config.json: 100%|██████████| 1.75k/1.75k [00:00<00:00, 11.6MB/s]

Traceback (most recent call last):
  File "/app/app.py", line 8, in <module>
    model = ORTModelForSeq2SeqLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/optimum/onnxruntime/modeling.py", line 575, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/optimum/modeling_base.py", line 407, in from_pretrained
    return from_pretrained_method(
TypeError: ORTModelForConditionalGeneration._from_pretrained() got an unexpected keyword argument 'decoder_file_with_past_name'
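The downloads all complete, so the failure is in the model-loading call at line 8 of app.py: a `decoder_file_with_past_name` keyword gets forwarded through `from_pretrained()` into `_from_pretrained()`, which does not accept it. Below is a minimal sketch of the kind of call that produces this exact TypeError; the repo ID and file name are placeholders I made up, not values from the log.

```python
# Sketch only: reproduces the TypeError from the traceback above.
# "some-user/some-onnx-seq2seq-model" and the .onnx file name are hypothetical.
from optimum.onnxruntime import ORTModelForSeq2SeqLM

model = ORTModelForSeq2SeqLM.from_pretrained(
    "some-user/some-onnx-seq2seq-model",
    # This keyword is forwarded to _from_pretrained(), which rejects it
    # on the installed optimum version, raising the TypeError in the log.
    decoder_file_with_past_name="decoder_with_past_model.onnx",
)
```

Since the installed optimum release's `_from_pretrained()` simply does not take this keyword, dropping it from the call (or pinning optimum to the version the code was originally written against) is the usual way past this error.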