GLM-OCR ONNX (Decoder)

ONNX export of the decoder of zai-org/GLM-OCR. Exported with scripts/export_glm_ocr_onnx.py (Transformers 5.1.0, custom torch.onnx path).

Contents

  • glm_ocr_decoder.onnx / glm_ocr_decoder.onnx.data – Decoder ONNX (inputs: decoder_input_ids, encoder_hidden_states; output: logits); see the inspection sketch after this list.
  • tokenizer.json, tokenizer_config.json – Tokenizer from zai-org/GLM-OCR.
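
To confirm the exported graph exposes exactly these tensors, you can inspect it with ONNX Runtime. A minimal sketch (keep the .onnx.data external-data file next to the .onnx file):

```python
import onnxruntime as ort

# The external-data file (glm_ocr_decoder.onnx.data) must sit in the
# same directory as the .onnx file for the session to load.
session = ort.InferenceSession("glm_ocr_decoder.onnx")

# Print the graph's inputs and output to confirm the documented I/O.
for inp in session.get_inputs():
    print("input: ", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)
```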

Note

The vision encoder was not exported: the model's forward pass requires either input_ids or inputs_embeds, so it cannot be traced with image inputs alone. To run full OCR you need encoder hidden states from another source, or the original PyTorch model for the vision part; a sketch of the latter follows.
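
A minimal sketch of producing encoder_hidden_states on the PyTorch side. The loading classes, the processor call, and get_encoder() are assumptions about the checkpoint's remote code, not a verified API; substitute whatever zai-org/GLM-OCR actually exposes:

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

# Assumption: the checkpoint ships remote code loadable this way.
processor = AutoProcessor.from_pretrained("zai-org/GLM-OCR", trust_remote_code=True)
model = AutoModel.from_pretrained("zai-org/GLM-OCR", trust_remote_code=True).eval()

image = Image.open("page.png")  # hypothetical input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    # Assumption: the vision encoder is reachable via get_encoder();
    # adjust to the attribute the model actually provides.
    encoder_hidden_states = model.get_encoder()(pixel_values).last_hidden_state

# Convert to NumPy (float32) for ONNX Runtime.
encoder_hidden_states = encoder_hidden_states.float().numpy()
```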

Usage

Load the decoder with ONNX Runtime, feed it decoder_input_ids together with encoder_hidden_states (from your own vision encoder, or from zai-org/GLM-OCR in PyTorch as noted above), take the logits, and decode the generated ids with the included tokenizer.
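
A minimal greedy-decoding sketch using ONNX Runtime and the tokenizers library. The bos/eos ids and the hidden-state shape below are placeholders, not values from this checkpoint; read the real special-token ids from tokenizer_config.json and take encoder_hidden_states from your vision encoder:

```python
import numpy as np
import onnxruntime as ort
from tokenizers import Tokenizer

session = ort.InferenceSession("glm_ocr_decoder.onnx")
tokenizer = Tokenizer.from_file("tokenizer.json")

# Placeholder: replace with real encoder hidden states; the sequence
# length and hidden size here are assumptions, not the model's dimensions.
encoder_hidden_states = np.zeros((1, 256, 1024), dtype=np.float32)

bos_id, eos_id = 0, 1  # assumptions; check tokenizer_config.json
ids = [bos_id]
for _ in range(512):  # cap on generated tokens
    logits = session.run(
        ["logits"],
        {
            "decoder_input_ids": np.array([ids], dtype=np.int64),
            "encoder_hidden_states": encoder_hidden_states,
        },
    )[0]
    next_id = int(logits[0, -1].argmax())  # greedy pick at the last position
    if next_id == eos_id:
        break
    ids.append(next_id)

print(tokenizer.decode(ids[1:]))
```

Note that the exported graph lists no key/value-cache inputs, so this loop re-runs the full decoder_input_ids sequence at every step; generation cost therefore grows quadratically with output length.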

Source

Base model: zai-org/GLM-OCR