# MDiff4STR

- [MDiff4STR](#mdiff4str)
- [1. Introduction](#1-introduction)
- [1.1 Models and Results](#11-models-and-results)
- [2. Environment](#2-environment)
- [3. Model Training / Evaluation](#3-model-training--evaluation)
- [Dataset Preparation](#dataset-preparation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Inference](#inference)
- [Latency Measurement](#latency-measurement)
- [Citation](#citation)

<a name="1"></a>

## 1. Introduction
Paper:

> [MDiff4STR: Mask Diffusion Model for Scene Text Recognition](https://arxiv.org/abs/2512.01422)
> Yongkun Du, Miaomiao Zhao, Songlin Fan, Zhineng Chen\*, Caiyan Jia, Yu-Gang Jiang

<a name="model"></a>

Mask Diffusion Models (MDMs) have recently emerged as a promising alternative to auto-regressive models (ARMs) for vision-language tasks, owing to their flexible balance of efficiency and accuracy. In this paper, we introduce MDMs into the Scene Text Recognition (STR) task for the first time. We show that a vanilla MDM improves recognition efficiency but lags behind ARMs in accuracy. To bridge this gap, we propose MDiff4STR, a mask diffusion model enhanced with two key improvement strategies tailored for STR. Specifically, we identify two key challenges in applying MDMs to STR: a noising gap between training and inference, and overconfident predictions during inference, both of which significantly hinder the performance of MDMs. To mitigate the first issue, we develop six noising strategies that better align training with inference behavior. For the second, we propose a token-replacement noise mechanism that introduces a non-mask noise type, encouraging the model to reconsider and revise overly confident but incorrect predictions. We conduct extensive evaluations of MDiff4STR on both standard and challenging STR benchmarks, covering diverse scenarios including irregular, artistic, occluded, and Chinese text, both with and without pretraining. Across these settings, MDiff4STR consistently outperforms popular STR models, surpassing state-of-the-art ARMs in accuracy while maintaining fast inference with only three denoising steps.
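
To make these two strategies concrete, the sketch below illustrates the control flow of mask-diffusion STR in PyTorch: a `corrupt` step that mixes mask noise with token-replacement noise during training, and a three-step parallel decoder that commits confident predictions and re-masks the rest. This is only an illustrative sketch, not the released implementation: `model`, `MASK_ID`, `SEQ_LEN`, `VOCAB`, and all fractions and schedules are hypothetical placeholders, and the paper's six noising strategies are not reproduced here.

```python
import torch

# Hypothetical constants for illustration only.
MASK_ID, SEQ_LEN, VOCAB = 0, 25, 97

def corrupt(labels, t, replace_frac=0.1):
    """Forward noising with token replacement: besides [MASK] noise, a small
    fraction of positions receive a random wrong token, so the model learns
    to revise confident-but-wrong predictions. `t` is a per-sample masking
    rate in [0, 1); the fractions here are made up."""
    noise = torch.rand(labels.shape, device=labels.device)
    out = labels.clone()
    out[noise < t] = MASK_ID                              # mask noise
    replaced = (noise >= t) & (noise < t + replace_frac)  # non-mask noise
    out[replaced] = torch.randint_like(labels, 1, VOCAB)[replaced]
    return out

@torch.no_grad()
def mdm_decode(model, img_feats, steps=3):
    """Iterative parallel decoding: start fully masked, and at each step
    commit the most confident predictions while re-masking the rest."""
    B = img_feats.size(0)
    tokens = torch.full((B, SEQ_LEN), MASK_ID, dtype=torch.long,
                        device=img_feats.device)
    for step in range(steps):
        logits = model(img_feats, tokens)                 # (B, SEQ_LEN, VOCAB)
        conf, preds = logits.softmax(-1).max(-1)
        still_masked = tokens.eq(MASK_ID)
        preds = torch.where(still_masked, preds, tokens)  # keep committed tokens
        if step == steps - 1:
            return preds                                  # last step: commit all
        # Re-mask the least confident of the still-masked positions.
        k = int(SEQ_LEN * (1 - (step + 1) / steps))
        conf = conf.masked_fill(~still_masked, float("inf"))
        remask = conf.argsort(dim=-1)[:, :k]
        tokens = preds.scatter(1, remask, MASK_ID)
    return tokens
```

Because non-mask noise means a visible token is not necessarily correct, the model is encouraged to reconsider already-placed tokens rather than trusting them blindly, which is the intent of the token-replacement mechanism described above.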
### 1.1 Models and Results
The accuracy (%) and model files of MDiff4STR on public scene text recognition datasets are as follows:

Download all configs, models, and logs from the [HuggingFace Model](https://huggingface.co/topdu/MDiff4STR).
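
For example, the released files can be fetched with the `huggingface_hub` client (a generic sketch, not a script shipped with this repo; the repo id comes from the link above):

```python
from huggingface_hub import snapshot_download

# Download all configs, weights, and logs from the MDiff4STR repo on the Hub.
local_dir = snapshot_download(repo_id="topdu/MDiff4STR")
print(f"files saved to: {local_dir}")
```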
- Test on Common Benchmarks from [PARSeq](https://github.com/baudm/parseq):

| Model | Training Data | IC13<br/>857 | SVT | IIIT5k<br/>3000 | IC15<br/>1811 | SVTP | CUTE80 | Avg | Config&Model&Log |
| :------: | :----------------------------------------------------------: | :----------: | :--: | :-------------: | :-----------: | :--: | :----: | :---: | :-----------------------------------------------------------------------: |
| MDiff4STR-B | Synthetic datasets (MJ+ST) | 97.7 | 94.0 | 97.3 | 88.1 | 91.2 | 95.8 | 94.02 | TODO |
| MDiff4STR-S | [Union14M-L-Filter](../../../docs/svtrv2.md#dataset-details) | 99.0 | 98.3 | 98.5 | 89.5 | 92.9 | 98.6 | 96.13 | [HuggingFace Model](https://huggingface.co/topdu/MDiff4STR) |
| MDiff4STR-B | [Union14M-L-Filter](../../../docs/svtrv2.md#dataset-details) | 99.2 | 98.0 | 98.7 | 91.1 | 93.5 | 99.0 | 96.57 | [HuggingFace Model](https://huggingface.co/topdu/MDiff4STR) |
- Test on the Union14M-L benchmark from [Union14M](https://github.com/Mountchicken/Union14M/):

| Model | Training Data | Curve | Multi-<br/>Oriented | Artistic | Contextless | Salient | Multi-<br/>word | General | Avg | Config&Model&Log |
| :------: | :----------------------------------------------------------: | :---: | :-----------------: | :------: | :---------: | :-----: | :-------------: | :-----: | :---: | :---------------------: |
| MDiff4STR-B | Synthetic datasets (MJ+ST) | 74.6 | 25.2 | 57.6 | 69.7 | 77.9 | 68.0 | 66.9 | 62.83 | Same as the above table |
| MDiff4STR-S | [Union14M-L-Filter](../../../docs/svtrv2.md#dataset-details) | 88.3 | 84.6 | 76.5 | 84.3 | 83.3 | 85.4 | 83.5 | 83.70 | Same as the above table |
| MDiff4STR-B | [Union14M-L-Filter](../../../docs/svtrv2.md#dataset-details) | 90.6 | 89.0 | 79.3 | 86.1 | 86.2 | 86.7 | 85.1 | 86.14 | Same as the above table |
- Training and testing on the Chinese dataset from the [Chinese Benchmark](https://github.com/FudanVI/benchmarking-chinese-text-recognition):

| Model | Scene | Web | Document | Handwriting | Avg | Config&Model&Log |
| :------: | :---: | :--: | :------: | :---------: | :---: | :-----------------------------------------------------------------------------------------------------: |
| MDiff4STR-S | 81.1 | 81.2 | 99.3 | 65.0 | 81.64 | [Google drive](https://drive.google.com/drive/folders/1X3hqArfvRIRtuYLHDtSQheQmDc_oXpY6?usp=drive_link) |
| MDiff4STR-B | 83.5 | 83.3 | 99.5 | 67.0 | 83.31 | [Google drive](https://drive.google.com/drive/folders/1ZDECKXf8zZFhcKKKpvicg43Ho85uDZkF?usp=drive_link) |
<a name="2"></a>
## 2. Environment

- [PyTorch](http://pytorch.org/) version >= 1.13.0
- Python version >= 3.7

```shell
git clone -b develop https://github.com/Topdu/OpenOCR.git
cd OpenOCR
# Ubuntu 20.04, CUDA 11.8
conda create -n openocr python==3.8
conda activate openocr
conda install pytorch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
```
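
After installation, an optional sanity check (a generic snippet, not part of the repo's tooling) confirms that the environment matches the versions above and that PyTorch can see the GPU:

```python
import torch

# Expect 2.2.0 and True on a correctly configured CUDA 11.8 machine.
print(torch.__version__)
print(torch.cuda.is_available())
```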
<a name="3"></a>

## 3. Model Training / Evaluation
### Dataset Preparation

Refer to [Downloading Datasets](../../../docs/svtrv2.md#downloading-datasets).
### Training

```shell
# First stage
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 tools/train_rec.py --c configs/rec/svtrv2/svtrv2_rctc.yml

# Second stage
CUDA_VISIBLE_DEVICES=4,5,6,7 python -m torch.distributed.launch --master_port=23332 --nproc_per_node=4 tools/train_rec.py --c configs/rec/svtrv2/svtrv2_smtr_gtc_rctc.yml --o Global.pretrained_model=./output/rec/u14m_filter/svtrv2_rctc/best.pth

# For multiple RTX 4090 GPUs
NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --master_port=23333 --nproc_per_node=4 tools/train_rec.py --c configs/rec/svtrv2/svtrv2_rctc.yml
# 20 epochs take about 6 hours
```
### Evaluation

```shell
# Short text: Common, Union14M-Benchmark, OST
python tools/eval_rec_all_en.py --c configs/rec/svtrv2/svtrv2_smtr_gtc_rctc_infer.yml

# Long text: LTB
python tools/eval_rec_all_long.py --c configs/rec/svtrv2/svtrv2_smtr_gtc_rctc_infer.yml --o Eval.loader.max_ratio=20
```
After a successful run, the results are saved to a CSV file under the `output_dir` specified in the config file.
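
To locate those result files afterwards, a small generic helper can be used (it assumes the `output_dir` used by the configs above; adjust the path to match yours):

```python
import glob

# List every csv the evaluation scripts wrote under the output directory.
for path in sorted(glob.glob("./output/rec/u14m_filter/**/*.csv", recursive=True)):
    print(path)
```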
### Inference

```shell
python tools/infer_rec.py --c configs/rec/svtrv2/svtrv2_smtr_gtc_rctc_infer.yml --o Global.infer_img=/path/img_fold or /path/img_file
```
### Latency Measurement

First, download the IIIT5K images from [Google Drive](https://drive.google.com/drive/folders/1Po1LSBQb87DxGJuAgLNxhsJ-pdXxpIfS?usp=drive_link). Then run the following command:

```shell
python tools/infer_rec.py --c configs/rec/SVTRv2/svtrv2_smtr_gtc_rctc_infer.yml --o Global.infer_img=../iiit5k_test_image
```
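
If you only need a rough end-to-end number, the same command can be timed from Python (a crude wall-clock sketch that includes model loading, so treat it only as a sanity check, not a per-image latency measurement; paths as above):

```python
import subprocess
import time

# Time the full inference run over the downloaded IIIT5K images.
start = time.perf_counter()
subprocess.run(
    ["python", "tools/infer_rec.py",
     "--c", "configs/rec/SVTRv2/svtrv2_smtr_gtc_rctc_infer.yml",
     "--o", "Global.infer_img=../iiit5k_test_image"],
    check=True,
)
print(f"total wall-clock time: {time.perf_counter() - start:.2f}s")
```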
## Citation

If you find our method useful for your research, please cite:

```bibtex
@inproceedings{Du2025MDiff4STR,
  title={MDiff4STR: Mask Diffusion Model for Scene Text Recognition},
  author={Yongkun Du and Miaomiao Zhao and Songlin Fan and Zhineng Chen and Caiyan Jia and Yu-Gang Jiang},
  booktitle={AAAI Oral},
  year={2025},
}
```