Update README.md
README.md CHANGED
@@ -12,7 +12,7 @@ The documentation of this model in the Transformers library can be found [here](
 
 [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://github.com/microsoft/unilm/tree/master/layoutxlm)
 
 ## Introduction
 
-LayoutXLM is a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding. Experiment results show that it has significantly outperformed the existing SOTA cross-lingual pre-trained models on the
+LayoutXLM is a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding. Experiment results show that it has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset.
 
 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836)
 
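For context on the Transformers documentation referenced in the hunk header, here is a minimal sketch of how this checkpoint is typically loaded. It assumes the standard Transformers class names for LayoutXLM and the `microsoft/layoutxlm-base` checkpoint id, neither of which is spelled out in this diff.

```python
# Minimal sketch (not part of this commit): loading LayoutXLM through the
# Transformers library mentioned above. Assumptions: the checkpoint id
# "microsoft/layoutxlm-base" and the standard Transformers classes; the
# tokenizer needs sentencepiece installed, and LayoutLMv2Model (the class
# LayoutXLM checkpoints use) needs detectron2 for its visual backbone.
from transformers import LayoutXLMTokenizer, LayoutLMv2Model

tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")
model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")

# The tokenizer takes words plus word-level bounding boxes normalized to a
# 0-1000 coordinate grid, matching the layout-aware inputs the model expects.
# The example words and boxes below are placeholders, not data from the README.
words = ["Hello", "world"]
boxes = [[48, 84, 156, 108], [160, 84, 262, 108]]
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
```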