leduckhai committed (verified)
Commit d8c3b30 · Parent: 21010f8

Update README.md

Files changed (1): README.md (+181, −43)

README.md CHANGED
@@ -208,63 +208,201 @@ license: mit
tags:
- medical
---
 
<p align="center">
  <img src="./MultiMedST_icon.png" alt="MultiMedST_icon" width="70">
</p>

<h1 align="center">MultiMed-ST: Large-scale Many-to-many Multilingual Medical Speech Translation</h1>

<p align="center">
  <a href="https://arxiv.org/abs/2504.03546">
    <img src="https://img.shields.io/badge/Paper-arXiv%3A2504.03546-b31b1b?logo=arxiv&logoColor=white" alt="Paper">
  </a>
  <a href="https://huggingface.co/datasets/leduckhai/MultiMed-ST">
    <img src="https://img.shields.io/badge/Dataset-HuggingFace-blue?logo=huggingface&logoColor=white" alt="Dataset">
  </a>
  <a href="https://huggingface.co/leduckhai/MultiMed-ST">
    <img src="https://img.shields.io/badge/Models-HuggingFace-green?logo=huggingface&logoColor=white" alt="Models">
  </a>
  <a href="https://github.com/leduckhai/MultiMed-ST/blob/main/LICENSE">
    <img src="https://img.shields.io/badge/License-MIT-yellow" alt="License">
  </a>
  <a href="https://github.com/leduckhai/MultiMed-ST/stargazers">
    <img src="https://img.shields.io/github/stars/leduckhai/MultiMed-ST?style=social" alt="Stars">
  </a>
</p>

<p align="center">
  <strong>📘 EMNLP 2025</strong>
</p>

<p align="center">
  <b>Khai Le-Duc*</b>, <b>Tuyen Tran*</b>, Bach Phan Tat, Nguyen Kim Hai Bui, Quan Dang, Hung-Phong Tran, Thanh-Thuy Nguyen, Ly Nguyen, Tuan-Minh Phan, Thi Thu Phuong Tran, Chris Ngo, Nguyen X. Khanh**, Thanh Nguyen-Tang**
</p>

<p align="center">
  <sub>*Equal contribution &nbsp;&nbsp;|&nbsp;&nbsp; **Equal supervision</sub>
</p>

---

> ⭐ **If you find this work useful, please consider starring the repo and citing our paper!**

---
## 🧠 Abstract

Multilingual speech translation (ST) in the **medical domain** enhances patient care by enabling effective communication across language barriers, alleviating workforce shortages, and improving diagnosis and treatment, especially during global health emergencies.

In this work, we introduce **MultiMed-ST**, the *first large-scale multilingual medical speech translation dataset*, spanning **all translation directions** across **five languages**:
🇻🇳 Vietnamese, 🇬🇧 English, 🇩🇪 German, 🇫🇷 French, and 🇨🇳 Traditional & Simplified Chinese.

With **290,000 samples**, *MultiMed-ST* is:
- 🧩 the **largest medical machine translation (MT) dataset** to date
- 🌐 the **largest many-to-many multilingual ST dataset** across all domains

We also conduct, to the best of our knowledge, the **most comprehensive analysis in ST research to date**, covering:
- ✅ Empirical baselines
- 🔄 Bilingual vs. multilingual comparison
- 🧩 End-to-end vs. cascaded models
- 🎯 Task-specific vs. multi-task seq2seq approaches
- 🗣️ Code-switching analysis
- 📊 Quantitative and qualitative error analysis

All **code, data, and models** are publicly available: 👉 [**GitHub Repository**](https://github.com/leduckhai/MultiMed-ST)

<p align="center">
  <img src="./poster_MultiMed-ST_EMNLP2025.png" alt="poster_MultiMed-ST_EMNLP2025" width="85%">
</p>
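As a quick sanity check on the scale of the many-to-many setup: treating the languages as the five codes used in the released MT model names (a simplifying assumption here, since Traditional and Simplified Chinese share the `zh` code in those names), the number of ordered translation directions works out to 20.

```python
from itertools import permutations

# Language codes as they appear in the fine-tuned MT model names
# (assumption: Traditional and Simplified Chinese both map to "zh" there).
langs = ["de", "en", "fr", "vi", "zh"]

# Many-to-many ST covers every ordered (source, target) pair: 5 * 4 = 20.
directions = list(permutations(langs, 2))
print(len(directions))  # 20
```

This matches the 20 per-direction MT checkpoints released for each MT backbone.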
---

## 🧰 Repository Overview

This repository provides scripts for:

- 🎙️ **Automatic Speech Recognition (ASR)**
- 🌍 **Machine Translation (MT)**
- 🔄 **Speech Translation (ST)**, both **cascaded** and **end-to-end** seq2seq models

It includes:

- ⚙️ Model preparation and fine-tuning
- 🚀 Training and inference scripts
- 📊 Evaluation and benchmarking utilities

---
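To make the cascaded vs. end-to-end distinction concrete: a cascaded ST system pipes ASR output into MT, so transcription errors propagate into the translation, while an end-to-end model maps audio directly to target-language text with a single seq2seq network. A minimal, dependency-free sketch of the cascaded composition (the `fake_asr`/`fake_mt` stubs are illustrative placeholders, not the project's actual models):

```python
from typing import Callable

def make_cascaded_st(asr: Callable[[bytes], str],
                     mt: Callable[[str], str]) -> Callable[[bytes], str]:
    """Cascaded ST: transcribe audio with ASR, then translate the transcript
    with MT. Any ASR error is inherited by the MT stage."""
    def st(audio: bytes) -> str:
        return mt(asr(audio))
    return st

# Stand-in stubs for illustration only.
def fake_asr(audio: bytes) -> str:
    return "der patient hat fieber"  # pretend German transcript

def fake_mt(text: str) -> str:
    return {"der patient hat fieber": "the patient has a fever"}[text]

cascaded = make_cascaded_st(fake_asr, fake_mt)
print(cascaded(b"..."))  # the patient has a fever
```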
 
## 📦 Dataset & Models

- **Dataset:** [🤗 Hugging Face Dataset](https://huggingface.co/datasets/leduckhai/MultiMed-ST)
- **Fine-tuned models:** [🤗 Hugging Face Models](https://huggingface.co/leduckhai/MultiMed-ST)

You can explore and download all fine-tuned models for **MultiMed-ST** directly from our Hugging Face repository:
<details>
<summary><b>🔹 Whisper ASR Fine-tuned Models (Click to expand)</b></summary>

| Language | Model Link |
|-----------|------------|
| Chinese | [whisper-small-chinese](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-chinese) |
| English | [whisper-small-english](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-english) |
| French | [whisper-small-french](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-french) |
| German | [whisper-small-german](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-german) |
| Multilingual | [whisper-small-multilingual](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-multilingual) |
| Vietnamese | [whisper-small-vietnamese](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-vietnamese) |

</details>
<details>
<summary><b>🔹 LLaMA-based MT Fine-tuned Models (Click to expand)</b></summary>

| Source → Target | Model Link |
|------------------|------------|
| Chinese → English | [llama_Chinese_English](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_Chinese_English) |
| Chinese → French | [llama_Chinese_French](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_Chinese_French) |
| Chinese → German | [llama_Chinese_German](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_Chinese_German) |
| Chinese → Vietnamese | [llama_Chinese_Vietnamese](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_Chinese_Vietnamese) |
| English → Chinese | [llama_English_Chinese](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_English_Chinese) |
| English → French | [llama_English_French](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_English_French) |
| English → German | [llama_English_German](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_English_German) |
| English → Vietnamese | [llama_English_Vietnamese](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_English_Vietnamese) |
| French → Chinese | [llama_French_Chinese](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_French_Chinese) |
| French → English | [llama_French_English](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_French_English) |
| French → German | [llama_French_German](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_French_German) |
| French → Vietnamese | [llama_French_Vietnamese](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_French_Vietnamese) |
| German → Chinese | [llama_German_Chinese](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_German_Chinese) |
| German → English | [llama_German_English](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_German_English) |
| German → French | [llama_German_French](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_German_French) |
| German → Vietnamese | [llama_German_Vietnamese](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_German_Vietnamese) |
| Vietnamese → Chinese | [llama_Vietnamese_Chinese](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_Vietnamese_Chinese) |
| Vietnamese → English | [llama_Vietnamese_English](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_Vietnamese_English) |
| Vietnamese → French | [llama_Vietnamese_French](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_Vietnamese_French) |
| Vietnamese → German | [llama_Vietnamese_German](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/llama_Vietnamese_German) |

</details>
<details>
<summary><b>🔹 m2m100_418M MT Fine-tuned Models (Click to expand)</b></summary>

| Source → Target | Model Link |
|------------------|------------|
| de → en | [m2m100_418M-finetuned-de-to-en](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-de-to-en) |
| de → fr | [m2m100_418M-finetuned-de-to-fr](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-de-to-fr) |
| de → vi | [m2m100_418M-finetuned-de-to-vi](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-de-to-vi) |
| de → zh | [m2m100_418M-finetuned-de-to-zh](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-de-to-zh) |
| en → de | [m2m100_418M-finetuned-en-to-de](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-en-to-de) |
| en → fr | [m2m100_418M-finetuned-en-to-fr](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-en-to-fr) |
| en → vi | [m2m100_418M-finetuned-en-to-vi](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-en-to-vi) |
| en → zh | [m2m100_418M-finetuned-en-to-zh](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-en-to-zh) |
| fr → de | [m2m100_418M-finetuned-fr-to-de](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-fr-to-de) |
| fr → en | [m2m100_418M-finetuned-fr-to-en](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-fr-to-en) |
| fr → vi | [m2m100_418M-finetuned-fr-to-vi](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-fr-to-vi) |
| fr → zh | [m2m100_418M-finetuned-fr-to-zh](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-fr-to-zh) |
| vi → de | [m2m100_418M-finetuned-vi-to-de](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-vi-to-de) |
| vi → en | [m2m100_418M-finetuned-vi-to-en](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-vi-to-en) |
| vi → fr | [m2m100_418M-finetuned-vi-to-fr](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-vi-to-fr) |
| vi → zh | [m2m100_418M-finetuned-vi-to-zh](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-vi-to-zh) |
| zh → de | [m2m100_418M-finetuned-zh-to-de](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-zh-to-de) |
| zh → en | [m2m100_418M-finetuned-zh-to-en](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-zh-to-en) |
| zh → fr | [m2m100_418M-finetuned-zh-to-fr](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-zh-to-fr) |
| zh → vi | [m2m100_418M-finetuned-zh-to-vi](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/m2m100_418M-finetuned-zh-to-vi) |

</details>
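Since every checkpoint lives in a subfolder of the single `leduckhai/MultiMed-ST` repository, the subfolder for an MT direction can be derived mechanically from the language pair. A small helper sketch, assuming only the naming scheme visible in the table above:

```python
def m2m100_subfolder(src: str, tgt: str) -> str:
    """Build the repo subfolder name for a fine-tuned m2m100_418M checkpoint,
    following the naming scheme in the table above (e.g. "vi" -> "en")."""
    valid = {"de", "en", "fr", "vi", "zh"}
    if src not in valid or tgt not in valid or src == tgt:
        raise ValueError(f"unsupported direction: {src} -> {tgt}")
    return f"m2m100_418M-finetuned-{src}-to-{tgt}"

print(m2m100_subfolder("vi", "en"))  # m2m100_418M-finetuned-vi-to-en
```

The returned string can be used, for example, as the `subfolder` argument to `from_pretrained` in 🤗 Transformers, or appended to the repository URL.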
---

## 👨‍💻 Core Developers

1. **Khai Le-Duc**

   University of Toronto, Canada

   📧 [duckhai.le@mail.utoronto.ca](mailto:duckhai.le@mail.utoronto.ca)
   🔗 [https://github.com/leduckhai](https://github.com/leduckhai)

2. **Tuyen Tran**: 📧 [tuyencbt@gmail.com](mailto:tuyencbt@gmail.com)

   Hanoi University of Science and Technology, Vietnam

3. **Nguyen Kim Hai Bui**: 📧 [htlulem185@gmail.com](mailto:htlulem185@gmail.com)

   Eötvös Loránd University, Hungary
---

## 🧾 Citation

If you use our dataset or models, please cite:

📄 [arXiv:2504.03546](https://arxiv.org/abs/2504.03546)

```bibtex
@inproceedings{le2025multimedst,
  title={MultiMed-ST: Large-scale Many-to-many Multilingual Medical Speech Translation},
  author={Le-Duc, Khai and Tran, Tuyen and Tat, Bach Phan and Bui, Nguyen Kim Hai and Anh, Quan Dang and Tran, Hung-Phong and Nguyen, Thanh Thuy and Nguyen, Ly and Phan, Tuan Minh and Tran, Thi Thu Phuong and others},
  booktitle={Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing},
  pages={11838--11963},
  year={2025}
}
```