---
license: mit
task_categories:
  - feature-extraction
tags:
  - pretraining
  - encoder
  - multilingual
  - fill-mask
---

# mmBERT Training Data (Ready-to-Use)

License: MIT | Paper | Models | GitHub

**Complete Training Dataset**: Pre-randomized and ready-to-use multilingual training data (3T tokens) for encoder model pre-training.

This dataset is the complete, pre-shuffled training data used to train the mmBERT encoder models. Unlike the individual phase datasets, this version is ready for immediate use, but the mixture cannot easily be modified. The data is provided as decompressed MDS shards, ready for use with Composer and the ModernBERT training repository.
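Because the shards are in MDS format, they can be read with MosaicML's `streaming` library (the same format Composer's data loading uses). A minimal sketch, assuming the shards have been downloaded locally; the path is illustrative and the column names depend on the shard schema:

```python
# pip install mosaicml-streaming
from streaming import StreamingDataset
from torch.utils.data import DataLoader

# Point `local` at the directory holding the MDS shards (illustrative path).
# The data is already pre-shuffled, so shuffle=False preserves the training order.
dataset = StreamingDataset(local="/path/to/mmbert-pretrain-data", shuffle=False)

loader = DataLoader(dataset, batch_size=8)
batch = next(iter(loader))
print(batch.keys())  # inspect the shard schema
```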

## Sample Usage (of models trained with this data)

Here are a few quick examples showing how to use the models trained with this dataset for various tasks.

### Small Model for Fast Inference (Feature Extraction)

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-small")
model = AutoModel.from_pretrained("jhu-clsp/mmbert-small")

# Example: get a multilingual sentence embedding by mean-pooling token states
inputs = tokenizer("Hello world! 你好世界! Bonjour le monde!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embeddings = outputs.last_hidden_state.mean(dim=1)
```
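When embedding a batch of sentences with different lengths, the plain mean over `dim=1` also averages padded positions. A small variant using standard attention-mask mean pooling (a common convention, not something specific to mmBERT), reusing the tokenizer and model loaded above:

```python
sentences = ["Hello world!", "Bonjour le monde, comment ça va?"]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state            # (batch, seq_len, dim)

# Exclude padding tokens from the average.
mask = batch["attention_mask"].unsqueeze(-1).float()     # (batch, seq_len, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, dim)
```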

### Base Model for Masked Language Modeling

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-base")
model = AutoModelForMaskedLM.from_pretrained("jhu-clsp/mmbert-base")

# Example: multilingual masked language modeling
text = "The capital of [MASK] is Paris."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Get the top-5 predictions for the [MASK] token
mask_indices = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)
predictions = outputs.logits[mask_indices]
top_tokens = torch.topk(predictions, 5, dim=-1)
predicted_words = [tokenizer.decode(token) for token in top_tokens.indices[0]]
print(f"Predictions: {predicted_words}")
```

## Licensing & Attribution

This dataset aggregates multiple open-source datasets under permissive licenses. See individual source datasets for specific attribution requirements.


## Citation

```bibtex
@misc{marone2025mmbertmodernmultilingualencoder,
      title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
      author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2509.06888},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.06888},
}
```