nielsr (HF Staff) committed
Commit b3f4ac7 · verified · 1 Parent(s): 969d659

Update task category and add sample usage


This PR updates the `task_categories` to `feature-extraction` to better reflect the primary application of models trained with this dataset. It also adds sample usage snippets for the `mmBERT` models, demonstrating how models trained on this data can be used for feature extraction and masked language modeling, as shown in the associated GitHub repository.

Files changed (1):

README.md (+41, −1)
README.md CHANGED
@@ -1,11 +1,12 @@
---
license: mit
task_categories:
- - fill-mask
+ - feature-extraction
tags:
- pretraining
- encoder
- multilingual
+ - fill-mask
---

# mmBERT Training Data (Ready-to-Use)
@@ -19,6 +20,45 @@ tags:

This dataset is part of the complete, pre-shuffled training data used to train the [mmBERT encoder models](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4). Unlike the individual phase datasets, this version is ready for immediate use, but **the mixture cannot be modified easily**. The data is provided in **decompressed MDS format**, ready for use with [MosaicML's Composer](https://github.com/mosaicml/composer) and the [ModernBERT training repository](https://github.com/answerdotai/ModernBERT).

+ ## Sample Usage (of models trained with this data)
+
+ Here are a few quick examples showing how to use models trained on this data for various tasks.
+
+ ### Small Model for Fast Inference (Feature Extraction)
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+
+ tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-small")
+ model = AutoModel.from_pretrained("jhu-clsp/mmbert-small")
+
+ # Example: get multilingual embeddings via mean pooling over token states
+ inputs = tokenizer("Hello world! 你好世界! Bonjour le monde!", return_tensors="pt")
+ outputs = model(**inputs)
+ embeddings = outputs.last_hidden_state.mean(dim=1)
+ ```
+
+ ### Base Model for Masked Language Modeling
+ ```python
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+ import torch
+
+ tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-base")
+ model = AutoModelForMaskedLM.from_pretrained("jhu-clsp/mmbert-base")
+
+ # Example: multilingual masked language modeling
+ text = "The capital of [MASK] is Paris."
+ inputs = tokenizer(text, return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # Get predictions at the [MASK] position(s)
+ mask_indices = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)
+ predictions = outputs.logits[mask_indices]
+ top_tokens = torch.topk(predictions, 5, dim=-1)
+ predicted_words = [tokenizer.decode(token) for token in top_tokens.indices[0]]
+ print(f"Predictions: {predicted_words}")
+ ```
+
## Licensing & Attribution

This dataset aggregates multiple open-source datasets under permissive licenses. See individual source datasets for specific attribution requirements.
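
As context for the decompressed MDS format mentioned in the README: anyone who wants to inspect the shards directly, rather than going through the ModernBERT training repository, can read them with MosaicML's `streaming` library. Below is a minimal sketch; the local path and the per-sample field names are placeholders, not something this PR or the dataset card specifies.

```python
# Minimal sketch: reading decompressed MDS shards with MosaicML's `streaming`
# library (the same format Composer consumes). Assumes the shards and their
# index.json have been downloaded locally -- the path below is a placeholder.
from streaming import StreamingDataset
from torch.utils.data import DataLoader

dataset = StreamingDataset(local="path/to/mmbert-mds-shards", shuffle=False)

# StreamingDataset supports random access; each sample is a dict keyed by the
# columns stored in the shards (the exact schema depends on the dataset).
sample = dataset[0]
print(sample.keys())

# It also plugs directly into a standard PyTorch DataLoader.
loader = DataLoader(dataset, batch_size=8)
```

The ModernBERT training repository linked above already wires this format into Composer's data loading, so most users training with the provided configs should not need to touch the shards directly.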