M.I.R.O.N. (Multi-aspect Inference Robustness on Objective Next-tokens)

M.I.R.O.N. is a specialized benchmark designed to evaluate the impact of tokenization and architectural constraints on the generation quality of small base language models (SLMs).

Unlike broad benchmarks such as MMLU or GSM8K, MIRON focuses on a model's atomic capabilities: morphological generalization, noise robustness, and factual integrity within a simple next-token prediction task.

🎯 Main Goal

The goal is to evaluate not "intelligence" in complex reasoning, but the fundamental capabilities: correct processing of input data, robustness to word fragmentation by the tokenizer, and prediction stability under noisy input.

The benchmark is designed to be solvable even by small transformers. The tasks do not require instruction following, which makes the dataset well suited for testing pre-trained (base) checkpoints.

🧩 Data Structure

The dataset consists of 4000 examples (1000 per category), split across two languages (ru, en). The dataset uses short category tags:

| Tag (Category) | Full Name | Description |
|---|---|---|
| Morphology | Morphological Generalization | Tests on pseudo-words (wug-words). Checks whether the model can inflect non-existent words (e.g., «wug» -> «wugs») based solely on grammar, without relying on lexical memory. |
| Facts | Factual Knowledge | Control group. Checks the integrity of perception of common named entities (e.g., «Paris», «Sun»). If the tokenizer splits them poorly, access to knowledge becomes difficult. |
| Logic | Logical Patterns | Simple numeric and algorithmic sequences (e.g., «Tuesday» -> «Wednesday»). Assesses token stitching when working with numbers and logic. |
| Noise | Noise Robustness | The context contains typos and perturbations. Evaluates how much the model's confidence "drifts" under slight input distortions. |
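
For illustration, one row per category might look as follows (the Morphology row is an actual example from the dataset; the Facts, Logic, and Noise rows are hypothetical, constructed to match the descriptions above):

rows = [
    {"prefix": "I saw a clig. Now I see two", "target": "cligs",     "category": "Morphology"},
    {"prefix": "The capital of France is",    "target": "Paris",     "category": "Facts"},  # hypothetical
    {"prefix": "Monday, Tuesday,",            "target": "Wednesday", "category": "Logic"},  # hypothetical
    {"prefix": "The capitol of Frnace is",    "target": "Paris",     "category": "Noise"},  # hypothetical, typos intended
]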

πŸ“Š Dataset Fields

  • prefix: Input context for the model.
  • target: The expected continuation (ground truth).
  • category: Test category (Morphology, Facts, Logic, Noise).
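
A minimal loading sketch using the Hugging Face datasets library (the repository id below is a placeholder; replace it with the actual dataset path, and select a language config if the ru/en parts are published separately):

from datasets import load_dataset

# NOTE: "user/MIRON" is a placeholder repo id, not the real dataset path.
ds = load_dataset("user/MIRON", split="train")

print(ds[0])  # -> {'prefix': ..., 'target': ..., 'category': ...}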

πŸ“ Evaluation Methodology (Metrics)

Two metrics are calculated for each example. This allows distinguishing a model that "does not know" (low Score) from a model that "doubts due to tokenization" (low Confidence).

  1. Levenshtein Score (Generation Quality): Normalized Levenshtein similarity (one minus the length-normalized Levenshtein distance). Evaluates how close the generated text is to the reference. Range: 0.0% – 100.0%

$$
\text{Score}_c = \frac{1}{|D_c|} \sum_{i=1}^{|D_c|} \left( 1 - \frac{\text{Lev}\left(S_{\text{gen}}^{(i)}, S_{\text{ref}}^{(i)}\right)}{\max\left(\left|S_{\text{gen}}^{(i)}\right|, \left|S_{\text{ref}}^{(i)}\right|\right)} \right) \times 100\%
$$
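
For a single hypothetical example: if the model generates «wugs» while the reference is «wug», then Lev = 1 and max(|S_gen|, |S_ref|) = 4, so the score is (1 − 1/4) × 100% = 75%.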

  2. Target Confidence (Ground Truth Certainty): The geometric mean probability of the tokens that make up the actual target. It shows how ready the model was to output the correct answer. Range: 0.0% – 100.0%

$$
\text{Conf}(S_{\text{ref}}) = \exp\left( \frac{1}{T} \sum_{j=1}^{T} \log P\left(t_j \mid \text{context},\, t_{<j}\right) \right) \times 100\%
$$
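
For a hypothetical target that splits into T = 2 tokens with probabilities 0.8 and 0.2, the confidence is exp((log 0.8 + log 0.2) / 2) × 100% = √(0.8 · 0.2) × 100% = 40%: the geometric mean penalizes any single low-probability token harder than an arithmetic mean would.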

πŸ’» Evaluation Code Example (Python)

import torch
import numpy as np
from Levenshtein import distance as lev_distance

def compute_metrics(model, tokenizer, prefix: str, target: str, generated_text: str, device='cuda'):
    model.eval()
    
    # 1. Levenshtein Score
    max_len = max(len(generated_text), len(target))
    if max_len == 0:
        lev_score = 100.0
    else:
        dist = lev_distance(generated_text, target)
        lev_score = (1 - dist / max_len) * 100.0

    # 2. Target Confidence
    # NOTE: assumes the tokenization of `prefix` is a prefix of the
    # tokenization of `prefix + target` (true for typical BPE tokenizers,
    # but can break at some word boundaries).
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids.to(device)
    full_ids = tokenizer(prefix + target, return_tensors="pt").input_ids.to(device)
    
    prefix_len = prefix_ids.shape[1]
    target_len = full_ids.shape[1] - prefix_len
    
    if target_len <= 0:
        return {'lev_score': round(lev_score, 2), 'target_confidence': 0.0}

    with torch.no_grad():
        logits = model(full_ids).logits
    
    # Logits at position k predict token k+1, so the slice
    # [prefix_len-1 : -1] scores exactly the target tokens.
    shift_logits = logits[0, prefix_len-1:-1, :]
    target_labels = full_ids[0, prefix_len:]
    
    log_probs = torch.log_softmax(shift_logits, dim=-1)
    target_log_probs = torch.gather(log_probs, 1, target_labels.unsqueeze(1)).squeeze()
    
    if target_log_probs.dim() == 0:
        target_log_probs = target_log_probs.unsqueeze(0)

    confidence = np.exp(target_log_probs.mean().item()) * 100.0

    return {
        'lev_score': round(lev_score, 2),
        'target_confidence': round(confidence, 2)
    }
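
A sketch of how compute_metrics can be wired into an evaluation loop, assuming a Hugging Face causal LM (the checkpoint name and generation settings below are illustrative, not prescribed by the benchmark; torch is already imported above):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative base checkpoint; any causal LM should work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")

prefix, target = "I saw a clig. Now I see two", "cligs"

# Greedy continuation; a few new tokens are enough for the short targets.
inputs = tokenizer(prefix, return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
generated_text = tokenizer.decode(out[0, inputs.input_ids.shape[1]:], skip_special_tokens=True)

print(compute_metrics(model, tokenizer, prefix, target, generated_text))
# -> {'lev_score': ..., 'target_confidence': ...}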