dataset_info:
  - config_name: filtered
    features:
      - name: text
        dtype: string
      - name: id
        dtype: string
      - name: dump
        dtype: string
      - name: url
        dtype: string
      - name: date
        dtype: string
      - name: file_path
        dtype: string
      - name: language
        dtype: string
      - name: language_score
        dtype: float64
      - name: language_script
        dtype: string
      - name: minhash_cluster_size
        dtype: int64
      - name: top_langs
        dtype: string
    splits:
      - name: train
        num_bytes: 14529270539
        num_examples: 1619895
      - name: test
        num_bytes: 143832829
        num_examples: 16096
    download_size: 4660212792
    dataset_size: 14673103368
  - config_name: removed
    features:
      - name: text
        dtype: string
      - name: id
        dtype: string
      - name: dump
        dtype: string
      - name: url
        dtype: string
      - name: date
        dtype: string
      - name: file_path
        dtype: string
      - name: language
        dtype: string
      - name: language_score
        dtype: float64
      - name: language_script
        dtype: string
      - name: minhash_cluster_size
        dtype: int64
      - name: filter_reason
        dtype: string
      - name: top_langs
        dtype: string
    splits:
      - name: train
        num_bytes: 5843948257
        num_examples: 1033074
    download_size: 1804153242
    dataset_size: 5843948257
configs:
  - config_name: filtered
    data_files:
      - split: train
        path: filtered/train-*
      - split: test
        path: filtered/test-*
  - config_name: removed
    data_files:
      - split: train
        path: removed/train-*
task_categories:
  - text-generation
  - fill-mask
task_ids:
  - language-modeling
  - masked-language-modeling
language:
  - my
pretty_name: Myanmar Fineweb2 Dataset

Please visit the GitHub repository for other Myanmar language datasets.

Myanmar Fineweb2 Dataset

A preprocessed subset of the Fineweb2 dataset containing only Myanmar language text, with consistent Unicode encoding.

Dataset Description

This dataset is derived from the Fineweb2 dataset created by HuggingFaceFW. It contains only the Myanmar language portion of the original Fineweb2 dataset, with additional preprocessing to standardize text encoding.

Filtered and Removed Subsets

This dataset provides two configurations:

  • filtered: Contains Myanmar text that passed the original filtering criteria
  • removed: Contains Myanmar text that was filtered out by the original filtering pipeline

These two subsets together represent the complete Myanmar language data from Fineweb2 after global deduplication. This structure allows researchers to easily apply their own filtering criteria or work with both the filtered and unfiltered data according to their needs.

Quote from original dataset: While we tried our best to not overfilter, we know that our filtering isn't perfect, and wanted to allow the community to easily re-filter the data with their own filtering criteria. We have therefore also uploaded the data that was removed by our filtering pipeline for each language (it is suffixed by _removed). The filtered + the removed subsets of each language represent the entire data for a given language following global deduplication, which means that you do not have to re-deduplicate it yourself. You can find and adapt our filtering code here.
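As a concrete example, a custom re-filtering pass might keep only rows with high language-identification confidence. This is only a sketch: the 0.9 threshold on language_score is an illustrative assumption, not the criterion the original Fineweb2 pipeline used.

```python
# Sketch of a custom re-filter over the combined (filtered + removed)
# data. The 0.9 threshold on `language_score` is an illustrative
# assumption, not the original Fineweb2 filtering rule.

def passes_custom_filter(example: dict) -> bool:
    """Keep rows whose language-ID confidence clears the threshold."""
    return example["language_score"] >= 0.9

# With the `datasets` library this would be applied as:
#   refiltered = combined_ds.filter(passes_custom_filter)

print(passes_custom_filter({"language_score": 0.95}))  # True
print(passes_custom_filter({"language_score": 0.42}))  # False
```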

Preprocessing

The main preprocessing step applied to this dataset was:

  • Zawgyi to Unicode conversion: Myanmar text can be encoded in two different ways - Zawgyi and Unicode. We detected Zawgyi-encoded text and converted it to Unicode for consistency, ensuring all text in the dataset uses the same encoding standard.

The conversion was performed using Myanmar Tools for detection and ICU for transliteration:

from myanmartools import ZawgyiDetector
from icu import Transliterator

# Initialize the detector and converter
detector = ZawgyiDetector()
converter = Transliterator.createInstance('Zawgyi-my')

# Example conversion function
def zawgyi_to_unicode(text):
    score = detector.get_zawgyi_probability(text)
    if score > 0.5:  # If likely Zawgyi
        return converter.transliterate(text)
    return text  # Already Unicode

Dataset Structure

The dataset keeps the same fields as the original Fineweb2 dataset. The removed configuration has one additional field, filter_reason, recording why a document was filtered out.
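As an illustration, a row from the filtered configuration carries the following fields. Field names come from the dataset metadata above; the values shown are placeholders, not a real record.

```python
# Illustrative row layout for the `filtered` configuration.
# Field names come from the dataset metadata; values are placeholders.
example_row = {
    "text": "...",             # document text (Unicode-normalized)
    "id": "...",               # document identifier
    "dump": "...",             # CommonCrawl dump the page came from
    "url": "...",              # source URL
    "date": "...",             # crawl date
    "file_path": "...",        # path within the crawl
    "language": "...",         # language label
    "language_score": 0.0,     # language-ID confidence (float64)
    "language_script": "...",  # script label
    "minhash_cluster_size": 0, # MinHash deduplication cluster size (int64)
    "top_langs": "...",        # top language predictions
}

print(sorted(example_row))  # the column names, alphabetically
```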

Usage

Load Filtered Subset

You can load this dataset's filtered subset using the Hugging Face datasets library:

from datasets import load_dataset

dataset = load_dataset("chuuhtetnaing/myanmar-fineweb-2-dataset", name="filtered")

Load Whole Dataset (including removed)

You can also load the whole dataset, combining both configurations, using the Hugging Face datasets library:

from datasets import DatasetDict, concatenate_datasets, load_dataset
filtered_ds = load_dataset("chuuhtetnaing/myanmar-fineweb-2-dataset", name='filtered')
removed_ds = load_dataset("chuuhtetnaing/myanmar-fineweb-2-dataset", name='removed')

ds = DatasetDict({
    "train": concatenate_datasets([filtered_ds["train"], removed_ds["train"]]),
    "test": filtered_ds["test"]
})

Dataset Creation

This dataset was created by:

  1. Extracting the Myanmar language split (both filtered and removed) from the original Fineweb2 dataset
  2. Detecting Zawgyi-encoded text using Google's Myanmar Tools probabilistic detector on each line
  3. Converting Zawgyi text to Unicode using ICU's transliteration converter on each line
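The line-by-line application in steps 2 and 3 can be sketched as follows. convert_line is a stand-in for the detector-plus-transliterator pair shown in the Preprocessing section, so the helper itself stays independent of those packages.

```python
# Sketch: apply a per-line converter across a document's text, as in
# steps 2-3 above. `convert_line` stands in for zawgyi_to_unicode
# (the detector + transliterator pair from the Preprocessing section).

def convert_text(text: str, convert_line) -> str:
    """Run the converter on each line and rejoin with newlines."""
    return "\n".join(convert_line(line) for line in text.split("\n"))

# With the `datasets` library, the whole corpus could then be mapped as:
#   ds = ds.map(lambda ex: {"text": convert_text(ex["text"], zawgyi_to_unicode)})

print(convert_text("one\ntwo", str.upper))  # each line converted separately
```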

Dependencies

The preprocessing of this dataset relied on:

  • Myanmar Tools (myanmartools): probabilistic Zawgyi detection
  • PyICU (icu): Zawgyi-to-Unicode transliteration

License

This dataset follows the same license as the original Fineweb2 dataset. Please refer to the original dataset page for licensing information.

Myanmar Tools is released under the Apache License 2.0.