---
task_categories:
- text-generation
language:
- en
tags:
- pretrain
size_categories:
- 10B<n<100B
---
# Top 30B-Token SlimPajama Subset Selected by the Professionalism Rater
This repository contains the dataset described in the paper *Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models*.
Code: https://github.com/opendatalab/Meta-rater
## Dataset Description
This dataset contains the top 30B tokens from the SlimPajama-627B corpus, selected using the Professionalism dimension of the PRRC (Professionalism, Readability, Reasoning, Cleanliness) framework. Each document was scored by a ModernBERT-based rater fine-tuned to assess the degree of professional knowledge and expertise required to comprehend the text, and the highest-scoring documents were retained.
- Source: SlimPajama-627B Annotated Dataset
- Selection: Top 30B tokens by PRRC-Professionalism score
- Quality metric: Professionalism (0–5 scale, see below)
- Annotation coverage: 100% of selected subset
## Dataset Statistics
- Total tokens: 30B (subset of SlimPajama-627B)
- Selection method: Top-ranked by PRRC-Professionalism ModernBERT rater
- Domains: Same as SlimPajama (CommonCrawl, C4, GitHub, Books, ArXiv, Wikipedia, StackExchange)
- Annotation: Each document has a professionalism score (0–5)
## Professionalism Quality Metric
Professionalism measures the degree of expertise and prerequisite knowledge required to comprehend the text. Higher scores indicate content that is more technical, specialized, or advanced, while lower scores reflect general or layperson-accessible material.
- 0–1: Minimal technical knowledge required (e.g., children's books, basic web content)
- 2–3: Some specialized knowledge or depth (e.g., popular science, detailed articles)
- 4–5: High expertise required (e.g., academic papers, technical manuals)
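As a rough illustration of how the score bands above could be used to bucket or filter documents, here is a minimal sketch. The `professionalism_score` field name and the filtering threshold are assumptions for illustration, not part of this dataset's schema.

```python
def professionalism_band(score: float) -> str:
    """Map a 0-5 professionalism score to the bands described above."""
    if score < 2:
        return "general"        # 0-1: minimal technical knowledge required
    elif score < 4:
        return "intermediate"   # 2-3: some specialized knowledge or depth
    return "expert"             # 4-5: high expertise required


# Hypothetical documents carrying a professionalism score (field name assumed).
docs = [
    {"text": "Once upon a time...", "professionalism_score": 0.7},
    {"text": "How black holes evaporate, explained.", "professionalism_score": 2.8},
    {"text": "A proof of the spectral theorem.", "professionalism_score": 4.6},
]

# Keep only documents that require at least some specialized knowledge.
selected = [d for d in docs if d["professionalism_score"] >= 2.0]
```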
Scores are assigned by a ModernBERT model fine-tuned on Llama-3.3-70B-Instruct annotations, as described in the Meta-rater paper.
## Annotation Process
1. Initial annotation: Llama-3.3-70B-Instruct rated 500k+ SlimPajama samples for professionalism
2. Model training: ModernBERT fine-tuned on these annotations
3. Scoring: All SlimPajama documents scored by ModernBERT; top 30B tokens selected
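The final step amounts to scoring every document, ranking by score, and taking documents from the top until the 30B-token budget is reached. A minimal self-contained sketch of that selection loop, with a toy `(score, token_count)` corpus standing in for ModernBERT scores and real tokenizer counts (all names here are illustrative, not the paper's code):

```python
def select_top_tokens(docs, token_budget):
    """Take the highest-scored documents until the token budget is reached.

    `docs` is a list of (score, num_tokens) pairs; the real pipeline would
    use ModernBERT professionalism scores and tokenizer counts instead.
    """
    selected, total = [], 0
    # Rank documents by score, highest first, and fill the budget in order.
    for score, n_tokens in sorted(docs, key=lambda d: d[0], reverse=True):
        if total >= token_budget:
            break  # budget reached; everything below this score is dropped
        selected.append((score, n_tokens))
        total += n_tokens
    return selected, total


# Toy corpus: (professionalism_score, token_count)
corpus = [(4.8, 500), (1.2, 300), (3.9, 400), (0.5, 200), (4.1, 350)]
subset, tokens = select_top_tokens(corpus, token_budget=1000)
```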
## Citation
If you use this dataset, please cite:
```bibtex
@article{zhuang2025meta,
  title={Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models},
  author={Zhuang, Xinlin and Peng, Jiahui and Ma, Ren and Wang, Yinfan and Bai, Tianyi and Wei, Xingjian and Qiu, Jiantao and Zhang, Chi and Qian, Ying and He, Conghui},
  journal={arXiv preprint arXiv:2504.14194},
  year={2025}
}
```
## License
This dataset is released under the same license as the original SlimPajama dataset. See the original SlimPajama repository for details.
## Contact
- Project Lead: Ren Ma (maren@pjlab.org.cn)
- Corresponding Author: Conghui He (heconghui@pjlab.org.cn)
- Issues: GitHub Issues (https://github.com/opendatalab/Meta-rater/issues)
Made with ❤️ by the OpenDataLab team