---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: redpajama_set_name
      dtype: string
  splits:
  - name: train
    num_bytes: 1098210584
    num_examples: 250000
  download_size: 641358839
  dataset_size: 1098210584
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
size_categories:
- 100K<n<1M
---
From the Frontier Research Team at Takara.ai, we present MicroPajama, a dataset derived from the larger SlimPajama corpus with the Wikipedia subset removed, built for distillation and feature extraction.
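As a rough illustration of the schema in the metadata above, the sketch below loads the train split with the `datasets` library and reads the `text` and `meta.redpajama_set_name` fields. The repo id `takara-ai/MicroPajama` is an assumption; adjust it to the actual Hub path.

```python
# Minimal usage sketch. The repo id below is an assumption, not confirmed by this card.
from datasets import load_dataset

dataset = load_dataset("takara-ai/MicroPajama", split="train")

example = dataset[0]
print(example["text"][:200])                   # raw document text
print(example["meta"]["redpajama_set_name"])   # originating RedPajama/SlimPajama subset
```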
The dataset contains 253,636,240 tokens, counted with the BAAI/bge-large-en-v1.5 WordPiece tokenizer; this count can be reproduced with scripts/tokenize/main.py.
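A minimal sketch of how such a count could be reproduced with the `transformers` tokenizer is shown below; the canonical script is scripts/tokenize/main.py, and details such as batching and whether special tokens are included are assumptions here.

```python
# Hedged token-counting sketch, not the canonical scripts/tokenize/main.py.
# Assumes the same (hypothetical) repo id as above and the BAAI/bge-large-en-v1.5
# tokenizer loaded via transformers.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
dataset = load_dataset("takara-ai/MicroPajama", split="train")

def count_tokens(batch):
    # Tokenize without truncation so every WordPiece token is counted;
    # special tokens are excluded here, which may differ from the official count.
    encodings = tokenizer(batch["text"], add_special_tokens=False, truncation=False)
    return {"num_tokens": [len(ids) for ids in encodings["input_ids"]]}

dataset = dataset.map(count_tokens, batched=True, batch_size=1000)
print(sum(dataset["num_tokens"]))
```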
For research inquiries and press, please reach out to research@takara.ai
Transforming humanity