---
license: apache-2.0
language:
  - en
  - zh
tags:
  - long-context
  - infllm
---

# InfLLM-V2 Long-Context Training Dataset with 5B Tokens

**Project Links:** [Paper](https://arxiv.org/abs/2509.24663) · [InfLLM-V2 Models] · [CUDA Kernel Code]


## 🚀 About InfLLM-V2

InfLLM-V2 is a native sparse attention framework designed for the efficient processing of long-sequence texts. Its core advantage is the ability to maintain high performance comparable to dense attention in short-text scenarios—without any extra parameters—while seamlessly switching to a sparse mode for long-text scenarios, achieving significant end-to-end acceleration.

To support community reproduction and further exploration, we are open-sourcing the full suite of resources for the InfLLM-V2 project; this dataset is one part of that release.

## ✨ Dataset Description

This dataset contains 5B tokens of long-text data used for training InfLLM-V2.

We demonstrate that only 5B tokens of high-quality long-text data are needed to successfully unlock the model's powerful sparse attention capabilities, without resorting to the trillion-scale data required by other methods. Using this dataset, researchers can efficiently reproduce our results or explore more advanced training methods for long-context models.

## Data Composition and Specifications

### 1. Data Composition

This dataset is a carefully curated mixture from sources including web data, source code, scientific papers, and Wikipedia, augmented with a selection of high-quality in-house data.

### 2. Specifications

- **Total Tokens:** approximately 5 billion (5B).
- **Tokenizer:** processed using the tokenizer from MiniCPM4.
- **Data Format:** sharded Parquet (`.parquet`).
- **Data Fields** (see the sketch below):
  - `input_ids` (`list[int]`): the list of encoded token IDs.
  - `text` (`string`): the original text.
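
For a quick look at this schema, here is a minimal sketch that inspects one streamed record and decodes its IDs. The tokenizer repo ID (`openbmb/MiniCPM4-8B`) is an assumption for illustration only; this card only states that the data was tokenized with MiniCPM4.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("openbmb/InfLLM-V2-data-5B", split="train", streaming=True)
record = next(iter(ds))

print(len(record["input_ids"]), "tokens in this document")  # pre-tokenized length
print(record["text"][:200])                                 # start of the raw text

# Decode a few of the pre-tokenized IDs back to text.
# NOTE: the exact tokenizer repo ID below is an assumption, not specified on this card.
tok = AutoTokenizer.from_pretrained("openbmb/MiniCPM4-8B", trust_remote_code=True)
print(tok.decode(record["input_ids"][:50]))
```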

## How to Use

Given the large size of the dataset, it is highly recommended to load it in streaming mode using the Hugging Face datasets library to avoid memory exhaustion.

```python
from datasets import load_dataset

# Recommended: load in streaming mode to save memory
ds = load_dataset("openbmb/InfLLM-V2-data-5B", split="train", streaming=True)
```
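
You can then iterate over the stream directly. A short sketch, continuing from the `ds` defined above (field names as in the specification; the shuffle buffer size is arbitrary):

```python
from itertools import islice

# Peek at a few examples without materializing the whole dataset.
for example in islice(ds, 3):
    print(len(example["input_ids"]), "tokens:", example["text"][:80].replace("\n", " "))

# Optional: approximate shuffling for training, using a streaming buffer.
shuffled = ds.shuffle(buffer_size=10_000, seed=42)
```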

## The InfLLM-V2 Training Workflow

The long-context capability of InfLLM-V2 is achieved through continued training on high-quality long-text data.

- **Step 1: Start from the base model.**
- **Step 2: Continue training on this dataset.** Use this dataset (InfLLM-V2-data-5B) to perform continued training on the base model; a minimal sketch follows this list.
- **Step 3: Get the final long-context model.** The result is InfLLM-V2-Long-Sparse-Base, the final model after training, equipped with powerful long-context and sparse attention capabilities.
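
For orientation only, below is a minimal continued-pretraining sketch over this dataset. It is not the authors' exact recipe: the base-model ID, context length, step count, and optimizer settings are placeholders, and the InfLLM-V2 sparse-attention kernels themselves come from the project's code release.

```python
import torch
from itertools import islice
from datasets import load_dataset
from transformers import AutoModelForCausalLM

BASE_MODEL = "openbmb/MiniCPM4-8B"   # placeholder: substitute the actual base model
MAX_LEN = 32_768                     # placeholder training context length
NUM_STEPS = 100                      # placeholder: a real run consumes the full 5B tokens

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, trust_remote_code=True
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

ds = load_dataset("openbmb/InfLLM-V2-data-5B", split="train", streaming=True)

model.train()
for step, example in enumerate(islice(ds, NUM_STEPS)):
    # The data is already tokenized; truncate each document to the training context length.
    input_ids = torch.tensor(example["input_ids"][:MAX_LEN], device=device).unsqueeze(0)
    loss = model(input_ids=input_ids, labels=input_ids).loss  # standard causal LM loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if step % 10 == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```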


## Citation

If you use our work in your research, please cite our paper:

```bibtex
@misc{zhao2025infllmv2densesparseswitchableattention,
      title={InfLLM-V2: Dense-Sparse Switchable Attention for Seamless Short-to-Long Adaptation},
      author={Weilin Zhao and Zihan Zhou and Zhou Su and Chaojun Xiao and Yuxuan Li and Yanghao Li and Yudi Zhang and Weilun Zhao and Zhen Li and Yuxiang Huang and Ao Sun and Xu Han and Zhiyuan Liu},
      year={2025},
      eprint={2509.24663},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.24663},
}
```