---
dataset_info:
- config_name: question_only
  features:
  - name: qid
    dtype: string
  - name: question
    dtype: string
  - name: question_no_placeholder
    dtype: string
  splits:
  - name: test
    num_bytes: 115935
    num_examples: 80
  download_size: 67424
  dataset_size: 115935
- config_name: question_with_checklist
  features:
  - name: qid
    dtype: string
  - name: question
    dtype: string
  - name: category
    dtype: string
  - name: checklist_id
    dtype: string
  - name: checklist
    dtype: string
  - name: question_no_placeholder
    dtype: string
  - name: checklist_no_placeholder
    dtype: string
  splits:
  - name: test
    num_bytes: 967513
    num_examples: 543
  download_size: 132694
  dataset_size: 967513
configs:
- config_name: question_only
  data_files:
  - split: test
    path: question_only/test-*
- config_name: question_with_checklist
  data_files:
  - split: test
    path: question_with_checklist/test-*
---

# Dataset Overview

LiveResearchBench provides expert-curated, real-world tasks spanning daily life, enterprise, and academia, each requiring extensive, real-time web search, multi-source reasoning, and cross-domain synthesis. DeepEval offers human-aligned protocols for reliable, systematic evaluation of agentic systems on open-ended deep research tasks.

## 📌 Quick Links

- [Project Page](https://livedeepresearch.github.io/)
- [Paper](https://arxiv.org/abs/2510.14240)
- [Codebase](https://github.com/SalesforceAIResearch/LiveResearchBench)

## Dataset Fields

**Subsets**:
- `question_with_checklist`: Full dataset with questions and per-question checklists
- `question_only`: Questions without checklists

For each entry in the dataset:

```python
{
    'qid': 'market6VWmPyxptfK47civ',  # Unique query identifier
    'question': 'What is the size, growth rate...',  # Research question
    'checklists': [  # List of checklist items for coverage evaluation
        'Does the report provide data for the U.S. electric vehicle market...',
        'Does the report discuss the size, growth rate...',
        # ... more items
    ]
}
```

## Loading the Dataset

### Default: Static Mode (No Placeholders)

The default static mode loads questions and checklists with dates already filled in (e.g., 2025 instead of `{{current_year}}`):

```python
from liveresearchbench.common.io_utils import load_liveresearchbench_dataset

# Load static version
benchmark_data = load_liveresearchbench_dataset(use_realtime=False)
```

**Example:**
- Question: "What is the size, growth rate, and segmentation of the U.S. electric vehicle market in **2025**?"

### Realtime Mode

For dynamic evaluation with current dates, use realtime mode:

```python
# Load realtime version (replaces {{current_year}} etc.)
benchmark_data = load_liveresearchbench_dataset(use_realtime=True)
```

**The following placeholders will be replaced by the current date:**
- `{{current_year}}` → 2025 (current year)
- `{{last_year}}` → 2024 (current year - 1)
- `{{current_date}}` or `{{date}}` → Nov 12, 2025 (current date)

**Example:**
- Question: "What is the size, growth rate, and segmentation of the U.S. electric vehicle market in **2025**?" (automatically updated each year)
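Conceptually, realtime mode amounts to substituting the current date into the templated fields above. The snippet below is a minimal sketch of that behavior, not the library's actual implementation; the `fill_placeholders` helper and the exact date format are assumptions for illustration only:

```python
from datetime import date
from typing import Optional

def fill_placeholders(text: str, today: Optional[date] = None) -> str:
    # Illustrative sketch of the substitution realtime mode describes;
    # not the library's actual implementation.
    today = today or date.today()
    current_date = f"{today:%b} {today.day}, {today.year}"  # e.g., "Nov 12, 2025"
    replacements = {
        "{{current_year}}": str(today.year),
        "{{last_year}}": str(today.year - 1),
        "{{current_date}}": current_date,
        "{{date}}": current_date,
    }
    for placeholder, value in replacements.items():
        text = text.replace(placeholder, value)
    return text

print(fill_placeholders(
    "What was the size of the U.S. electric vehicle market in {{last_year}}?",
    today=date(2025, 11, 12),
))
# -> "What was the size of the U.S. electric vehicle market in 2024?"
```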
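The two subsets can also be read directly with the Hugging Face `datasets` library, using the config names from the YAML header above. The repo id below is a placeholder (substitute this dataset's actual path on the Hub), and the grouping step reflects an inference from the row counts (543 checklist rows vs. 80 questions), not a documented guarantee:

```python
from collections import defaultdict
from datasets import load_dataset

REPO_ID = "<org>/LiveResearchBench"  # placeholder: use this dataset's Hub path

# Config names come from the YAML header above
questions = load_dataset(REPO_ID, "question_only", split="test")                  # 80 rows
checklist_rows = load_dataset(REPO_ID, "question_with_checklist", split="test")  # 543 rows

# question_with_checklist appears to hold one row per checklist item,
# so grouping by qid recovers the per-question checklists
checklists = defaultdict(list)
for row in checklist_rows:
    checklists[row["qid"]].append(row["checklist"])
```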
## Accessing Questions and Checklists

```python
from liveresearchbench.common.io_utils import (
    load_liveresearchbench_dataset,
    get_question_for_qid,
    get_checklists_for_qid
)

# Load dataset
benchmark_data = load_liveresearchbench_dataset()

# Get question for a specific query ID
qid = "market6VWmPyxptfK47civ"
question = get_question_for_qid(benchmark_data, qid)

# Get checklist items for a specific query ID
checklists = get_checklists_for_qid(benchmark_data, qid)
print(f"Found {len(checklists)} checklist items")
```

## Ethical Considerations

This release is for research purposes only, in support of an academic paper. Our datasets and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend that users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying them. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.

## Citation

If you find this dataset helpful, please consider citing:

```bibtex
@article{sfr2025liveresearchbench,
  title={LiveResearchBench: A Live Benchmark for User-Centric Deep Research in the Wild},
  author={Jiayu Wang and Yifei Ming and Riya Dulepet and Qinglin Chen and Austin Xu and Zixuan Ke and Frederic Sala and Aws Albarghouthi and Caiming Xiong and Shafiq Joty},
  year={2025},
  url={https://arxiv.org/abs/2510.14240}
}
```