
This repository contains the development data files used in the SISAP2025 indexing challenge.

Datasets

Tasks

All of these files are development files; the evaluation process will use different query sets and subsets.

In addition to PUBMED23 and Google QA, we provide benchmark files for CCNEWS and Yahoo QA to help with tuning and experimentation.

h5 datasets

The benchmark files are provided in HDF5 (h5) format and contain the following datasets and attributes:

πŸ—‚οΈ  HDF5.File: (read-only) benchmark-dev-pubmed23.h5
β”œβ”€ πŸ“‚ itest
β”‚  β”œβ”€ 🏷️  algo
β”‚  β”œβ”€ 🏷️  querytime
β”‚  β”œβ”€ πŸ”’ dists
β”‚  β”œβ”€ πŸ”’ knns
β”‚  └─ πŸ”’ queries
β”œβ”€ πŸ“‚ otest
β”‚  β”œβ”€ 🏷️  algo
β”‚  β”œβ”€ 🏷️  querytime
β”‚  β”œβ”€ πŸ”’ dists
β”‚  β”œβ”€ πŸ”’ knns
β”‚  └─ πŸ”’ queries
└─ πŸ”’ train

Where

  • train is the main dataset to be indexed (a $384 \times n$ matrix)
  • otest/queries is the out-of-distribution query set (a $384 \times n$ matrix)
  • otest/knns (identifiers starting at 1) and otest/dists (distances) are the pre-computed gold standard for the out-of-distribution queries ($384 \times 11000$ matrices)
  • attributes of otest:
    • algo: the algorithm used to compute the gold standard.
    • querytime: the time in seconds needed to compute the gold standard (on an Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz with 60 threads)
  • itest/queries is the in-distribution query set, useful for testing and for comparison against the out-of-distribution queries (a $384 \times n$ matrix)
  • itest/knns (identifiers starting at 1) and itest/dists (distances) are the pre-computed gold standard for the in-distribution queries ($384 \times 11000$ matrices)
  • attributes of itest:
    • algo: the algorithm used to compute the gold standard.
    • querytime: the time in seconds needed to compute the gold standard (on an Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz with 60 threads)
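The layout above can be read with h5py and NumPy. The following is a minimal sketch: the file name `toy-benchmark.h5` and the shrunken dimensions are synthetic stand-ins (not the real benchmark files), created here only so the example is self-contained. Note the 1-based gold-standard identifiers, which are shifted to 0-based for NumPy indexing.

```python
import numpy as np
import h5py

# Build a tiny synthetic file with the same layout as the benchmark files
# (dimensions shrunk for illustration: d=4 instead of 384, n=100, 5 queries, k=3).
d, n, nq, k = 4, 100, 5, 3
rng = np.random.default_rng(0)
with h5py.File("toy-benchmark.h5", "w") as h5:
    h5.create_dataset("train", data=rng.normal(size=(d, n)).astype(np.float32))
    g = h5.create_group("otest")
    g.create_dataset("queries", data=rng.normal(size=(d, nq)).astype(np.float32))
    g.create_dataset("knns", data=rng.integers(1, n + 1, size=(k, nq)))  # 1-based ids
    g.create_dataset("dists", data=rng.random(size=(k, nq)).astype(np.float32))
    g.attrs["algo"] = "exhaustive"        # hypothetical attribute values
    g.attrs["querytime"] = 1.23

# Read it back exactly as you would read a real benchmark-dev-*.h5 file.
with h5py.File("toy-benchmark.h5", "r") as h5:
    train = h5["train"][:]            # matrix to be indexed
    queries = h5["otest/queries"][:]  # out-of-distribution queries
    knns = h5["otest/knns"][:] - 1    # gold standard, shifted to 0-based ids
    algo = h5["otest"].attrs["algo"]  # attribute describing the gold standard
```

The same access pattern applies to the `itest` group for the in-distribution queries.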

Task2 h5 files

The benchmark-dev-*.h5 files can be used for development; both of their gold standards follow Task 1. For Task 2 we provide an additional gold standard, allknn-benchmark-dev-gooaq.h5, with the following structure:

πŸ—‚οΈ  HDF5.File: (read-only) allknn-benchmark-dev-gooaq.h5
β”œβ”€ 🏷️  algo
β”œβ”€ 🏷️  querytime
β”œβ”€ πŸ”’ dists 
└─ πŸ”’ knns

The descriptions of these h5 datasets are similar to the previous ones; however:

  • knns (identifiers) and dists (distances) correspond to the outgoing edges of the $k$ nearest neighbor graph, i.e., the adjacency list of the graph (a $16 \times n$ matrix each).
  • the edges include a self-loop, which is ignored by our scoring function.
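Filtering out the self-loops when consuming this file can be sketched as follows. The file `toy-allknn.h5` and its small dimensions are synthetic stand-ins built in place so the example runs on its own; the real file is a $16 \times n$ adjacency matrix with 1-based identifiers.

```python
import numpy as np
import h5py

# Synthetic stand-in for the all-knn gold standard: column i lists the k
# neighbors of point i (1-based), including point i itself as a self-loop.
n, k = 6, 4
rng = np.random.default_rng(1)
knns = rng.integers(1, n + 1, size=(k, n))
knns[0, :] = np.arange(1, n + 1)  # place the self-loop in the first row
with h5py.File("toy-allknn.h5", "w") as h5:
    h5.create_dataset("knns", data=knns)
    h5.create_dataset("dists", data=rng.random((k, n)).astype(np.float32))

with h5py.File("toy-allknn.h5", "r") as h5:
    g = h5["knns"][:]
    ids = np.arange(1, g.shape[1] + 1)
    # True where an edge is NOT a self-loop; the scoring function likewise
    # ignores the entry where column i points back at point i.
    mask = g != ids[np.newaxis, :]
```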