---
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the [Meta Privacy
  Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
  - en
pretty_name: SA-FARI
configs:
  - config_name: SA-FARI
    data_files:
      - split: train
        path: annotation/sa_fari_train.json
      - split: test
        path: annotation/sa_fari_test.json
license: other
---

# SA-FARI Dataset

License: CC-BY-NC 4.0

SA-FARI is a wildlife camera dataset collected through a collaboration between Meta and CXL.

All videos and pre-processed JPEGImages can be found in `cxl-public-camera-trap`, organized as follows:

```
sa_fari/
├── sa_fari_test_tars/
│   ├── JPEGImages_6fps/
│   └── videos/
├── sa_fari_test/
│   ├── JPEGImages_6fps/
│   └── videos/
├── sa_fari_train_tars/
│   ├── JPEGImages_6fps/
│   └── videos/
└── sa_fari_train/
    ├── JPEGImages_6fps/
    └── videos/
```
- `videos`: The original full-fps videos.
- `JPEGImages_6fps`: For annotation, the videos have been downsampled to 6 fps. This folder contains the downsampled frames compatible with the annotation json files below; enumerating them is sketched next.
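
For orientation, here is a minimal sketch of enumerating the 6 fps frames of one video from a local copy. The per-video subdirectory layout, the `.jpg` extension, and the paths are assumptions; the authoritative frame paths are the `file_names` field in the annotation json described below.

```python
from pathlib import Path

# A sketch, not an official loader: the per-video subdirectory layout and
# the .jpg extension are assumptions about the JPEGImages_6fps folder.
root = Path("sa_fari/sa_fari_test")   # hypothetical local download location
video_name = "example_video"          # placeholder, not a real video name

frame_paths = sorted((root / "JPEGImages_6fps" / video_name).glob("*.jpg"))
print(f"{video_name}: {len(frame_paths)} frames at 6 fps")
```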

This Hugging Face dataset repo contains the annotations:

```
datasets/facebook/SA-FARI/tree/main/
└── annotation/
    ├── sa_fari_test.json
    ├── sa_fari_test_ext.json
    ├── sa_fari_train.json
    └── sa_fari_train_ext.json
```
- `sa_fari_test.json` and `sa_fari_train.json`
  - Annotations in the [SA-Co/VEval] format.
- `sa_fari_test_ext.json` and `sa_fari_train_ext.json`
  - In addition to the [SA-Co/VEval] format, additional metadata has been added to the following fields (see the sketch after this list):
    - `videos`: `video_num_frames`, `video_fps`, `video_creation_datetime` and `location_id`.
    - `categories`: `Kingdom`, `Phylum`, `Class`, `Order`, `Family`, `Genus` and `Species`, when applicable.
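
As a quick way to inspect this metadata, the sketch below downloads the extended test annotations from this repo with `huggingface_hub` and prints the extra fields. It assumes you have accepted the gate and are authenticated (e.g. via `huggingface_hub.login`); the printed keys follow the descriptions above, and the first-entry indexing is purely illustrative.

```python
import json

from huggingface_hub import hf_hub_download

# A minimal sketch: fetch the extended test annotations and peek at the
# extra metadata. Requires accepting the gate and authenticating first.
path = hf_hub_download(
    repo_id="facebook/SA-FARI",
    filename="annotation/sa_fari_test_ext.json",
    repo_type="dataset",
)
with open(path) as f:
    data = json.load(f)

video = data["videos"][0]
print(video["video_fps"], video["video_num_frames"], video["location_id"])

category = data["categories"][0]
# Taxonomy fields are present only "when applicable".
taxonomy = ("Kingdom", "Phylum", "Class", "Order", "Family", "Genus", "Species")
print({k: category.get(k) for k in taxonomy})
```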

All SA-FARI annotation files are compatible with the visualization notebook and offline evaluator developed in the SAM 3 GitHub repository.

## Annotation Format

Below is a breakdown of the format of `sa_fari_test.json` and `sa_fari_train.json`, which is similar to the YTVIS format.

Each annotation json, e.g. `sa_fari_test.json`, contains 5 fields (cross-referencing them is sketched after the list):

- `info`
  - A dict containing the dataset info.
  - E.g. `{'version': 'v1', 'date': '2025-09-24', 'description': 'SA-FARI Test'}`
- `videos`
  - A list of the videos used in the current annotation json.
  - It contains `{id, video_name, file_names, height, width, length}`.
- `annotations`
  - A list of positive masklets and their related info.
  - It contains `{id, segmentations, bboxes, areas, iscrowd, video_id, height, width, category_id, noun_phrase}`.
    - `video_id` should match the `id` field of `videos` above.
    - `category_id` should match the `id` field of `categories` below.
    - `segmentations` is a list of RLEs.
- `categories`
  - A global noun phrase id map, shared across all 3 domains.
  - It contains `{id, name}`.
    - `name` is the noun phrase.
- `video_np_pairs`
  - A list of video-np pairs, both positive and negative, used in the current annotation json.
  - It contains `{id, video_id, category_id, noun_phrase, num_masklets}`.
    - `video_id` should match the `id` field of `videos` above.
    - `category_id` should match the `id` field of `categories` above.
    - When `num_masklets > 0`, it is a positive video-np pair, and its masklets can be found in the `annotations` field.
    - When `num_masklets = 0`, it is a negative video-np pair, meaning no masklet is present at all.
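
To make the cross-references concrete, here is a hedged sketch that loads `sa_fari_test.json` from a local path and joins the five fields; it relies only on the field names documented above, and the relative path is an assumption about where you placed the file.

```python
import json

# A sketch of cross-referencing the five fields, assuming a local copy
# of the annotation json at this relative path.
with open("annotation/sa_fari_test.json") as f:
    data = json.load(f)

videos = {v["id"]: v for v in data["videos"]}
categories = {c["id"]: c for c in data["categories"]}

# Group positive masklets by (video_id, category_id).
masklets = {}
for ann in data["annotations"]:
    masklets.setdefault((ann["video_id"], ann["category_id"]), []).append(ann)

for pair in data["video_np_pairs"][:10]:
    key = (pair["video_id"], pair["category_id"])
    kind = "positive" if pair["num_masklets"] > 0 else "negative"
    print(kind, videos[pair["video_id"]]["video_name"],
          repr(pair["noun_phrase"]), len(masklets.get(key, [])), "masklets")
```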
```
data {
    "info": info
    "videos": [video]
    "annotations": [annotation]
    "categories": [category]
    "video_np_pairs": [video_np_pair]
}

video {
    "id": int
    "video_name": str  # e.g. sav_000000
    "file_names": List[str]
    "height": int
    "width": int
    "length": int
}

annotation {
    "id": int
    "segmentations": List[RLE]
    "bboxes": List[List[int, int, int, int]]
    "areas": List[int]
    "iscrowd": int
    "video_id": str
    "height": int
    "width": int
    "category_id": int
    "noun_phrase": str
}

category {
    "id": int
    "name": str
}

video_np_pair {
    "id": int
    "video_id": str
    "category_id": int
    "noun_phrase": str
    "num_masklets": int
}
```