How2Sign
Metadata
task_categories:
  - translation
language:
  - en

Information

  • Language: English
  • The dataset contains both RGB (frontal and side views) and keypoint (frontal view only) data. However, the translation text is available only for frontal-view RGB data, so this repository supports only that subset.
  • Gloss annotations are not currently available.
  • Storage (see the streaming sketch after this list)
    • RGB
      • Train: 30.7 GB
      • Validation: 1.65 GB
      • Test: 2.24 GB
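
Given these split sizes, streaming mode fetches samples on demand instead of downloading a full split up front. A minimal sketch, using the repo id from the How To Use section below:

# pip install datasets
from datasets import load_dataset

# Stream the train split lazily instead of downloading all 30.7 GB
train_stream = load_dataset("VieSignLang/how2sign-clips", split="train", streaming=True)

# Peek at the first sample's translation text
print(next(iter(train_stream))["SENTENCE"])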

Structure

Each sample has the following structure:

{
  'VIDEO_ID': Value(dtype='string', id=None),
  'VIDEO_NAME': Value(dtype='string', id=None),
  'SENTENCE_ID': Value(dtype='string', id=None),
  'SENTENCE_NAME': Value(dtype='string', id=None),
  'START_REALIGNED': Value(dtype='float64', id=None),
  'END_REALIGNED': Value(dtype='float64', id=None),
  'SENTENCE': Value(dtype='string', id=None),
  'VIDEO': Value(dtype='large_binary', id=None)
}

For example:

{
  'VIDEO_ID': '--7E2sU6zP4',
  'VIDEO_NAME': '--7E2sU6zP4-5-rgb_front',
  'SENTENCE_ID': '--7E2sU6zP4_10',
  'SENTENCE_NAME': '--7E2sU6zP4_10-5-rgb_front',
  'START_REALIGNED': 129.06,
  'END_REALIGNED': 142.48,
  'SENTENCE': "And I call them decorative elements because basically all they're meant to do is to enrich and color the page.",
  'VIDEO': <video-bytes>
}
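
START_REALIGNED and END_REALIGNED appear to be the clip's start and end times, in seconds, within the original video, so the clip length follows directly. A minimal sketch using the example record above:

# Timestamps are assumed to be seconds within the source video
sample = {"START_REALIGNED": 129.06, "END_REALIGNED": 142.48}
duration = sample["END_REALIGNED"] - sample["START_REALIGNED"]
print(f"Clip duration: {duration:.2f} s")  # Clip duration: 13.42 s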

How To Use

Because the returned video is raw bytes, here is one way to extract its frames and FPS with PyAV:

# pip install av

import io

import av
import numpy as np
from datasets import load_dataset


def extract_frames(video_bytes):
    # Open the video container from an in-memory buffer
    container = av.open(io.BytesIO(video_bytes))

    # Find the video stream
    visual_stream = next(iter(container.streams.video), None)
    if visual_stream is None:
        raise ValueError("No video stream found in the clip")

    # Average frame rate is a Fraction; cast to float for convenience
    video_fps = float(visual_stream.average_rate)

    # Collect decoded frames as (height, width, 3) uint8 arrays
    frames_array = []

    # Demux packets from the video stream and decode each frame
    for packet in container.demux([visual_stream]):
        for frame in packet.decode():
            img_array = np.array(frame.to_image())
            frames_array.append(img_array)

    container.close()

    # Stack into a single (num_frames, height, width, 3) array
    return np.stack(frames_array), video_fps


dataset = load_dataset("VieSignLang/how2sign-clips", split="test", streaming=True)
sample = next(iter(dataset))["VIDEO"]
frames, video_fps = extract_frames(sample)
print(f"Number of frames: {frames.shape[0]}")
print(f"Video FPS: {video_fps}")