---
license: cc-by-nc-4.0
task_categories:
  - text-to-speech
  - automatic-speech-recognition
language:
  - zh
  - en
tags:
  - non-verbal
  - paralinguistic
  - expressive
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: duration
      dtype: float64
    - name: non_verbal_region
      list: float64
    - name: v1_caption
      dtype: string
    - name: source
      dtype: string
    - name: language
      dtype: string
    - name: label
      dtype: string
    - name: audio
      dtype: audio
    - name: caption
      dtype: string
  splits:
    - name: train
      num_bytes: 22591332353.942
      num_examples: 38718
  download_size: 21619613666
  dataset_size: 22591332353.942
---

# 🎉 🎉 🎉 NonVerbalSpeech-38K: A Scalable Pipeline for Enabling Non-Verbal Speech Generation and Understanding

The official repository for the NonVerbalSpeech-38K (NVS-38K) dataset. (News | Demo Page)

The NVS-38K dataset is constructed from in-the-wild audio sources such as movies, cartoons, and audiobooks (see Section: Source Distribution of NVS-38K). It contains 38,718 samples spanning approximately 131 hours, annotated with 10 non-verbal categories (see Section: Special Tags in NVS-38K). NVS-38K is designed to support both non-verbal speech generation and non-verbal speech understanding tasks (see the figure below).

*(Figure: overview of the non-verbal speech generation and understanding tasks supported by NVS-38K.)*
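As a quick sanity check, the headline numbers can be recomputed from the `duration` column. A minimal sketch (note that the first call downloads the full ~21 GB dataset):

```python
from datasets import load_dataset

# NOTE: this downloads the full dataset (~21 GB) on first use
ds = load_dataset("nonverbalspeech/nonverbalspeech38k", split="train")

total_hours = sum(ds["duration"]) / 3600
print(f"{len(ds)} samples, ~{total_hours:.0f} hours")     # expected: 38718 samples, ~131 hours
print(f"avg clip: {total_hours * 3600 / len(ds):.1f} s")  # roughly 12 s per sample
```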

## 🎉 🎉 🎉 NEWS

### [2025.08.31] (Caption Updated.)

The column "v1_caption" represents [2025.08.06] (Initial release.). The column "caption" contains the updated version ([2025.08.31] (Caption Updated)). All other columns remain the same as in [2025.08.06] (Initial release.). (see Section: Usage)

We have largely resolved the caption–audio misalignment issue caused by inaccurate ASR timestamps.

Specifically, we updated the procedure that integrates non-verbal tags into speech transcripts so that it no longer relies on ASR timestamps. The detailed procedure is shown in the figure below:

*(Figure: the updated procedure for integrating non-verbal tags with speech transcripts.)*
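The figure is not reproduced here, but the experimental discussion below suggests the general shape of the Refined step: split the audio at the boundaries of the detected non-verbal region, transcribe each side independently, and join the transcripts with the tag in between, so no ASR word timestamps are needed. The sketch below is our reading of that pipeline, not the released code; `asr` is a hypothetical stand-in for any transcription function.

```python
def refined_caption(audio, sr, region, label, asr):
    """Hypothetical sketch of the 'Refined' step: place [label] between the
    transcripts of the speech before and after the detected non-verbal region,
    removing any reliance on ASR word timestamps."""
    start, end = (int(t * sr) for t in region)  # region = (start_s, end_s) in seconds
    left = asr(audio[:start])   # transcript of the speech preceding the event
    right = asr(audio[end:])    # transcript of the speech following it
    return f"{left}[{label}]{right}"
```

Under this reading, the Aligned step would then reconcile `left` and `right` against the transcript of the full, unsplit audio to undo segment-level ASR hallucinations, as discussed in the results below.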

Example: (Additional examples can be found on the Demo Page)

"李渊看着秦琼心说,这是我的金殿哪,他在这儿就指着我两个儿子,让我两个儿子心服口服。哎呀。李渊心说,[sniff]<B>大唐江山要紧</B>,我也不能再包庇我的两个儿子了。哎呀,是世民,你也给我跪下。" ([2025.08.06 version], although extended to the sub-sentence level, it remains misaligned with the audio with respect to the tag [sniff].)

updated to:

"李渊看着秦琼心说,这是我的金殿哪,他在这儿就指着我两个儿子,让我两个儿子心服口服。[sniff]哎呀。李渊心说,大唐江山要紧,我也不能再包庇我的两个儿子了,二儿是民,你也给我跪下。" (This is fully aligned with the audio with regard to the tag [sniff].)

We conducted experiments on the updated captions to validate the improvements. The results are as follows:

#### NonVerbal Speech Generation

*(Figure: non-verbal speech generation results comparing the original NVS, Refined, and Refined + Aligned captions.)*

  1. After the Refined update, the CLAP-Score improves noticeably, indicating enhanced controllability, but the other metrics decline. This is likely because the Refined operation splits the audio into smaller segments, which makes ASR more susceptible to hallucinations and semantic inconsistencies, and because the non-verbal timestamps are not perfectly accurate.
  2. After the Refined + Aligned update, the CLAP-Score improves further and approaches the level of Dia, while the other metrics are roughly consistent with the original NVS, resolving the missing words, extra words, and semantic inconsistencies introduced by the Refined update.

#### NonVerbal Speech Understanding (NonVerbal Speech Inline Caption Generation)

*(Figure: non-verbal speech understanding results comparing the original NVS, Refined, and Refined + Aligned captions.)*

  1. After the Refined update, although the distance metrics improved as expected, the WER metric deteriorated. This is likely because splitting the audio into smaller segments makes ASR more prone to hallucinations and semantic inconsistencies, and also because the non-verbal timestamps are not perfectly accurate.

  2. Building on Refined, using the ASR results from the original full-length audio for the Aligned step effectively mitigates the WER degradation, even outperforming the original version (which suffered from poor alignment). Meanwhile, the distance metrics improve further, likely because they are computed from edit distance (see the sketch after this list): more accurate text predictions yield better distance scores.

  3. An unusual observation is that the F1-score decreases both with Refined alone and with Refined + Aligned, dropping to a level close to that of CapSpeech. What CapSpeech, Refined, and Refined + Aligned have in common is that they all achieve better alignment than the original NVS.
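Since several of these metrics reduce to edit distance, here is a minimal WER computation for reference. It uses the third-party `jiwer` package as one common choice; this is an illustration, not the paper's evaluation code:

```python
import jiwer  # pip install jiwer

reference = "the quick brown fox jumps"
hypothesis = "the quick brown box jumps"
print(jiwer.wer(reference, hypothesis))  # 0.2: one substitution over five words
```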

### [2025.08.06] (Initial Release.)

The initial release of our dataset.

Due to ASR-based timestamp limitations, slight misalignments may exist between audio and captions.
However, the detected non-verbal segments themselves are accurate. We plan to further improve alignment in future updates.
In the current version, we extend overlapping non-verbal expressions to the sub-sentence level to mitigate misalignment.

Example:

李渊看着秦琼心说,这是我的金殿哪,他在这儿就指着我两个儿子,让我两个儿子心服口服。哎呀。李渊心说,大唐[sniff]<B>江山</B>要紧,我也不能再包庇我的两个儿子了。哎呀,是世民,你也给我跪下。

changed to:

李渊看着秦琼心说,这是我的金殿哪,他在这儿就指着我两个儿子,让我两个儿子心服口服。哎呀。李渊心说,[sniff]<B>大唐江山要紧</B>,我也不能再包庇我的两个儿子了。哎呀,是世民,你也给我跪下。

This strategy aims to improve alignment between captions and audio. However, we observe that such modifications do not significantly enhance alignment quality. To support further research, we also release the Non-Verbal Regions detected by our model. These can be used for re-annotation with more precise timestamps.
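For instance, a released region can be cut straight out of the decoded waveform. A minimal sketch, assuming the `soundfile` package for writing the clip:

```python
import soundfile as sf
from datasets import load_dataset

ds = load_dataset("nonverbalspeech/nonverbalspeech38k", split="train")
ex = ds[0]

sr = ex["audio"]["sampling_rate"]          # 24000 in the Usage example below
start_s, end_s = ex["non_verbal_region"]   # region boundaries in seconds
clip = ex["audio"]["array"][int(start_s * sr):int(end_s * sr)]

# Save only the detected non-verbal event, e.g. for re-annotation
sf.write(f"{ex['label']}_region.wav", clip, sr)
```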


## 📉 Dataset Overview

### Source Distribution of NVS-38K

| Source          | Crawled (hrs) | NVS-38K (#) |
| --------------- | ------------- | ----------- |
| Radio Dramas    | 17,400        | 21,668      |
| Comedy Sketches | 7,200         | 7,273       |
| Cartoon         | 3,600         | 5,019       |
| Variety Shows   | 1,400         | 1,217       |
| Short Plays     | 1,200         | 809         |
| Speeches        | 1,079         | 158         |
| Documentaries   | 600           | 105         |
| Movies          | 500           | 1,090       |
| Audiobooks      | 263           | 1,375       |
| Toy Unboxing    | 9             | 4           |

Note: “Crawled (hrs)” shows the duration of the original crawled data; “NVS-38K (#)” shows the final sample counts in the proposed NonVerbalSpeech-38K dataset.
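The sample counts in the table can be reproduced from the `source` column:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("nonverbalspeech/nonverbalspeech38k", split="train")

# Tally samples per source; this should match the "NVS-38K (#)" column above
print(Counter(ds["source"]).most_common())
# e.g. [('Radio Dramas', 21668), ('Comedy Sketches', 7273), ('Cartoon', 5019), ...]
```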

### Distributions of Languages, Labels, and Durations in NVS-38K

*(Figure: distributions of languages, labels, and durations in NVS-38K.)*

### Special Tags in NVS-38K

- `[snore]`, `[throatclearing]`, `[crying]`, `[breath]`, `[sniff]`, `[laughing]`, `[coughing]`, `[gasp]`, `[yawn]`, `[sigh]`
- `<B>` and `</B>`: these tags mark spoken words that overlap with a non-verbal expression.
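For preprocessing, both kinds of tags can be stripped (or extracted) with a small regular expression. A minimal sketch using the tag list above:

```python
import re

NV_TAGS = ["snore", "throatclearing", "crying", "breath", "sniff",
           "laughing", "coughing", "gasp", "yawn", "sigh"]
TAG_RE = re.compile(r"\[(?:{})\]|</?B>".format("|".join(NV_TAGS)))

def strip_tags(caption: str) -> str:
    """Remove non-verbal tags and <B>…</B> overlap markers, leaving plain text."""
    return TAG_RE.sub("", caption)

print(strip_tags("没关系,[sigh]<B>能在基地做研究</B>。"))  # -> 没关系,能在基地做研究。
```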

## 🔧 Usage

```python
from datasets import load_dataset, load_from_disk

# First run: download the dataset from the Hub and save a local copy
ds = load_dataset("nonverbalspeech/nonverbalspeech38k")
ds.save_to_disk("./nonverbalspeech38k")

# Later runs: reload the local copy without re-downloading
ds = load_from_disk("./nonverbalspeech38k")
ds["train"][0]
```

```python
# [2025.08.06] (Initial Release.)
{'duration': 17.854, 'non_verbal_region': [9.379500124000003, 10.124500248000004], 'caption': '杨总,我知道数的都是雷政委的名字,没关系,以我的身份没有权利,拥有什么研究成果的,让你受委屈了。没事的, [sigh]<B> 能在红暗基地做研究, </B> 能看到这么丰富的资料。', 'source': 'Radio Dramas', 'language': 'ZH', 'label': 'sigh', 'audio': {'path': None, 'array': array([0.00048828, 0.00057983, 0.00094604, ..., 0.00595093, 0.00473022, 0.        ]), 'sampling_rate': 24000}}

# [2025.08.31] (Caption Updated.)
{'duration': 17.854, 'non_verbal_region': [9.379500124000003, 10.124500248000004], 'v1_caption': '杨总,我知道数的都是雷政委的名字,没关系,以我的身份没有权利,拥有什么研究成果的,让你受委屈了。没事的, [sigh]<B> 能在红暗基地做研究, </B> 能看到这么丰富的资料。', 'source': 'Radio Dramas', 'language': 'ZH', 'label': 'sigh', 'audio': {'path': None, 'array': array([0.00048828, 0.00057983, 0.00094604, ..., 0.00595093, 0.00473022, 0.        ]), 'sampling_rate': 24000}, 'caption': '杨总,我知道数的都是雷政委的名字,没关系,以我的身份没有权利,拥有什么研究成果的,[sigh]让你受委屈了。没事的,能在红暗基地做研究,能看到这么丰富的资料。'}
```
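Building on this, the metadata columns make it easy to select subsets. A small example (assumes `ds` from the snippet above):

```python
# Keep only Chinese samples whose label is "sigh"
sighs_zh = ds["train"].filter(
    lambda ex: ex["language"] == "ZH" and ex["label"] == "sigh"
)
print(len(sighs_zh))
```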

## ⚠️ IMPORTANT Notes

Disclaimer: We do not own the copyright of the audio files contained in NVS-38K; copyright remains with the original creators. The dataset is made available solely for non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.

## 📖 Reference

If you use the NVS-38K dataset, please cite the following papers:

Coming Soon...