[Dataset viewer: columns audio (duration 0.17 s to 6.84 s) and label (class label, 2 classes: 0 = I+have+one+now, 1 = I+only+have+one).]

Dataset Card for "have_one"

The dataset consists of utterances of "have one" that are cut either from an utterance of "I have one now" or from an utterance of "I only have one". The former tends to have prominence on "have", while the latter tends to have prominence on "one". See github.com/MatsRooth/fiyou for the methodology used to find the utterances on YouTube and to align and cut them with Kaldi.

To put such a dataset on the Hugging Face Hub, start with this directory structure, where the leaf directories contain the wav files.

have_one
└── data
    ├── I+have+one+now
    └── I+only+have+one

Run have_one_hub.py to create the dataset, following the generic Hugging Face methodology for audio datasets.
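
A minimal sketch of what such a script might contain, assuming the standard datasets audiofolder loader (which infers class labels from the two subdirectory names) and a Hub repository named MatsRooth/have_one; the actual have_one_hub.py may differ:

```python
# Sketch of a script like have_one_hub.py (assumed, not the actual file):
# build an audio classification dataset from the directory layout above
# and push it to the Hugging Face Hub.
from datasets import load_dataset, Audio

# The audiofolder loader infers labels from the subdirectory names
# I+have+one+now and I+only+have+one under have_one/data.
ds = load_dataset("audiofolder", data_dir="have_one/data")

# Decode audio at 16 kHz, the sampling rate wav2vec2 expects downstream.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Requires a prior `huggingface-cli login`; the repository name is assumed.
ds.push_to_hub("MatsRooth/have_one")
```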

The dataset is used in the wav2vec2 binary classification model MatsRooth/wav2vec2-base_have_one.
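
For example, the dataset and model can be combined with the transformers audio-classification pipeline; a hedged sketch, assuming the dataset repository is MatsRooth/have_one:

```python
# Classify one clip from the dataset with the companion model.
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("MatsRooth/have_one", split="train")  # repo name assumed
clf = pipeline("audio-classification",
               model="MatsRooth/wav2vec2-base_have_one")

# The pipeline accepts a raw numpy array at the model's sampling rate.
sample = ds[0]["audio"]
print(clf(sample["array"]))  # e.g. [{'label': ..., 'score': ...}, ...]
```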

Cutting with a Kaldi phone alignment often yields a snippet that includes part of the preceding vowel, or that has formant structure at the start of /h/ carrying information about the preceding vowel. Since these vowels differ between the two classes, a classifier can exploit this cue in addition to the intended prosodic difference. This needs to be corrected.
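
One possible correction (an illustrative sketch, not a fix the card prescribes) is to trim a fixed margin from the start of every clip so the coarticulatory cue is removed before training; TRIM_MS is a hypothetical value that would need tuning by inspection:

```python
# Trim a fixed margin from the start of each clip to remove information
# about the preceding vowel. TRIM_MS is a hypothetical value.
from datasets import load_dataset

TRIM_MS = 30

ds = load_dataset("MatsRooth/have_one", split="train")  # repo name assumed

def trim_start(example):
    a = example["audio"]
    start = int(a["sampling_rate"] * TRIM_MS / 1000)
    example["audio"] = {"array": a["array"][start:],
                        "sampling_rate": a["sampling_rate"]}
    return example

ds = ds.map(trim_start)
```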
