- split: train
  path: data/train-*
---
# Re-Upload
This repository is a re-upload of [agkphysics/AudioSet](https://huggingface.co/datasets/agkphysics/AudioSet) in Parquet format, with all audio resampled to 16 kHz using `torchaudio.transforms.Resample`.
# Author's Description
> Audio Set: An ontology and human-labeled dataset for audio events
>
> Audio event recognition, the human-like ability to identify and relate sounds from audio, is a nascent problem in machine perception. Comparable problems such as object detection in images have reaped enormous benefits from comprehensive datasets - principally ImageNet. This paper describes the creation of Audio Set, a large-scale dataset of manually-annotated audio events that endeavors to bridge the gap in data availability between image and audio research. Using a carefully structured hierarchical ontology of 632 audio classes guided by the literature and manual curation, we collect data from human labelers to probe the presence of specific audio classes in 10 second segments of YouTube videos. Segments are proposed for labeling using searches based on metadata, context (e.g., links), and content analysis. The result is a dataset of unprecedented breadth and size that will, we hope, substantially stimulate the development of high-performance audio event recognizers.
>
> Jort F. Gemmeke; Daniel P. W. Ellis; Dylan Freedman; Aren Jansen; Wade Lawrence; R. Channing Moore; Manoj Plakal; Marvin Ritter et al., [10.1109/ICASSP.2017.7952261](https://ieeexplore.ieee.org/document/7952261)
# License
AudioSet is published under the CC-BY-4.0 license. Individual contribution information can be viewed at [research.google.com](https://research.google.com/audioset/dataset/index.html).
# Citation
```bibtex
@inproceedings{jort_audioset_2017,
  title = {Audio Set: An ontology and human-labeled dataset for audio events},
  author = {Jort F. Gemmeke and Daniel P. W. Ellis and Dylan Freedman and Aren Jansen and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin Ritter},
  year = {2017},
  booktitle = {Proc. IEEE ICASSP 2017},
  address = {New Orleans, LA}
}
```