Datasets: SeoyeonPark1223
SeoyeonPark1223 committed 8d258bf (parent: 25dffb7): Upload 2 files
Files changed: README.md (+43, -0), pics/mfcc-output.png (+3, -0)

README.md (new file)
---
task_categories:
- feature-extraction
language:
- ko
tags:
- audio
- homecam
- npy
---

## Dataset Overview
- The dataset is a curated collection of `.npy` files containing MFCC features extracted from raw audio recordings (a minimal loading example follows this list).
- It is designed for training and evaluating machine learning models on real-world emergency sound detection and classification tasks.
- The dataset captures diverse audio scenarios, making it a useful resource for developing safety-focused AI systems such as the `SilverAssistant` project.
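
Below is a minimal sketch of how one of these feature files could be loaded and inspected with NumPy. The file name `sample_0001.npy` is a placeholder, not an actual file in this repository, and the `(13, n)` shape follows the output format described in the next section.

```python
import numpy as np

# Load a single MFCC feature file from the dataset.
# "sample_0001.npy" is a placeholder name used for illustration only.
mfcc = np.load("sample_0001.npy")

# Each array is described below as having shape (13, n):
# 13 MFCC coefficients per frame, n frames in the audio segment.
print(mfcc.shape)  # e.g. (13, 216)
print(mfcc.dtype)
```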

## Dataset Description
- The dataset used for this audio model consists of `.npy` files containing MFCC features extracted from raw audio recordings. These recordings include various real-world scenarios, such as:
    - Criminal activities
    - Violence
    - Falls
    - Cries for help
    - Normal indoor sounds

- Feature Extraction Process (an end-to-end sketch follows this list)
    1. Audio Collection:
        - Audio samples were sourced from datasets such as AI Hub to ensure coverage of diverse scenarios.
        - These include both emergency and non-emergency sounds to train the model for accurate classification.
    2. MFCC Extraction:
        - The raw audio signals were processed to extract Mel-Frequency Cepstral Coefficients (MFCC).
        - The MFCC features effectively capture the frequency characteristics of the audio, making them suitable for sound classification tasks.
        ![MFCC Output](./pics/mfcc-output.png)
    3. Output Format:
        - The extracted MFCC features are saved as `13 x n` NumPy arrays, where:
            - `13` is the number of MFCC coefficients (features).
            - `n` is the number of frames in the audio segment.
    4. Saved Dataset:
        - The processed `13 x n` MFCC arrays are stored as `.npy` files, which serve as the direct input to the model.
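
As a rough, end-to-end illustration of the four steps above, the sketch below loads a raw clip, extracts 13 MFCC coefficients with `librosa`, and saves the resulting `13 x n` array as an `.npy` file. The file names, sampling rate, and `librosa` parameters are assumptions for illustration and are not taken from this repository's actual preprocessing code.

```python
import librosa
import numpy as np

# 1. Audio Collection: load one raw recording.
#    "emergency_clip.wav" and the 16 kHz sample rate are placeholder assumptions.
audio, sr = librosa.load("emergency_clip.wav", sr=16000)

# 2. MFCC Extraction: compute 13 Mel-Frequency Cepstral Coefficients per frame.
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)

# 3. Output Format: the result is a NumPy array of shape (13, n),
#    where n is the number of frames in the clip.
print(mfcc.shape)

# 4. Saved Dataset: store the 13 x n array as an .npy file for the model.
np.save("emergency_clip_mfcc.npy", mfcc)
```

Loading the saved file back with `np.load`, as in the overview sketch, recovers the same `13 x n` array.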

- Adaptation in the `SilverAssistant` project: [HuggingFace SilverAudio Model](https://huggingface.co/SilverAvocado/Silver-Audio)

## Data Source
- Source: [AI Hub Emergency Voice/Sound (위급상황 음성/음향)](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=&topMenu=&aihubDataSe=data&dataSetSn=170)

pics/mfcc-output.png (new file, stored with Git LFS)