  - split: validation
    path: data/validation-*
---

# FMA Genre Classification Dataset

The FMA Genre Classification Dataset is a subset of the Free Music Archive (FMA), containing audio samples and genre labels for music classification tasks. This version uses the "small" subset of FMA, which contains 8,000 tracks of 30 seconds each, evenly distributed across 8 genres.

## Dataset Description

### Dataset Summary

This dataset consists of 8,000 audio tracks from the Free Music Archive (FMA), each 30 seconds in length, distributed across 8 musical genres. The audio has been preprocessed to ensure a consistent format and sampling rate (16 kHz). The dataset is split into training (80%) and validation (20%) sets.

### Supported Tasks

- **Audio Classification**: The dataset can be used to train models for music genre classification.
- **Audio Feature Learning**: The dataset is suitable for training audio representation models.

### Languages

The audio content spans multiple languages, but the metadata is in English.

### Dataset Structure

```
Number of tracks: 8,000
Audio length:     30 seconds each
Sampling rate:    16 kHz (resampled from 44.1 kHz)
Format:           MP3
Split:            80% training, 20% validation
```
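
The 80/20 split described above can be sketched as a seeded shuffle at the track level. This is a minimal illustration, not the curators' actual procedure; the track IDs below are placeholders, not real FMA identifiers.

```python
import random

# Hypothetical track IDs standing in for the 8,000 FMA-small tracks.
track_ids = list(range(8_000))

# Seeded shuffle so the split is reproducible.
rng = random.Random(42)
rng.shuffle(track_ids)

# 80% training, 20% validation, matching the structure above.
cut = int(0.8 * len(track_ids))
train_ids, val_ids = track_ids[:cut], track_ids[cut:]
print(len(train_ids), len(val_ids))  # 6400 1600
```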

#### Data Fields

- `audio`: Audio file (MP3 format, 30 s, resampled to 16 kHz)
- `genre`: Genre label (one of 8 classes)
- `track_id`: Unique identifier for the track
- `title`: Track title
- `artist`: Artist name

#### Genres

1. Electronic
2. Experimental
3. Folk
4. Hip-Hop
5. Instrumental
6. International
7. Pop
8. Rock
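
For convenience, here is a plain-Python mapping between the genre names above and integer indices. Whether the hosted dataset encodes `genre` as these integers or as strings is an assumption; check `dataset.features["genre"]` to confirm.

```python
# Genre names in the order listed above. The index encoding is an
# assumption about this card's schema, not a documented guarantee.
GENRES = ["Electronic", "Experimental", "Folk", "Hip-Hop",
          "Instrumental", "International", "Pop", "Rock"]

id2genre = dict(enumerate(GENRES))
genre2id = {name: i for i, name in id2genre.items()}

print(id2genre[3])       # Hip-Hop
print(genre2id["Rock"])  # 7
```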

### Dataset Creation

#### Source Data

The dataset is derived from the Free Music Archive (FMA), specifically the "small" subset. FMA is an open and easily accessible dataset consisting of full-length audio tracks with associated metadata.

[Original FMA Dataset Paper](https://arxiv.org/abs/1612.01840)

#### Preprocessing

1. Audio files are loaded from MP3 format
2. Resampled from 44.1 kHz to 16 kHz
3. Converted to mono if stereo
4. Verified for consistent length (30 seconds)
5. Metadata cleaned and verified to match existing audio files
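
A minimal sketch of steps 2-4 on a synthetic clip, using `scipy.signal.resample_poly`; the curators' actual tooling is not documented here, so this is illustrative only.

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

def preprocess(audio: np.ndarray, sr_in: int = 44_100, sr_out: int = 16_000) -> np.ndarray:
    """Hypothetical helper mirroring steps 2-4 above."""
    # Step 3: convert to mono if stereo by averaging the channels.
    if audio.ndim == 2:
        audio = audio.mean(axis=0)
    # Step 2: resample 44.1 kHz -> 16 kHz by the rational factor 160/441.
    g = gcd(sr_out, sr_in)
    audio = resample_poly(audio, sr_out // g, sr_in // g)
    # Step 4: verify the clip is still exactly 30 seconds.
    assert len(audio) == 30 * sr_out, "clip is not exactly 30 s"
    return audio

# A synthetic 30-second stereo clip stands in for a decoded MP3 (step 1).
stereo = np.random.randn(2, 30 * 44_100).astype(np.float32)
mono_16k = preprocess(stereo)
print(mono_16k.shape)  # (480000,)
```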

### Considerations for Using the Data

#### Social Impact of Dataset

This dataset promotes research in music information retrieval and machine learning while respecting Creative Commons licensing. It helps advance automated music understanding while providing proper attribution to artists.

#### Discussion of Biases

The dataset may contain biases in:

- Genre representation (the equal distribution here may not reflect the real-world distribution of music)
- Western music bias
- English-language bias in metadata
- Artist representation

#### Other Known Limitations

- Limited to 30-second clips
- Genre boundaries can be subjective
- Some tracks might fit multiple genres
- Audio quality varies between tracks

### Additional Information

#### Dataset Curators

This version of the dataset was curated by [rpmon], based on the original FMA dataset created by Michaël Defferrard, Kirell Benzi, Pierre Vandergheynst, and Xavier Bresson.

#### Licensing Information

The dataset is released under Creative Commons licenses; individual tracks may carry different Creative Commons licenses. Please refer to the original FMA dataset for track-level license information.

#### Citation Information

If you use this dataset, please cite the original FMA paper:

```
@inproceedings{defferrard2017fma,
  title     = {{FMA}: A Dataset for Music Analysis},
  author    = {Defferrard, Micha\"el and Benzi, Kirell and Vandergheynst, Pierre and Bresson, Xavier},
  booktitle = {18th International Society for Music Information Retrieval Conference (ISMIR)},
  year      = {2017}
}
```

### Usage Examples

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("rpmon/fma-genre-classification")

# Access training data
train_data = dataset["train"]

# Get an audio sample and its genre
audio = train_data[0]["audio"]
genre = train_data[0]["genre"]

# Process with the AST feature extractor
from transformers import ASTFeatureExtractor

feature_extractor = ASTFeatureExtractor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
inputs = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
```