---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- en
- id
tags:
- math-rock
- midwest-emo
- mbti-classification
- music-analysis
- multimodal
- audio-chunking
dataset_info:
  features:
  - name: artist
    dtype: string
  - name: song
    dtype: string
  - name: display_name
    dtype: string
  - name: file_name
    dtype: string
  - name: mbti
    dtype:
      class_label:
        names:
          '0': INTJ
          '1': INTP
          '2': ENTJ
          '3': ENTP
          '4': INFJ
          '5': INFP
          '6': ENFJ
          '7': ENFP
          '8': ISTJ
          '9': ISFJ
          '10': ESTJ
          '11': ESFJ
          '12': ISTP
          '13': ISFP
          '14': ESTP
          '15': ESFP
  - name: emotion
    dtype:
      class_label:
        names:
          '0': admiration
          '1': amusement
          '2': anger
          '3': annoyance
          '4': approval
          '5': caring
          '6': confusion
          '7': curiosity
          '8': desire
          '9': disappointment
          '10': disapproval
          '11': disgust
          '12': embarrassment
          '13': excitement
          '14': fear
          '15': gratitude
          '16': grief
          '17': joy
          '18': love
          '19': nervousness
          '20': optimism
          '21': pride
          '22': realization
          '23': relief
          '24': remorse
          '25': sadness
          '26': surprise
          '27': neutral
  - name: vibe
    dtype:
      class_label:
        names:
          '0': Melancholic
          '1': Aggressive
          '2': Dreamy
          '3': Energetic
          '4': Nostalgic
          '5': Atmospheric
          '6': Twinkly
          '7': Complex
  - name: intensity
    dtype: float64
  - name: tempo_bpm
    dtype: float64
  - name: audio
    dtype: audio
  splits:
  - name: train
    num_bytes: 91868396856
    num_examples: 4000
  download_size: 85782279621
  dataset_size: 91868396856
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Neural Math Rock — Augmented & Chunked Dataset
This dataset is a refined, augmented version of the original "Neural Math Rock" collection. It uses a sliding-window audio chunking strategy to segment 2,500 full-length Math Rock and Midwest Emo tracks into high-density training samples for multimodal deep-learning architectures (WavLM + XLM-RoBERTa).
## Dataset Specifications
- Total Samples: Approximately 38,900 audio chunks (derived from 2,500 original tracks).
- Window Duration: 15 seconds per chunk (minimum threshold: 10 seconds).
- Audio Profile: FLAC format, Mono, 16,000 Hz sampling rate (optimized for WavLM feature extraction).
- Total Size: ~14.6 GB (distributed across 389 Parquet shards).
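The sliding-window segmentation described above can be sketched as follows. This is a minimal illustration using NumPy: the 15-second window, 10-second minimum, and 16 kHz mono profile come from the specs, while the hop size is an assumption, since the card does not state the overlap between windows.

```python
import numpy as np

SR = 16_000    # sampling rate from the dataset specs
WINDOW_S = 15  # chunk duration in seconds
MIN_S = 10     # minimum chunk duration to keep
HOP_S = 15     # hop size in seconds (assumed non-overlapping; not stated in the card)

def chunk_audio(samples: np.ndarray) -> list[np.ndarray]:
    """Split a mono waveform into fixed-length windows, keeping a
    trailing remainder only if it meets the minimum duration."""
    win, hop, min_len = WINDOW_S * SR, HOP_S * SR, MIN_S * SR
    chunks = []
    for start in range(0, len(samples), hop):
        piece = samples[start:start + win]
        if len(piece) >= min_len:
            chunks.append(piece)
    return chunks

# A 40-second dummy track yields two full 15 s chunks plus a 10 s remainder.
track = np.zeros(40 * SR, dtype=np.float32)
print([len(c) / SR for c in chunk_audio(track)])  # [15.0, 15.0, 10.0]
```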
## Features and Schema
- `chunk_id`: Unique identifier for each segment (format: `Artist_Song_chunkXXX`).
- `text_missing`: Boolean flag; `True` indicates instrumental segments (low RMS energy, no vocal presence).
- `split`: Pre-defined `train` or `test` assignments using a group-based splitting strategy (anti-leakage) to ensure chunks from the same song do not span multiple splits.
- `mbti` & `emotion`: Ground-truth labels inherited from the parent track.
- `vibe`, `intensity`, `tempo`: Technical metadata for multi-label or auxiliary-task learning.
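The group-based (anti-leakage) split can be reproduced by assigning whole songs, rather than individual chunks, to either side. A minimal stdlib sketch under assumptions: the chunk-ID format follows the schema above, and the 80/20 ratio and seed are illustrative, not the values used to build the dataset.

```python
import random

def group_split(chunk_ids, test_frac=0.2, seed=0):
    """Assign entire songs (groups) to train or test so that chunks
    from one song never span both splits."""
    songs = sorted({cid.rsplit("_chunk", 1)[0] for cid in chunk_ids})
    rng = random.Random(seed)
    rng.shuffle(songs)
    n_test = max(1, round(len(songs) * test_frac))
    test_songs = set(songs[:n_test])
    return {cid: ("test" if cid.rsplit("_chunk", 1)[0] in test_songs else "train")
            for cid in chunk_ids}

# Hypothetical chunk IDs: three chunks for each of three parent songs.
chunks = [f"{s}_chunk{i:03d}"
          for s in ("ArtistA_Song1", "ArtistB_Song2", "ArtistC_Song3")
          for i in range(3)]
splits = group_split(chunks)

# Every chunk of a given song lands in the same split.
for song in ("ArtistA_Song1", "ArtistB_Song2", "ArtistC_Song3"):
    assert len({splits[f"{song}_chunk{i:03d}"] for i in range(3)}) == 1
```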
## Technical Considerations
The dataset uses a weak-supervision approach: labels from the parent track are applied to all of its constituent chunks. For model evaluation, an ensemble-voting mechanism (aggregating predictions from all chunks belonging to a single song) is recommended over single-chunk inference.
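The recommended song-level aggregation can be sketched as a simple majority vote over per-chunk predictions. This is one possible implementation, not the card's prescribed method; the chunk-ID grouping key follows the schema above, and the label values are illustrative.

```python
from collections import Counter, defaultdict

def song_level_vote(chunk_preds: dict[str, str]) -> dict[str, str]:
    """Aggregate per-chunk label predictions into one label per song
    by majority vote (ties broken by first-seen label)."""
    by_song = defaultdict(list)
    for chunk_id, label in chunk_preds.items():
        song = chunk_id.rsplit("_chunk", 1)[0]  # strip the chunk suffix
        by_song[song].append(label)
    return {song: Counter(labels).most_common(1)[0][0]
            for song, labels in by_song.items()}

# Hypothetical per-chunk MBTI predictions for one song.
preds = {
    "ArtistA_Song1_chunk000": "INFP",
    "ArtistA_Song1_chunk001": "INFP",
    "ArtistA_Song1_chunk002": "INTP",
}
print(song_level_vote(preds))  # {'ArtistA_Song1': 'INFP'}
```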
## Usage
Given the dataset's size, loading it with `streaming=True` is recommended to avoid excessive memory consumption.
```python
from datasets import load_dataset

# Load the dataset in streaming mode (no full download up front)
dataset = load_dataset("anggars/neural-mathrock", streaming=True)

# Fetch the first sample
sample = next(iter(dataset["train"]))
print(f"ID: {sample['chunk_id']}")
print(f"Instrumental: {sample['text_missing']}")
```