danielrosehill committed
Commit 6be4d8f · 1 Parent(s): dc365df

Add 2 voice notes via SQLite backend export

README.md CHANGED
@@ -1,217 +1,74 @@
1
- ---
2
- task_categories:
3
- - automatic-speech-recognition
4
- language:
5
- - en
6
- pretty_name: "Voice Note Audio Dataset"
7
- size_categories:
8
- - "n<1K"
9
- tags:
10
- - speech-to-text
11
- - noise-robustness
12
- - evaluation
13
- - whisper
14
- - real-world-audio
15
- - voice-notes
16
- license: mit
17
- ---
18
-
19
- # Voice Note Audio Dataset
20
-
21
- A curated dataset of real-world voice notes collected by Daniel Rosehill, primarily recorded in and around Jerusalem, Israel. This dataset captures authentic voice recordings in diverse acoustic environments and formats, reflecting typical daily usage patterns with speech-to-text transcription applications.
22
-
23
- **Current Status:** 190+ annotated voice notes with comprehensive metadata
24
-
25
- ## Dataset Overview
26
-
27
- This dataset is part of a larger voice note training collection being curated for STT fine-tuning, entity recognition, and real-world speech recognition evaluation. Unlike studio-quality audio commonly used in speech recognition training, these recordings intentionally include the challenges present in everyday voice note usage:
28
-
29
- - Variable background noise (traffic, conversations, music)
30
- - Different recording environments (indoor, outdoor, vehicles)
31
- - Multiple microphone types and Bluetooth codecs
32
- - Natural speaking patterns and multilingual content
33
- - Real-world audio quality variations
34
-
35
- ## Key Features
36
-
37
- ### Comprehensive Annotations
38
-
39
- Each voice note includes rich metadata stored in JSON format:
40
-
41
- - **Audio Metadata**: Duration, bitrate, sample rate, file format, codec information
42
- - **Transcripts**: AI-generated (uncorrected) and manually corrected ground truth versions
43
- - **Text Metrics**: Word count, character count, lexical diversity, WPM (words per minute)
44
- - **Quality Ratings**: Audio quality assessments, noise type classification
45
- - **Environmental Context**: Recording location, time of day, background conditions
46
- - **Content Classification**: Note type (email draft, to-do, idea, meeting note, etc.)
47
- - **Language Information**: Primary language, multilingual indicators, mixed-language notes
48
- - **Technical Details**: Microphone type, Bluetooth codec, recording device
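For illustration, a single annotation record covering these fields might look roughly like the sketch below; the field names and values are hypothetical, and the authoritative definition lives in `schema/annotation_schema_v1.json`.

```python
# Hypothetical annotation record for one voice note (e.g. audio/1.mp3).
# Field names are illustrative only -- consult schema/annotation_schema_v1.json.
example_annotation = {
    "schema_version": "1.0.0",
    "audio": {"duration_seconds": 62.4, "bitrate_kbps": 128,
              "sample_rate_hz": 44100, "format": "mp3"},
    "text_metrics": {"word_count": 143, "character_count": 812,
                     "lexical_diversity": 0.61, "wpm": 137.5},
    "quality": {"audio_quality": "good", "noise_types": ["traffic"],
                "transcription_quality": "good"},
    "context": {"location": "outdoor", "time_of_day": "morning"},
    "classification": "idea",
    "language": {"primary": "en", "multilingual": False},
    "device": {"microphone": "OnePlus Nord 3 internal", "bluetooth_codec": None},
}
```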
49
 
50
- ### Dataset Statistics
51
 
52
- The repository includes an auto-generated `STATS.md` file with comprehensive metrics:
53
 
54
- - Total audio duration and word count across all recordings
55
- - Average duration and word count per note
56
- - Dataset completeness percentages (transcripts, corrections, annotations)
57
- - Character counts and text complexity metrics
58
 
59
- Statistics are automatically updated when new recordings are added to the dataset.
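A minimal sketch of how such aggregates could be recomputed from the annotation files, assuming per-note duration and word-count fields like those in the hypothetical record above (the real generation is handled by the dataset manager):

```python
import json
from pathlib import Path

# Sketch: aggregate statistics from annotations/*.json.
# The duration_seconds / word_count field names are assumptions.
records = [json.loads(p.read_text()) for p in sorted(Path("annotations").glob("*.json"))]
total_minutes = sum(r["audio"]["duration_seconds"] for r in records) / 60
total_words = sum(r["text_metrics"]["word_count"] for r in records)

print(f"Notes: {len(records)}")
print(f"Total duration: {total_minutes:.1f} min")
print(f"Average words per note: {total_words / max(len(records), 1):.1f}")
```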
60
 
61
- ## Data Organization
62
 
63
  ```
64
- Voice-Note-Audio/
65
- ├── audio/ # Audio files (MP3, WAV, M4A, OGG)
66
- ├── transcripts/
67
- │ ├── uncorrected/ # AI-generated transcripts from STT
68
- │ └── ground_truths/ # Manually corrected transcripts
69
- ├── annotations/ # JSON metadata for each recording
70
- ├── schema/ # Annotation schema (versioned)
71
- │ ├── annotation_schema_v1.json # Schema definition v1.0.0
72
- │ ├── README.md # Schema documentation
73
- │ └── CHANGELOG.md # Version history
74
- ├── STATS.md # Auto-generated dataset statistics
75
- └── README.md # This file
76
  ```
77
 
78
- Files are numbered sequentially (e.g., `1.mp3`, `1.txt`, `1.json`) for easy cross-referencing.
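A minimal loading sketch that relies only on this numbering convention (standard library only; the audio extension is discovered by glob since formats vary):

```python
import json
from pathlib import Path

def load_note(index: int, root: Path = Path(".")):
    """Gather the audio file, transcripts, and annotation that share one index."""
    audio = next((root / "audio").glob(f"{index}.*"), None)  # mp3 / wav / m4a / ogg
    uncorrected = root / "transcripts" / "uncorrected" / f"{index}.txt"
    ground_truth = root / "transcripts" / "ground_truths" / f"{index}.txt"
    annotation = root / "annotations" / f"{index}.json"
    return {
        "audio": audio,
        "uncorrected": uncorrected.read_text() if uncorrected.exists() else None,
        "ground_truth": ground_truth.read_text() if ground_truth.exists() else None,
        "annotation": json.loads(annotation.read_text()) if annotation.exists() else None,
    }

note_1 = load_note(1)
```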
79
-
80
- ## Dataset Management
81
-
82
- This dataset is actively managed using a custom Hugging Face Space application: **Voice Note Dataset Manager**
83
-
84
- The management interface provides:
85
- - Quick upload functionality with batch processing
86
- - Automated metadata extraction and calculation
87
- - Real-time statistics tracking and visualization
88
- - Browse, edit, and delete capabilities
89
- - Comprehensive annotation support
90
- - Automatic stats file generation
91
-
92
- ## Use Cases
93
-
94
- ### 1. STT Model Fine-Tuning
95
- Train and evaluate speech recognition models on real-world voice notes with natural noise and speaking patterns, improving accuracy for everyday recording conditions.
96
-
97
- ### 2. Noise Robustness Evaluation
98
- Benchmark STT systems against various background noise types and acoustic challenges commonly encountered in voice note applications.
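As an illustrative benchmark step (not part of the dataset tooling), word error rate between the uncorrected STT output and the corrected ground truth can be computed with the third-party `jiwer` package:

```python
from pathlib import Path
from jiwer import wer  # pip install jiwer

# Sketch: score the baseline STT transcript of note 1 against its manual correction.
# Assumes a ground-truth file exists for this note.
reference = Path("transcripts/ground_truths/1.txt").read_text()
hypothesis = Path("transcripts/uncorrected/1.txt").read_text()
print(f"WER for note 1: {wer(reference, hypothesis):.2%}")
```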
99
-
100
- ### 3. Entity Recognition Development
101
- Develop specialized NER (Named Entity Recognition) models for voice notes to identify dates, names, locations, organizations, and other entities in spoken content.
102
-
103
- ### 4. Voice Note Classification
104
- Train models to automatically categorize voice notes by type (to-do items, meeting notes, ideas, etc.) based on audio characteristics and content.
105
-
106
- ### 5. Multilingual Speech Research
107
- Study code-switching and multilingual speech patterns in authentic voice recordings containing mixed English, Hebrew, and other languages.
108
-
109
- ## Annotation Schema
110
-
111
- The dataset uses a comprehensive, versioned annotation schema to ensure consistency and enable schema evolution over time.
112
-
113
- **Current Schema Version: 1.0.0** (Released: 2025-10-26)
114
-
115
- ### Schema Versioning
116
-
117
- The annotation schema follows [Semantic Versioning](https://semver.org/) (MAJOR.MINOR.PATCH):
118
- - **MAJOR**: Incompatible schema changes
119
- - **MINOR**: Backward-compatible additions
120
- - **PATCH**: Backward-compatible bug fixes
121
 
122
- Each annotation automatically includes a `schema_version` field, enabling:
123
- - Tracking which schema version was used for each annotation
124
- - Backward compatibility as the schema evolves
125
- - Migration paths when schema updates occur
126
- - Historical analysis of annotation practices
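A consumer of the annotations could gate its parsing on that field along these lines (a sketch; the supported-version policy shown is an assumption, not part of the schema):

```python
import json
from pathlib import Path

SUPPORTED_MAJOR = 1  # this reader understands schema 1.x.x only

def read_annotation(path: Path) -> dict:
    """Load an annotation and reject records from an incompatible MAJOR version."""
    record = json.loads(path.read_text())
    major = int(record.get("schema_version", "0.0.0").split(".")[0])
    if major != SUPPORTED_MAJOR:
        raise ValueError(f"{path.name}: unsupported schema_version {record.get('schema_version')}")
    return record

annotation = read_annotation(Path("annotations/1.json"))
```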
127
 
128
- Schema files and documentation are maintained in the `schema/` directory:
129
- - `annotation_schema_v1.json` - Current schema definition
130
- - `README.md` - Schema usage and documentation
131
- - `CHANGELOG.md` - Version history and changes
132
 
133
- ### Schema Coverage
134
 
135
- #### Classification (31 categories)
136
- Comprehensive note type classification including:
137
- - **Communication**: Email drafts, replies
138
- - **Task Management**: To-do lists, reminders, shopping lists
139
- - **Content Creation**: Blog posts, articles, social media, scripts, presentations
140
- - **Development**: Prompts (general, development, creative), documentation, code comments, bug reports, feature requests
141
- - **Personal & Professional**: Journal entries, memos, ideas, meeting notes, research notes, project planning
142
- - **General**: Questions, other
143
 
144
- #### Audio Defects (10 categories)
145
- Real-world audio challenges for STT evaluation:
146
- - Background noise, music, conversations
147
- - Crying baby, traffic sounds
148
- - Poor quality (distortion, clipping)
149
- - Multiple speakers, wind noise, echo
150
- - Phone ringing/notifications
151
 
152
- #### Content Issues (5 categories)
153
- Recording-level problems:
154
- - Side conversations, partial content
155
- - False starts, thinking aloud
156
- - Self-correction during recording
157
-
158
- #### Languages (7 supported)
159
- Multi-language support for:
160
- - English, Hebrew, Arabic
161
- - Russian, French, Spanish, German
162
-
163
- #### Transcription Quality (5 levels)
164
- STT output assessment:
165
- - Excellent, Good, Fair, Poor, Unusable
166
-
167
- #### Additional Metadata
168
- - **Audio Quality Indicators**: Quality ratings, noise types, environmental factors
169
- - **Technical Specifications**: Microphone types, Bluetooth codecs, audio formats
170
- - **Text Analysis**: Word/character counts, lexical diversity, speaking rate (WPM)
171
- - **Context**: Recording location, time of day, multi-language indicators
172
-
173
- See `schema/README.md` and `schema/CHANGELOG.md` for complete schema documentation and version history.
174
-
175
- ## Recording Equipment
176
-
177
- Voice notes were captured using:
178
- - **OnePlus Nord 3**: Internal microphone (primary device)
179
- - **Poly 5200**: Bluetooth headset microphone
180
- - **ATR 4697**: Professional wired microphone
181
-
182
- Various Bluetooth codecs are documented in the metadata when applicable.
183
-
184
- ## Dataset Growth
185
-
186
- This is an actively growing dataset. New voice notes are continuously added with full annotations and metadata. Check `STATS.md` in the repository for current dataset size and metrics.
187
-
188
- ## Citation
189
-
190
- If you use this dataset in your research, please cite:
191
-
192
- ```bibtex
193
- @dataset{rosehill_voicenote_2024,
194
- author = {Rosehill, Daniel},
195
- title = {Voice Note Audio Dataset},
196
- year = {2024},
197
  publisher = {Hugging Face},
198
- howpublished = {\url{https://huggingface.co/datasets/danielrosehill/Voice-Note-Audio}}
199
  }
200
  ```
201
 
202
- ## License
203
-
204
- This dataset is released under the MIT License, allowing for both commercial and non-commercial use with attribution.
205
-
206
- ## Contact
207
-
208
- **Daniel Rosehill**
209
- - Website: [danielrosehill.com](https://danielrosehill.com)
210
- - Email: public@danielrosehill.com
211
- - Hugging Face: [@danielrosehill](https://huggingface.co/danielrosehill)
212
 
213
- ## Acknowledgments
214
 
215
- AI transcripts provided by [Voicenotes.com](https://voicenotes.com), serving as baseline uncorrected transcripts for comparison with ground truth corrections.
216
 
217
- Dataset management interface built using Gradio and Hugging Face Spaces.
 
1
+ # Voice Notes Dataset
2
 
3
+ ## Dataset Description
4
 
5
+ This dataset contains real-world voice recordings with transcripts and comprehensive annotations.
6
 
7
+ ### Dataset Statistics
8
 
9
+ - **Total Entries**: 2
10
+ - **Audio Files**: 2
11
+ - **Uncorrected Transcripts**: 2
12
+ - **Ground Truth Transcripts**: 0
13
+ - **Annotation Files**: 2
14
+ - **Export Date**: 2025-10-27
15
 
16
+ ### Dataset Structure
17
 
18
  ```
19
+ audio/ # Audio recordings (MP3, etc.)
20
+ ├── 1.mp3
21
+ ├── 2.mp3
22
+ └── ...
23
+
24
+ transcripts/
25
+ ├── uncorrected/ # Original STT transcripts
26
+ │ ├── 1.txt
27
+ │ ├── 2.txt
28
+ │ └── ...
29
+ └── ground_truths/ # Corrected transcripts (when available)
30
+ ├── 1.txt
31
+ ├── 2.txt
32
+ └── ...
33
+
34
+ annotations/ # Metadata and annotations
35
+ ├── 1.json
36
+ ├── 2.json
37
+ └── ...
38
  ```
39
 
40
+ ### Annotation Schema
41
 
42
+ Each annotation file contains:
43
+ - Audio metadata (duration, bitrate, sample rate, etc.)
44
+ - Text metrics (word count, WPM, lexical diversity)
45
+ - Temporal information (recording date, time of day)
46
+ - Custom annotations (audio quality, classification, etc.)
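As a sketch of how the text metrics can be reproduced from a transcript and the note's duration (standard formulas; the duration placeholder would normally come from the annotation JSON):

```python
from pathlib import Path

# Sketch: recompute basic text metrics for note 1.
text = Path("transcripts/uncorrected/1.txt").read_text()
words = text.split()
duration_seconds = 60.0  # placeholder -- read the real duration from annotations/1.json

word_count = len(words)
character_count = len(text)
lexical_diversity = len({w.lower() for w in words}) / max(word_count, 1)  # unique / total
words_per_minute = word_count / (duration_seconds / 60)

print(word_count, character_count, round(lexical_diversity, 2), round(words_per_minute, 1))
```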
47
 
48
+ ### Use Cases
49
 
50
+ 1. **Real-World STT Evaluation**: Test speech-to-text models under non-ideal conditions
51
+ 2. **Voice Note Classification**: Train models to categorize voice notes
52
+ 3. **Audio Quality Assessment**: Analyze impact of background noise on transcription
53
 
54
+ ### Citation
55
 
56
+ If you use this dataset, please cite:
57
 
58
+ ```bibtex
59
+ @dataset{voice_notes_dataset,
60
+ author = {Daniel Rosehill},
61
+ title = {Voice Notes Dataset},
62
+ year = {2025},
63
  publisher = {Hugging Face},
64
+ howpublished = {\url{https://huggingface.co/datasets/danielrosehill/Voice-Note-Audio}}
65
  }
66
  ```
67
 
68
+ ### License
69
 
70
+ This dataset is released under the MIT License, allowing both commercial and non-commercial use with attribution.
71
 
72
+ ### Contact
73
 
74
+ For questions or feedback, please contact: public@danielrosehill.com
annotations/1.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:641dbbf16f7ab795cb972766bdcb923c67ec205b81390fa2811b468e62921a75
3
+ size 650
annotations/2.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:730408fb19fe96aa95017428c16a1f8d8012c7cf4c19d2d50d5a3b38dde592ae
3
+ size 654
audio/1.mp3 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a7ed9e2ba97d3231457b3e699f67130488af59df2827599cecbaa4f054e1ccf1
3
+ size 1524716
audio/2.mp3 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff2b40d06add3f07ca26e609ca0fef0270b9f4e72bbfe33a31bf193bcee7e96b
3
+ size 4384556
transcripts/uncorrected/1.txt ADDED
@@ -0,0 +1 @@
1
+ I'd like to consider a wee factor and then just give me your thoughts about this so currently it's a file based backend what I was wondering is would it make more sense to have a lightweight database backend SQLite let's say and and the important part of the utility which is the Hugging Face dataset push is what I'm using for the classification model would actually be a job whereby locally it will create the dataset from the local backend.

In other words, rather than having this sit in place as files, it's going to be constructed periodically. Basically when I say okay I've uploaded another batch, let's push, would that be easier and more logical to integrate with the front end?
transcripts/uncorrected/2.txt ADDED
@@ -0,0 +1 @@
1
+ Okay, so I'd like to add to the VoiceNote dataset manager. So I have really annotations, there's two main objectives for this project as I currently conceive of it. And I think on the front end it would be useful to, when I'm uploading stuff and annotating, to have two separate sections for it, a little bit more clearly delineated. and so on.

So, if we have delineated, for example, where we have upload new voice note, that can firstly just be called maybe upload, next section transcripts, next section, and by next section I'm defining the headers, next section classification, next section annotations.

So in classification, I'll just add a few more recurrent ones that we should have. Prompt General, Development Prompt, Read Me Dictation, Social Media Post, and then in Annotations.

So content issues call that Audio defects and let add one for a significant background noise In audio quality issues, what I'd like to have actually maybe is, and again, we're going to, I mean, in the process of defining the annotations and might have to sort of work backwards initially, but most of them haven't been annotated yet. I'm not going to start annotating until the schema is defined so it would actually be a lagging annotation process.

The ones that are missing currently are background music. You have background noise but I think background music is actually very important because from a copyright standpoint that could be an issue. and for multi-language don't actually even have English Hebrew I'd have to keep it open-ended as to what other languages are present and I'd like to have one for background conversations actually and tagging by language so English Hebrew Arabic Russian French I'm hard these would be the ones that encounter my local environments a lot