---
license: apache-2.0

dataset_info:
  features:
    - name: session_id
      dtype: string
    - name: start_time
      dtype: float
    - name: end_time
      dtype: float
    - name: words
      dtype: string
    - name: speaker
      dtype: string
  splits:
    - name: dev
      num_bytes:
      num_examples: 142
    - name: eval
      num_bytes:
      num_examples: 104
  download_size:
  dataset_size:
---

# Dataset for ASR Speaker-Tagging Corrections (Speaker Diarization)

## Description

- This dataset consists of pairs of erroneous ASR output with speaker tagging, generated by an ASR system and a speaker diarization system. Each erroneous source transcription is paired with a human-annotated transcription containing the correct words and speaker tags.
- The data is stored in [SEGment-wise Long-form Speech Transcription annotation](#segment-wise-long-form-speech-transcription-annotation-seglst) (`SegLST`) format, the file format used in the [CHiME challenges](https://www.chimechallenge.org).

Example: `session_ge1nse2c.seglst.json`

```
[
  ...
  {
    "session_id": "session_ge1nse2c",
    "words": "well that is the problem we have erroneous transcript and speaker tagging we want to correct it using large language models",
    "start_time": 181.88,
    "end_time": 193.3,
    "speaker": "speaker1"
  },
  {
    "session_id": "session_ge1nse2c",
    "words": "it seems like a really interesting problem I feel that we can start with very simple methods",
    "start_time": 194.48,
    "end_time": 205.03,
    "speaker": "speaker2"
  },
  ...
]
```
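
Since each `*.seglst.json` file is a plain JSON array of segment dictionaries, it can be read with the standard `json` module. Below is a minimal sketch (the file path reuses the example path from the evaluation section and is only illustrative) that loads one session and groups segments by speaker:

```python
import json
from collections import defaultdict

# Placeholder path; point it at any *.seglst.json file from the dataset.
path = "err_source_text/dev/session_ge1nse2c.json"

with open(path, "r", encoding="utf-8") as f:
    # A list of dicts with session_id, words, start_time, end_time, speaker.
    segments = json.load(f)

# Group the transcribed words by speaker tag, in temporal order.
by_speaker = defaultdict(list)
for seg in sorted(segments, key=lambda s: s["start_time"]):
    by_speaker[seg["speaker"]].append(seg["words"])

for speaker, utterances in by_speaker.items():
    print(speaker, "has", len(utterances), "segments")
```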

## Structure

### Data Split

The dataset is divided into development and evaluation splits:

- Development Data: 142 entries
  - 2 to 4 speakers in each session
  - Approximately 10 to 40 minutes of recording per session
- Evaluation Data: 104 entries
  - 2 to 4 speakers in each session
  - Approximately 10 to 40 minutes of recording per session

### Keys (items)

- `session_id`: Session ID string, e.g., "session_ge1nse2c"
- `words`: Transcription corresponding to the segment
- `start_time`: Start time in seconds
- `end_time`: End time in seconds
- `speaker`: Speaker tag as a string: "speaker<N>"

### Source Datasets

The dataset combines entries from the following sources:

- **Development Sources**:
  - `dev`: 142 sessions
- **Evaluation Sources**:
  - `eval`: 104 sessions


## Access

The dataset can be accessed and downloaded through the Hugging Face Datasets library.
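
For example, a minimal sketch using `datasets.load_dataset` (the repository ID below is a placeholder; substitute this dataset's actual ID on the Hub):

```python
from datasets import load_dataset

# Placeholder repository ID; replace with this dataset's Hub ID.
dataset = load_dataset("your-org/asr-speaker-tagging-corrections")

dev_split = dataset["dev"]    # 142 entries
eval_split = dataset["eval"]  # 104 entries

# Each entry exposes the keys described above.
example = dev_split[0]
print(example["session_id"], example["speaker"], example["words"][:60])
```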


## Evaluation

This dataset can be evaluated with the [MeetEval](https://github.com/fgnt/meeteval) toolkit.

### From PyPI
```
pip install meeteval
```

### From source
```
git clone https://github.com/fgnt/meeteval
pip install -e ./meeteval
```

### Evaluate the corrected SegLST files
```
python -m meeteval.wer cpwer -h err_source_text/dev/session_ge1nse2c.json -r ref_annotate_text/dev/session_ge1nse2c.json
```
Alternatively, after installation you can use the equivalent command:
```
meeteval-wer cpwer -h err_source_text/dev/session_ge1nse2c.json -r ref_annotate_text/dev/session_ge1nse2c.json
```
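
To score every session in a split rather than a single file, the same CLI can be driven from Python. This is only a sketch built on the command above; the directory layout (`err_source_text/dev`, `ref_annotate_text/dev`) follows the example paths and may differ in your setup:

```python
import subprocess
from pathlib import Path

hyp_dir = Path("err_source_text/dev")    # hypothesis (erroneous source) SegLST files
ref_dir = Path("ref_annotate_text/dev")  # reference (human-annotated) SegLST files

for hyp_path in sorted(hyp_dir.glob("*.json")):
    ref_path = ref_dir / hyp_path.name
    # Runs the same cpWER evaluation shown above for each session file.
    subprocess.run(
        ["meeteval-wer", "cpwer", "-h", str(hyp_path), "-r", str(ref_path)],
        check=True,
    )
```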

### References

```bib
@inproceedings{park2024enhancing,
  title={Enhancing speaker diarization with large language models: A contextual beam search approach},
  author={Park, Tae Jin and Dhawan, Kunal and Koluguri, Nithin and Balam, Jagadeesh},
  booktitle={ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={10861--10865},
  year={2024},
  organization={IEEE}
}
```

```bib
@inproceedings{MeetEval23,
  title={MeetEval: A Toolkit for Computation of Word Error Rates for Meeting Transcription Systems},
  author={von Neumann, Thilo and Boeddeker, Christoph and Delcroix, Marc and Haeb-Umbach, Reinhold},
  booktitle={CHiME-2023 Workshop, Dublin, Ireland},
  year={2023}
}
```