---
license: apache-2.0
---


# Dataset for ASR Speaker-Tagging Corrections (Speaker Diarization)


## Description

- This dataset consists of pairs of erroneous ASR output with speaker tagging, generated by an ASR system and a speaker diarization system. Each erroneous source transcription is paired with a human-annotated transcription that contains the correct transcript and speaker tagging.
- The data is provided in the [SEGment-wise Long-form Speech Transcription annotation](#segment-wise-long-form-speech-transcription-annotation-seglst) (`SegLST`) format, the file format used in the [CHiME challenges](https://www.chimechallenge.org).


Example: `session_ge1nse2c.seglst.json`

```
[
...
    {
        "session_id": "session_ge1nse2c",
        "words": "well that is the problem we have erroneous transcript and speaker tagging we want to correct it using large language models",
        "start_time": 181.88,
        "end_time": 193.3,
        "speaker": "speaker1"
    },
    {
        "session_id": "session_ge1nse2c",
        "words": "it seems like a really interesting problem I feel that we can start with very simple methods",
        "start_time": 194.48,
        "end_time": 205.03,
        "speaker": "speaker2"
    },
...
]
```

## Structure

### Data Split

The dataset is divided into training, development, and evaluation splits:

- Training Data: 222 entries
  - 2 to 4 speakers in each session
  - Approximately 10 to 40 minutes of recording per session
- Development Data: 13 entries
  - 2 speakers in each session
  - Approximately 10 minutes of recording per session
- Evaluation Data: 11 entries
  - 2 speakers in each session
  - Approximately 10 minutes of recording per session

### Keys (items)

- `session_id`: Session ID string (e.g., "session_ge1nse2c").
- `words`: Transcription corresponding to the time stamp (start, end).
- `start_time`: Start time in seconds.
- `end_time`: End time in seconds.
- `speaker`: Speaker tag as a string, "speaker\<N\>".
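
For illustration, here is a minimal sketch of reading one SegLST file with the Python standard library and grouping segments by speaker. The file path is an assumption based on the folder layout described under Source Datasets below.

```python
import json
from collections import defaultdict

# Assumed local path; adjust to wherever the SegLST files are stored.
seglst_path = "err_source_text/dev/session_ge1nse2c.seglst.json"

with open(seglst_path, "r", encoding="utf-8") as f:
    segments = json.load(f)  # a SegLST file is a JSON list of segment dicts

# Group segments by speaker tag using the keys listed above.
turns_per_speaker = defaultdict(list)
for seg in segments:
    turns_per_speaker[seg["speaker"]].append(
        (seg["start_time"], seg["end_time"], seg["words"])
    )

for speaker, turns in sorted(turns_per_speaker.items()):
    print(f"{speaker}: {len(turns)} segments")
```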

### Source Datasets

- `err_source_text`: The erroneous ASR and diarization output to be corrected. Contains `dev` and `eval` folders.
- `ref_annotated_text`: The human-annotated ground truth for evaluation. Only the `dev` split is included.

- **Training Sources**:
  - `dev`: 222 sessions

- **Development Sources**:
  - `dev`: 13 sessions

- **Evaluation Sources**:
  - `eval`: 11 Sessions

## Access

The dataset can be accessed and downloaded through the Hugging Face Datasets library (i.e., this repository).
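
As a sketch, the repository files can also be fetched with the `huggingface_hub` library; the repository ID below is a placeholder and should be replaced with this dataset's actual ID.

```python
from huggingface_hub import snapshot_download

# Placeholder repository ID; replace with this dataset's actual "<org>/<name>" ID.
local_dir = snapshot_download(repo_id="<org>/<dataset-name>", repo_type="dataset")
print("Files downloaded to:", local_dir)
```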

## Evaluation

This dataset can be evaluated with the [MeetEval](https://github.com/fgnt/meeteval) toolkit.

### From PyPI
```
pip install meeteval
```

### From source
```
git clone https://github.com/fgnt/meeteval
pip install -e ./meeteval
```

### Evaluate the corrected SegLST files
```
python -m meeteval.wer cpwer -h err_source_text/dev/session_ge1nse2c.json -r ref_annotated_text/dev/session_ge1nse2c.json
```
Alternatively, after installation you can use the `meeteval-wer` command directly:
```
meeteval-wer cpwer -h err_source_text/dev/session_ge1nse2c.json -r ref_annotated_text/dev/session_ge1nse2c.json
```
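
To score every session in a split, a small wrapper can loop the CLI call above over all files. This is a sketch that assumes the folder layout described under Source Datasets; the exact file naming may differ.

```python
import subprocess
from pathlib import Path

# Assumed folder layout from the "Source Datasets" section.
hyp_dir = Path("err_source_text/dev")
ref_dir = Path("ref_annotated_text/dev")

for hyp_file in sorted(hyp_dir.glob("*.json")):
    ref_file = ref_dir / hyp_file.name
    if not ref_file.exists():
        print(f"skipping {hyp_file.name}: no matching reference")
        continue
    # Invoke the same MeetEval CLI command documented above for each session.
    subprocess.run(
        ["meeteval-wer", "cpwer", "-h", str(hyp_file), "-r", str(ref_file)],
        check=True,
    )
```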

### References

```bib
@inproceedings{park2024enhancing,
  title={Enhancing speaker diarization with large language models: A contextual beam search approach},
  author={Park, Tae Jin and Dhawan, Kunal and Koluguri, Nithin and Balam, Jagadeesh},
  booktitle={ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={10861--10865},
  year={2024},
  organization={IEEE}
}
```

```bib
@inproceedings{MeetEval23,
  title={MeetEval: A Toolkit for Computation of Word Error Rates for Meeting Transcription Systems},
  author={von Neumann, Thilo and Boeddeker, Christoph and Delcroix, Marc and Haeb-Umbach, Reinhold},
  booktitle={CHiME-2023 Workshop, Dublin, Ireland},
  year={2023}
}
```