polinaeterna committed
Commit 8b17c82 · 1 parent: 1a63922

fix meta (description and citation)

Files changed (1): ami.py (+34, -43)
ami.py CHANGED
@@ -30,53 +30,44 @@ import os
 import datasets
 
 _CITATION = """\
-@article{DBLP:journals/corr/abs-2106-06909,
-  author    = {Guoguo Chen and
-               Shuzhou Chai and
-               Guanbo Wang and
-               Jiayu Du and
-               Wei{-}Qiang Zhang and
-               Chao Weng and
-               Dan Su and
-               Daniel Povey and
-               Jan Trmal and
-               Junbo Zhang and
-               Mingjie Jin and
-               Sanjeev Khudanpur and
-               Shinji Watanabe and
-               Shuaijiang Zhao and
-               Wei Zou and
-               Xiangang Li and
-               Xuchen Yao and
-               Yongqing Wang and
-               Yujun Wang and
-               Zhao You and
-               Zhiyong Yan},
-  title     = {GigaSpeech: An Evolving, Multi-domain {ASR} Corpus with 10, 000 Hours
-               of Transcribed Audio},
-  journal   = {CoRR},
-  volume    = {abs/2106.06909},
-  year      = {2021},
-  url       = {https://arxiv.org/abs/2106.06909},
-  eprinttype = {arXiv},
-  eprint    = {2106.06909},
-  timestamp = {Wed, 29 Dec 2021 14:29:26 +0100},
-  biburl    = {https://dblp.org/rec/journals/corr/abs-2106-06909.bib},
-  bibsource = {dblp computer science bibliography, https://dblp.org}
+@inproceedings{10.1007/11677482_3,
+  author = {Carletta, Jean and Ashby, Simone and Bourban, Sebastien and Flynn, Mike and Guillemot, Mael and Hain, Thomas and Kadlec, Jaroslav and Karaiskos, Vasilis and Kraaij, Wessel and Kronenthal, Melissa and Lathoud, Guillaume and Lincoln, Mike and Lisowska, Agnes and McCowan, Iain and Post, Wilfried and Reidsma, Dennis and Wellner, Pierre},
+  title = {The AMI Meeting Corpus: A Pre-Announcement},
+  year = {2005},
+  isbn = {3540325492},
+  publisher = {Springer-Verlag},
+  address = {Berlin, Heidelberg},
+  url = {https://doi.org/10.1007/11677482_3},
+  doi = {10.1007/11677482_3},
+  abstract = {The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting
+  recordings. It is being created in the context of a project that is developing meeting
+  browsing technology and will eventually be released publicly. Some of the meetings
+  it contains are naturally occurring, and some are elicited, particularly using a scenario
+  in which the participants play different roles in a design team, taking a design project
+  from kick-off to completion over the course of a day. The corpus is being recorded
+  using a wide range of devices including close-talking and far-field microphones, individual
+  and room-view video cameras, projection, a whiteboard, and individual pens, all of
+  which produce output signals that are synchronized with each other. It is also being
+  hand-annotated for many different phenomena, including orthographic transcription,
+  discourse properties such as named entities and dialogue acts, summaries, emotions,
+  and some head and hand gestures. We describe the data set, including the rationale
+  behind using elicited material, and explain how the material is being recorded, transcribed
+  and annotated.},
+  booktitle = {Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction},
+  pages = {28–39},
+  numpages = {12},
+  location = {Edinburgh, UK},
+  series = {MLMI'05}
 }
 """
 
 _DESCRIPTION = """\
-GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high quality
-labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised
-and unsupervised training. Around 40,000 hours of transcribed audio is first collected from audiobooks, podcasts
-and YouTube, covering both read and spontaneous speaking styles, and a variety of topics, such as arts, science,
-sports, etc. A new forced alignment and segmentation pipeline is proposed to create sentence segments suitable
-for speech recognition training, and to filter out segments with low-quality transcription. For system training,
-GigaSpeech provides five subsets of different sizes, 10h, 250h, 1000h, 2500h, and 10000h.
-For our 10,000-hour XL training subset, we cap the word error rate at 4% during the filtering/validation stage,
-and for all our other smaller training subsets, we cap it at 0%. The DEV and TEST evaluation sets, on the other hand,
-are re-processed by professional human transcribers to ensure high transcription quality.
+The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
+synchronized to a common timeline. These include close-talking and far-field microphones, individual and
+room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
+the participants also have unsynchronized pens available to them that record what is written. The meetings
+were recorded in English using three different rooms with different acoustic properties, and include mostly
+non-native speakers. \n
 """
 
 _HOMEPAGE = "https://groups.inf.ed.ac.uk/ami/corpus/"
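
For context (not part of the diff): in a Hugging Face `datasets` loading script, module-level constants like `_CITATION`, `_DESCRIPTION`, and `_HOMEPAGE` are surfaced through the builder's `_info()` method, which is why fixing them changes the metadata users see on the dataset page. Below is a minimal, hypothetical sketch of that wiring; the class name, feature schema, and sampling rate are assumptions for illustration, not taken from the actual ami.py.

```python
import datasets


class Ami(datasets.GeneratorBasedBuilder):
    """Hypothetical builder skeleton; only _info() is shown.
    A real script also implements _split_generators() and _generate_examples()."""

    def _info(self):
        # The constants fixed by this commit feed directly into DatasetInfo,
        # which datasets renders on the dataset's hub page.
        return datasets.DatasetInfo(
            description=_DESCRIPTION,  # corpus summary fixed by this commit
            citation=_CITATION,        # BibTeX entry fixed by this commit
            homepage=_HOMEPAGE,
            features=datasets.Features(
                {
                    # Assumed minimal schema for a speech corpus.
                    "audio": datasets.Audio(sampling_rate=16_000),
                    "text": datasets.Value("string"),
                }
            ),
        )
```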