<p>
This is a mirror of the AMI Corpus acoustic data originally hosted at <a href="http://groups.inf.ed.ac.uk/ami/corpus/">http://groups.inf.ed.ac.uk/ami/corpus/</a>.
</p>

<p>
The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings were recorded in English using three different rooms with different acoustic properties, and include mostly non-native speakers.
</p>

The papers describing the data: <br/>

<ul>
<li>Jean Carletta (2007). Unleashing the killer corpus: experiences in creating the multi-everything AMI Meeting Corpus. Language Resources and Evaluation Journal 41(2): 181-190. <a href="http://homepages.inf.ed.ac.uk/jeanc/carletta.LREC-keynote06.pdf">pdf</a>
</li>
<li>Steve Renals, Thomas Hain, and Hervé Bourlard (2007). Recognition and interpretation of meetings: The AMI and AMIDA projects. In Proc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU '07). <a href="http://www.cstr.ed.ac.uk/downloads/publications/2007/ami-asru2007.pdf">pdf</a></li>
</ul>

