LibriSpeech is a corpus of approximately 1000 hours of 16 kHz read English speech, prepared by
Vassil Panayotov with the assistance of Daniel Povey.
The data is derived from audiobooks read aloud for the LibriVox project, and has been carefully segmented and aligned.
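The corpus audio is 16 kHz, 16-bit mono. As a minimal sketch of what that format looks like in practice, the following self-contained example uses only the Python standard library: it synthesizes one second of audio at the LibriSpeech sampling rate, writes it as a WAV file, and reads the header back. (The corpus itself is distributed as FLAC, which the stdlib `wave` module cannot read; the filename `utterance.wav` is a hypothetical stand-in for a converted file.)

```python
import math
import struct
import wave

SAMPLE_RATE = 16000  # LibriSpeech sampling rate

# Synthesize one second of a 440 Hz tone as 16-bit PCM so the
# example runs without any corpus data on disk.
frames = b"".join(
    struct.pack("<h", int(0.3 * 32767 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)))
    for n in range(SAMPLE_RATE)
)

with wave.open("utterance.wav", "wb") as f:
    f.setnchannels(1)            # mono, as in LibriSpeech
    f.setsampwidth(2)            # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(frames)

# Check the sampling rate before feeding a file to an ASR pipeline;
# models trained on 16 kHz audio will misbehave on mismatched rates.
with wave.open("utterance.wav", "rb") as f:
    rate = f.getframerate()
    duration = f.getnframes() / rate
print(rate, duration)  # expect 16000 and 1.0
```

A sampling-rate check like this is a common first sanity step, since acoustic models trained on 16 kHz data generally cannot be applied directly to 8 kHz or 44.1 kHz input.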
<p>
Acoustic models trained on this data set are available at <a href="http://www.kaldi-asr.org/downloads/build/6/trunk/egs/">
kaldi-asr.org</a>, and language models suitable for evaluation can be found at 
<a href="http://www.openslr.org/11/">http://www.openslr.org/11/</a>.
<p>
For more information, see the paper
 "LibriSpeech: an ASR corpus based on public domain audio books",
 Vassil Panayotov, Guoguo Chen, Daniel Povey and Sanjeev Khudanpur, ICASSP 2015
 <a href="http://www.danielpovey.com/files/2015_icassp_librispeech.pdf" target="_blank">(pdf)</a> </p>

