bofenghuang committed
Commit 2b478a5
1 Parent(s): 386e87b

updt README.md

Files changed (1)
  1. README.md +7 -1
README.md CHANGED
@@ -108,7 +108,13 @@ model-index:
 
 # Fine-tuned wav2vec2-FR-7K-large model for ASR in French
 
-[![Model architecture](https://img.shields.io/badge/Model_Architecture-Wav2Vec2--CTC-lightgrey)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-315M-lightgrey)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-French-lightgrey)](#datasets)
+<style>
+img {
+display: inline;
+}
+</style>
+
+[![Model architecture](https://img.shields.io/badge/Model_Architecture-Wav2Vec2--CTC-lightgrey)](#model-architecture) [![Model size](https://img.shields.io/badge/Params-315M-lightgrey)](#model-architecture) [![Language](https://img.shields.io/badge/Language-French-lightgrey)](#datasets)
 
 This model is a fine-tuned version of [LeBenchmark/wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large), trained on a composite dataset comprising of over 2200 hours of French speech audio, using the train and validation splits of [Common Voice 11.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0), [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech), [Voxpopuli](https://github.com/facebookresearch/voxpopuli), [Multilingual TEDx](http://www.openslr.org/100), [MediaSpeech](https://www.openslr.org/108), and [African Accented French](https://huggingface.co/datasets/gigant/african_accented_french). When using the model make sure that your speech input is also sampled at 16Khz.
 
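
The README text above notes that input audio must be sampled at 16 kHz and that the model is a Wav2Vec2-CTC checkpoint. The snippet below is a minimal inference sketch under those assumptions, using the standard 🤗 Transformers Wav2Vec2 classes; the repo id and audio filename are placeholders, not values taken from this commit.

```python
# Minimal sketch: transcribe a French audio file with a Wav2Vec2-CTC checkpoint.
# The repo id and "speech_fr.wav" are placeholders for illustration only.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "bofenghuang/wav2vec2-FR-7K-large-ft"  # placeholder Hub repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# The model expects 16 kHz input, so resample anything else first.
waveform, sample_rate = torchaudio.load("speech_fr.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding to plain text.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```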