---
license: cc-by-4.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
---

# NB-Whisper small (beta)

This is a **_public beta_** of the Norwegian NB-Whisper. NB-Whisper is a series of models for automatic speech recognition (ASR) and speech translation, building upon the foundation laid by [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). All models are trained on 20,000 hours of labeled data.
_Example audio: speech given by His Majesty The King of Norway at the garden party hosted by Their Majesties The King and Queen at the Palace Park on 1 September 2016._
## Model Details

NB-Whisper models are available in five different sizes (the table has links to the other sizes with semi-identical model cards):

| Model Size | Parameters | Availability |
|------------|------------|--------------|
| tiny | 39M | _Will be released in public beta later this summer_ |
| base | 74M | _Will be released in public beta later this summer_ |
| small | 244M | This model, available in public beta |
| medium | 769M | _Will be released in public beta later this summer_ |
| large | 1550M | _Will be released in public beta later this summer_ |

An official release of NB-Whisper models is planned for Fall 2023. Please refer to the OpenAI Whisper model card for more details about the backbone model.

### Model Description

- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
- **Finetuned from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)

### Model Sources

- **Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** _Coming soon_

## Uses

### Direct Use

This is a **_public beta_** release. The models published in this repository are intended for a generalist purpose and are available to third parties.

### Downstream Use

We are confident that NB-Whisper will give better results than the multilingual OpenAI Whisper when the target language is Norwegian. However, the model is still known to show some hallucinations, as well as a tendency to drop parts of the transcript from time to time. Please also note that the transcripts are typically not word for word. Spoken language and written language are often very different, and the model aims to "translate" spoken utterances into grammatically correct written sentences. We strongly believe that the best way to understand these models is to try them yourself.

A significant part of the training material comes from TV subtitles. Subtitles often shorten sentences to make them more readable, and non-essential parts of the utterance are typically dropped. In some cases this is a desired ability, in other cases it is undesired. The final release of these models will provide a mechanism to control this behaviour.

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

These models may exhibit bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (the National Library of Norway) be liable for any results arising from the use made by third parties of these models.

### Recommendations

We recommend that users of NB-Whisper models consider finetuning them for their specific tasks of interest, and that guardrails and appropriate precautions be put in place for any production use. Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
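As a starting point for such finetuning, the checkpoint can be loaded with the standard `transformers` Whisper classes. The sketch below only loads the model and processor; dataset preparation and the training loop are deliberately omitted:

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load the beta checkpoint as a starting point for finetuning.
# Dataset preparation and the Seq2SeqTrainer setup are omitted here.
processor = WhisperProcessor.from_pretrained("NbAiLab/nb-whisper-small-beta")
model = WhisperForConditionalGeneration.from_pretrained("NbAiLab/nb-whisper-small-beta")
```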
## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    "NbAiLab/nb-whisper-small-beta"
)
asr(
    "audio.mp3",
    generate_kwargs={'task': 'transcribe', 'language': 'no'}
)
# {'text': ' Så mange anga kører seg i så viktig sak, så vi får du kører det tilbake med. Om kabaret gudam i at vi skal hjælge. Kør seg vi gjør en uda? Nei noe skal å abelistera sonvorne skrifer. Det er sak, så kjent det bare handling i samtatsen til bargører. Trudet første lask. På den å først så å køre og en gange samme, og så får vi gjør å vorte vorte vorte når vi kjent dit.'}
```

Timestamps can also be retrieved by passing `return_timestamps=True`.

```python
asr(
    "audio.mp3",
    generate_kwargs={'task': 'transcribe', 'language': 'no'},
    return_timestamps=True,
)
# {'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om hva valget dem gjør at vi skal gjøre. Hva skjer vi gjøre nå da? Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget, tror det første år. Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.',
#  'chunks': [{'timestamp': (0.0, 5.34),
#    'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om'},
#   {'timestamp': (5.34, 8.64),
#    'text': ' hva valget dem gjør at vi skal gjøre.'},
#   {'timestamp': (8.64, 10.64), 'text': ' Hva skjer vi gjøre nå da?'},
#   {'timestamp': (10.64, 17.44),
#    'text': ' Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget,'},
#   {'timestamp': (17.44, 19.44), 'text': ' tror det første år.'},
#   {'timestamp': (19.44, 23.94),
#    'text': ' Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.'}]}
```
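The model series also covers speech translation. Translation into English follows the standard Whisper interface; the snippet below is a generic usage sketch and has not been verified against this beta checkpoint:

```python
# Translate Norwegian speech into English text. 'translate' is the standard
# Whisper task for this; output quality for this beta checkpoint is untested.
asr(
    "audio.mp3",
    generate_kwargs={'task': 'translate', 'language': 'no'},
)
```

For recordings longer than 30 seconds, passing `chunk_length_s=30` when constructing the pipeline enables chunked long-form transcription.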
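If you have reference transcripts, the metrics declared in the metadata above (WER and CER) can be computed with Hugging Face's `evaluate` library. This is a minimal sketch with made-up strings, not an official evaluation script:

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Hypothetical reference/hypothesis pair, for illustration only.
references = ["hva skal vi gjøre nå da"]
predictions = ["hva skjer vi gjøre nå da"]

print("WER:", wer_metric.compute(references=references, predictions=predictions))
print("CER:", cer_metric.compute(references=references, predictions=predictions))
```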
## Training Details

### Training Data

The training data comes from Språkbanken and the digital collection at the National Library of Norway (NLN). It includes:

- [NST Norwegian ASR Database (16 kHz)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-54/), and its corresponding [dataset](https://huggingface.co/datasets/NbAiLab/NST)
- Transcribed speeches from the Norwegian Parliament produced by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)

### Training Procedure

#### Preprocessing

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** bf16 mixed precision

#### Speeds, Sizes, Times

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination

[More Information Needed]

## Environmental Impact

Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** TPUv4
- **Hours used:** 1,536
- **Cloud Provider:** Google Cloud
- **Compute Region:** `us-central1`
- **Carbon Emitted:** Total emissions are estimated at 247.77 kgCO₂, of which 100% was directly offset by the cloud provider.

## Technical Specifications

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation

_A paper is coming soon!_

## Acknowledgements

Thanks to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for supporting this project with extensive training resources. Thanks to Google Cloud for supporting us with credits for translating large parts of the corpus. A special thanks to [Sanchit Gandhi](https://huggingface.co/sanchit-gandhi) for providing thorough technical advice on debugging and on getting this to train on Google TPUs.

## Contact

ailab@nb.no