- HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models
  Paper • 2309.15701 • Published • 2
- CoLLD: Contrastive Layer-to-layer Distillation for Compressing Multilingual Pre-trained Speech Encoders
  Paper • 2309.07707 • Published • 1
- Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling
  Paper • 2311.00430 • Published • 53
- Reproducing Whisper-Style Training Using an Open-Source Toolkit and Publicly Available Data
  Paper • 2309.13876 • Published • 1
Collections including paper arxiv:2311.00430

- Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling
  Paper • 2311.00430 • Published • 53
- Efficient yet Competitive Speech Translation: FBK@IWSLT2022
  Paper • 2205.02629 • Published • 1
- Speechformer: Reducing Information Loss in Direct Speech Translation
  Paper • 2109.04574 • Published • 1
- Joint Speech Translation and Named Entity Recognition
  Paper • 2210.11987 • Published • 1

- Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling
  Paper • 2311.00430 • Published • 53
- MSTRE-Net: Multistreaming Acoustic Modeling for Automatic Lyrics Transcription
  Paper • 2108.02625 • Published • 1
- FLAP: Fast Language-Audio Pre-training
  Paper • 2311.01615 • Published • 16
- Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities
  Paper • 2402.01831 • Published • 12

- Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling
  Paper • 2311.00430 • Published • 53
- distil-whisper/distil-large-v2
  Automatic Speech Recognition • Updated • 67.4k • 491
- distil-whisper/distil-medium.en
  Automatic Speech Recognition • Updated • 288k • 110
- distil-whisper/distil-small.en
  Automatic Speech Recognition • Updated • 24.1k • 80

- Detecting Pretraining Data from Large Language Models
  Paper • 2310.16789 • Published • 9
- Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models
  Paper • 2310.13671 • Published • 17
- AutoMix: Automatically Mixing Language Models
  Paper • 2310.12963 • Published • 14
- An Emulator for Fine-Tuning Large Language Models using Small Language Models
  Paper • 2310.12962 • Published • 13

- Large-Scale Automatic Audiobook Creation
  Paper • 2309.03926 • Published • 52
- Improving Language Model-Based Zero-Shot Text-to-Speech Synthesis with Multi-Scale Acoustic Prompts
  Paper • 2309.11977 • Published • 2
- SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models
  Paper • 2308.16692 • Published • 1
- AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining
  Paper • 2308.05734 • Published • 33

- Large-Scale Automatic Audiobook Creation
  Paper • 2309.03926 • Published • 52
- UniAudio: An Audio Foundation Model Toward Universal Audio Generation
  Paper • 2310.00704 • Published • 16
- Improving Language Model-Based Zero-Shot Text-to-Speech Synthesis with Multi-Scale Acoustic Prompts
  Paper • 2309.11977 • Published • 2
- SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models
  Paper • 2308.16692 • Published • 1

- Democratizing Reasoning Ability: Tailored Learning from Large Language Model
  Paper • 2310.13332 • Published • 14
- Teaching Language Models to Self-Improve through Interactive Demonstrations
  Paper • 2310.13522 • Published • 10
- Self-Convinced Prompting: Few-Shot Question Answering with Repeated Introspection
  Paper • 2310.05035 • Published • 1
- Tuna: Instruction Tuning using Feedback from Large Language Models
  Paper • 2310.13385 • Published • 8

- Woodpecker: Hallucination Correction for Multimodal Large Language Models
  Paper • 2310.16045 • Published • 13
- HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
  Paper • 2310.14566 • Published • 23
- SILC: Improving Vision Language Pretraining with Self-Distillation
  Paper • 2310.13355 • Published • 5
- Conditional Diffusion Distillation
  Paper • 2310.01407 • Published • 19