Speculative Decoding Snippet Not Working

#20
by nateraw - opened

Hi there! It seems the snippet under the speculative decoding section does not work. I just ran it in a new Colab notebook (link here) and got the following:

TypeError: linear(): argument 'input' (position 1) must be Tensor, not BaseModelOutput

I meant to report this issue last week but didn't get around to it. If I remember correctly, the error was different then than it is now.

Related: the explanation in that section starts by saying you can use whisper-tiny, but then also mentions distil-whisper, which isn't used in the snippet at all, so it's a bit confusing (unless distil-whisper is whisper-tiny??). Not sure!

@sanchit-gandhi / @patrickvonplaten can you please have a look when you get the chance? :) Would really love to try this out.

Same question here. I see the "distil-whisper" part comes from https://github.com/huggingface/distil-whisper. Also, adding return_dict=True gets past this error, but then it gets stuck on another one:
File "/home/dev/.cache/pypoetry/virtualenvs/playground-_ijcKkog-py3.10/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 306, in _conv_forward
return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [384, 80, 3], expected input[1, 1, 1500] to have 80 channels, but got 1 channels instead

I've also tried distil-whisper and had issues. The following code works:

from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
# from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# the smaller "draft" model used as the assistant for speculative decoding
assistant_model_id = "openai/whisper-tiny"

assistant_model = AutoModelForCausalLM.from_pretrained(
    assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)

# the main (target) model whose output quality we want to keep
model_id = "openai/whisper-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

# passing the assistant model via generate_kwargs enables speculative decoding
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},
    torch_dtype=torch_dtype,
    device=device,
)

#dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
#sample = dataset[0]["audio"]

#result = pipe(sample)
#print(result["text"])

Aiii, it seems speculative decoding is accidentally broken on "main": https://github.com/huggingface/transformers/pull/26892#issuecomment-1813470053

Looking into reverting the PR: https://github.com/huggingface/transformers/pull/27523

Also, Whisper-v3 doesn't yet work for speculative decoding because the model takes mel features of size 128 rather than 80, so there is a shape mismatch with the existing 80-bin assistant models. We're working on getting Distil-Whisper-v3 out somewhat soon, which should better enable speculative decoding.
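
You can see the mismatch directly from the public checkpoints (a quick sketch; feature_size and num_mel_bins are the standard transformers feature-extractor/config attributes):

from transformers import AutoConfig, AutoFeatureExtractor

# whisper-large-v3's feature extractor produces 128 mel bins...
fe_v3 = AutoFeatureExtractor.from_pretrained("openai/whisper-large-v3")
print(fe_v3.feature_size)  # 128

# ...while whisper-tiny's encoder expects 80 input channels
cfg_tiny = AutoConfig.from_pretrained("openai/whisper-tiny")
print(cfg_tiny.num_mel_bins)  # 80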

Until Distil-Whisper-v3 is out, we'll have to stick to Whisper-v2 for speculative decoding, I'm afraid; see the code here: https://github.com/huggingface/distil-whisper#speculative-decoding
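
For convenience, here's a sketch along the lines of the linked README, using the v2 checkpoints (openai/whisper-large-v2 as the main model and distil-whisper/distil-large-v2 as the assistant); treat the README as the canonical version:

from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# distilled draft model with the same 80-bin mel features as the v2 target
assistant_model_id = "distil-whisper/distil-large-v2"
assistant_model = AutoModelForCausalLM.from_pretrained(
    assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)

model_id = "openai/whisper-large-v2"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

# the assistant model drafts tokens that the main model verifies
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},
    torch_dtype=torch_dtype,
    device=device,
)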
