Load adapter

by unanam - opened

Hi.
I'm working on a project fine-tuning Whisper with LoRA, and I have a question.
How do you load the adapter that you fine-tuned? Did you get the results you expected?

This is the code I used, but the output shows hallucinations such as text repetition.

import torch
from peft import PeftConfig, PeftModel
from transformers import (
    AutomaticSpeechRecognitionPipeline,
    WhisperFeatureExtractor,
    WhisperForConditionalGeneration,
    WhisperProcessor,
    WhisperTokenizer,
)

# PERT_NAME is the repo id / local path of my fine-tuned LoRA adapter;
# language, task, and device are defined elsewhere in my script.
base_model = PeftConfig.from_pretrained(PERT_NAME).base_model_name_or_path

processor = WhisperProcessor.from_pretrained(base_model, language=language, task=task)
tokenizer = WhisperTokenizer.from_pretrained(base_model, language=language, task=task)
feature_extractor = WhisperFeatureExtractor.from_pretrained(base_model)

# Load the base model, attach the adapter, and merge the LoRA weights into it
model = WhisperForConditionalGeneration.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, PERT_NAME)
model = model.merge_and_unload(progressbar=True)

forced_decoder_ids = processor.get_decoder_prompt_ids(language=language, task=task)

pipe = AutomaticSpeechRecognitionPipeline(
    model=model,
    tokenizer=tokenizer,
    feature_extractor=feature_extractor,
    device=device,
)

def transcribe(audio):
    with torch.cuda.amp.autocast():
        result = pipe(
            audio,
            generate_kwargs={"forced_decoder_ids": forced_decoder_ids},
            max_new_tokens=225,
            chunk_length_s=30,
        )
        text = result["text"]
    return text
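
For reference, I call it roughly like this (the file name is just a placeholder for one of my test clips):

# Hypothetical usage: "sample.wav" stands in for one of my audio files
print(transcribe("sample.wav"))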

So, could you let me know how you load your adapter with Whisper?
