
TalkBank Batchalign CHATWhisper

CHATWhisper is a series of ASR models designed specifically for the task of Language Sample Analysis (LSA), released by the TalkBank project. The models deliver superior performance on conversational speech transcripts, particularly in the analysis of filled pauses, retracings, and stuttering.

The models are fine-tuned from openai/whisper-large-v2 using a LoRA adapter (alpha=32, rank=16). We will update the model card with evaluation performance shortly.
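As a rough illustration of the adapter setup described above, the following sketch configures a LoRA adapter with the stated alpha and rank on top of whisper-large-v2 using the peft library. The target modules and dropout value are assumptions and may differ from the actual training recipe.

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Base checkpoint named on the model card.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")

# rank and alpha follow the card; target_modules and dropout are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```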

Usage

The models can be used directly as a Whisper-class ASR model, following the same instructions released by OpenAI (see the sketch below). Alternatively, to obtain the full analyses the model supports, it is best combined with the TalkBank Batchalign suite of analysis software, available here, using transcribe mode with the --whisper flag.
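A minimal sketch of direct use through the Hugging Face transformers ASR pipeline is shown below. The model identifier and audio filename are placeholders; substitute the repository name of this card and your own audio file.

```python
import torch
from transformers import pipeline

# "talkbank/CHATWhisper" is a placeholder id -- use this card's actual repository name.
asr = pipeline(
    "automatic-speech-recognition",
    model="talkbank/CHATWhisper",
    chunk_length_s=30,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# Transcribe a local audio file (placeholder path).
result = asr("sample.wav")
print(result["text"])
```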

Data

The models are trained on English Control Protocol samples of conversational speech from the AphasiaBank project, drawn from three separate corpora.
