# Fine-tuned LongT5 for Conversational QA
This model is a fine-tuned version of `long-t5-tglobal-base` for conversational question answering (QA). It was fine-tuned on the SQuADv2 and CoQA datasets, as well as on TryoCoQA, Tryolabs' own custom dataset.
An export of this model to the ONNX format is available at `tryolabs/long-t5-tglobal-base-blogpost-cqa-onnx`.
You can find the details of how we fine-tuned the model and built TryoCoQA in our blog post, and you can try the model interactively in the accompanying Hugging Face Space.
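A minimal usage sketch with the `transformers` library is below. The repo id `tryolabs/long-t5-tglobal-base-blogpost-cqa` and the prompt layout in `build_input` (flattening prior turns, the current question, and the passage into one string) are assumptions for illustration; check the blog post for the exact input format used during fine-tuning.

```python
def build_input(history, question, context):
    """Flatten prior (question, answer) turns plus the current question and
    supporting passage into one input string.

    NOTE: this format is a hypothetical sketch, not the confirmed
    fine-tuning format for this model.
    """
    turns = " ".join(f"question: {q} answer: {a}" for q, a in history)
    return f"{turns} question: {question} context: {context}".strip()


if __name__ == "__main__":
    # transformers import kept here so the prompt helper above is usable
    # without the library installed.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model_id = "tryolabs/long-t5-tglobal-base-blogpost-cqa"  # assumed repo id

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

    prompt = build_input(
        history=[("Who wrote the novel?", "Jane Austen")],
        question="When was it published?",
        context="Pride and Prejudice is an 1813 novel by Jane Austen ...",
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For lower-latency inference, the same flow works against the ONNX export via ONNX Runtime instead of PyTorch.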
## Results
- Fine-tuning for 3 epochs on the combined SQuADv2 and CoQA data achieved a 74.29 F1 score on the test set.
- Further fine-tuning for 166 epochs on TryoCoQA achieved a 54.77 F1 score on its test set.