
RoBERTa Large trained on the Social Interaction QA (Social IQa) dataset using Hugging Face's script for training multiple-choice QA models. The model was trained for the EACL 2023 paper "MetaQA: Combining Expert Agents for Multi-Skill Question Answering" (https://arxiv.org/abs/2112.01922). The average performance of five models trained with different random seeds on the test set is 74.17 ± 0.64.
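A minimal inference sketch using the standard Transformers multiple-choice API, assuming the model id on this card; the example context, question, and answer choices below are made up for illustration and are not taken from Social IQa:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "haritzpuerto/roberta_large_social_i_qa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)
model.eval()

# Hypothetical Social IQa-style example: one context/question, three choices.
context = "Tracy accidentally bumped into a stranger on the busy subway."
question = "How would Tracy feel afterwards?"
choices = ["apologetic", "proud", "indifferent"]

# Pair the same (context, question) prompt with every answer choice;
# the model scores each pair and the highest logit is the prediction.
prompts = [f"{context} {question}"] * len(choices)
enc = tokenizer(prompts, choices, return_tensors="pt", padding=True)

# AutoModelForMultipleChoice expects tensors of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
answer = choices[logits.argmax(dim=-1).item()]
print(answer)
```

The unsqueeze step is what distinguishes the multiple-choice head from ordinary sequence classification: all choices for one question are grouped into a single batch item so their logits can be compared directly.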


Dataset used to train haritzpuerto/roberta_large_social_i_qa: social_i_qa