
Description

Checkpoint of MetaQA from the paper MetaQA: Combining Expert Agents for Multi-Skill Question Answering (https://arxiv.org/abs/2112.01922).

How to Use

from inference import MetaQA, PredictionRequest
metaqa = MetaQA("haritzpuerto/MetaQA")
# Run the QA agents on the input question and context. For this example, we use mock
# outputs from extractive QA agents (see the sketch after this example for how real
# agent predictions could be obtained).
list_preds = [
    ('Utah', 0.1442876160144806),
    ('DOC] [TLE] 1886', 0.10822545737028122),
    ('Utah Territory', 0.6455602645874023),
    ('Eli Murray opposed the', 0.352359801530838),
    ('Utah', 0.48052430152893066),
    ('Utah Territory', 0.35186105966567993),
    ('Utah', 0.8328599333763123),
    ('Utah', 0.3405868709087372),
]
# add ("", 0.0) to the list of predictions until the size is 16 (because MetaQA was trained on 16 datasets/agents including other formats, not only extractive)
for i in range(16-len(list_preds)):
    list_preds.append(("", 0.0))

request = PredictionRequest()
request.input_question = "While serving as Governor of this territory, 1880-1886, Eli Murray opposed the advancement of polygamy?"
request.input_predictions = list_preds

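# MetaQA selects the best answer among the agents' predictions.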
(pred, agent_name, metaqa_score, agent_score) = metaqa.run_metaqa(request)
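
The mock predictions above stand in for the (answer, score) pairs that each expert QA agent would produce. As a minimal sketch of how such a pair could be obtained from a real extractive QA agent, the snippet below runs a Hugging Face question-answering pipeline; the model name, the context string, and the variable names are placeholders, not the agents used in the paper.

from transformers import pipeline

# Hypothetical extractive QA agent; any span-extraction model could stand in here.
qa_agent = pipeline("question-answering", model="deepset/roberta-base-squad2")

question = "While serving as Governor of this territory, 1880-1886, Eli Murray opposed the advancement of polygamy?"
context = "..."  # placeholder for the passage retrieved for this question

output = qa_agent(question=question, context=context)
# The pipeline returns a dict with the predicted answer span and its score,
# which maps directly onto one (answer, score) entry of list_preds.
agent_pred = (output["answer"], output["score"])

Each agent contributes one such pair; the list is then padded to 16 entries, as shown above, before being passed to MetaQA.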