
A BigBird-base span-detection model (framed as extractive QA) for source detection. Similar to alex2awesome/quote-attribution__qa-model, but trained on substantially more data.

It achieves the following per-source-type scores on the original gold-label evaluation dataset described in https://arxiv.org/pdf/2305.14904.pdf:

```python
{'QUOTE_loss': 0.8826233148574829,
 'QUOTE_f1': 0.7606302094654609,
 'QUOTE_e': 0.8011722272317403,
 'BACKGROUND_loss': 1.0125975608825684,
 'BACKGROUND_f1': 0.7663278679052017,
 'BACKGROUND_e': 0.8025316455696202,
 'PUBLISHED WORK_loss': 2.235799551010132,
 'PUBLISHED WORK_f1': 0.5839164237123421,
 'PUBLISHED WORK_e': 0.6632653061224489,
 'STATEMENT_loss': 0.9887452125549316,
 'STATEMENT_f1': 0.6281586686183751,
 'STATEMENT_e': 0.821656050955414,
 'DOCUMENT_loss': 3.033191442489624,
 'DOCUMENT_f1': 0.37,
 'DOCUMENT_e': 0.5,
 'SOCIAL MEDIA POST_loss': 2.6609137058258057,
 'SOCIAL MEDIA POST_f1': 0.07142857142857142,
 'SOCIAL MEDIA POST_e': 0.6428571428571429,
 'PRESS REPORT_loss': 4.626288414001465,
 'PRESS REPORT_f1': 0.4195856873822975,
 'PRESS REPORT_e': 0.4915254237288136,
 'DECLINED COMMENT_loss': 1.823654055595398,
 'DECLINED COMMENT_f1': 0.5,
 'DECLINED COMMENT_e': 0.6875,
 'PROPOSAL/ORDER/LAW_loss': 1.8824642896652222,
 'PROPOSAL/ORDER/LAW_f1': 0.3897058823529412,
 'PROPOSAL/ORDER/LAW_e': 0.5882352941176471,
 'PRICE SIGNAL_loss': 3.1955513954162598,
 'PRICE SIGNAL_f1': 0.3581029725649435,
 'PRICE SIGNAL_e': 0.42105263157894735,
 'NARRATIVE_loss': 1.0812034606933594,
 'NARRATIVE_f1': 0.7716535433070866,
 'NARRATIVE_e': 0.8582677165354331,
 'Other: DIRECT OBSERVATION_loss': 2.9162957668304443,
 'Other: DIRECT OBSERVATION_f1': 0.0,
 'Other: DIRECT OBSERVATION_e': 0.5,
 'COMMUNICATION, NOT TO JOURNO_loss': 2.4340972900390625,
 'COMMUNICATION, NOT TO JOURNO_f1': 0.6199886204481793,
 'COMMUNICATION, NOT TO JOURNO_e': 0.703125,
 'PUBLIC SPEECH, NOT TO JOURNO_loss': 1.0328387022018433,
 'PUBLIC SPEECH, NOT TO JOURNO_f1': 0.3680555555555555,
 'PUBLIC SPEECH, NOT TO JOURNO_e': 0.8333333333333334,
 'PROPOSAL_loss': 9.785638809204102,
 'PROPOSAL_f1': 0.18582375478927204,
 'PROPOSAL_e': 0.3103448275862069,
 'TWEET_loss': 1.5086543560028076,
 'TWEET_f1': 0.29411764705882354,
 'TWEET_e': 0.29411764705882354,
 'VOTE/POLL_loss': 2.717064619064331,
 'VOTE/POLL_f1': 0.2842423311312542,
 'VOTE/POLL_e': 0.30434782608695654,
 'LAWSUIT_loss': 4.084840297698975,
 'LAWSUIT_f1': 0.24777537277537276,
 'LAWSUIT_e': 0.4791666666666667,
 'DIRECT OBSERVATION_loss': 2.662404775619507,
 'DIRECT OBSERVATION_f1': 0.02,
 'DIRECT OBSERVATION_e': 0.52,
 'Other: PROPOSAL_loss': 1.9756362438201904,
 'Other: PROPOSAL_f1': 0.4,
 'Other: PROPOSAL_e': 0.4,
 'Other: LAWSUIT_loss': 2.9898979663848877,
 'Other: LAWSUIT_f1': 0.35294117647058826,
 'Other: LAWSUIT_e': 0.35294117647058826,
 'Other: Campaign filing_loss': 2.1050233840942383,
 'Other: Campaign filing_f1': 0.375,
 'Other: Campaign filing_e': 0.375,
 'Other: Campaign Filing_loss': 0.9306182861328125,
 'Other: Campaign Filing_f1': 0.4013157894736842,
 'Other: Campaign Filing_e': 0.375,
 'Other: Data Analysis_loss': 0.32309359312057495,
 'Other: Data Analysis_f1': 0.0,
 'Other: Data Analysis_e': 1.0,
 'Other: Evaluation_loss': 2.189145088195801,
 'Other: Evaluation_f1': 0.0,
 'Other: Evaluation_e': 0.5555555555555556,
 'full_loss': 1.2835853099822998,
 'full_f1': 0.6861547922871403,
 'full_e': 0.7609188882586501}
```
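To make the strong and weak categories easier to compare at a glance, the per-type F1 scores above can be ranked. This is a minimal sketch over a hand-copied subset of the values from the dict (rounded to four places); it does not re-run any evaluation:

```python
# F1 scores for a subset of source types, copied from the metrics above.
f1_by_type = {
    "QUOTE": 0.7606,
    "NARRATIVE": 0.7717,
    "BACKGROUND": 0.7663,
    "STATEMENT": 0.6282,
    "PUBLISHED WORK": 0.5839,
    "PRESS REPORT": 0.4196,
    "PRICE SIGNAL": 0.3581,
    "TWEET": 0.2941,
}

# Sort source types from highest to lowest F1.
ranked = sorted(f1_by_type.items(), key=lambda kv: kv[1], reverse=True)
for source_type, f1 in ranked:
    print(f"{source_type:15s} {f1:.4f}")
```

The ranking makes the pattern in the table explicit: frequent, well-delimited categories like QUOTE, NARRATIVE, and BACKGROUND sit well above the full-dataset F1 of 0.686, while sparser categories like TWEET fall far below it.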