---
license: apache-2.0
language: en
tags:
  - generated_from_trainer
datasets:
  - squad_v2
model-index:
  - name: distilroberta-base-squad_v2
    results:
      - task:
          name: Question Answering
          type: question-answering
        dataset:
          type: squad_v2
          name: The Stanford Question Answering Dataset
          args: en
        metrics:
          - type: eval_exact
            value: 65.2405
          - type: eval_f1
            value: 68.6265
          - type: eval_HasAns_exact
            value: 67.5776
          - type: eval_HasAns_f1
            value: 74.3594
          - type: eval_NoAns_exact
            value: 62.91
          - type: eval_NoAns_f1
            value: 62.91
---

distilroberta-base-squad_v2

This model is a fine-tuned version of distilroberta-base on the squad_v2 dataset.

Model description

This model is fine-tuned for extractive question answering on The Stanford Question Answering Dataset (SQuAD2.0).

For convenience, the model is provided with weights for PyTorch, TensorFlow, and ONNX.
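
The same checkpoint name works across backends. As a minimal sketch, the TensorFlow weights can be loaded like this (assuming the repo hosts TF weights as stated above; otherwise add from_pt=True to convert on the fly):

>>> from transformers import TFAutoModelForQuestionAnswering
>>> tf_model = TFAutoModelForQuestionAnswering.from_pretrained("squirro/distilroberta-base-squad_v2")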

Intended uses & limitations

This model can handle mismatched question-context pairs: because SQuAD2.0 contains unanswerable questions, it can predict that a context holds no answer. Make sure to specify handle_impossible_answer=True when using QuestionAnsweringPipeline.

Example usage:

>>> from transformers import AutoModelForQuestionAnswering, AutoTokenizer, QuestionAnsweringPipeline
>>> model = AutoModelForQuestionAnswering.from_pretrained("squirro/distilroberta-base-squad_v2")
>>> tokenizer = AutoTokenizer.from_pretrained("squirro/distilroberta-base-squad_v2")
>>> qa_model = QuestionAnsweringPipeline(model, tokenizer)
>>> qa_model(
...     question="What's your name?",
...     context="My name is Clara and I live in Berkeley.",
...     handle_impossible_answer=True,  # important!
... )
{'score': 0.9498472809791565, 'start': 11, 'end': 16, 'answer': 'Clara'}
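
With handle_impossible_answer=True, a question the context cannot answer yields the null answer of the form {'score': <null-answer score>, 'start': 0, 'end': 0, 'answer': ''} rather than a spurious span. A sketch with an illustrative unanswerable question:

>>> qa_model(
...     question="What's your quest?",
...     context="My name is Clara and I live in Berkeley.",
...     handle_impossible_answer=True,
... )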

Training and evaluation data

Training and evaluation were done on SQuAD2.0.
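
Both splits can be pulled straight from the Hub with the datasets library (a minimal sketch; squad_v2 ships train and validation splits):

>>> from datasets import load_dataset
>>> squad_v2 = load_dataset("squad_v2")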

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: tpu
  • num_devices: 8
  • total_train_batch_size: 512
  • total_eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
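
A run with these settings could be reproduced with the stock run_qa.py example script from Transformers, launched on 8 TPU cores with the examples' xla_spawn.py helper. The command below is a hedged sketch under those assumptions, not the authors' recorded invocation:

python xla_spawn.py --num_cores 8 run_qa.py \
  --model_name_or_path distilroberta-base \
  --dataset_name squad_v2 \
  --version_2_with_negative \
  --do_train --do_eval \
  --learning_rate 5e-5 \
  --per_device_train_batch_size 64 \
  --per_device_eval_batch_size 8 \
  --num_train_epochs 3 \
  --seed 42 \
  --output_dir ./distilroberta-base-squad_v2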

Training results

Metric                      Value
epoch                       3
eval_HasAns_exact           67.5776
eval_HasAns_f1              74.3594
eval_HasAns_total           5928
eval_NoAns_exact            62.91
eval_NoAns_f1               62.91
eval_NoAns_total            5945
eval_best_exact             65.2489
eval_best_exact_thresh      0
eval_best_f1                68.6349
eval_best_f1_thresh         0
eval_exact                  65.2405
eval_f1                     68.6265
eval_samples                12165
eval_total                  11873
train_loss                  1.40336
train_runtime               1365.28
train_samples               131823
train_samples_per_second    289.662
train_steps_per_second      0.567

Framework versions

  • Transformers 4.17.0.dev0
  • PyTorch 1.9.0+cu111
  • Datasets 1.18.3
  • Tokenizers 0.11.6

About Us

Squirro marries data from any source with your intent and context to intelligently augment decision-making, right when you need it!

An Insight Engine at its core, Squirro works with global organizations, primarily in financial services, the public sector, professional services, and manufacturing. Customers include Bank of England, European Central Bank (ECB), Deutsche Bundesbank, Standard Chartered, Henkel, Armacell, Candriam, and many other world-leading firms.

Founded in 2012, Squirro is currently present in Zürich, London, New York, and Singapore. Further information about AI-driven business insights can be found at http://squirro.com.
