---
license: apache-2.0
language:
  - en
tags:
  - t5
  - qa
  - askscience
  - lfqa
  - information retrieval
datasets:
  - vblagoje/lfqa
metrics:
  - rouge
widget:
  - text: why aren't there more planets in our solar system?
    example_title: solar system
  - text: >-
      question: what is a probability distribution? context: I am just learning
      about statistics.
    example_title: probability distribution
  - text: >-
      question: What are the underlying physical processes by which exercise
      helps us lose weight? context: I started working out two weeks ago and
      already feel a lot better, and started to think about it and became deeply
      confused.
    example_title: working out
  - text: what is a neural network?
    example_title: deep learning
  - text: >-
      What are the primary mechanisms that computers use to understand human
      language?
    example_title: NLP
inference:
  parameters:
    max_length: 128
    no_repeat_ngram_size: 2
    encoder_no_repeat_ngram_size: 4
    repetition_penalty: 3.51
    length_penalty: 0.8
    num_beams: 4
    early_stopping: true
---

# checkpoints

This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the vblagoje/lfqa dataset, trained for 2 epochs. It is intended to allow a (somewhat) apples-to-apples comparison with t5-base fine-tuned on the standard eli5 dataset.
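For a quick test, the model can be used with the `text2text-generation` pipeline. Below is a minimal sketch: the generation settings mirror the inference parameters in the metadata above, the prompt follows the `question: ... context: ...` format from the widget examples (a plain question also works), and the model ID is an assumption based on this repo.

```python
from transformers import pipeline

# model ID assumed from this repo; adjust if loading from a local checkpoint
generator = pipeline("text2text-generation", model="pszemraj/t5-base-askscience")

prompt = (
    "question: what is a probability distribution? "
    "context: I am just learning about statistics."
)

# generation settings mirror the inference parameters in the metadata above
result = generator(
    prompt,
    max_length=128,
    num_beams=4,
    no_repeat_ngram_size=2,
    encoder_no_repeat_ngram_size=4,
    repetition_penalty=3.51,
    length_penalty=0.8,
    early_stopping=True,
)
print(result[0]["generated_text"])
```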

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
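
As a rough guide to reproducing the run, here is a hedged `Seq2SeqTrainingArguments` sketch. Only the hyperparameter values come from the list above; the `output_dir` name and the dataset preprocessing are placeholders, and the Adam betas/epsilon match the library defaults.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-v1_1-base")
tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")

# values mirror the hyperparameter list above; output_dir is a placeholder
args = Seq2SeqTrainingArguments(
    output_dir="checkpoints",
    learning_rate=4e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size 16, per the card
    lr_scheduler_type="cosine",
    num_train_epochs=2,
    seed=42,
)

# train_dataset / eval_dataset would be tokenized vblagoje/lfqa splits (omitted here)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    tokenizer=tokenizer,
    # train_dataset=..., eval_dataset=...,
)
# trainer.train()
```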

### Training results

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0