metadata
language:
  - en
license: apache-2.0
tags:
  - t5
  - qa
  - askscience
  - lfqa
  - information retrieval
datasets:
  - vblagoje/lfqa
metrics:
  - rouge
widget:
  - text: why hasn't humanity expanded to live on other planets in our solar system?
    example_title: solar system
  - text: >-
      question: what is a probability distribution? context: I am just learning
      about statistics.
    example_title: probability distribution
  - text: >-
      question: What are the underlying physical processes by which exercise
      helps us lose weight? context: I started working out two weeks ago and
      already feel a lot better, and started to think about it and became deeply
      confused.
    example_title: pumping
  - text: what is a neural network?
    example_title: deep learning
  - text: >-
      What is the process that computers use to understand human language in
      deep learning models?
    example_title: NLP
inference:
  parameters:
    max_length: 64
    no_repeat_ngram_size: 2
    encoder_no_repeat_ngram_size: 4
    repetition_penalty: 3.51
    length_penalty: 0.8
    num_beams: 4
    early_stopping: true
base_model: google/t5-v1_1-base

checkpoints

  • This model is a fine-tuned version of google/t5-v1_1-base on the vblagoje/lfqa dataset. It was trained for 2 epochs to allow a (somewhat) apples-to-apples comparison with t5-base trained on the standard ELI5 dataset.
    • This checkpoint does appear to be more coherent than t5-base trained on the original dataset.
  • Compared to the BART model trained on lfqa, it appears able to answer some questions on its own, without retrieved context.

NOTE: for runtime reasons, the hosted inference API is limited to generating approx. 64 tokens. For longer outputs, load the model in Python as a transformers pipeline object, as in the sketch below.
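A minimal sketch of such a pipeline; the model id below is a placeholder (substitute this repository's actual id), and the generation settings mirror the widget parameters listed in the metadata above:

```python
from transformers import pipeline

# placeholder model id -- substitute this repository's actual id
model_id = "pszemraj/t5-base-askscience"

qa_pipe = pipeline("text2text-generation", model=model_id)

prompt = "why hasn't humanity expanded to live on other planets in our solar system?"

result = qa_pipe(
    prompt,
    max_length=256,               # raise well past the API's ~64-token cap
    num_beams=4,
    no_repeat_ngram_size=2,
    encoder_no_repeat_ngram_size=4,
    repetition_penalty=3.51,
    length_penalty=0.8,
    early_stopping=True,
)
print(result[0]["generated_text"])
```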

Intended uses & limitations

  • Q&A, information retrieval
  • it is probably better to use it within a retrieval pipeline than on its own (see the sketch below)
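A hypothetical sketch of that pairing: retrieve() stands in for whatever search component you use (BM25, a dense retriever, etc.) and is not part of this repository; the prompt follows the "question: ... context: ..." format used in the widget examples above.

```python
from transformers import pipeline

qa_pipe = pipeline("text2text-generation", model="pszemraj/t5-base-askscience")  # placeholder id


def retrieve(question: str) -> str:
    """Return a supporting passage for the question (placeholder implementation)."""
    return "A probability distribution assigns a probability to each possible outcome."


question = "what is a probability distribution?"
context = retrieve(question)

# prepend the retrieved passage using the widget's "question: ... context: ..." format
prompt = f"question: {question} context: {context}"
answer = qa_pipe(prompt, max_length=256, num_beams=4)[0]["generated_text"]
print(answer)
```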

Training and evaluation data

  • see the linked dataset. The dataset was filtered to include only the askscience subreddit, in an attempt to focus on academic/technical queries (a filtering sketch follows).
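A rough sketch of that filtering step with the datasets library, assuming the dataset exposes a "subreddit" column (check the dataset card for the exact field name):

```python
from datasets import load_dataset

lfqa = load_dataset("vblagoje/lfqa")

# keep only askscience examples; "subreddit" is the assumed column name
askscience = lfqa.filter(lambda example: example["subreddit"] == "askscience")
print({split: ds.num_rows for split, ds in askscience.items()})
```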

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough equivalent in code follows the list):

  • learning_rate: 4e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 2
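A hypothetical reconstruction of these settings as Seq2SeqTrainingArguments; the output path is a placeholder, and the exact training script is not part of this card.

```python
from transformers import Seq2SeqTrainingArguments

# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults, matching the values listed
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-v1_1-base-lfqa",   # placeholder output path
    learning_rate=4e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,    # gives the total train batch size of 16 listed above
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    seed=42,
)
```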

Training results

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.0+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0