---
language:
  - en
  - es
  - ja
  - fa
  - hi
  - fr
  - zh
  - bn
  - gu
  - de
  - te
  - it
  - ar
  - pl
  - ta
  - mr
  - ml
  - or
  - pa
  - pt
  - ur
  - gl
  - he
  - ko
  - ca
  - th
  - nl
  - id
  - vi
  - bg
  - fil
  - km
  - lo
  - tr
  - ru
  - hr
  - sv
  - yo
  - ku
  - my
  - ms
  - cs
  - fi
  - so
  - tl
  - sw
  - si
  - kn
  - za
  - ig
  - xh
  - ro
  - ht
  - et
  - sk
  - lt
  - el
  - ne
  - as
  - 'no'
widget:
  - text: 'Translate to German: My name is Arthur'
    example_title: Translation
  - text: >-
      Please answer the following question. Who is going to be the next
      Ballon d'or?
    example_title: Question Answering
  - text: >-
      Q: Can Geoffrey Hinton have a conversation with George Washington? Give
      the rationale before answering.
    example_title: Logical reasoning
  - text: >-
      Please answer the following question. What is the boiling point of
      Nitrogen?
    example_title: Scientific knowledge
  - text: >-
      Answer the following yes/no question. Can you write a whole Haiku in a
      single tweet?
    example_title: Yes/no question
  - text: >-
      Answer the following yes/no question by reasoning step-by-step. Can you
      write a whole Haiku in a single tweet?
    example_title: Reasoning task
  - text: 'Q: ( False or not False or False ) is? A: Let''s think step by step'
    example_title: Boolean Expressions
  - text: >-
      The square root of x is the cube root of y. What is y to the power of 2,
      if x = 4?
    example_title: Math reasoning
  - text: >-
      Premise: At my age you will probably have learnt one lesson. Hypothesis:
      It's not certain how many lessons you'll learn by your thirties. Does the
      premise entail the hypothesis?
    example_title: Premise and hypothesis
tags:
  - text2text-generation
datasets:
  - svakulenk0/qrecc
  - taskmaster2
  - djaym7/wiki_dialog
  - deepmind/code_contests
  - lambada
  - gsm8k
  - aqua_rat
  - esnli
  - qasc
  - qed
  - financial_phrasebank
license: apache-2.0
---

# Model Card for LoRA-FLAN-T5 large


This repository contains the LoRA (Low-Rank Adaptation) adapter weights for [flan-t5-large](https://huggingface.co/google/flan-t5-large), fine-tuned on the financial_phrasebank dataset.
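LoRA fine-tuning freezes the base model and trains small low-rank update matrices injected into the attention projections, so only a fraction of the parameters are updated. As a rough illustration, an adapter like this one could be configured with `peft` as in the sketch below; the hyperparameters (`r`, `lora_alpha`, `target_modules`) are illustrative assumptions, not the values used to train this checkpoint.

```python
# Illustrative sketch of creating a LoRA adapter for flan-t5-large with peft.
# Hyperparameter values are assumptions, not the ones used for this repository.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,  # T5 is an encoder-decoder (seq2seq) model
    r=16,                             # rank of the low-rank update matrices
    lora_alpha=32,                    # scaling factor for the adapter output
    target_modules=["q", "v"],        # T5's attention query/value projections
    lora_dropout=0.05,
)

# Wrap the frozen base model; only the LoRA weights remain trainable.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```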

## Usage

Use this adapter with the `peft` library:

```python
# pip install peft transformers
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

peft_model_id = "ybelkada/flan-t5-large-financial-phrasebank-lora"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the frozen base model declared in the adapter config
model = AutoModelForSeq2SeqLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype='auto',
    device_map='auto',
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)
```
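Once the adapter is loaded, the model behaves like a regular seq2seq model. Below is a minimal inference sketch; note that feeding the raw financial sentence, and the exact label strings the model emits, are assumptions about the fine-tuning setup rather than documented behaviour.

```python
# Minimal inference sketch. The plain-sentence input format is an assumption
# about how the adapter was fine-tuned; adjust the prompt if outputs look off.
sentence = "Operating profit rose to EUR 13.1 mn from EUR 8.7 mn."

inputs = tokenizer(sentence, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected: a sentiment label such as "positive", "negative" or "neutral"
```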

Enjoy!