
Model Card for opt-6.7b-lora-sag-t3000-v300-v2

Adapter Description

This adapter was created as part of the SomosNLP Hackathon 2023 with the PEFT library. It fine-tunes the base model facebook/opt-6.7b on the SQUAD_ES (v1.1.0) and MLSUM datasets using the LoRA method.

  • Developed by:
    • 🇵🇪 Enrique Ubaldo
    • 🇵🇪 Fernando Alva-Manchego
    • 🇵🇪 @Levi111
    • 🇲🇽 @IvanHU
    • 🇨🇺 Alberto Carmona Barthelemy
  • Model type: Text2Text Generation
  • Language(s) (NLP): Spanish
  • License: apache-2.0

Uses

This model is designed for Spanish-language instruction following, specifically for generating summaries, creating questions, and answering questions from a given context. The prompt format used for these tasks is shown below and in the code examples further down.
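All three tasks use the same instruction-style template as the helper functions later in this card; the snippet below only restates that template, and the instruction and input text are illustrative.

# Instruction-style prompt format used by this adapter (see the helper functions below);
# the instruction shown is the summarization one, "{texto}" is a placeholder for the input.
prompt = ("<s>Instruction: Elabora un resumen del siguiente texto.\n"
          "Input: {texto}\n"
          "Output: ")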

Bias, Risks, and Limitations

Please note that this model inherits biases from its base model. You can review these biases in the model card for facebook/opt-6.7b.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

How to Get Started with the Model

Use the code below to get started with the model.

import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Load the adapter config, the 8-bit base model, and its tokenizer
# (load_in_8bit=True requires the bitsandbytes package and a CUDA GPU)
peft_model_id = "hackathon-somos-nlp-2023/opt-6.7b-lora-sag-t3000-v300-v2"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)

# Enable the KV cache for faster generation
model.config.use_cache = True

# Sampling settings shared by all generation calls
generation_config = GenerationConfig(temperature=0.8,
                                     top_p=0.75,
                                     top_k=40)

def gen_summary(text):
  # Build the summarization prompt and generate a summary of the given Spanish text
  input_text = f'<s>Instruction: Elabora un resumen del siguiente texto.\nInput: {text}\nOutput: '
  batch = tokenizer(input_text, return_tensors='pt')
  with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, 
                                   max_new_tokens=256, 
                                   generation_config=generation_config)
  output = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
  return output

def gen_question(text):
  # Build the question-generation prompt: ask for a question answerable from the given text
  input_text = f'<s>Instruction: Dado el siguiente texto quiero que generes una pregunta cuya respuesta se encuentre en él.\nInput: {text}\nOutput: '
  batch = tokenizer(input_text, return_tensors='pt')
  with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, 
                                   max_new_tokens=256, 
                                   generation_config=generation_config)
  output = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
  return output

def gen_qna(context, question):
  # Build the question-answering prompt: answer the question using the provided context
  input_text = f"""<s>Instruction: Te voy a proporcionar un texto del cual deseo que me respondas una pregunta. 
    El texto es el siguiente: `{context}`\nInput: {question}\nOutput: """
  batch = tokenizer(input_text, return_tensors='pt')
  with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, 
                                   max_new_tokens=256, 
                                   generation_config=generation_config)
  output = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
  return output
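A minimal usage example of the helper functions above; the Spanish input text is illustrative.

# Illustrative calls to the helper functions defined above
texto = ("La Puerta de Alcalá es una puerta monumental de estilo neoclásico "
         "situada en Madrid, inaugurada en 1778 por orden de Carlos III.")

print(gen_summary(texto))
print(gen_question(texto))
print(gen_qna(texto, "¿En qué año se inauguró la Puerta de Alcalá?"))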

Training Details

Training Data

The adapter was fine-tuned on the SQUAD_ES (v1.1.0) and MLSUM datasets described in the adapter description above.

Training Procedure

We selected 1000 examples for each of the three tasks in the training dataset, and 100 examples for each task in the validation dataset. This resulted in a total of 3000 examples for training and 300 examples for validation.

The Colab used for training is here.
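The exact data preparation lives in that Colab. As a minimal sketch only, assuming the Hugging Face datasets library and the public squad_es (v1.1.0) and mlsum (es) datasets, the per-task subsets could be sampled like this:

from datasets import load_dataset

# Illustrative sampling: 1000 train / 100 validation examples per task
squad = load_dataset("squad_es", "v1.1.0")   # source for question generation and QA
mlsum = load_dataset("mlsum", "es")          # source for summarization

train_qa  = squad["train"].shuffle(seed=42).select(range(1000))
valid_qa  = squad["validation"].shuffle(seed=42).select(range(100))
train_sum = mlsum["train"].shuffle(seed=42).select(range(1000))
valid_sum = mlsum["validation"].shuffle(seed=42).select(range(100))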

Training Hyperparameters

  • Training regime: fp16
  • Steps: 80
  • Learning rate: 2e-4
  • Training loss: 1.1136
  • Validation loss: 1.529
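As a rough illustration only (the authoritative configuration lives in the training Colab linked above), a LoRA fine-tuning setup consistent with these hyperparameters might look like the sketch below; the LoRA rank, alpha, target modules, and batch size are assumptions, not values reported in this card.

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, TrainingArguments

# Load the base model in 8-bit and prepare it for LoRA training
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b",
                                                  load_in_8bit=True, device_map="auto")
base_model = prepare_model_for_kbit_training(base_model)  # older peft: prepare_model_for_int8_training

# Hypothetical LoRA configuration; rank, alpha and target modules are assumptions
lora_config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                         lora_dropout=0.05, bias="none", task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)

# Reported values: 80 steps, learning rate 2e-4, fp16 training regime
training_args = TrainingArguments(output_dir="outputs",
                                  per_device_train_batch_size=4,  # assumed
                                  max_steps=80,
                                  learning_rate=2e-4,
                                  fp16=True,
                                  logging_steps=10)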

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: T4 16GB (Free Colab)
  • Hours used: 1 hour
  • Cloud Provider: Google Cloud Platform
  • Compute Region: us-central1?
  • Carbon Emitted: Total emissions are estimated at 0.04 kg CO2eq, 100% of which was directly offset by the cloud provider.
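As a rough back-of-the-envelope check of that figure, following the power × time × carbon-intensity approach of the calculator; the GPU power draw and grid intensity below are assumptions, not values reported here.

# Rough consistency check of the reported estimate (assumed figures)
gpu_power_kw = 0.07          # T4 TDP, roughly 70 W
hours = 1.0
intensity_kg_per_kwh = 0.57  # assumed grid carbon intensity for us-central1
print(round(gpu_power_kw * hours * intensity_kg_per_kwh, 2), "kg CO2eq")  # ~0.04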