
🇧🇷 Opt-6.7B-lora-caramelo 🇧🇷

Model Description

Opt-6.7B-lora-caramelo further pre-trains Facebook's OPT-6.7B model with a LoRA adapter, using causal language modeling on the Portuguese Wikipedia (dump of 05/04/2023).
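
The sketch below illustrates how such a LoRA adapter can be attached to OPT-6.7B for causal-language-modeling pre-training with PEFT. It is a minimal, illustrative example, not the exact training script: the dataset file, LoRA hyperparameters, and training arguments are assumptions, not the values used for this model.

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "facebook/opt-6.7b"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, load_in_8bit=True, device_map="auto")
model = prepare_model_for_kbit_training(model)  # cast layer norms, enable gradient checkpointing

lora_config = LoraConfig(
    r=16,                                 # illustrative rank, not necessarily the one used here
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # OPT attention projections
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Placeholder file standing in for the Portuguese Wikipedia dump of 05/04/2023,
# exported to plain text; the actual data pipeline is not described in this card.
dataset = load_dataset("text", data_files="ptwiki-20230405.txt", split="train")

def tokenize(examples):
    return tokenizer(examples["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    train_dataset=tokenized,
    args=TrainingArguments(
        output_dir="opt-6.7b-lora-caramelo",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        fp16=True,
        logging_steps=100,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("opt-6.7b-lora-caramelo")  # saves only the LoRA adapter weights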

Limitations and Biases

This model is intended to be used with fine-tuning, supervision, and/or moderation. I recommend having a human curate or filter the outputs.

How to use

import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "arthurangelici/opt-6.7b-lora-caramelo"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)

batch = tokenizer("Caramelo Γ© um simbolo do: ", return_tensors='pt')

with torch.cuda.amp.autocast():
  output_tokens = model.generate(**batch, max_new_tokens=50)

print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=True))
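
Optionally, when the base model is loaded in full or half precision (rather than 8-bit), the LoRA weights can be merged into it with merge_and_unload, removing the adapter overhead at inference time. A minimal sketch, assuming the same model IDs as above:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model in fp16, attach the adapter, then fold its weights into the base model.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16, device_map="auto")
merged = PeftModel.from_pretrained(base, "arthurangelici/opt-6.7b-lora-caramelo").merge_and_unload()
merged.save_pretrained("opt-6.7b-caramelo-merged")  # hypothetical output directory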

License

The model is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

BibTeX entry and citation info

@misc{zhang2022opt,
      title={OPT: Open Pre-trained Transformer Language Models}, 
      author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
      year={2022},
      eprint={2205.01068},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}