---
library_name: peft
base_model: shpotes/codegen-350M-mono
datasets:
  - flytech/python-codes-25k
pipeline_tag: text-generation
tags:
  - code
license: mit
---

## How to Get Started with the Model
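The example below assumes `transformers`, `peft`, and `bitsandbytes` are installed (package names assumed; pin versions to match your environment as needed):

```shell
pip install torch transformers peft bitsandbytes
```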

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "shpotes/codegen-350M-mono"
adapter_id = "yamete4/codegen-350M-mono-QLoRa-flytech"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Load the base model in 4-bit (QLoRA-style quantization)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)

# Attach the trained LoRA adapters
peft_model = PeftModel.from_pretrained(model, adapter_id)

text = "Help me manage my subscriptions!?"

inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = peft_model.generate(inputs.input_ids, max_new_tokens=250, do_sample=False)

print("After attaching LoRA adapters:")
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```

## Framework versions

- PEFT 0.9.0