
Model Summary

Phi-mmlu-lora is a LoRA adapter fine-tuned on the MMLU dataset. The base model is microsoft/phi-2.

How to Use

import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

torch.set_default_device("cuda")

# Load the LoRA adapter together with the microsoft/phi-2 base model
model = AutoPeftModelForCausalLM.from_pretrained("liuchanghf/phi2-mmlu-lora")
# The tokenizer comes from the base model
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
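
Because the adapter was fine-tuned on MMLU, a multiple-choice prompt is a more representative test than the code-completion example above. The snippet below is a minimal sketch that assumes a plain "question plus lettered choices" prompt format; the exact template used during fine-tuning is not documented here, so treat the formatting as an assumption.

# Minimal sketch of an MMLU-style multiple-choice prompt.
# NOTE: the prompt template below is an assumption, not the documented
# format used during fine-tuning.
prompt = (
    "Question: Which planet in the solar system has the largest mass?\n"
    "A. Earth\n"
    "B. Jupiter\n"
    "C. Saturn\n"
    "D. Mars\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.batch_decode(outputs)[0])

For deployment without the PEFT wrapper, the adapter weights can typically be folded into the base model with model.merge_and_unload(), which returns a plain transformers model.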