---
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary

Phi2-mmlu-lora is a LoRA adapter fine-tuned on the GSM8K dataset. The base model is [microsoft/phi-2](https://huggingface.co/microsoft/phi-2).
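As background, the sketch below shows how a LoRA adapter of this kind is typically set up with `peft`. The rank, scaling factor, target modules, and training step are illustrative assumptions, not the actual recipe behind this checkpoint.

```python
# Illustrative LoRA setup: every hyperparameter below is an assumption,
# not the configuration actually used to train this adapter.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
config = LoraConfig(
    r=16,                    # rank of the low-rank update matrices (assumed)
    lora_alpha=32,           # scaling factor (assumed)
    target_modules=["q_proj", "k_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
# Train on GSM8K with a standard Trainer loop, then save just the adapter:
# model.save_pretrained("phi2-mmlu-lora")
```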
## How to Use

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Create new tensors on the GPU by default
torch.set_default_device("cuda")

# Loads the microsoft/phi-2 base model and applies this LoRA adapter on top
model = AutoPeftModelForCausalLM.from_pretrained("liuchanghf/phi2-mmlu-lora")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
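For deployment, the adapter weights can also be folded into the base model so inference no longer routes through the LoRA indirection. This uses the standard `peft` merge API; the output path below is a hypothetical example.

```python
# Merge the LoRA weights into the base model and save a standalone checkpoint.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./phi2-mmlu-merged")  # hypothetical local path
```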