liuchanghf committed
Commit 9f33f03
1 Parent(s): 6b5c957

feature: update README.md

Files changed (1): README.md +38 -0
README.md CHANGED
---
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---

## Model Summary

Phi-mmlu-lora is a LoRA adapter fine-tuned on the gsm8k dataset. The base model is [microsoft/phi-2](https://huggingface.co/microsoft/phi-2).

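The link between the adapter and its base model can be checked directly from the adapter's config. The snippet below is a minimal, optional sketch (not part of the original usage example) that relies only on the standard `peft` API:

```python
from peft import PeftConfig

# The adapter config records which base model the LoRA weights were trained on.
config = PeftConfig.from_pretrained("liuchanghf/phi2-mmlu-lora")
print(config.base_model_name_or_path)  # expected: microsoft/phi-2
print(config.peft_type)                # expected: PeftType.LORA
```
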
## How to Use

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Run all tensors and modules on the GPU by default.
torch.set_default_device("cuda")

# Load the LoRA adapter; peft pulls in the microsoft/phi-2 base model automatically.
model = AutoPeftModelForCausalLM.from_pretrained("liuchanghf/phi2-mmlu-lora")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

# Generate a completion and decode it back to text.
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
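
If you prefer to run inference without the `peft` dependency, the LoRA weights can be merged back into the base model. This is a minimal sketch building on the loading code above; the output directory name is only an example:

```python
# Merge the LoRA weights into the base weights and drop the adapter wrappers.
merged_model = model.merge_and_unload()

# Save a standalone checkpoint that plain transformers can load.
merged_model.save_pretrained("phi2-mmlu-lora-merged")
tokenizer.save_pretrained("phi2-mmlu-lora-merged")
```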