---
language:
- en
datasets:
- kyujinpy/Open-platypus-Commercial
library_name: transformers
pipeline_tag: text-generation
license: mit
---

# **phi-2-test**

## Model Details
**Model Developers**
- field2437

**Base Model**
- [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)

**Training Dataset**
- [kyujinpy/Open-platypus-Commercial](https://huggingface.co/datasets/kyujinpy/Open-platypus-Commercial) (see the fine-tuning sketch below)

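This card documents only the base model and the dataset; the fine-tuning procedure itself is not described. For orientation, a minimal supervised fine-tuning sketch follows. Everything in it is an assumption: the Alpaca-style prompt, the `instruction`/`output` column names (taken from the original Open-Platypus layout), and all hyperparameters are illustrative, not the settings actually used for phi-2-test.

```python
# Hypothetical fine-tuning sketch -- the actual recipe, prompt format, and
# hyperparameters used for phi-2-test are not documented in this card.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # phi-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.bfloat16, trust_remote_code=True
)

# Assumes the dataset exposes "instruction"/"output" columns like the original Open-Platypus.
dataset = load_dataset("kyujinpy/Open-platypus-Commercial", split="train")

def tokenize(example):
    # Alpaca-style prompt -- an assumption, not a format stated in the card.
    text = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}{tokenizer.eos_token}"
    )
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="phi-2-test",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
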
---
# Model comparisons1

---
# Model comparisons2
> AI-Harness evaluation; [link](https://github.com/EleutherAI/lm-evaluation-harness)

| Model | Copa | HellaSwag | BoolQ | MMLU |
| --- | --- | --- | --- | --- |
| | 0-shot | 0-shot | 0-shot | 0-shot |
| **field2437/phi-2-test** | 0.8900 | 0.5573 | 0.8260 | 0.5513 |

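The table gives 0-shot AI-Harness scores but not the harness version or invocation. Below is a sketch of how such a run could be reproduced with the `lm-evaluation-harness` Python API; the task names, dtype, and batch size are assumptions, not the settings used for the numbers above.

```python
# Hypothetical reproduction of the 0-shot evaluation above; harness version,
# task variants, and batch size are assumptions, not the card's actual settings.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=field2437/phi-2-test,trust_remote_code=True,dtype=auto",
    tasks=["copa", "hellaswag", "boolq", "mmlu"],
    num_fewshot=0,
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```
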
---
# Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run tensors and models on the GPU by default.
torch.set_default_device("cuda")

# Load this model and its tokenizer from the Hub
# (trust_remote_code is kept from the upstream phi-2 example).
model = AutoModelForCausalLM.from_pretrained("field2437/phi-2-test", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("field2437/phi-2-test", trust_remote_code=True)

# Ask the model to complete a Python function.
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
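
Since the front matter declares `library_name: transformers` and `pipeline_tag: text-generation`, the model can also be loaded through the high-level pipeline API. A brief sketch, assuming the repo id `field2437/phi-2-test` from the table above; the prompt, dtype, and device placement are illustrative:

```python
# Minimal text-generation pipeline usage; device and dtype choices are illustrative.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="field2437/phi-2-test",
    torch_dtype="auto",
    trust_remote_code=True,
    device_map="auto",
)
out = pipe("Write a Python one-liner that sums a list of numbers.", max_new_tokens=64)
print(out[0]["generated_text"])
```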

---