---
license: mit
train: false
inference: true
pipeline_tag: text-generation
---
*aanaphi2-v0.1* is a chat model fine-tuned (SFT + DPO) from <a href="https://huggingface.co/microsoft/phi-2">Microsoft's Phi-2 base model</a> (2.7B parameters).

## Performance
| Models            | phi-2            | aanaphi2-v0.1    |
|-------------------|------------------|------------------|
| ARC (25-shot)     | 61.09            | <b>63.73</b>     |
| HellaSwag (10-shot)| 75.11           | <b>78.30</b>     |
| MMLU (5-shot)     | <b>58.11</b>     | 57.70            |
| TruthfulQA-MC2    | 44.47            | <b>51.55</b>     |
| Winogrande (5-shot)| <b>74.35</b>    | 73.40            |
| GSM8K (5-shot)    | 54.81            | <b>58.60</b>     |
| Average           | 61.33            | <b>63.88</b>     |
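
The Average row is the unweighted mean of the six benchmark scores above; as a quick check (not part of the original card), for aanaphi2-v0.1:

```python
# Unweighted mean of the six aanaphi2-v0.1 scores listed in the table
scores = [63.73, 78.30, 57.70, 51.55, 73.40, 58.60]
print(round(sum(scores) / len(scores), 2))  # 63.88
```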


## Basic Usage
```python
# Load model and tokenizer
import torch
import transformers

compute_dtype = torch.float16
cache_path    = None   # use the default Hugging Face cache directory
device        = 'cuda'
model_id      = "mobiuslabsgmbh/aanaphi2-v0.1"
model         = transformers.AutoModelForCausalLM.from_pretrained(model_id,
                                                                  torch_dtype=compute_dtype,
                                                                  cache_dir=cache_path,
                                                                  device_map=device)
tokenizer     = transformers.AutoTokenizer.from_pretrained(model_id, cache_dir=cache_path)
model.eval()

# Set prompt format: "### Human: <prompt>\n### Assistant: "
instruction_template = "### Human: "
response_template    = "### Assistant: "

def prompt_format(prompt):
    return instruction_template + prompt + '\n' + response_template

@torch.no_grad()
def generate(prompt, max_length=1024):
    prompt_chat = prompt_format(prompt)
    inputs      = tokenizer(prompt_chat, return_tensors="pt", return_attention_mask=True).to(device)
    outputs     = model.generate(**inputs, max_length=max_length, eos_token_id=tokenizer.eos_token_id)
    # Drop the trailing EOS token before decoding
    text        = tokenizer.batch_decode(outputs[:, :-1])[0]
    return text

# Generate
print(generate('If A+B=C and B=C, what would be the value of A?'))
```
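
For interactive use, the same setup can also stream tokens as they are produced. The sketch below is illustrative and not part of the original card; it assumes the `model`, `tokenizer`, `device`, and `prompt_format` objects defined above and uses `transformers.TextStreamer`.

```python
# Illustrative streaming sketch (assumes model, tokenizer, device, prompt_format from above):
# prints tokens to stdout as they are generated.
from transformers import TextStreamer

@torch.no_grad()
def generate_stream(prompt, max_length=1024):
    streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    inputs   = tokenizer(prompt_format(prompt), return_tensors="pt", return_attention_mask=True).to(device)
    model.generate(**inputs, max_length=max_length, eos_token_id=tokenizer.eos_token_id, streamer=streamer)

generate_stream('If A+B=C and B=C, what would be the value of A?')
```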