---
language:
- en
license: other
library_name: transformers
tags:
- orpo
- llama-3
- rlhf
- sft
datasets:
- mlabonne/orpo-dpo-mix-40k
---

# UpshotLlama-3-8B

This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on a 2k-sample subset of dpo_math_data from [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).

The fine-tuned model follows the ChatML template.
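
For context, ORPO fine-tunes like this one are typically produced with TRL's `ORPOTrainer`. Below is a minimal sketch, assuming a recent `trl` version that accepts conversational preference datasets directly; the hyperparameters and the 2k subsampling are illustrative, not this model's exact training recipe.

```python
# pip install -qU trl transformers datasets accelerate
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 base has no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Preference data with prompt/chosen/rejected columns, subsampled to 2k rows
dataset = (
    load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")
    .shuffle(seed=42)
    .select(range(2000))
)

config = ORPOConfig(
    output_dir="UpshotLlama-3-8B",
    beta=0.1,  # weight of the odds-ratio preference term vs. the SFT loss
    max_length=1024,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

# Depending on your trl version, `tokenizer=` may be named `processing_class=`
trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```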


## 🔎 Application

This model uses an 8k-token context window and was trained with the ChatML template.
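
Concretely, a single-turn prompt rendered in ChatML (what `apply_chat_template` produces in the usage example below) looks like this, assuming the tokenizer ships a standard ChatML chat template:

```
<|im_start|>user
Given the equation 4x + 7 = 55, find the value of x.<|im_end|>
<|im_start|>assistant
```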


## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Aditya685/UpshotLlama-3-8B"
messages = [{"role": "user", "content": "Given the equation 4x + 7 = 55, find the value of x."}]

# Render the conversation into a ChatML prompt, ending with the assistant header
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Half-precision text-generation pipeline, sharded across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
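
Note that `generated_text` contains the rendered prompt followed by the completion; pass `return_full_text=False` in the pipeline call to get only the model's reply.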