---
language:
- en
library_name: transformers
tags:
- orpo
- llama 3
- sft
datasets:
- Open-Orca/OpenOrca
---

# Model description

Meta-Llama-3-8B-OpenOrca is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), trained on a 1.5k-sample subset of the [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset.
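
For context, a subset of that size can be drawn from OpenOrca as follows (a minimal sketch; the exact sampling procedure and seed used for fine-tuning are assumptions):

````python
from datasets import load_dataset

# Hypothetical sketch: draw a 1.5k-example subset of OpenOrca.
# The actual sampling used for fine-tuning is not specified here.
dataset = load_dataset("Open-Orca/OpenOrca", split="train")
subset = dataset.shuffle(seed=42).select(range(1500))
````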

This model uses the ChatML chat template.
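
For reference, a conversation rendered with this template looks like the following (the trailing assistant header is what prompts the model to respond):

````
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
What is a large language model?<|im_end|>
<|im_start|>assistant
````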

## How to use

````python
import torch
from transformers import AutoTokenizer, pipeline

model = "MuntasirHossain/Meta-Llama-3-8B-OpenOrca"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline; the model is loaded in float16
# and placed automatically across available devices.
llm = pipeline(
    task="text-generation",
    model=model,
    eos_token_id=tokenizer.eos_token_id,
    torch_dtype=torch.float16,
    max_new_tokens=256,
    do_sample=True,
    device_map="auto",
)

def generate(input_text):
    system_prompt = "You are a helpful AI assistant."
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": input_text},
    ]
    # Render the messages with the model's ChatML template and append
    # the assistant header so the model continues as the assistant.
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    outputs = llm(prompt)
    # Strip the prompt from the returned text, keeping only the completion.
    return outputs[0]["generated_text"][len(prompt):]

generate("What is a large language model?")
````
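
If generation runs past the end of the assistant turn, the ChatML end-of-turn token can be passed as the stopping token instead (a hedged sketch; it assumes `<|im_end|>` is present in this model's tokenizer):

````python
# Assumption: "<|im_end|>" is a token in this tokenizer's vocabulary.
# Use its id as the end-of-sequence marker for generation.
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
outputs = llm(prompt, eos_token_id=im_end_id)
````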