---
base_model: meta-llama/Meta-Llama-3.1-8B
language:
- ko
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

# Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
base_model = 'bigdefence/Llama-3.1-8B-Ko-bigdefence'
device = 'cuda' if torch.cuda.is_available() else 'cpu'

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto")
model.eval()
def generate_response(prompt, model, tokenizer, text_streamer, max_new_tokens=256):
    inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=True)
    inputs = inputs.to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            streamer=text_streamer,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id
        )

    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response.replace(prompt, '').strip()
key = "์•ˆ๋…•?"  # example Korean instruction ("Hi?")
prompt = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{key}

### Response:
"""
text_streamer = TextStreamer(tokenizer)
response = generate_response(prompt, model, tokenizer, text_streamer)
print(response)
```
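The fixed Alpaca-style scaffolding in the snippet above can be factored into small helpers, which also makes the prompt-stripping step in `generate_response` easier to test without loading the model. The helper names below are illustrative, not part of the model's API:

```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style template used by this model card (no Input section)
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

def extract_response(decoded: str, prompt: str) -> str:
    # Strip the echoed prompt so only the model's reply remains,
    # mirroring response.replace(prompt, '').strip() above
    return decoded.replace(prompt, "").strip()
```

Passing `build_prompt(key)` into `generate_response` is equivalent to the inline f-string used above.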
# Uploaded model

- **Developed by:** Bigdefence
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3.1-8B
- **Dataset:** MarkrAI/KoCommercial-Dataset

# Thanks
- Many thanks to Beomi, maywell, and MarkrAI for their many contributions to the Korean LLM open-source ecosystem.

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)