---
license: apache-2.0
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- en
- ko
pipeline_tag: translation
---

# Gugugo-koen-7B-V1.1
Full details: [https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo)
![Gugugo](./logo.png)

**Base Model**: [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

**Training Dataset**: [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation).

I trained the model on a single A6000 GPU for 90 hours.

## **Prompt Template**
**KO->EN**
```
### ν•œκ΅­μ–΄: {sentence}</끝>
### μ˜μ–΄:
```
**EN->KO**
```
### μ˜μ–΄: {sentence}</끝>
### ν•œκ΅­μ–΄:
```

GPTQ, AWQ, and GGUF quantized versions are available:

[https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ)

[https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ)

[https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF)
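
The quantized checkpoints can be loaded directly, without the full-precision weights. Below is a minimal sketch for the GPTQ build, assuming `optimum` and `auto-gptq` are installed; the quantization settings are read from the repo, so no extra arguments are needed.

```python
# Minimal sketch for the GPTQ build (assumes optimum and auto-gptq are installed);
# the checkpoint ships its own quantization config, so it loads like a regular model.
from transformers import AutoModelForCausalLM, AutoTokenizer

gptq_repo = "squarelike/Gugugo-koen-7B-V1.1-GPTQ"
model = AutoModelForCausalLM.from_pretrained(gptq_repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(gptq_repo)
```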

## **Implementation Code**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
import torch
repo = "squarelike/Gugugo-koen-7B-V1.1"
model = AutoModelForCausalLM.from_pretrained(
        repo,
        load_in_4bit=True,
        device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)

# Stop generation as soon as the output ends with one of the stop token sequences
class StoppingCriteriaSub(StoppingCriteria):
    def __init__(self, stops = [], encounters=1):
        super().__init__()
        self.stops = [stop for stop in stops]

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs):
        for stop in self.stops:
            # Compare the tail of the generated ids against each stop sequence
            if torch.all((stop == input_ids[0][-len(stop):])).item():
                return True

        return False

# Token-id sequences for variants of the "</끝>" end tag; generation halts when one is produced
stop_words_ids = torch.tensor([[829, 45107, 29958], [1533, 45107, 29958], [829, 45107, 29958], [21106, 45107, 29958]]).to("cuda")
stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)])

def gen(lan="en", x=""):
    # Build the prompt in the template the model was trained on;
    # `lan` is the language of the input sentence x
    if (lan == "ko"):
        prompt = f"### ν•œκ΅­μ–΄: {x}</끝>\n### μ˜μ–΄:"
    else:
        prompt = f"### μ˜μ–΄: {x}</끝>\n### ν•œκ΅­μ–΄:"
    gened = model.generate(
        **tokenizer(
            prompt,
            return_tensors='pt',
            return_token_type_ids=False
        ).to("cuda"),
        max_new_tokens=2000,
        temperature=0.3,
        # no_repeat_ngram_size=5,
        num_beams=5,
        stopping_criteria=stopping_criteria
    )
    # Drop the BOS token, strip the echoed prompt, and remove the end tag
    return tokenizer.decode(gened[0][1:]).replace(prompt+" ", "").replace("</끝>", "")


print(gen(lan="en", x="Hello, world!"))
```
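
The `lan` argument names the language of the input sentence, so Korean-to-English translation is the mirror call (the sample sentence below is only an illustration):

```python
# Korean-to-English: pass lan="ko" with a Korean input sentence (example sentence is illustrative)
print(gen(lan="ko", x="μ•ˆλ…•ν•˜μ„Έμš”, λ§Œλ‚˜μ„œ λ°˜κ°‘μŠ΅λ‹ˆλ‹€."))
```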