---
license: apache-2.0
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- en
- ko
pipeline_tag: translation
---

# Gugugo-koen-7B-V1.1-GPTQ
Detail repo: [https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo)
![Gugugo](./logo.png)

This is a GPTQ-quantized model from [squarelike/Gugugo-koen-7B-V1.1](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1).

**Base Model**: [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

**Training Dataset**: [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)

I trained the model on a single A6000 GPU for 90 hours.

## **Prompt Template**
**KO->EN**
```
### ν•œκ΅­μ–΄: {sentence}</끝>
### μ˜μ–΄:
```
**EN->KO**
```
### μ˜μ–΄: {sentence}</끝>
### ν•œκ΅­μ–΄:
```
Here, `ν•œκ΅­μ–΄` means "Korean", `μ˜μ–΄` means "English", and `</끝>` ("end") marks the end of the input sentence; these strings are literal parts of the prompt and should be left untranslated.
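
The Implementation Code section below wires this template into `model.generate` with custom stopping criteria. As a lighter-weight illustration, here is a minimal sketch (mine, not from the original card) that feeds the KO->EN template to transformers' `pipeline` API; it assumes the fp16 checkpoint `squarelike/Gugugo-koen-7B-V1.1` fits in GPU memory, and without the stopping criteria defined below, generation may run past the `</끝>` marker:

```python
# A minimal sketch (not from the original card): applying the KO->EN template
# through transformers' text-generation pipeline.
import torch
from transformers import pipeline

translator = pipeline(
    "text-generation",
    model="squarelike/Gugugo-koen-7B-V1.1",
    torch_dtype=torch.float16,
    device_map="auto",
)

# "μ•ˆλ…•ν•˜μ„Έμš”" = "Hello"; the prompt follows the KO->EN template above
prompt = "### ν•œκ΅­μ–΄: μ•ˆλ…•ν•˜μ„Έμš”</끝>\n### μ˜μ–΄:"
out = translator(prompt, max_new_tokens=64, do_sample=False)
print(out[0]["generated_text"])  # includes the prompt followed by the translation
```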

## **Implementation Code**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
import torch

repo = "squarelike/Gugugo-koen-7B-V1.1"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    load_in_4bit=True,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)

class StoppingCriteriaSub(StoppingCriteria):
    def __init__(self, stops=[], encounters=1):
        super().__init__()
        self.stops = [stop for stop in stops]

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
        # Stop as soon as the tail of the generated sequence matches any stop sequence
        for stop in self.stops:
            if torch.all((stop == input_ids[0][-len(stop):])).item():
                return True

        return False

# Token-id sequences for variants of the "</끝>" end marker
stop_words_ids = torch.tensor([[829, 45107, 29958], [1533, 45107, 29958], [829, 45107, 29958], [21106, 45107, 29958]]).to("cuda")
stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)])

def gen(lan="en", x=""):
    # Build the prompt in the template above; `lan` is the source language
    if lan == "ko":
        prompt = f"### ν•œκ΅­μ–΄: {x}</끝>\n### μ˜μ–΄:"
    else:
        prompt = f"### μ˜μ–΄: {x}</끝>\n### ν•œκ΅­μ–΄:"
    gened = model.generate(
        **tokenizer(
            prompt,
            return_tensors='pt',
            return_token_type_ids=False
        ).to("cuda"),
        max_new_tokens=2000,
        temperature=0.1,
        do_sample=True,
        stopping_criteria=stopping_criteria
    )
    # Decode without the BOS token, then strip the prompt and the end marker
    return tokenizer.decode(gened[0][1:]).replace(prompt+" ", "").replace("</끝>", "")


print(gen(lan="en", x="Hello, world!"))
```
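
The snippet above loads the fp16 checkpoint and quantizes it on the fly with `load_in_4bit=True` (which requires `bitsandbytes`). To load the pre-quantized GPTQ weights in this repo instead, a minimal sketch follows; it assumes the repo ships a GPTQ quantization config that transformers (>= 4.32, with `optimum` and `auto-gptq` installed) can read:

```python
# A minimal sketch (an assumption, not from the original card): loading the
# pre-quantized GPTQ weights. transformers picks up the quantization config
# stored in the repo, so no load_in_4bit flag is needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "squarelike/Gugugo-koen-7B-V1.1-GPTQ"
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
# The gen() helper above works unchanged with this model and tokenizer.
```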