---
license: apache-2.0
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- en
- ko
pipeline_tag: translation
---
# Gugugo-koen-7B-V1.1
Project repo: [https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo)
![Gugugo](./logo.png)
**Base Model**: [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
**Training Dataset**: [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)

I trained the model on a single A6000 GPU for 90 hours.
## **Prompt Template**
**KO->EN**
```
### 한국어: {sentence}</끝>
### 영어:
```
**EN->KO**
```
### 영어: {sentence}</끝>
### 한국어:
```
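
Both directions use the same format with only the language labels ("한국어" = Korean, "영어" = English) swapped, and the `</끝>` ("end") marker terminates the source sentence. A minimal sketch of building a prompt per direction; the helper names `ko2en` and `en2ko` are illustrative, not part of the model:

```python
def ko2en(sentence: str) -> str:
    # Korean source first; the model completes the English line.
    return f"### 한국어: {sentence}</끝>\n### 영어:"

def en2ko(sentence: str) -> str:
    # English source first; the model completes the Korean line.
    return f"### 영어: {sentence}</끝>\n### 한국어:"

print(en2ko("Nice to meet you!"))
# ### 영어: Nice to meet you!</끝>
# ### 한국어:
```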
## **Implementation Code**
```python
from vllm import LLM, SamplingParams

def make_prompt(data):
    # Wrap each English sentence in the EN->KO prompt template.
    prompts = []
    for line in data:
        prompts.append(f"### 영어: {line}</끝>\n### 한국어:")
    return prompts

texts = [
    "Hello world!",
    "Nice to meet you!"
]

prompts = make_prompt(texts)

# Near-greedy decoding; generation stops at the end-of-translation marker.
sampling_params = SamplingParams(temperature=0.01, stop=["</끝>"], max_tokens=700)
llm = LLM(model="squarelike/Gugugo-koen-7B-V1.1-AWQ", quantization="awq", dtype="half")

outputs = llm.generate(prompts, sampling_params)

# Print the generated Korean translations.
for output in outputs:
    print(output.outputs[0].text)
```
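
If vLLM is unavailable, the full-precision model implied by this card's title can be run with plain Hugging Face `transformers`. A minimal sketch, assuming the standard `AutoModelForCausalLM` API; instead of a custom stopping criterion, the decoded output is simply trimmed at the `</끝>` marker:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "squarelike/Gugugo-koen-7B-V1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# EN->KO prompt, built exactly as in the template above.
prompt = "### 영어: Hello world!</끝>\n### 한국어:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=700, do_sample=False)

# Decode only the newly generated tokens and trim at the marker.
generated = output[0][inputs["input_ids"].shape[1]:]
text = tokenizer.decode(generated, skip_special_tokens=True)
print(text.split("</끝>")[0].strip())
```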