RichardErkhov committed aa6eb1f (parent: d9ce49e): uploaded readme

Files changed (1): README.md (+96 lines)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


# gemma-2b-translation-v0.103 - GGUF

- Model creator: https://huggingface.co/lemon-mint/
- Original model: https://huggingface.co/lemon-mint/gemma-2b-translation-v0.103/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2b-translation-v0.103.Q2_K.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q2_K.gguf) | Q2_K | 1.08GB |
| [gemma-2b-translation-v0.103.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [gemma-2b-translation-v0.103.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [gemma-2b-translation-v0.103.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [gemma-2b-translation-v0.103.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [gemma-2b-translation-v0.103.Q3_K.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q3_K.gguf) | Q3_K | 1.29GB |
| [gemma-2b-translation-v0.103.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [gemma-2b-translation-v0.103.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [gemma-2b-translation-v0.103.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [gemma-2b-translation-v0.103.Q4_0.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q4_0.gguf) | Q4_0 | 1.44GB |
| [gemma-2b-translation-v0.103.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [gemma-2b-translation-v0.103.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [gemma-2b-translation-v0.103.Q4_K.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q4_K.gguf) | Q4_K | 1.52GB |
| [gemma-2b-translation-v0.103.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [gemma-2b-translation-v0.103.Q4_1.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q4_1.gguf) | Q4_1 | 1.56GB |
| [gemma-2b-translation-v0.103.Q5_0.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q5_0.gguf) | Q5_0 | 1.68GB |
| [gemma-2b-translation-v0.103.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [gemma-2b-translation-v0.103.Q5_K.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q5_K.gguf) | Q5_K | 1.71GB |
| [gemma-2b-translation-v0.103.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [gemma-2b-translation-v0.103.Q5_1.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q5_1.gguf) | Q5_1 | 1.79GB |
| [gemma-2b-translation-v0.103.Q6_K.gguf](https://huggingface.co/RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf/blob/main/gemma-2b-translation-v0.103.Q6_K.gguf) | Q6_K | 1.92GB |
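As a sketch of how to choose among the files above (the helper functions here are hypothetical, not part of this repo; the names and sizes are taken from the table), you can encode the table as data and pick the largest quant that fits a disk or RAM budget:

```python
# Hypothetical helper: pick the highest-quality quant under a size budget.
# (name, approximate size in GB) pairs copied from the table above.
QUANTS = [
    ("Q2_K", 1.08), ("IQ3_XS", 1.16), ("IQ3_S", 1.20), ("Q3_K_S", 1.20),
    ("IQ3_M", 1.22), ("Q3_K", 1.29), ("Q3_K_M", 1.29), ("Q3_K_L", 1.36),
    ("IQ4_XS", 1.40), ("Q4_0", 1.44), ("IQ4_NL", 1.45), ("Q4_K_S", 1.45),
    ("Q4_K", 1.52), ("Q4_K_M", 1.52), ("Q4_1", 1.56), ("Q5_0", 1.68),
    ("Q5_K_S", 1.68), ("Q5_K", 1.71), ("Q5_K_M", 1.71), ("Q5_1", 1.79),
    ("Q6_K", 1.92),
]

def pick_quant(budget_gb: float) -> str:
    """Return the largest quant at or under budget_gb (larger ~ higher quality)."""
    fitting = [(size, name) for name, size in QUANTS if size <= budget_gb]
    if not fitting:
        raise ValueError(f"no quant fits within {budget_gb} GB")
    return max(fitting)[1]

def gguf_filename(quant: str) -> str:
    # File naming follows the table: <model>.<QUANT>.gguf
    return f"gemma-2b-translation-v0.103.{quant}.gguf"
```

The resulting filename can then be downloaded from the `RichardErkhov/lemon-mint_-_gemma-2b-translation-v0.103-gguf` repo, for example with `huggingface_hub.hf_hub_download`.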



Original model description:

---
library_name: transformers
language:
- ko
license: gemma
tags:
- gemma
- pytorch
- instruct
- finetune
- translation
widget:
- messages:
  - role: user
    content: "Hamsters don't eat cats."
inference:
  parameters:
    max_new_tokens: 2048
base_model: beomi/gemma-ko-2b
datasets:
- traintogpb/aihub-flores-koen-integrated-sparta-30k
pipeline_tag: text-generation
---


# Gemma 2B Translation v0.103

- Eval Loss: `1.34507`
- Train Loss: `1.40326`
- lr: `3e-05`
- optimizer: adamw
- lr_scheduler_type: cosine

## Prompt Template

```
<bos>### English

Hamsters don't eat cats.

### Korean

햄스터는 고양이를 먹지 않습니다.<eos>
```
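A minimal sketch of filling the template programmatically (`build_prompt` is a hypothetical helper, not part of the model card; `<bos>`/`<eos>` are normally inserted by the tokenizer, so they are omitted here):

```python
def build_prompt(english_text: str) -> str:
    """Format an English sentence into the card's translation prompt.

    The model is expected to continue generation after '### Korean'
    with the Korean translation.
    """
    return f"### English\n\n{english_text}\n\n### Korean\n\n"
```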

## Model Description

- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [beomi/gemma-ko-2b](https://huggingface.co/beomi/gemma-ko-2b)