lmg-anon committed
Commit f791d7d · verified · 1 Parent(s): 79930fd

Create README.md

Files changed (1):
  1. README.md +96 -0
README.md ADDED
---
license: llama3
datasets:
- lmg-anon/VNTL-v5-1k
language:
- ja
- en
library_name: transformers
base_model: rinna/llama-3-youko-8b
pipeline_tag: translation
---

# Summary

This is a [LLaMA 3 Youko](https://huggingface.co/rinna/llama-3-youko-8b) QLoRA fine-tune, created using a new version of the VNTL dataset. The purpose of this fine-tune is to improve the performance of LLMs at translating Japanese visual novels into English.

Unlike the previous version, this one doesn't include the "chat mode".

## Notes

For this new version of VNTL 8B, I've rebuilt and expanded VNTL's dataset from the ground up, and I'm happy to say it performs really well, outperforming the previous version in both accuracy and stability. It makes far fewer mistakes than its predecessor, even when running at high temperatures (though I still recommend temperature 0 for the best accuracy).

Some major changes in this version:
- **Switched to the default LLaMA 3 prompt format, since people had trouble with the custom one**
- **Added proper support for multi-line translations** (the old version only handled single lines)
- Overall better translation accuracy

One thing to note: while the translations are more accurate, they tend to be more literal compared to the previous version.

## Sampling Recommendations

For optimal results, it's highly recommended to use neutral sampling parameters (temperature 0 with no repetition penalty) when using this model.
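
For example, with the Transformers `generate` API, neutral sampling boils down to greedy decoding with the repetition penalty left at 1.0. The snippet below is only a sketch: the model path is a placeholder, and it assumes you have a merged copy of the model (if you only have the LoRA adapter, load it on top of `rinna/llama-3-youko-8b` with PEFT instead).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/this-model"  # placeholder: replace with the actual repo id or local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

prompt = "..."  # a VNTL-formatted prompt; see the "Translation Prompt" section below

# add_special_tokens=False because the VNTL prompt already starts with <|begin_of_text|>
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=128,      # assumption: adjust to the expected translation length
    do_sample=False,         # greedy decoding, i.e. temperature 0
    repetition_penalty=1.0,  # no repetition penalty
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```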

## Training Details

This fine-tune was done using hyperparameters similar to the [previous version](https://huggingface.co/lmg-anon/vntl-llama3-8b-qlora). The only difference is the dataset, which is a brand-new one.

- Rank: 128
- Alpha: 32
- Effective Batch Size: **45**
- Warmup Ratio: 0.02
- Learning Rate: **6e-5**
- Embedding Learning Rate: **1e-5**
- Optimizer: **grokadamw**
- LR Schedule: cosine
- Weight Decay: 0.01

**Train Loss**: 0.42
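
For anyone reproducing the setup, the rank and alpha above map onto a PEFT `LoraConfig` roughly as sketched below. The dropout and target modules are assumptions on my part, not confirmed settings from this run.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,             # Rank
    lora_alpha=32,     # Alpha
    lora_dropout=0.0,  # assumption, not a confirmed setting
    bias="none",
    task_type="CAUSAL_LM",
    # assumption: the usual LLaMA attention + MLP projections
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```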

## Translation Prompt

This fine-tune uses the LLaMA 3 prompt format. Here is an example prompt for translation:
```
<|begin_of_text|><|start_header_id|>Metadata<|end_header_id|>

[character] Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
[character] Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female<|eot_id|><|start_header_id|>Japanese<|end_header_id|>

[桜乃]: 『……ごめん』<|eot_id|><|start_header_id|>English<|end_header_id|>

[Sakuno]: 『... Sorry.』<|eot_id|><|start_header_id|>Japanese<|end_header_id|>

[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」<|eot_id|><|start_header_id|>English<|end_header_id|>

[Shingo]: "Nah, I know it’s weird to say this, but I’m glad you got lost. You’re so cute, Sakuno, so I was really worried about you."<|eot_id|>
```

The generated translation for that prompt, with temperature 0, is:
```
[Shingo]: "Nah, I know it’s weird to say this, but I’m glad you got lost. You’re so cute, Sakuno, so I was really worried about you."
```
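
If you'd rather build this prompt programmatically, here's a minimal sketch that reproduces the format above. The `block` helper is purely illustrative (it isn't part of any library); the header names come straight from the example.

```python
def block(header: str, body: str, eot: bool = True) -> str:
    """Wrap a body in the LLaMA 3 header format used by this fine-tune."""
    text = f"<|start_header_id|>{header}<|end_header_id|>\n\n{body}"
    return text + ("<|eot_id|>" if eot else "")

metadata = (
    "[character] Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)\n"
    "[character] Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female"
)

prompt = (
    "<|begin_of_text|>"
    + block("Metadata", metadata)
    + block("Japanese", "[桜乃]: 『……ごめん』")
    + block("English", "[Sakuno]: 『... Sorry.』")
    + block("Japanese", "[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」")
    # Leave the final English block open for the model to complete.
    + block("English", "", eot=False)
)

# `prompt` now matches the example above, up to the line the model generates.
# Tokenize it with add_special_tokens=False, since <|begin_of_text|> is already included.
```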

### Trivia

The Metadata section isn't limited to character information: you can also add trivia and teach the model the correct way to pronounce words it struggles with.

Here's an example:
```
<|begin_of_text|><|start_header_id|>Metadata<|end_header_id|>

[character] Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
[character] Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
[element] Name: Murasamemaru (叒雨丸) | Type: Quality<|eot_id|><|start_header_id|>Japanese<|end_header_id|>

[桜乃]: 『……ごめん』<|eot_id|><|start_header_id|>English<|end_header_id|>

[Sakuno]: 『... Sorry.』<|eot_id|><|start_header_id|>Japanese<|end_header_id|>

[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は叒雨丸いから、いろいろ心配しちゃってたんだぞ俺」<|eot_id|><|start_header_id|>English<|end_header_id|>
```

The generated translation for that prompt, with temperature 0, is:
```
[Shingo]: "Nah, I know it’s not the best thing to say, but I’m glad you got lost. Sakuno’s Murasamemaru, so I was really worried about you, you know?"
```
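
If you generate these metadata lines from structured data, a tiny helper like the one below keeps the format consistent. This is just a sketch of the format shown above, not an official API.

```python
def element_entry(name: str, jp_name: str, type_: str) -> str:
    """Format an [element] metadata line, e.g. for a term the model mispronounces."""
    return f"[element] Name: {name} ({jp_name}) | Type: {type_}"

# The trivia line from the example above:
print(element_entry("Murasamemaru", "叒雨丸", "Quality"))
# -> [element] Name: Murasamemaru (叒雨丸) | Type: Quality
```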