This is an experimental LLaMA 2 7B LoRA created using the VNTL-v2.5-1k dataset.
This is an update of version 0.3 with the following changes:
- adamw_bnb_8bit -> adamw_8bit (the default optimizer in unsloth)
- 2 epochs -> 1 epoch (2 epochs seemed to increase eval loss)
- Added EOS after each translation pair.
Eval Loss: 0.72
This is a prompt example:
<<START>>
Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
<<JAPANESE>>
[桜乃]: 「……ごめん」
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: 「... Sorry.」</s>
<<JAPANESE>>
[新吾]: 「ううん、そう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだ、俺」
<<ENGLISH>> (fidelity = high)
The generated translation for that prompt, with temperature 0, is:
[Shingo]: 「No, don't apologize. I'm just glad you're safe. You're so cute, Sakuno, I was worried sick.」
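The prompt format above (a `<<START>>` metadata block, then alternating `<<JAPANESE>>`/`<<ENGLISH>>` blocks, with an EOS token after each completed translation pair and an open English block at the end for the model to fill) can be assembled programmatically. Below is a minimal sketch; the `build_prompt` helper and its signature are illustrative, not part of VNTL or its dataset tooling:

```python
def build_prompt(metadata, pairs, next_japanese, fidelity="high", eos="</s>"):
    """Assemble a VNTL-style prompt.

    metadata:      list of character-description lines
    pairs:         list of (japanese, english, fidelity) completed pairs
    next_japanese: the line to be translated; the prompt ends with an
                   open <<ENGLISH>> block for the model to complete
    """
    lines = ["<<START>>"]
    lines.extend(metadata)
    for jp, en, fid in pairs:
        lines.append("<<JAPANESE>>")
        lines.append(jp)
        lines.append(f"<<ENGLISH>> (fidelity = {fid})")
        lines.append(en + eos)  # EOS after each finished translation pair
    lines.append("<<JAPANESE>>")
    lines.append(next_japanese)
    lines.append(f"<<ENGLISH>> (fidelity = {fidelity})")
    return "\n".join(lines)


metadata = [
    "Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)",
    "Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female",
]
pairs = [("[桜乃]: 「……ごめん」", "[Sakuno]: 「... Sorry.」", "absolute")]
prompt = build_prompt(metadata, pairs, "[新吾]: 「ううん、そう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだ、俺」")
print(prompt)
```

The resulting string can be fed directly to the model (e.g. via `transformers` text generation); the completion after the final `<<ENGLISH>>` line is the translation.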