This is an experimental llama2 13B QLoRA trained on the VNTL-v2-1k dataset.

This fine-tune was done to see whether 13B would give better results than 7B, but its improvements over the base model seem negligible to me, and the eval scores corroborate this. This fine-tune still needs more testing, though: I currently have no way to score the models automatically (other than the eval score), so I will probably look into that soon.
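
One possible route for automatic scoring is a reference-based metric such as BLEU. Below is a minimal sketch using sacrebleu; the metric choice, the data, and the exact line format are illustrative assumptions, not something this card specifies:

```python
# Hypothetical scoring sketch: compare model translations against human
# reference translations with corpus-level BLEU via sacrebleu.
import sacrebleu

# Illustrative data: one model output and one aligned reference translation.
hypotheses = ["[Sakuno]: γ€Ž... Sorry.』"]
references = [["[Sakuno]: γ€Ž... I'm sorry.』"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")
```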

Here is a prompt example:

```
<<START>>
Name: Uryuu Shingo (η“œη”Ÿ 新吾) | Gender: Male | Aliases: Onii-chan (γŠε…„γ‘γ‚ƒγ‚“)
Name: Uryuu Sakuno (η“œη”Ÿ ζ‘œδΉƒ) | Gender: Female
<<JAPANESE>>
[ζ‘œδΉƒ]: γ€Žβ€¦β€¦γ”γ‚γ‚“γ€
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: γ€Ž... Sorry.』
<<JAPANESE>>
[新吾]: γ€Œγ†γ†γ‚“γ€γ“γ†θ¨€γ£γ‘γ‚ƒγͺγ‚“γ γ‘γ©γ€θΏ·ε­γ§γ‚ˆγ‹γ£γŸγ‚ˆγ€‚ζ‘œδΉƒγ―ε―ζ„›γ„γ‹γ‚‰γ€γ„γ‚γ„γ‚εΏƒι…γ—γ‘γ‚ƒγ£γ¦γŸγ‚“γ γžδΏΊγ€
<<ENGLISH>> (fidelity = high)
```

The generated translation for that prompt, with temperature 0, is:

{TBD}
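
For reference, here is a minimal inference sketch that loads this adapter on top of a llama2 13B base model and greedy-decodes the prompt above (greedy decoding is equivalent to temperature 0). The base checkpoint ID and the generation length are assumptions, not something this card specifies:

```python
# Minimal inference sketch: load the base llama2 13B weights, apply this
# LoRA adapter with peft, and greedy-decode the example prompt.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "meta-llama/Llama-2-13b-hf"        # assumed base checkpoint
ADAPTER_ID = "lmg-anon/vntl-13b-v0.1-qlora"  # this adapter

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER_ID)

prompt = """<<START>>
Name: Uryuu Shingo (η“œη”Ÿ 新吾) | Gender: Male | Aliases: Onii-chan (γŠε…„γ‘γ‚ƒγ‚“)
Name: Uryuu Sakuno (η“œη”Ÿ ζ‘œδΉƒ) | Gender: Female
<<JAPANESE>>
[ζ‘œδΉƒ]: γ€Žβ€¦β€¦γ”γ‚γ‚“γ€
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: γ€Ž... Sorry.』
<<JAPANESE>>
[新吾]: γ€Œγ†γ†γ‚“γ€γ“γ†θ¨€γ£γ‘γ‚ƒγͺγ‚“γ γ‘γ©γ€θΏ·ε­γ§γ‚ˆγ‹γ£γŸγ‚ˆγ€‚ζ‘œδΉƒγ―ε―ζ„›γ„γ‹γ‚‰γ€γ„γ‚γ„γ‚εΏƒι…γ—γ‘γ‚ƒγ£γ¦γŸγ‚“γ γžδΏΊγ€
<<ENGLISH>> (fidelity = high)
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Print only the newly generated tokens (the translated line).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```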