---
language:
- en
- ja
license: apache-2.0
tags:
- llama
base_model: NilanE/tinyllama-relora-merge
datasets:
- NilanE/ParallelFiction-Ja_En-100k
---
|
|
|
Trained for 2 epochs on NilanE/ParallelFiction-Ja_En-100k using QLoRA. A CPO tune is in progress.
|
|
|
Input should be 500-1000 tokens long. For deterministic outputs, set `do_sample=False` if using HF transformers for inference, or otherwise set the temperature to 0.
|
|
|
## Prompt format: |
|
```
Translate this from Japanese to English:

### JAPANESE:
{source_text}

### ENGLISH:
```
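
A minimal sketch of how the prompt above might be assembled and used with HF transformers for deterministic inference. The `build_prompt` and `translate` helper names are illustrative, not part of the model card; the generation settings (`max_new_tokens`, greedy decoding via `do_sample=False`) are assumptions beyond what the card specifies.

```python
def build_prompt(source_text: str) -> str:
    """Assemble the prompt in the format the model expects."""
    return (
        "Translate this from Japanese to English:\n\n"
        "### JAPANESE:\n"
        f"{source_text}\n\n"
        "### ENGLISH:\n"
    )


def translate(source_text: str) -> str:
    """Hypothetical inference helper; loads the base model weights.

    Imports are local so build_prompt stays usable without transformers.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "NilanE/tinyllama-relora-merge"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer(build_prompt(source_text), return_tensors="pt")
    # do_sample=False selects greedy decoding, i.e. deterministic output,
    # matching the card's advice (equivalent to temperature 0 elsewhere).
    output_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
    # Strip the prompt tokens, keeping only the generated translation.
    generated = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True)
```

Keeping the source text within the recommended 500-1000 token range before calling a helper like this should give the best results.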