NilanE committed
Commit 6df9c4f
1 Parent(s): 80f6ec2

Update README.md

Files changed (1)
  1. README.md +7 -11
README.md CHANGED
@@ -1,22 +1,18 @@
  ---
  language:
  - en
+ - ja
  license: apache-2.0
  tags:
- - text-generation-inference
- - transformers
- - unsloth
  - llama
- - trl
  base_model: NilanE/tinyllama-relora-merge
+ datasets:
+ - NilanE/ParallelFiction-Ja_En-100k
  ---

- # Uploaded model
+ Trained for 2 epochs on NilanE/ParallelFiction-Ja_En-100k using QLoRA. A CPO tune is in progress.

- - **Developed by:** NilanE
- - **License:** apache-2.0
- - **Finetuned from model :** NilanE/tinyllama-relora-merge
+ Input should be 500-1000 tokens long. If using HF transformers for inference, set `do_sample = False`; otherwise, set the temperature to 0 for deterministic outputs.

- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ ## Prompt format
+ """Translate this from Japanese to English:\n### JAPANESE:\n{source_text}\n### ENGLISH:\n"""