Thimira committed
Commit 4fe3950
1 Parent(s): 6050b14

Update README.md

Files changed (1)
  1. README.md +26 -3
README.md CHANGED
@@ -4,12 +4,16 @@ tags:
 - trl
 - sft
 - generated_from_trainer
+- text-generation-inference
 base_model: NousResearch/Llama-2-7b-chat-hf
 datasets:
 - generator
+- Thimira/sinhala-llama-2-data-format
 model-index:
 - name: sinhala-llama-2-7b-chat-hf
   results: []
+language:
+- si
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -17,15 +21,34 @@ should probably proofread and complete it, then remove this comment. -->
 
 # sinhala-llama-2-7b-chat-hf
 
-This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the generator dataset.
+This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the [Thimira/sinhala-llama-2-data-format](https://huggingface.co/datasets/Thimira/sinhala-llama-2-data-format) dataset.
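+
+For reference, the dataset can be loaded with the `datasets` library (a minimal sketch; it assumes the dataset's default configuration and split layout):
+
+```python
+from datasets import load_dataset
+
+# Load the Sinhala instruction-tuning data from the Hugging Face Hub
+dataset = load_dataset("Thimira/sinhala-llama-2-data-format")
+print(dataset)
+```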
 
 ## Model description
 
-More information needed
+This is a model for Sinhala language text generation, fine-tuned from the base Llama-2-7b-chat-hf model.
+
+Currently the capabilities of the model are extremely limited, and it requires further data and fine-tuning to be useful. Feel free to experiment with the model and provide feedback.
+
+### Usage example
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
+
+# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
+tokenizer = AutoTokenizer.from_pretrained("Thimira/sinhala-llama-2-7b-chat-hf")
+model = AutoModelForCausalLM.from_pretrained("Thimira/sinhala-llama-2-7b-chat-hf")
+
+# Build a text-generation pipeline around the model
+pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
+
+# Prompt in Sinhala: "Can you understand the Sinhala language?"
+prompt = "ඔබට සිංහල භාෂාව තේරුම් ගත හැකිද?"
+
+# Wrap the prompt in the LLaMA 2 instruction format before generating
+result = pipe(f"<s>[INST] {prompt} [/INST]")
+print(result[0]['generated_text'])
+```
 
 ## Intended uses & limitations
 
-More information needed
+The Sinhala-LLaMA models are intended for assistant-like chat in the Sinhala language.
+
+To get the expected features and performance from these models, the LLaMA 2 prompt format needs to be followed, including the [INST] and <<SYS>> tags, the BOS and EOS tokens, and the whitespace and line breaks in between, as sketched below.
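+
+A sketch of the standard LLaMA 2 chat template (`{system_prompt}` and `{user_message}` are illustrative placeholders, not literal tokens):
+
+```
+<s>[INST] <<SYS>>
+{system_prompt}
+<</SYS>>
+
+{user_message} [/INST]
+```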
 
 ## Training and evaluation data