Thimira committed on
Commit 5451f69
1 Parent(s): cc011da

Update README.md

Files changed (1)
  1. README.md +27 -4
README.md CHANGED
@@ -4,12 +4,16 @@ tags:
  - trl
  - sft
  - generated_from_trainer
+ - text-generation-inference
  base_model: NousResearch/Llama-2-7b-chat-hf
  datasets:
- - generator
+ - Thimira/sinhala-llm-dataset-llama-prompt-format
  model-index:
  - name: sinhala-llama-2-7b-chat-hf
    results: []
+ license: llama2
+ language:
+ - si
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -17,15 +21,34 @@ should probably proofread and complete it, then remove this comment. -->

  # sinhala-llama-2-7b-chat-hf

- This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the generator dataset.
+ This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the [Thimira/sinhala-llm-dataset-llama-prompt-format](https://huggingface.co/datasets/Thimira/sinhala-llm-dataset-llama-prompt-format) dataset.

  ## Model description

- More information needed
+ This is a model for Sinhala language text generation, fine-tuned from the base llama-2-7b-chat-hf model.
+
+ Currently the capabilities of the model are extremely limited, and it requires further data and fine-tuning to be useful. Feel free to experiment with the model and provide feedback.
+
+ ### Usage example
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
+
+ tokenizer = AutoTokenizer.from_pretrained("Thimira/sinhala-llama-2-7b-chat-hf")
+ model = AutoModelForCausalLM.from_pretrained("Thimira/sinhala-llama-2-7b-chat-hf")
+
+ pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
+
+ prompt = "ඔබට සිංහල භාෂාව තේරුම් ගත හැකිද?"
+ result = pipe(f"<s>[INST] {prompt} [/INST]")
+ print(result[0]['generated_text'])
+ ```

  ## Intended uses & limitations

- More information needed
+ The Sinhala-LLaMA models are intended for assistant-like chat in the Sinhala language.
+
+ To get the expected features and performance from these models, the LLaMA 2 prompt format needs to be followed, including the [INST] and <<SYS>> tags, the BOS and EOS tokens, and the whitespace and line breaks in between.

  ## Training and evaluation data

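The prompt-format note added above corresponds to the standard LLaMA 2 chat template. Below is a minimal sketch of building such a prompt with a system turn; the system prompt text and the reuse of `pipe` from the usage example are illustrative assumptions, not part of the commit:

```python
# Standard LLaMA 2 chat template: the BOS token, an [INST] ... [/INST] pair
# around the user turn, and an optional <<SYS>> block inside the first turn.
system_prompt = "You are a helpful assistant who replies in Sinhala."  # illustrative placeholder
user_message = "ඔබට සිංහල භාෂාව තේරුම් ගත හැකිද?"  # "Can you understand the Sinhala language?"

prompt = (
    "<s>[INST] <<SYS>>\n"
    f"{system_prompt}\n"
    "<</SYS>>\n\n"
    f"{user_message} [/INST]"
)

result = pipe(prompt)  # `pipe` as constructed in the card's usage example
# The pipeline echoes the prompt, so keep only the text after [/INST].
response = result[0]["generated_text"].split("[/INST]")[-1].strip()
print(response)
```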
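The trl and sft tags in the card's metadata suggest the model was produced with TRL's supervised fine-tuning, but the commit does not include the training script. The following is only a minimal sketch under that assumption: the hyperparameters are illustrative, and the dataset is assumed to expose LLaMA-formatted prompts in a `text` column (keyword placement differs across TRL versions).

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Dataset named in the card; assumed to hold LLaMA-formatted prompts in a "text" column.
dataset = load_dataset("Thimira/sinhala-llm-dataset-llama-prompt-format", split="train")

args = TrainingArguments(
    output_dir="sinhala-llama-2-7b-chat-hf",
    per_device_train_batch_size=4,  # illustrative values, not from the commit
    num_train_epochs=1,
    learning_rate=2e-4,
)

trainer = SFTTrainer(
    model="NousResearch/Llama-2-7b-chat-hf",  # base model named in the card
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",  # older TRL API; newer versions take this via SFTConfig
)
trainer.train()
```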