Commit 4027968 by Thimira (1 parent: d525ff2)

Update README.md

Files changed (1): README.md (+21 −2)
README.md CHANGED
@@ -25,11 +25,30 @@ This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://
 
 ## Model description
 
-More information needed
+This is a model for Sinhala language text generation, fine-tuned from the base llama-2-7b-chat-hf model.
+
+Currently the capabilities of the model are extremely limited; it requires further data and fine-tuning to be useful. Feel free to experiment with the model and provide feedback.
+
+### Usage example
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
+
+tokenizer = AutoTokenizer.from_pretrained("Thimira/sinhala-llama-2-7b-chat-hf")
+model = AutoModelForCausalLM.from_pretrained("Thimira/sinhala-llama-2-7b-chat-hf")
+
+pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
+
+prompt = "ඔබට සිංහල භාෂාව තේරුම් ගත හැකිද?"
+result = pipe(f"<s>[INST] {prompt} [/INST]")
+print(result[0]['generated_text'])
+```
 
 ## Intended uses & limitations
 
-More information needed
+The Sinhala-LLaMA models are intended for assistant-like chat in the Sinhala language.
+
+To get the expected features and performance from these models, the LLaMA 2 prompt format needs to be followed, including the INST and <<SYS>> tags, the BOS and EOS tokens, and the whitespace and line breaks in between.
 
 ## Training and evaluation data
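
The added lines note that the LLaMA 2 prompt format (INST and <<SYS>> tags) must be followed. As a sketch, a small helper that assembles a single-turn prompt in that format could look like the following; the function name and the example system prompt are illustrative, not part of the model card:

```python
from typing import Optional


def build_llama2_prompt(user_message: str, system_prompt: Optional[str] = None) -> str:
    """Assemble a single-turn LLaMA 2 chat prompt.

    Wraps the user message in [INST] ... [/INST], optionally embedding a
    <<SYS>> block. The leading <s> BOS token matches the usage example in
    the model card; the EOS token is emitted by the model itself.
    """
    if system_prompt:
        sys_block = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    else:
        sys_block = ""
    return f"<s>[INST] {sys_block}{user_message} [/INST]"


# Same prompt as in the model card's usage example, with an added system prompt.
prompt = build_llama2_prompt(
    "ඔබට සිංහල භාෂාව තේරුම් ගත හැකිද?",
    system_prompt="You are a helpful assistant that answers in Sinhala.",
)
```

The resulting string can then be passed directly to the `pipeline` call shown in the usage example above.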