asif00 committed
Commit
e2445ff
1 Parent(s): 4ebe724

Update README.md

Files changed (1)
  1. README.md +62 -1
README.md CHANGED
@@ -11,4 +11,65 @@ base_model: unsloth/llama-3-8b-bnb-4bit
  pipeline_tag: question-answering
  ---

- - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
+ # How to Use:
+
+ You can use the model with a pipeline as a high-level helper, or load the model directly. Here's how:
+
+ ```python
+ # Use a pipeline as a high-level helper.
+ # Note: although the model card is tagged question-answering, this is a
+ # causal language model, so it loads under the "text-generation" task.
+ from transformers import pipeline
+
+ pipe = pipeline("text-generation", model="asif00/bangla-llama-16bit")
+ ```
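+
+ For example, a minimal sketch of calling the pipeline (`prompt`, `question`, and `context` are defined in the sections below; `max_new_tokens` is an illustrative choice):
+
+ ```python
+ # Generate from the formatted prompt; the pipeline returns a list of dicts
+ result = pipe(prompt.format(question, context, ""), max_new_tokens=256)
+ print(result[0]["generated_text"])
+ ```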
+
+ ```python
+ # Load the tokenizer and model directly
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("asif00/bangla-llama-16bit")
+ model = AutoModelForCausalLM.from_pretrained("asif00/bangla-llama-16bit")
+ model.to("cuda")  # generate_response below moves its inputs to the GPU
+ ```
+
+ # General Prompt Structure:
+
+ ```python
+ prompt = """Below is an instruction in Bengali language that describes a task, paired with an input also in Bengali language that provides further context. Write a response in Bengali language that appropriately completes the request.
+
+ ### Instruction:
+ {}
+
+ ### Input:
+ {}
+
+ ### Response:
+ {}
+ """
+ ```
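+
+ The three `{}` slots are filled in order: the instruction, the input, and an empty response section for the model to complete. For illustration:
+
+ ```python
+ # Fill the template, leaving the response slot empty
+ # (this mirrors what generate_response does below)
+ filled_prompt = prompt.format(question, context, "")
+ print(filled_prompt)
+ ```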
+
+ # Cleaning Up the Response:
+
+ To get a cleaned-up version of the response, you can use the `generate_response` function:
+
+ ```python
+ def generate_response(question, context):
+     # Build the prompt, leaving the response section empty for the model to fill
+     inputs = tokenizer([prompt.format(question, context, "")], return_tensors="pt").to("cuda")
+     outputs = model.generate(**inputs, max_new_tokens=1024, use_cache=True)
+     # Decode the full sequence, then keep only the text after "### Response:"
+     responses = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
+     response_start = responses.find("### Response:") + len("### Response:")
+     response = responses[response_start:].strip()
+     return response
+ ```
+
+ # Example Usage:
+
+ ```python
+ # Question: "When did the Indian Bengali fiction writer Mahasweta Devi die?"
+ question = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর মৃত্যু কবে হয় ?"
+ # Context: she was admitted to Belle Vue Clinic in Kolkata after a heart
+ # attack on 23 July 2016 and died of multiple organ failure on 28 July 2016.
+ context = "২০১৬ সালের ২৩ জুলাই হৃদরোগে আক্রান্ত হয়ে মহাশ্বেতা দেবী কলকাতার বেল ভিউ ক্লিনিকে ভর্তি হন। সেই বছরই ২৮ জুলাই একাধিক অঙ্গ বিকল হয়ে তাঁর মৃত্যু ঘটে। তিনি মধুমেহ, সেপ্টিসেমিয়া ও মূত্র সংক্রমণ রোগেও ভুগছিলেন।"
+ answer = generate_response(question, context)
+ print(answer)
+ ```
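+
+ Since the context states that Mahasweta Devi died on 28 July 2016, a correct response should contain that date (২৮ জুলাই ২০১৬).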
+
+
+ # Disclaimer:
+
+ The Bangla LLaMA model has been trained on a limited dataset, and its responses may not always be perfect or accurate. The model's performance depends on the quality and quantity of its training data. Given more resources, such as high-quality data and longer training time, the model's performance could be improved significantly.
+
+
+ # Resources:
+
+ Work in progress...