asif00 committed
Commit 66dfc83
1 Parent(s): 67354a2

Update README.md

Files changed (1): README.md +51 -15
README.md CHANGED
library_name: transformers
pipeline_tag: question-answering
---
 
Model Description:

Bangla LLaMA-4bit is a specialized model for context-based question answering and Bengali retrieval-augmented generation (RAG). It is derived from Llama 3 8B and fine-tuned on the iamshnoo/alpaca-cleaned-bengali dataset. The model is designed to produce accurate Bengali responses grounded in the supplied context, and it integrates with the transformers library, making it easy to use for context-based question answering and Bengali RAG in projects.

Model Details:

- Model Family: Llama 3 8B
- Language: Bengali
- Use Case: Context-Based Question Answering, Bengali Retrieval-Augmented Generation
- Dataset: iamshnoo/alpaca-cleaned-bengali (51,760 samples)
- Training Loss: 0.4038
- Global Steps: 647
- Batch Size: 80
- Epochs: 1
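
(Note: 647 global steps × a batch size of 80 = 51,760 examples, i.e., a single pass over the dataset.)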

How to Use:

You can use the model through a high-level pipeline helper or load it directly. Here's how:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="asif00/bangla-llama-4bit")
```
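
If the pipeline loads, a call follows the standard question/context keyword convention of transformers question-answering pipelines. A minimal sketch (the Bengali strings are borrowed from the Example Usage section below):

```python
# Minimal sketch of the standard QA pipeline call convention.
# Question: "When did the Indian Bengali writer Mahasweta Devi die?"
result = pipe(
    question="ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর মৃত্যু কবে হয় ?",
    context="২০১৬ সালের ২৩ জুলাই হৃদরোগে আক্রান্ত হয়ে মহাশ্বেতা দেবী কলকাতার বেল ভিউ ক্লিনিকে ভর্তি হন। সেই বছরই ২৮ জুলাই একাধিক অঙ্গ বিকল হয়ে তাঁর মৃত্যু ঘটে।",
)
print(result["answer"])  # QA pipelines return a dict with an "answer" key
```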

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("asif00/bangla-llama-4bit")
model = AutoModelForCausalLM.from_pretrained("asif00/bangla-llama-4bit")
```
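
Because this is a 4-bit checkpoint, you will typically want a quantization-aware load on a GPU. A minimal sketch, assuming the bitsandbytes package and a CUDA device; the exact quantization settings below are illustrative assumptions, not values published on this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit quantization config (assumes bitsandbytes is installed).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("asif00/bangla-llama-4bit")
model = AutoModelForCausalLM.from_pretrained(
    "asif00/bangla-llama-4bit",
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU(s)
)
```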

General Prompt Structure:

```python
prompt = """Below is an instruction in Bengali language that describes a task, paired with an input also in Bengali language that provides further context. Write a response in Bengali language that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}
"""
```
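
The three `{}` placeholders are filled in order with the instruction (the question), the input (the context), and an empty response slot for the model to complete. A tiny illustration, with hypothetical placeholder strings:

```python
# Illustrative only: the slots are (instruction, input, empty response).
filled = prompt.format(
    "মহাশ্বেতা দেবী কে ছিলেন?",          # instruction ("Who was Mahasweta Devi?")
    "তিনি একজন ভারতীয় বাঙালি লেখিকা।",  # input/context ("She was an Indian Bengali writer.")
    "",                                  # response slot left empty for generation
)
print(filled)
```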

To get a cleaned-up version of the response, you can use the `generate_response` function:

```python
def generate_response(question, context):
    # Fill the prompt template (response slot left empty) and move tensors to the GPU.
    inputs = tokenizer([prompt.format(question, context, "")], return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=1024, use_cache=True)
    # Decode the full sequence, then keep only the text after the "### Response:" marker.
    responses = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    response_start = responses.find("### Response:") + len("### Response:")
    response = responses[response_start:].strip()
    return response
```

Example Usage:

```python
# Question: "When did the Indian Bengali fiction writer Mahasweta Devi die?"
question = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর মৃত্যু কবে হয় ?"
# Context: her 2016 hospitalization in Kolkata and death from multiple organ failure.
context = "২০১৬ সালের ২৩ জুলাই হৃদরোগে আক্রান্ত হয়ে মহাশ্বেতা দেবী কলকাতার বেল ভিউ ক্লিনিকে ভর্তি হন। সেই বছরই ২৮ জুলাই একাধিক অঙ্গ বিকল হয়ে তাঁর মৃত্যু ঘটে। তিনি মধুমেহ, সেপ্টিসেমিয়া ও মূত্র সংক্রমণ রোগেও ভুগছিলেন।"
answer = generate_response(question, context)
print(answer)
```

Disclaimer:

The Bangla LLaMA-4bit model has been trained on a limited dataset, and its responses may not always be perfect or accurate. Its performance depends on the quality and quantity of the data it was trained on; with more resources, such as higher-quality data and longer training, its performance could improve significantly.