---
language:
- bn
license: apache-2.0
tags:
- transformers
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
library_name: transformers
pipeline_tag: question-answering
datasets:
- iamshnoo/alpaca-cleaned-bengali
---
Bangla LLaMA-4bit is a specialized model for context-based question answering and Bengali retrieval-augmented generation (RAG). It is derived from LLaMA 3 8B and trained on the iamshnoo/alpaca-cleaned-bengali dataset. The model is designed to produce accurate, contextually grounded responses in Bengali. It integrates with the transformers library, making it easy to use for context-based question answering and Bengali RAG in your projects.

# Model Details:

- Model Family: Llama 3 8B
- Language: Bengali
- Use Case: Context-Based Question Answering, Bengali Retrieval-Augmented Generation
- Dataset: iamshnoo/alpaca-cleaned-bengali (51,760 samples)
- Training Loss: 0.4038
- Global Steps: 647 (51,760 samples ÷ batch size 80 = 647 steps for one epoch)
- Batch Size: 80
- Epochs: 1


# How to Use:

You can use the model through a high-level pipeline helper or load it directly. Here's how:

```python
# Use a pipeline as a high-level helper. The model is a causal language
# model, so the "text-generation" task applies; the extractive
# "question-answering" pipeline expects a span-prediction head that this
# architecture does not provide.
from transformers import pipeline

pipe = pipeline("text-generation", model="asif00/bangla-llama-4bit")
```
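
A minimal usage sketch, assuming the `prompt` template defined under "General Prompt Structure" below (the question and context strings are the ones from the Example Usage section):

```python
# Fill the instruction and input slots, leave the response slot empty, and
# let the model complete it. return_full_text=False strips the echoed prompt.
question = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর মৃত্যু কবে হয় ?"
context = "২০১৬ সালের ২৩ জুলাই হৃদরোগে আক্রান্ত হয়ে মহাশ্বেতা দেবী কলকাতার বেল ভিউ ক্লিনিকে ভর্তি হন। সেই বছরই ২৮ জুলাই একাধিক অঙ্গ বিকল হয়ে তাঁর মৃত্যু ঘটে।"
result = pipe(prompt.format(question, context, ""), max_new_tokens=256, return_full_text=False)
print(result[0]["generated_text"])
```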

```python
# Load model directly. device_map="auto" (requires the accelerate package)
# places the 4-bit weights on an available GPU, which the generation code
# below assumes.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("asif00/bangla-llama-4bit")
model = AutoModelForCausalLM.from_pretrained(
    "asif00/bangla-llama-4bit",
    device_map="auto",
)
```

# General Prompt Structure: 

```python
prompt = """Below is an instruction in Bengali language that describes a task, paired with an input also in Bengali language that provides further context. Write a response in Bengali language that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}
"""
```
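
The three `{}` placeholders are filled, in order, with the Bengali instruction (the question), the input (the context), and an empty string for the response, which the model then completes. The `generate_response` helper below applies the template exactly this way.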

# Generating a Clean Response:

To get a cleaned-up version of the response, you can use the `generate_response` function:

```python
def generate_response(question, context):
    # Fill the prompt template, leaving the response slot empty for the
    # model to complete.
    inputs = tokenizer([prompt.format(question, context, "")], return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=1024, use_cache=True)
    # Decode the full sequence and keep only the text after the
    # "### Response:" marker.
    responses = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    response_start = responses.find("### Response:") + len("### Response:")
    return responses[response_start:].strip()
```
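
This assumes the `model` and `tokenizer` loaded above, with the model resident on a CUDA device (e.g. via `device_map="auto"`); otherwise the `.to("cuda")` call on the inputs will not match the model's device.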

# Example Usage:

```python
# Question: "When did the Indian Bengali fiction writer Mahasweta Devi die?"
question = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর মৃত্যু কবে হয় ?"
# Context (translated): "On 23 July 2016, Mahasweta Devi was admitted to the
# Belle Vue Clinic in Kolkata after a heart attack. On 28 July of that year
# she died of multiple organ failure. She also suffered from diabetes,
# septicemia, and a urinary tract infection."
context = "২০১৬ সালের ২৩ জুলাই হৃদরোগে আক্রান্ত হয়ে মহাশ্বেতা দেবী কলকাতার বেল ভিউ ক্লিনিকে ভর্তি হন। সেই বছরই ২৮ জুলাই একাধিক অঙ্গ বিকল হয়ে তাঁর মৃত্যু ঘটে। তিনি মধুমেহ, সেপ্টিসেমিয়া ও মূত্র সংক্রমণ রোগেও ভুগছিলেন।"
answer = generate_response(question, context)
print(answer)
```
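
Given this context, a correct answer should identify the date of death stated there: 28 July 2016 (২৮ জুলাই, ২০১৬).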


# Disclaimer:

Bangla LLaMA-4bit was trained on a limited dataset, so its responses may not always be accurate. The model's performance depends on the quality and quantity of its training data, and could improve significantly with higher-quality data and longer training.


# Resources: 
Work in progress...