---
base_model: abhinand/dr-llama-ta-instruct-v0
model-index:
- name: tamil-llama-instruct-v0.2
  results: []
license: gpl-3.0
language:
- en
- ta
---

# Tamil LLaMA 7B Instruct v0.2

Welcome to the v0.2 release of the Tamil LLaMA 7B instruct model, an important step in advancing LLMs for the Tamil language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.

To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact.

> **Note:** This model is based on the Tamil LLaMA series of models. The GitHub repository remains the same - [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama). The base models and the updated code for Tamil LLaMA v0.2 (which this work is based on) will be released soon.

If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.

[!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/abhinand.b)

## Model description

The Tamil LLaMA models build upon the foundation set by the original LLaMA-2 and have been enhanced with an extensive Tamil vocabulary of ~16,000 tokens.

- **Model type:** A 7B parameter GPT-like model finetuned on ~500,000 samples consisting of an equal proportion of English and Tamil samples. (Dataset will be released soon)
- **Language(s):** Bilingual. English and Tamil.
- **License:** GNU General Public License v3.0
- **Finetuned from model:** [To be released soon]()
- **Training Precision:** `bfloat16`
- **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) (To be updated soon)
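
A quick way to verify the extended vocabulary described above is to load the tokenizer from the Hub and inspect it. A minimal sketch (the exact numbers printed depend on the released tokenizer):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("abhinand/tamil-llama-instruct-v0.2")

# The stock LLaMA-2 tokenizer has 32,000 tokens; anything beyond that
# reflects the Tamil vocabulary added on top of it.
print(f"Total vocabulary size: {len(tokenizer)}")

# Tamil text should tokenize into far fewer pieces than it would
# with the stock LLaMA-2 tokenizer.
print(tokenizer.tokenize("வணக்கம், எப்படி இருக்கிறீர்கள்?"))
```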

## Prompt Template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
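
For example, the filled-in template can be built in plain Python as below. This is a minimal sketch; the Example Usage section further down shows the equivalent via `tokenizer.apply_chat_template`:

```python
def build_chatml_prompt(system_message: str, prompt: str) -> str:
    # Assemble the ChatML prompt exactly as in the template above,
    # ending with the assistant header so the model continues from there.
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful assistant.", "வணக்கம்!"))
```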

## Benchmark Results

Benchmarking was done using [LLM-Autoeval](https://github.com/mlabonne/llm-autoeval) on an RTX 3090 on [runpod](https://www.runpod.io/).

| Benchmark     | Llama 2 Chat | Tamil Llama v0.2 Instruct | Telugu Llama Instruct | Malayalam Llama Instruct |
|---------------|--------------|---------------------------|-----------------------|--------------------------|
| ARC Challenge (25-shot) | 52.9         | **53.75**                     | 52.47                 | 52.82                    |
| TruthfulQA (0-shot)    | 45.57        | 47.23                     | **48.47**                 | 47.46                    |
| Hellaswag (10-shot)    | **78.55**        | 76.11                     | 76.13                 | 76.91                    |
| Winogrande (5-shot)   | 71.74        | **73.95**                     | 71.74                 | 73.16                    |
| AGI Eval (0-shot)     | 29.3         | **30.95**                     | 28.44                 | 29.6                     |
| BigBench (0-shot)     | 32.6         | 33.08                     | 32.99                 | **33.26**                    |
| Average       | 51.78        | **52.51**                     | 51.71                 | 52.2                     |


## Related Models

| Model                    | Type                        | Data              | Base Model           | # Params | Download Links                                                         |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B v0.1 Base      | Base model                  | 12GB              | LLaMA 7B             | 7B   | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1)     |
| Tamil LLaMA 13B v0.1 Base     | Base model                  | 4GB               | LLaMA 13B            | 13B  | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1)    |
| Tamil LLaMA 7B v0.1 Instruct  | Instruction following model | 145k instructions | Tamil LLaMA 7B Base  | 7B   | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B  | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1)                       |
| Telugu LLaMA 7B v0.1 Instruct | Instruction/Chat model | 420k instructions | Telugu LLaMA 7B Base v0.1 | 7B  | [HF Hub](https://huggingface.co/abhinand/telugu-llama-instruct-v0.1) |
| Malayalam LLaMA 7B v0.2 Instruct | Instruction/Chat model | 420k instructions | Malayalam LLaMA 7B Base v0.1 | 7B  | [HF Hub](https://huggingface.co/abhinand/malayalam-llama-instruct-v0.1) |

## Example Usage

```python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer, pipeline

model = LlamaForCausalLM.from_pretrained(
    "abhinand/tamil-llama-instruct-v0.2",
    #load_in_8bit=True, # Set this depending on the GPU you have
    torch_dtype=torch.bfloat16,
    device_map={"": 0}, # Set this depending on the number of GPUs you have
    local_files_only=False # Optional
)
model.eval()

tokenizer = AutoTokenizer.from_pretrained("abhinand/tamil-llama-instruct-v0.2")

inf_pipeline = pipeline("conversational", model=model, tokenizer=tokenizer)


def format_instruction(system_prompt, question, return_dict=False):
    if system_prompt is None:
        messages = [
            {'content': question, 'role': 'user'},
        ]
    else:
        messages = [
            {'content': system_prompt, 'role': 'system'},
            {'content': question, 'role': 'user'},
        ]

    if return_dict:
        return messages

    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

    return prompt


# Set the generation configuration according to your needs
temperature = 0.6
repetition_penalty = 1.1
max_new_tokens = 256

SYSTEM_PROMPT = "You are an AI assistant who follows instructions extremely well. Do your best to help."
INPUT = "Can you explain the significance of Tamil festival Pongal?"

instruction = format_instruction(
    system_prompt=SYSTEM_PROMPT,
    question=INPUT,
    return_dict=True,
)

output = inf_pipeline(
    instruction,
    temperature=temperature,
    max_new_tokens=max_new_tokens,
    repetition_penalty=repetition_penalty
)
print(output)

# =========== EXAMPLE OUTPUT ===========
# Conversation id: d57cdf33-01ff-4328-8efe-5c4fefdd6e77
# system: You are an AI assistant who follows instructions extremely well. Do your best to help.
# user: Can you explain the significance of Tamil festival Pongal?
# assistant: Pongal is a significant harvest festival celebrated in Tamil Nadu and other parts of southern India. It marks the end of the rainy season and beginning of the agricultural year. The festival primarily revolves around giving gratitude to nature, particularly the Sun God Surya for his bountiful gifts like agriculture and health. People offer prayers to cattle, which play a significant role in agriculture, as well as their families for their continued support during the harvest season. The festival is marked by various colorful events, including preparing traditional Pongal dishes like rice cooked with milk, sugarcane, and banana, followed by exchanging gifts and celebrating among family members and friends. It also serves as a time for unity and strengthens the bond between people in their communities.
# ======================================
```
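
If GPU memory is tight, the commented-out `load_in_8bit` option above can also be expressed through the newer `BitsAndBytesConfig` API. A minimal sketch, assuming the `bitsandbytes` package is installed and a CUDA GPU is available:

```python
from transformers import BitsAndBytesConfig, LlamaForCausalLM

# 8-bit weight quantization roughly halves memory use versus bfloat16
quant_config = BitsAndBytesConfig(load_in_8bit=True)

model = LlamaForCausalLM.from_pretrained(
    "abhinand/tamil-llama-instruct-v0.2",
    quantization_config=quant_config,
    device_map={"": 0},  # place all weights on GPU 0
)
```

Quantized loading trades a small amount of generation quality for a much smaller memory footprint, which can be useful on consumer GPUs.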

## Usage Note

It's important to note that the models have not undergone detoxification/censorship. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.

## Meet the Developers

Get to know the creators behind this innovative model and follow their contributions to the field:

- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)

## Citation

If you use this model or any of the Tamil-Llama related work in your research, please cite:

```bibtex
@misc{balachandran2023tamilllama,
      title={Tamil-Llama: A New Tamil Language Model Based on Llama 2}, 
      author={Abhinand Balachandran},
      year={2023},
      eprint={2311.05845},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language.