lumaticai committed
Commit 1e7f4fe
Parent(s): b0307fe

Create README.md

---
license: mit
datasets:
- lumatic-ai/BongChat-v0-10k
language:
- bn
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
- sft
- llama
- bongllama
- tinyllama
- llm
---

<style>
img{
  width: 45vw;
  height: 45vh;
  margin: 0 auto;
  display: flex;
  align-items: center;
  justify-content: center;
}
</style>

# lumaticai/BongLlama-1.1B-Chat-alpha-v0

Introducing BongLlama by LumaticAI: a fine-tuned version of TinyLlama 1.1B Chat on a Bengali dataset.

<img class="custom-image" src="bong_llama.png" alt="BongLlama">

# Model Details

## Model Description

BongLlama is part of our company's initiative to develop Indic and regional large language models. At LumaticAI, we continuously work on helping our clients build custom AI solutions for their organizations, and as part of this effort we have taken the initiative to launch open-source models for specific regions and languages.

BongLlama is an LLM built for West Bengal on a Bengali dataset. It is a 1.1B-parameter model. We fine-tuned the TinyLlama/TinyLlama-1.1B-Chat-v1.0 model on a 10k-example Bengali dataset, lumatic-ai/BongChat-v0-10k, to obtain our BongLlama 1.1B Chat Alpha v0 model.

We are continuously training and improving this model, and we plan to release it in various sizes, built on different base LLMs and datasets.

- **Developed by:** LumaticAI
- **Shared by [Optional]:** LumaticAI
- **Model type:** Language model
- **Language(s) (NLP):** en, bn
- **License:** mit
- **Parent Model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0

# Uses

## Direct Use

- as a base model for further fine-tuning
- to get an overview of how an Indic LLM performs on a specific language
- for fun

## Downstream Use [Optional]

- can be deployed behind an API (see the sketch after this list)
- can be used to build a web app or demo application
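
As a rough illustration of the API bullet above, here is a minimal sketch, not part of the original card, of serving the model behind an HTTP API with FastAPI; the `/generate` route and the fastapi/uvicorn dependencies are assumptions.

```python
# Minimal serving sketch (assumes: pip install fastapi uvicorn transformers torch).
# The /generate route is hypothetical; run with: uvicorn app:app
from fastapi import FastAPI
from transformers import pipeline

app = FastAPI()
pipe = pipeline("text-generation", model="lumatic-ai/BongLlama-1.1B-Chat-alpha-v0")

@app.post("/generate")
def generate(question: str) -> dict:
    # Same chat template as in the "How to Get Started" section below.
    prompt = f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"
    result = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1)
    return {"response": result[0]["generated_text"]}
```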

## Out-of-Scope Use

- not intended for production use
- not intended for generating text for research or academic purposes

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

# Training Details

## Training Data

We used our lumatic-ai/BongChat-v0-10k dataset of 10k examples, each consisting of a question and a response.

**Example Data**

- `<|im_start|>user এটি কি ইতিবাচক পর্যালোচনা? এই ফ্র্যাঞ্চাইজি কখনও হার্ট স্টপ স্টান্টস, ফাইট সিকোয়েন্স এবং তারকা শক্তি বন্ধ করে হতাশ করে না।এবার মূল চারটি ছাড়াও নতুন মুখগুলি হলেন আবুয়েলিতা চরিত্রে রিতা মোরেনো এবং দান্তে জেসন মোমোয়া যিনি শোটি চুরি করেছেন।<|im_end|> <|im_start|>assistant এটি একটি খুব ইতিবাচক পর্যালোচনা, এটি ইঙ্গিত করে যে ফ্র্যাঞ্চাইজি শ্রোতাদের জন্য আরও পছন্দসই চরিত্রগুলি যুক্ত করার পাশাপাশি অতীতের প্রত্যাশার ভিত্তিতে বিতরণ করেছে।<|im_end|>`
  (In English: the user asks whether a movie review is positive; the assistant answers that it is a very positive review, indicating the franchise delivered on past expectations while adding likable new characters.)

## Training Procedure

### Preprocessing

- Dataset Format (see the sketch below): `<|im_start|>user <question><|im_end|> <|im_start|>assistant <response><|im_end|>`
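
Below is a minimal sketch, not from the original card, of rendering one question/response pair into this template before tokenization; the `question` and `response` field names are assumptions about the dataset schema.

```python
# Hypothetical helper: render one (question, response) pair into the
# chat template used for supervised fine-tuning.
def to_chat_format(question: str, response: str) -> str:
    return (
        f"<|im_start|>user {question}<|im_end|> "
        f"<|im_start|>assistant {response}<|im_end|>"
    )

# Toy usage (English shown for readability; the training data is Bengali):
print(to_chat_format("Is this a positive review?", "Yes, it is very positive."))
```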

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
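
For readers who want to reproduce the setup, the following is a minimal sketch, not the original training script, of how these values map onto Hugging Face `TrainingArguments`; the output directory is hypothetical, and the listed Adam betas and epsilon are the library defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Sketch mapping the reported hyperparameters onto TrainingArguments.
# "out_dir" is a hypothetical path; the total train batch size of 8 falls out
# of per_device_train_batch_size (4) x gradient_accumulation_steps (2).
training_args = TrainingArguments(
    output_dir="out_dir",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    fp16=True,  # Native AMP mixed precision on a T4
)
```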

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0

# Evaluation

### Metrics

- train/loss
- steps

## Results

Training loss was logged every 100 optimizer steps:

| Global step | Epoch | train/loss | Learning rate |
|---|---|---|---|
| 100 | 0.08 | 1.2865 | 1.869e-4 |
| 200 | 0.17 | 1.0698 | 1.996e-4 |
| 300 | 0.25 | 1.0457 | 1.985e-4 |
| 400 | 0.34 | 1.0131 | 1.965e-4 |
| 500 | 0.42 | 1.0000 | 1.937e-4 |
| 600 | 0.51 | 0.9913 | 1.901e-4 |
| 700 | 0.59 | 0.9904 | 1.858e-4 |
| 800 | 0.67 | 0.9705 | 1.808e-4 |
| 900 | 0.76 | 0.9661 | 1.751e-4 |
| 1000 | 0.84 | 0.9588 | 1.688e-4 |
| 1100 | 0.93 | 0.9469 | 1.619e-4 |
| 1200 | 1.01 | 0.9453 | 1.545e-4 |
| 1300 | 1.09 | 0.9329 | 1.467e-4 |
| 1400 | 1.18 | 0.9299 | 1.385e-4 |
| 1500 | 1.26 | 0.9181 | 1.299e-4 |
| 1600 | 1.35 | 0.9170 | 1.211e-4 |
| 1700 | 1.43 | 0.9190 | 1.122e-4 |
| 1800 | 1.52 | 0.9156 | 1.031e-4 |
| 1900 | 1.60 | 0.9074 | 9.400e-5 |
| 2000 | 1.68 | 0.9072 | 8.496e-5 |
| 2100 | 1.77 | 0.9061 | 7.604e-5 |
| 2200 | 1.85 | 0.9104 | 6.732e-5 |
| 2300 | 1.94 | 0.9016 | 5.887e-5 |
| 2400 | 2.02 | 0.8957 | 5.076e-5 |
| 2500 | 2.11 | 0.8948 | 4.306e-5 |
| 2600 | 2.19 | 0.8833 | 3.583e-5 |
| 2700 | 2.27 | 0.9019 | 2.913e-5 |
| 2800 | 2.36 | 0.8921 | 2.301e-5 |
| 2900 | 2.44 | 0.8897 | 1.754e-5 |
| 3000 | 2.53 | 0.8765 | 1.274e-5 |
| 3100 | 2.61 | 0.8890 | 8.663e-6 |
| 3200 | 2.69 | 0.8846 | 5.342e-6 |
| 3300 | 2.78 | 0.8908 | 2.805e-6 |
| 3400 | 2.86 | 0.8850 | 1.070e-6 |
| 3500 | 2.95 | 0.8871 | 1.539e-7 |

Final totals: train/train_loss 0.9398 at epoch 3.0 (global step 3561), train/total_flos 8.357e+16, train/train_runtime 7259.06 s, train/train_samples_per_second 3.926, train/train_steps_per_second 0.491.
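
To visualize the trend in the table, here is a small sketch, not part of the original card, that plots the logged train/loss against global step with matplotlib.

```python
import matplotlib.pyplot as plt

# Loss values logged every 100 optimizer steps (copied from the table above).
steps = list(range(100, 3600, 100))
losses = [1.2865, 1.0698, 1.0457, 1.0131, 1.0, 0.9913, 0.9904, 0.9705,
          0.9661, 0.9588, 0.9469, 0.9453, 0.9329, 0.9299, 0.9181, 0.917,
          0.919, 0.9156, 0.9074, 0.9072, 0.9061, 0.9104, 0.9016, 0.8957,
          0.8948, 0.8833, 0.9019, 0.8921, 0.8897, 0.8765, 0.889, 0.8846,
          0.8908, 0.885, 0.8871]

plt.plot(steps, losses)
plt.xlabel("train/global_step")
plt.ylabel("train/loss")
plt.title("BongLlama SFT training loss")
plt.show()
```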

# Model Examination

We will be further fine-tuning this model on a larger dataset to see how it performs.

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 1 x Tesla T4
- **Hours used:** 2.21
- **Cloud Provider:** Google Colab
- **Compute Region:** India
- **Carbon Emitted:** 0.14 kg of CO2eq

# Technical Specifications

## Model Architecture and Objective

Fine-tuned from the TinyLlama 1.1B Chat model (a 1.1B-parameter, Llama-architecture causal language model).

### Hardware

1 x Tesla T4

# Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

    @misc{BongLlama-1.1B-Chat-alpha-v0,
      url={https://huggingface.co/lumatic-ai/BongLlama-1.1B-Chat-alpha-v0},
      title={BongLlama 1.1B Chat Alpha V0},
      author={LumaticAI and Rohan Shaw and Vivek Kushal and Jeet Ghosh},
      year={2024},
      month={Jan}
    }

# Model Card Authors

lumatic-ai

# Model Card Contact

Email: contact@lumaticai.com

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

### Pipeline

```python
import torch
from time import perf_counter
from transformers import AutoTokenizer, pipeline

def formatted_prompt(question) -> str:
    # Wrap the question in the chat template the model was trained on.
    return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"

hub_model_name = "lumatic-ai/BongLlama-1.1B-Chat-alpha-v0"

tokenizer = AutoTokenizer.from_pretrained(hub_model_name)
pipe = pipeline(
    "text-generation",
    model=hub_model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

start_time = perf_counter()

prompt = formatted_prompt('হ্যালো')
sequences = pipe(
    prompt,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=256,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

output_time = perf_counter() - start_time
print(f"Time taken for inference: {round(output_time, 2)} seconds")
```

### Streaming Response (ChatGPT, Bard like)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

def formatted_prompt(question) -> str:
    # Wrap the question in the chat template the model was trained on.
    return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"

hub_model_name = "lumatic-ai/BongLlama-1.1B-Chat-alpha-v0"

tokenizer = AutoTokenizer.from_pretrained(hub_model_name)
model = AutoModelForCausalLM.from_pretrained(hub_model_name)

prompt = formatted_prompt('prompt here')
inputs = tokenizer([prompt], return_tensors="pt")

# TextStreamer prints tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, eos_token_id=[tokenizer.eos_token_id], streamer=streamer, max_new_tokens=256)
```

### Using Generation Config

```python
import torch
from time import perf_counter
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

def formatted_prompt(question) -> str:
    # Wrap the question in the chat template the model was trained on.
    return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"

hub_model_name = "lumatic-ai/BongLlama-1.1B-Chat-alpha-v0"

tokenizer = AutoTokenizer.from_pretrained(hub_model_name)
model = AutoModelForCausalLM.from_pretrained(hub_model_name)

prompt = formatted_prompt('হ্যালো')

# Check for GPU availability
device = "cuda" if torch.cuda.is_available() else "cpu"

# Move model and inputs to the GPU (if available)
model.to(device)
inputs = tokenizer(prompt, return_tensors="pt").to(device)

generation_config = GenerationConfig(
    penalty_alpha=0.6,
    do_sample=True,
    top_k=5,
    temperature=0.5,
    repetition_penalty=1.2,
    max_new_tokens=256,
    pad_token_id=tokenizer.eos_token_id,
)

start_time = perf_counter()
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
output_time = perf_counter() - start_time
print(f"Time taken for inference: {round(output_time, 2)} seconds")
```

</details>