sagarshf committed on
Commit f840e44
1 Parent(s): 3412c56

Update README.md

Files changed (1):
  1. README.md +9 -83
README.md CHANGED
@@ -1,113 +1,39 @@
  ---
  datasets:
- - ai4bharat/IndicQuestionGeneration
- - ai4bharat/IndicSentiment
- - ai4bharat/IndicParaphrase
- - smallstepai/marathi-instruction-tuning-alpaca
-
  language:
  - mr
- metrics:
- - accuracy
  tags:
  - marathi
- - sentiment analysis
- - reading comprehension
- - paraphrasing
- - translation
  library_name: transformers
  pipeline_tag: text-generation
  license: apache-2.0
  ---

- # Misal-1B-instruct-v0.1

- Built by - [smallstep.ai](https://smallstep.ai/)

- ## What is Misal?

- Misal 1B is a pretrained and instruction-tuned large language model for Marathi, based on the TinyLlama 1B architecture.

  ## Making of Misal?

  Detailed blog [here](https://smallstep.ai/making-misal).

- ## Evaluation
- We ran a manual round of evaluation on a fairly small set of 100 questions collected from the internet. We recognize that a more rigorous benchmark is needed; since this is the first iteration, we proceeded with manual evaluation. Our main aim was to see whether the model understands basic instructions and, if so, how well it follows them, so we limited the evaluation to tasks like reading comprehension, translation, sentiment analysis, and paraphrasing.
-
- | Model       | Reading Comprehension | Sentiment Analysis | Paraphrase | Translation | Average |
- |-------------|-----------------------|--------------------|------------|-------------|---------|
- | Misal-7B    | 88                    | 68                 | 92         | 76          | 81      |
- | Misal-1B    | 48                    | 68                 | 72         | 36          | 56      |
- | ChatGPT3.5  | 68                    | 76                 | 100        | 96          | 85      |
- | Krutrim     | 40                    | 60                 | 88         | 80          | 67      |
- | MahaMarathi | 0                     | 0                  | 0          | 0           | 0       |
-
- We have released the evaluation data here:
- - [Manual Evaluation Set](https://huggingface.co/datasets/smallstepai/Misal-Evaluation-v0.1)
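-
- To look at the evaluation set locally, a minimal sketch using the Hugging Face datasets library (assuming the repository loads with the default configuration):
-
- ```python
- from datasets import load_dataset
-
- # Load the released manual evaluation set from the Hugging Face Hub.
- eval_set = load_dataset("smallstepai/Misal-Evaluation-v0.1")
- print(eval_set)  # inspect the available splits and columns
- ```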

- ![image/png](https://framerusercontent.com/images/oYRJ925hmTBDjd6RMucvD1qtl7s.jpeg)

  ## License

  The model inherits the license from [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T).

- ## Usage
-
- ### Installation
-
- ```bash
- pip install transformers accelerate
- ```
-
- ### Prompt
-
- ```
- आपण एक मदतगार, आदरणीय आणि प्रामाणिक सहाय्यक आहात. नेहमी शक्य तितकी उपयुक्त उत्तर द्या. तुमची उत्तरे हानिकारक, अनैतिक, वर्णद्वेषी, लैंगिकतावादी, हानिकारक, धोकादायक किंवा बेकायदेशीर नसावीत. कृपया खात्री करा की तुमची उत्तरे सामाजिक दृष्टिकोनाने निष्पक्ष आणि सकारात्मक स्वरूपाची आहेत. जर एखाद्या प्रश्नाला काही अर्थ नसेल किंवा वस्तुस्थितीशी सुसंगती नसेल, तर उत्तर देण्याऐवजी काहीतरी बरोबर का नाही हे स्पष्ट करा. तुम्हाला एखाद्या प्रश्नाचे उत्तर माहित नसल्यास, कृपया चुकीची माहिती देऊ नये.
-
- ### Instruction:
-
- <instruction>
-
- ### Input:
-
- <input data>
-
- ### Response:
- ```
-
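- The system prompt above is in Marathi; it roughly translates to: "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible. Your answers should not be harmful, unethical, racist, sexist, dangerous or illegal. Please ensure your answers are socially unbiased and positive in nature. If a question does not make sense or is not factually coherent, explain why instead of answering. If you do not know the answer to a question, please do not share false information."
-
- As a rough sketch of how the pieces fit together (the exact whitespace is an assumption; `apply_chat_template` in the PyTorch section below is the supported path), the template can be filled in with plain string formatting:
-
- ```python
- # Hypothetical helper: fills the prompt template above via string formatting.
- def build_prompt(system_prompt: str, instruction: str, inputs: str = "") -> str:
-     return (
-         f"{system_prompt}\n\n"
-         f"### Instruction:\n\n{instruction}\n\n"
-         f"### Input:\n\n{inputs}\n\n"
-         f"### Response:\n"
-     )
- ```
-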
- ### PyTorch
-
- ```python
- import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- # Load the instruction-tuned model and its tokenizer from the Hub.
- model = AutoModelForCausalLM.from_pretrained("smallstepai/Misal-1B-instruct-v0.1", torch_dtype=torch.bfloat16, device_map='auto')
- tokenizer = AutoTokenizer.from_pretrained("smallstepai/Misal-1B-instruct-v0.1")
-
- def ask_misal(model, tokenizer, instruction, inputs='', system_prompt='', max_new_tokens=200, device='cuda'):
-     # Format the request with the model's chat template and generate a reply.
-     ip = dict(system_prompt=system_prompt, instruction=instruction, inputs=inputs)
-     model_inputs = tokenizer.apply_chat_template(ip, return_tensors='pt')
-     outputs = model.generate(model_inputs.to(device), max_new_tokens=max_new_tokens)
-     # Keep only the text generated after the '### Response:' marker.
-     response = tokenizer.decode(outputs[0]).split('### Response:')[1].strip()
-     return response
-
- instruction = "वाक्य सकारात्मक किंवा नकारात्मक आहे ते स्थिती निर्दिष्ट करा."  # "Specify whether the sentence is positive or negative."
- inputs = "मला हे आवडते त्या मार्गाने हे खूप उबदार आहे"  # "I love this, it is so warm this way."
- resp = ask_misal(model, tokenizer, instruction=instruction, inputs=inputs, max_new_tokens=200)
- print(resp)
- ```
-
- ## Limitations
-
- - Misal-1B, built on the TinyLlama model for Marathi, demonstrates an understanding of the language but currently falls short of Misal-7B in performance. This is likely due to its smaller size and the data used to train TinyLlama.
- - We are actively working on improvements and aim to bring Misal-1B significantly closer to its full potential.
-
  ## Team

  Sagar Sarkale, Abhijeet Katte, Prasad Mane, Shravani Chavan

  ---
  datasets:
+ - uonlp/CulturaX
+ - l3cube-pune/MarathiNLP
+ - ai4bharat/samanantar
  language:
  - mr
  tags:
  - marathi
  library_name: transformers
  pipeline_tag: text-generation
  license: apache-2.0
  ---

+ # Misal-1B-base-v0.1

+ Misal-1B-base is a language model based on the TinyLlama architecture, pretrained on Marathi text data.

+ Built by - [smallstep.ai](https://smallstep.ai/)

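+ As a quick way to try the base model, a minimal loading sketch (assuming the checkpoint id `smallstepai/Misal-1B-base-v0.1` and the standard transformers API; being a base model, it continues text rather than following instructions):
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "smallstepai/Misal-1B-base-v0.1"  # assumed repository id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
+
+ # Continue a Marathi prompt; the base model does plain next-token prediction.
+ inputs = tokenizer("महाराष्ट्राची राजधानी", return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=50)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```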
  ## Making of Misal?

  Detailed blog [here](https://smallstep.ai/making-misal).

+ ## Pretraining

+ During the pretraining phase, the model was exposed to a corpus of approximately 2 billion Marathi tokens. This corpus consisted largely of newspaper data spanning the years 2016 to 2022, sourced mainly from the CulturaX dataset, and was supplemented with data from l3cube, ai4bharat, and other internet sources.
+ We trained in bfloat16 precision because we ran into stability issues with float16.
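+
+ As an illustration of that precision choice (a hypothetical configuration, not our actual training setup), bfloat16 can be enabled in a Hugging Face Trainer like this:
+
+ ```python
+ from transformers import TrainingArguments
+
+ # bf16 keeps the wider exponent range of bfloat16, avoiding the
+ # overflow/underflow instabilities that fp16 can hit during pretraining.
+ args = TrainingArguments(
+     output_dir="misal-1b-pretrain",   # hypothetical path
+     bf16=True,                        # train in bfloat16 precision
+     per_device_train_batch_size=8,    # illustrative values
+     gradient_accumulation_steps=8,
+     learning_rate=3e-4,
+ )
+ ```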
 
  ## License

  The model inherits the license from [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T).

  ## Team

  Sagar Sarkale, Abhijeet Katte, Prasad Mane, Shravani Chavan