Kowsher committed
Commit b13800c · 1 Parent(s): 639fcfe

Create README.md

Files changed (1): README.md (+55, −0)
README.md ADDED
@@ -0,0 +1,55 @@
---
language: bn
tags:
- Bert base
- Bengali Bert
- Bengali lm
- Bangla Base Bert
- Bangla Bert language model
- Bangla Bert
license: mit
datasets:
- BanglaLM dataset
---
# Bangla BERT Base
Here we publish **bert-base-bangla**, a pretrained Bangla BERT language model that is now available on the Hugging Face model hub.
[bert-base-bangla](https://github.com/Kowsher/bert-base-bangla) is pretrained with the masked language modeling objective described in [BERT](https://arxiv.org/abs/1810.04805) and the accompanying GitHub [repository](https://github.com/google-research/bert).
## Corpus Details
We trained the Bangla BERT language model on the [BanglaLM](https://www.kaggle.com/gakowsher/bangla-language-model-dataset) dataset from Kaggle. The dataset comes in three versions and totals almost 40 GB.
After downloading the dataset, we pretrained the model with the masked language modeling (MLM) objective described in [BERT](https://arxiv.org/abs/1810.04805).
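For readers who want to reproduce a similar run, the snippet below is a minimal sketch of masked language modeling pretraining with the Hugging Face `Trainer`. The corpus file name, tokenizer checkpoint, and hyperparameters are illustrative assumptions, not the exact settings used for bert-base-bangla.
```py
from datasets import load_dataset
from transformers import (
    AutoTokenizer, BertConfig, BertForMaskedLM,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

# Assumed tokenizer checkpoint and a hypothetical plain-text corpus file.
tokenizer = AutoTokenizer.from_pretrained("Kowsher/bert-base-test")
dataset = load_dataset("text", data_files={"train": "bangla_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Fresh BERT-base encoder sized to the tokenizer's vocabulary.
model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))

# The collator randomly masks 15% of tokens, as in the original BERT recipe.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bangla-bert-mlm", per_device_train_batch_size=32),
    data_collator=collator,
    train_dataset=tokenized,
)
trainer.train()
```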
**Bangla Base BERT Tokenizer**
```py
from transformers import AutoTokenizer

# Load the pretrained tokenizer from the Hugging Face Hub.
bnbert_tokenizer = AutoTokenizer.from_pretrained("Kowsher/bert-base-test")
text = "খাঁটি সোনার চাইতে খাঁটি আমার দেশের মাটি"
bnbert_tokenizer.tokenize(text)
# output: ['খাটি', 'সে', '##ানার', 'চাইতে', 'খাটি', 'আমার', 'দেশের', 'মাটি']
```
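The same checkpoint can also be loaded as an encoder to produce contextual embeddings. The sketch below is illustrative; the CLS-token pooling at the end is an assumption, not a documented recommendation for this model.
```py
import torch
from transformers import AutoModel, AutoTokenizer

# Minimal sketch: extract contextual embeddings from the pretrained encoder.
tokenizer = AutoTokenizer.from_pretrained("Kowsher/bert-base-test")
model = AutoModel.from_pretrained("Kowsher/bert-base-test")

inputs = tokenizer("খাঁটি সোনার চাইতে খাঁটি আমার দেশের মাটি", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level embeddings: (batch, sequence_length, hidden_size).
token_embeddings = outputs.last_hidden_state
# A simple sentence vector: the [CLS] token embedding (an illustrative pooling choice).
sentence_embedding = token_embeddings[:, 0, :]
print(sentence_embedding.shape)  # e.g. torch.Size([1, 768]) for a BERT-base encoder
```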
**MASK Generation**
Here, we can use the Bangla base BERT model for masked language modeling:
```py
from transformers import BertForMaskedLM, BertTokenizer, pipeline

model = BertForMaskedLM.from_pretrained("Kowsher/bert-base-test")
tokenizer = BertTokenizer.from_pretrained("Kowsher/bert-base-test")

# Build a fill-mask pipeline once and reuse it for each example.
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)

for pred in nlp(f"আমি বাংলার গান {nlp.tokenizer.mask_token}"):
    print(pred)
# {'sequence': 'আমি বাংলার গান লিখি', 'score': 0.17955434322357178, 'token': 24749, 'token_str': 'লিখি'}

for pred in nlp(f"তুই রাজাকার তুই {nlp.tokenizer.mask_token}"):
    print(pred)
# {'sequence': 'তই রাজাকার তই রাজাকার', 'score': 0.9975168704986572, 'token': 13401, 'token_str': 'রাজাকার'}

for pred in nlp(f"বাংলা আমার {nlp.tokenizer.mask_token}"):
    print(pred)
# {'sequence': 'বাংলা আমার অহংকার', 'score': 0.5679506063461304, 'token': 19009, 'token_str': 'অহংকার'}
```
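When only the best completion is needed, the standard `fill-mask` pipeline accepts a `top_k` argument; a brief sketch reusing the same checkpoint:
```py
from transformers import pipeline

# Load the fill-mask pipeline directly from the checkpoint name.
nlp = pipeline("fill-mask", model="Kowsher/bert-base-test")

# top_k=1 keeps only the highest-scoring prediction for the mask.
best = nlp(f"বাংলা আমার {nlp.tokenizer.mask_token}", top_k=1)[0]
print(best["token_str"], best["score"])
```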
## Author
[Kowsher](http://kowsher.org/)