---
language: hi
---

# Hindi language model

## Trained with ELECTRA base size settings

<a href="https://colab.research.google.com/drive/1R8TciRSM7BONJRBc9CBZbzOmz39FTLl_">Tokenization and training Colab</a>

## Example Notebooks

This model outperforms Multilingual BERT on <a href="https://colab.research.google.com/drive/1UYn5Th8u7xISnPUBf72at1IZIm3LEDWN">Hindi movie reviews / sentiment analysis</a> (using SimpleTransformers).

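A minimal SimpleTransformers sketch (the Hub ID and toy data below are assumptions; substitute this repo's actual name and a real dataset):

```
# Hedged sketch -- the Hub ID below is an assumption; use this repo's actual name.
import pandas as pd
from simpletransformers.classification import ClassificationModel

# SimpleTransformers expects "text" and "labels" columns; this toy data stands in for real reviews.
train_df = pd.DataFrame(
    [["फ़िल्म बहुत अच्छी थी", 1], ["फ़िल्म निराशाजनक थी", 0]],
    columns=["text", "labels"],
)

# "electra" is the model_type SimpleTransformers uses for ELECTRA checkpoints.
model = ClassificationModel("electra", "monsoon-nlp/hindi-tpu-electra", num_labels=2, use_cuda=False)
model.train_model(train_df)
```
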
You can get higher accuracy using ktrain + TensorFlow, where you can adjust the learning rate and other hyperparameters: https://colab.research.google.com/drive/1mSeeSfVSOT7e-dVhPlmSsQRvpn6xC05w?usp=sharing

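A rough ktrain outline (model ID, data, and hyperparameters here are assumptions):

```
# Hedged ktrain sketch -- model ID, toy data, and hyperparameters are assumptions.
import ktrain
from ktrain import text

x_train = ["फ़िल्म बहुत अच्छी थी", "फ़िल्म निराशाजनक थी"]  # stand-in reviews
y_train = [1, 0]

t = text.Transformer("monsoon-nlp/hindi-tpu-electra", maxlen=128, class_names=["neg", "pos"])
trn = t.preprocess_train(x_train, y_train)
learner = ktrain.get_learner(t.get_classifier(), train_data=trn, batch_size=16)
learner.fit_onecycle(5e-5, 3)  # the adjustable learning rate mentioned above
```
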
Question-answering on the MLQA dataset: https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar#scrollTo=IcFoAHgKCUiQ

A smaller model (<a href="https://huggingface.co/monsoon-nlp/hindi-bert">Hindi-BERT</a>) performs better on a BBC news classification task.

## Corpus

The corpus is two files:
- Hindi CommonCrawl, deduplicated by OSCAR: https://traces1.inria.fr/oscar/
- the latest Hindi Wikipedia dump ( https://dumps.wikimedia.org/hiwiki/ ), extracted to plain text with WikiExtractor (see the sketch below)

Bonus notes:
- Adding English wiki text or a parallel corpus could help with cross-lingual tasks and training

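The Wikipedia extraction step looks roughly like this (the dump filename and output directory are assumptions):

```
# Hedged sketch -- dump filename and output directory are assumptions.
pip install wikiextractor
python -m wikiextractor.WikiExtractor hiwiki-latest-pages-articles.xml.bz2 -o hiwiki_txt
```
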
## Vocabulary

https://drive.google.com/file/d/1-6tXrii3tVxjkbrpSJE9MOG_HhbvP66V/view?usp=sharing

Bonus notes:
- Created with HuggingFace Tokenizers; you can increase the vocabulary size and re-train (see the sketch below); remember to change ELECTRA's vocab_size to match

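A minimal re-training sketch with HuggingFace Tokenizers (corpus filenames and vocab_size are placeholders):

```
# Hedged sketch -- corpus filenames and vocab_size are placeholders.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer()
tokenizer.train(files=["hi_dedup.txt", "hiwiki.txt"], vocab_size=30522)
tokenizer.save_model(".")  # writes vocab.txt; keep vocab_size in sync with the ELECTRA config
```
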
## Training

Structure your files, with the data-dir named "trainer", like this:

```
trainer
- vocab.txt
- pretrain_tfrecords
-- (all .tfrecord... files)
- models
-- modelname
--- checkpoint
--- graph.pbtxt
--- model.*
```

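With that layout, launching pretraining from the google-research/electra repo looks roughly like this (the model name and hparams are assumptions; check that repo's README for the full flag list):

```
# Hedged sketch, assuming the google-research/electra scripts.
git clone https://github.com/google-research/electra
cd electra
python run_pretraining.py \
  --data-dir ../trainer \
  --model-name modelname \
  --hparams '{"model_size": "base", "vocab_size": 30522}'
```
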
## Conversion

Use this process to convert an in-progress or completed ELECTRA checkpoint to a Transformers-ready model:

```
git clone https://github.com/huggingface/transformers
python ./transformers/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py \
  --tf_checkpoint_path=./models/checkpointdir \
  --config_file=config.json \
  --pytorch_dump_path=pytorch_model.bin \
  --discriminator_or_generator=discriminator
```

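The converter needs a config.json; one way to produce it (the values below are ELECTRA base discriminator defaults, which may not match your run):

```
# Hedged sketch -- ELECTRA base sizes; set vocab_size to match your vocab.txt.
from transformers import ElectraConfig

config = ElectraConfig(
    vocab_size=30522,
    embedding_size=768,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
)
config.save_pretrained(".")  # writes ./config.json
```
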
Then generate the TensorFlow weights from the PyTorch dump:

```
from transformers import TFElectraForPreTraining
model = TFElectraForPreTraining.from_pretrained("./dir_with_pytorch", from_pt=True)
model.save_pretrained("tf")
```

Once you have formed one directory with config.json, pytorch_model.bin, tf_model.h5, special_tokens_map.json, tokenizer_config.json, and vocab.txt on the same level, run:

```
transformers-cli upload directory
```

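After the upload, a quick sanity check that everything loads by Hub ID (substitute the real username/model name):

```
# Hedged sketch -- "username/modelname" is a placeholder for the uploaded repo.
from transformers import AutoModelForPreTraining, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("username/modelname")
model = AutoModelForPreTraining.from_pretrained("username/modelname")
```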