mlkorra committed
Commit de89287
1 Parent(s): 5f7d88e

Update About Page
About/credits.md ADDED

## Credits
Huge thanks to Hugging Face 🤗 and the Google JAX/Flax team for such a wonderful community week, and especially for providing such massive computing resources. Big thanks to [Suraj Patil](https://huggingface.co/valhalla) and [Patrick von Platen](https://huggingface.co/patrickvonplaten) for mentoring throughout the week.
About/intro.md ADDED

# RoBERTa base model for Hindi language

Pretrained model on the Hindi language using a masked language modeling (MLM) objective. RoBERTa was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta).

> This is part of the
> [Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-roberta-from-scratch-in-hindi/7091), organized by [HuggingFace](https://huggingface.co/), with TPU usage sponsored by Google.
About/model_description.md ADDED

## Model description

It is a monolingual transformer model pretrained on a large corpus of Hindi data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts.
About/results.md ADDED

## Evaluation Results

RoBERTa Hindi is evaluated on downstream tasks. The results are summarized below.

| Task                    | Task Type            | IndicBERT | HindiBERTa | Indic Transformers Hindi BERT | RoBERTa Hindi Guj San | RoBERTa Hindi |
|-------------------------|----------------------|-----------|------------|-------------------------------|-----------------------|---------------|
| BBC News Classification | Genre Classification | **76.44** | 66.86      | **77.6**                      | 64.9                  | 73.67         |
| WikiNER                 | Token Classification | -         | 90.68      | **95.09**                     | 89.61                 | **92.76**     |
| IITP Product Reviews    | Sentiment Analysis   | **78.01** | 73.23      | **78.39**                     | 66.16                 | 75.53         |
| IITP Movie Reviews      | Sentiment Analysis   | 60.97     | 52.26      | **70.65**                     | 49.35                 | **61.29**     |
About/team.md ADDED

## Team Members
- Kartik Godawat ([dk-crazydiv](https://huggingface.co/dk-crazydiv))
- Aman K ([amankhandelia](https://huggingface.co/amankhandelia))
- Haswanth Aekula ([hassiahk](https://huggingface.co/hassiahk))
- Rahul Dev ([mlkorra](https://huggingface.co/mlkorra))
- Prateek Agrawal ([prateekagrawal](https://huggingface.co/prateekagrawal))
About/training_data.md ADDED

## Training data

The RoBERTa model was pretrained on the union of the following datasets:
- [OSCAR](https://huggingface.co/datasets/oscar) is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
- [mC4](https://huggingface.co/datasets/mc4) is a colossal, cleaned multilingual version of Common Crawl's web crawl corpus.
- [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) is a natural language understanding benchmark.
- [Samanantar](https://indicnlp.ai4bharat.org/samanantar/) is a parallel corpora collection for Indic languages.
- [Hindi Wikipedia Articles - 172k](https://www.kaggle.com/disisbig/hindi-wikipedia-articles-172k) is a dataset of 172k cleaned Hindi Wikipedia articles.
- [Hindi Text Short and Large Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-and-large-summarization-corpus) is a collection of ~180k articles with their headlines and summaries, collected from Hindi news websites.
- [Hindi Text Short Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-summarization-corpus) is a collection of ~330k articles with their headlines, collected from Hindi news websites.
- [Old Newspapers Hindi](https://www.kaggle.com/crazydiv/oldnewspapershindi) is a cleaned subset of the HC Corpora newspapers collection.
About/training_procedure.md ADDED

## Training procedure

### Preprocessing

The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50,265. The inputs of the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked with `<s>` and the end of one by `</s>`.
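The chunking into fixed-length inputs can be sketched as follows (a minimal illustration with toy integer token IDs; `BOS` and `EOS` are hypothetical IDs standing in for `<s>` and `</s>`, not the real vocabulary values):

```python
# Minimal sketch: pack tokenized documents into 512-token pieces that may
# span document boundaries. BOS/EOS ids are illustrative, not the real vocab ids.
BOS, EOS = 0, 2
BLOCK_SIZE = 512

def pack_documents(tokenized_docs, block_size=BLOCK_SIZE):
    """Concatenate documents with <s> ... </s> markers, then cut into blocks."""
    stream = []
    for doc in tokenized_docs:
        stream.append(BOS)   # beginning of a new document
        stream.extend(doc)
        stream.append(EOS)   # end of the document
    # Cut the continuous stream into contiguous fixed-size pieces;
    # a piece may therefore span over two (or more) documents.
    return [stream[i:i + block_size]
            for i in range(0, len(stream) - block_size + 1, block_size)]
```

Because the stream is cut without regard to document boundaries, a single 512-token piece can contain the end of one document and the start of the next.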
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.

Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch and is not fixed).
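The masking procedure above can be sketched as follows (a minimal illustration; `MASK_ID` and the `-100` ignore label are illustrative conventions, not values taken from the actual training code):

```python
import random

MASK_ID = 4          # hypothetical id for <mask>; not the real vocab id
VOCAB_SIZE = 50265

def dynamic_mask(tokens, mlm_prob=0.15, rng=None):
    """Return (inputs, labels) for one MLM example.

    labels[i] holds the original token where a prediction is required,
    and -100 (a common ignore index for cross-entropy) elsewhere.
    """
    rng = rng or random.Random()
    inputs, labels = list(tokens), [-100] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() >= mlm_prob:
            continue                      # 85% of tokens are left untouched
        labels[i] = tok                   # predict the original token here
        r = rng.random()
        if r < 0.8:
            inputs[i] = MASK_ID           # 80%: replace with <mask>
        elif r < 0.9:
            # 10%: replace with a random token different from the original
            new = rng.randrange(VOCAB_SIZE)
            while new == tok:
                new = rng.randrange(VOCAB_SIZE)
            inputs[i] = new
        # remaining 10%: leave the token as is
    return inputs, labels
```

Calling this afresh on each epoch is what makes the masking dynamic: the same sentence gets a different mask pattern every time it is seen.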
### Pretraining

The model was trained on a Google Cloud Engine TPU v3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores), i.e. **8 TPU v3 cores**, for 42K steps with a batch size of 128 and a sequence length of 128. The optimizer used is Adam with a learning rate of 6e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 24,000 steps, and linear decay of the learning rate after.
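The warmup-then-linear-decay schedule can be sketched as follows (a minimal illustration; the decay endpoint is an assumption, taken here to reach zero at the final 42K-th step since it is not stated):

```python
PEAK_LR = 6e-4
WARMUP_STEPS = 24_000
TOTAL_STEPS = 42_000  # assumption: decay reaches zero at the last training step

def learning_rate(step):
    """Linear warmup to PEAK_LR over WARMUP_STEPS, then linear decay to zero."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    # linear decay from PEAK_LR at WARMUP_STEPS down to 0 at TOTAL_STEPS
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)
```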
About/use.md ADDED

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='flax-community/roberta-hindi')
>>> unmasker("मुझे उनसे बात करना <mask> अच्छा लगा")

[{'score': 0.2096337080001831,
  'sequence': 'मुझे उनसे बात करना एकदम अच्छा लगा',
  'token': 1462,
  'token_str': ' एकदम'},
 {'score': 0.17915162444114685,
  'sequence': 'मुझे उनसे बात करना तब अच्छा लगा',
  'token': 594,
  'token_str': ' तब'},
 {'score': 0.15887945890426636,
  'sequence': 'मुझे उनसे बात करना और अच्छा लगा',
  'token': 324,
  'token_str': ' और'},
 {'score': 0.12024253606796265,
  'sequence': 'मुझे उनसे बात करना लगभग अच्छा लगा',
  'token': 743,
  'token_str': ' लगभग'},
 {'score': 0.07114479690790176,
  'sequence': 'मुझे उनसे बात करना कब अच्छा लगा',
  'token': 672,
  'token_str': ' कब'}]
```
apps/about.py CHANGED
@@ -1,71 +1,20 @@
 import streamlit as st
+import os
+import json
 
+def read_markdown(path, folder="./About/"):
+    with open(os.path.join(folder, path)) as f:
+        return f.read()
 
 def app():
-    # st.title("About")
-    st.markdown("<h1 style='text-align: center;'>About</h1>", unsafe_allow_html=True)
-    st.markdown("""## Introduction""")
-    st.markdown(
-        """**RoBERTa-hindi** is one of the many projects in the Flax/JAX community week organized by HuggingFace in collaboration with Google to make compute-intensive projects more practicable."""
-    )
-    st.markdown(
-        """It is a monolingual transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts."""
-    )
-
-    st.markdown(
-        """### How to use
-
-You can use this model directly with a pipeline for masked language modeling:
-```python
->>> from transformers import pipeline
->>> unmasker = pipeline('fill-mask', model='flax-community/roberta-hindi')
->>> unmasker("मुझे उनसे बात करना <mask> अच्छा लगा")
-
-[{'score': 0.2096337080001831,
-  'sequence': 'मुझे उनसे बात करना एकदम अच्छा लगा',
-  'token': 1462,
-  'token_str': ' एकदम'},
- {'score': 0.17915162444114685,
-  'sequence': 'मुझे उनसे बात करना तब अच्छा लगा',
-  'token': 594,
-  'token_str': ' तब'},
- {'score': 0.15887945890426636,
-  'sequence': 'मुझे उनसे बात करना और अच्छा लगा',
-  'token': 324,
-  'token_str': ' और'},
- {'score': 0.12024253606796265,
-  'sequence': 'मुझे उनसे बात करना लगभग अच्छा लगा',
-  'token': 743,
-  'token_str': ' लगभग'},
- {'score': 0.07114479690790176,
-  'sequence': 'मुझे उनसे बात करना कब अच्छा लगा',
-  'token': 672,
-  'token_str': ' कब'}]
-```"""
-    )
-
-    st.markdown("""## Datasets used""")
-    st.markdown(
-        """RoBERTa-Hindi has been pretrained on a huge corpus consisting of multiple datasets. The entire list of datasets used is mentioned below : """
-    )
-    st.markdown(
-        """
-1. OSCAR
-2. mC4
-3. Indic-glue
-4. Hindi-wikipedia-articles-172k
-5. Hindi-text-short-summarization corpus
-6. Hindi-text-short-and-large-summarization corpus
-7. Oldnewspaperhindi
-8. Samanantar
-"""
-    )
-
-    st.markdown(
-        """
-***NOTE: Some of the datasets are readily available on the HuggingFace Datasets while the team developed the rest as per the docs.***
-"""
-    )
+
+    st.write(read_markdown("intro.md"))
+    st.write(read_markdown("model_description.md"))
+    st.write(read_markdown("use.md"))
+    st.write(read_markdown("training_data.md"))
+    st.write(read_markdown("training_procedure.md"))
+    st.write(read_markdown("results.md"))
+    st.write(read_markdown("team.md"))
+    st.markdown(read_markdown("credits.md"))
+    st.markdown("![Alt Text](https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:medium)")