NebulaByte committed on
Commit
6cfc1c0
1 Parent(s): ac6e6d2

Update README.md

Files changed (1)
  1. README.md +59 -1
README.md CHANGED
@@ -10,4 +10,62 @@ widget:
inference:
  parameters:
    max_length: 200
- ---
+ ---
+
+ # Model Overview:
+ This is a language generation model that extends GPT-2 to support Hindi in addition to the languages the original model already supports. It was fine-tuned on Hindi [Wikipedia](https://www.kaggle.com/datasets/disisbig/hindi-wikipedia-articles-55k) articles.
+
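+ As a quick way to try the model, it can be loaded with the `transformers` text-generation pipeline. This is a minimal sketch: the repository id `NebulaByte/hindi_gpt2` is assumed from the committer name and may not match the actual model id, and the Hindi prompt is only an example.
+
+ ```python
+ from transformers import pipeline
+
+ # Assumed repository id; replace with the actual id of this model repo.
+ generator = pipeline("text-generation", model="NebulaByte/hindi_gpt2")
+
+ # max_length mirrors the widget setting in the metadata above.
+ outputs = generator("भारत एक विशाल देश है", max_length=200, num_return_sequences=1)
+ print(outputs[0]["generated_text"])
+ ```
+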
+ # Model Architecture and Parameters:
+ The model architecture is based on the GPT-2 framework and uses the parameters of the small version of the original OpenAI GPT-2 model. It employs a Byte Pair Encoding (BPE) tokenizer.
+
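+ For orientation, the small GPT-2 variant uses 12 transformer layers, 12 attention heads, a hidden size of 768, and a 1024-token context window. The configuration below is only a sketch assembled from those public defaults plus the merged vocabulary size reported in the Tokenizer section; the card does not state the exact configuration of this checkpoint.
+
+ ```python
+ from transformers import GPT2Config, GPT2LMHeadModel
+
+ # Sketch of a small-GPT-2-sized configuration with the merged Hindi vocabulary.
+ config = GPT2Config(
+     vocab_size=53497,   # merged vocabulary reported in the Tokenizer section
+     n_positions=1024,   # maximum input length in tokens
+     n_embd=768,
+     n_layer=12,
+     n_head=12,
+ )
+ model = GPT2LMHeadModel(config)
+ print(f"{model.num_parameters():,} parameters")
+ ```
+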
+ # Corpus:
+ The training corpus for Hindi GPT2 consists of Wikipedia articles.
+
+ # Tokenizer:
+ A new tokenizer was trained on the Hindi Wikipedia corpus, and its vocabulary (5000 tokens) was merged with the existing GPT-2 tokenizer. Hindi GPT2 uses a byte-level version of Byte Pair Encoding (BPE) to tokenize Hindi text, including Unicode characters. The merged tokenizer has a vocabulary size of 53497, which allows it to represent the Hindi language's rich vocabulary effectively. Input sequences are formed by breaking the text into consecutive blocks of at most 1024 tokens.
+
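+ The tokenizer-training script is not part of this card, so the snippet below is only a rough sketch of the described workflow: train a small Hindi BPE vocabulary, then fold it into the base GPT-2 tokenizer. `hindi_texts` is a tiny stand-in for the full Hindi Wikipedia corpus, and the token-merging step is an approximation of whatever merge procedure was actually used.
+
+ ```python
+ from transformers import AutoTokenizer
+
+ # Stand-in for the Hindi Wikipedia corpus used in the real training run.
+ hindi_texts = [
+     "भारत एक विशाल देश है।",
+     "हिंदी भारत की प्रमुख भाषाओं में से एक है।",
+ ]
+
+ # Base GPT-2 byte-level BPE tokenizer.
+ base_tokenizer = AutoTokenizer.from_pretrained("gpt2")
+
+ # Train a new BPE tokenizer (the card reports a 5000-token Hindi vocabulary).
+ hindi_tokenizer = base_tokenizer.train_new_from_iterator(hindi_texts, vocab_size=5000)
+
+ # Approximate the merge by adding Hindi tokens missing from the base vocabulary.
+ new_tokens = [t for t in hindi_tokenizer.get_vocab() if t not in base_tokenizer.get_vocab()]
+ base_tokenizer.add_tokens(new_tokens)
+ print(len(base_tokenizer))  # the card reports a merged vocabulary of 53497
+ ```
+
+ After merging, the model's embedding matrix has to be resized to the new vocabulary size, e.g. with `model.resize_token_embeddings(len(base_tokenizer))`.
+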
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ More information needed
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
+ - learning_rate: 0.0005
+ - train_batch_size: 64
+ - eval_batch_size: 64
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 256
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 500
+ - num_epochs: 1
+ - mixed_precision_training: Native AMP
+
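+ Reconstructed as a Hugging Face `TrainingArguments` sketch, the values above map roughly as follows; the original training script is not included in this card, so the argument names and the `fp16` flag for Native AMP are assumptions.
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Hedged reconstruction of the reported hyperparameters.
+ # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
+ training_args = TrainingArguments(
+     output_dir="hindi_gpt2",          # placeholder output path
+     learning_rate=5e-4,
+     per_device_train_batch_size=64,
+     per_device_eval_batch_size=64,
+     gradient_accumulation_steps=4,    # 64 * 4 = 256 total train batch size
+     seed=42,
+     lr_scheduler_type="cosine",
+     warmup_steps=500,
+     num_train_epochs=1,
+     fp16=True,                        # mixed precision via native AMP
+ )
+ ```
+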
+ ### Training results
+
+ | Step | Training Loss | Validation Loss |
+ | :--- | :------------ | :-------------- |
+ | 500  | 2.0016        | 1.066703        |
+ | 1000 | 1.0314        | 0.959653        |
+ | 1500 | 0.9593        | 0.918827        |
+ | 2000 | 0.922         | 0.889607        |
+ | 2500 | 0.8983        | 0.872523        |
+ | 3000 | 0.8852        | 0.863592        |
+
+
+ ### Framework versions
+
+ - Transformers 4.30.2
+ - torch 1.13.1
+ - Datasets 2.13.1
+ - Tokenizers 0.13.3