Sakonii committed on
Commit
2531c5a
1 Parent(s): d15f0af

Update README.md

Files changed (1)
  1. README.md +69 -14
README.md CHANGED
@@ -5,31 +5,83 @@ tags:
  model-index:
  - name: distilgpt2-nepali
    results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # distilgpt2-nepali

- This model is a fine-tuned version of [Sakonii/distilgpt2-nepali](https://huggingface.co/Sakonii/distilgpt2-nepali) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 3.2705

  ## Model description

- More information needed

  ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
@@ -39,15 +91,18 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 2
  - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:------:|:---------------:|
- | 3.4688 | 1.0 | 94395 | 3.3439 |
- | 3.3968 | 2.0 | 188790 | 3.2705 |

  ### Framework versions
 
  model-index:
  - name: distilgpt2-nepali
    results: []
+ widget:
+ - text: "नेपालका धेरैजसो चाडपर्वहरूमध्ये"
+   example_title: "Example 1"
+ - text: "नेपाल र भारतबीच"
+   example_title: "Example 2"
+ - text: "प्रधानमन्त्री"
+   example_title: "Example 3"
+ - text: "दस वर्ष लामो "
+   example_title: "Example 4"
+ - text: "जापानमा आज "
+   example_title: "Example 5"
+
  ---

  # distilgpt2-nepali

+ This model is pre-trained on the [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) dataset, which consists of over 13 million Nepali text sequences, using a causal language modeling (CLM) objective. Our approach trains a Sentence Piece Model (SPM) for text tokenization, similar to [XLM-RoBERTa](https://arxiv.org/abs/1911.02116), and trains [distilgpt2](https://huggingface.co/distilgpt2) for language modeling.
+
  It achieves the following results on the evaluation set:
+
+ | Training Loss | Validation Loss | Perplexity |
+ |:-------------:|:---------------:|:----------:|
+ | 3.3968        | 3.2705          | 26.3245    |
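+
+ The reported perplexity is simply the exponential of the validation cross-entropy loss; a minimal sketch to reproduce the figure from the numbers in the table above:
+
+ ```python
+ import math
+
+ validation_loss = 3.2705               # final validation loss from the table above
+ perplexity = math.exp(validation_loss)
+ print(f"{perplexity:.4f}")             # ~26.3245
+ ```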
 
  ## Model description

+ Refer to the original [distilgpt2](https://huggingface.co/distilgpt2) model card.

  ## Intended uses & limitations

+ This raw model can be used for Nepali text generation and is intended to be fine-tuned on Nepali language-focused downstream tasks.
+ Because the language model is trained on text grouped into blocks of 512 tokens, it handles sequences of up to 512 tokens and may not perform satisfactorily on shorter sequences.
+
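+ For longer inputs, a minimal sketch of truncating to the 512-token block size at tokenization time (the repeated string is just a stand-in for a long document):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained('Sakonii/distilgpt2-nepali')
+ # truncate to the 512-token block size used during training
+ encoded = tokenizer("नेपालका धेरैजसो चाडपर्वहरूमध्ये " * 100,
+                     truncation=True, max_length=512, return_tensors='pt')
+ print(encoded['input_ids'].shape)      # torch.Size([1, 512]) once the text exceeds the block size
+ ```
+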
+ ## Usage
+
+ This model can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
+
+ ```python
+ >>> from transformers import pipeline, set_seed
+ >>> set_seed(42)
+ >>> generator = pipeline('text-generation', model='Sakonii/distilgpt2-nepali')
+ >>> generator("नेपालका धेरैजसो चाडपर्वहरूमध्ये,", max_length=30, num_return_sequences=5)
+
+ Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
+ [{'generated_text': 'नेपालका धेरैजसो चाडपर्वहरूमध्ये, तिहार र छठपर्व विशेष रूपमा मनाइने भएकाले नेपाली मौलिक पर्व पनि हो । हिन्दू धर्म र संस्कृतिक... काठमाडौं ।'},
+ {'generated_text': 'नेपालका धेरैजसो चाडपर्वहरूमध्ये, तिहारको मुख्य दिन आज साँझ अस्ताउँदो सूर्यलाई अर्घ्य दिइएको छ । वैदिक विधि...विस्तृतमा पढ्नुस् काठमाडौं । नेपाल चिकित्सक संघका'},
+ {'generated_text': 'नेपालका धेरैजसो चाडपर्वहरूमध्ये, चाडपर्व, विवाह,... नेपाली काँग्रेसका प्रवक्ता विश्वप्रकाश शर्माले पार्टीभित्र आन्तरिक झगडा हुने निश्चित भएको र गुटबन्दीका कारण चुनावमा हार बेहोर्नु'},
+ {'generated_text': 'नेपालका धेरैजसो चाडपर्वहरूमध्ये, दशैं नेपालीहरूको मौलिक पर्वका रूपमा मनाउँछन् । नेपालीहरूको दोस्रो महान् पर्व तिहार हो । तिहारले दाजुभाइ तथा दिदीबहिनीहरूको बीचमा प्रगाढ सम्बन्ध स्थापित'},
+ {'generated_text': 'नेपालका धेरैजसो चाडपर्वहरूमध्ये, माघे संक्रान्ति र माघे संक्रान्तिमा माघे संक्रान्तिमा मात्र नभएर फागुन महिनाभर नै विशेष महत्व रहने गरेको छ । काठमाडौं ।'}]
+ ```

+ Here is how we can use the model to get the features of a given text in PyTorch:

+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained('Sakonii/distilgpt2-nepali')
+ model = AutoModelForCausalLM.from_pretrained('Sakonii/distilgpt2-nepali')
+
+ # prepare input: tokenize the text and return PyTorch tensors
+ text = "चाहिएको text यता राख्नु होला।"
+ encoded_input = tokenizer(text, return_tensors='pt')
+
+ # forward pass: output.logits holds the next-token prediction scores;
+ # pass output_hidden_states=True to also get the hidden-state features
+ output = model(**encoded_input)
+ ```
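+
+ The same forward pass can also score text under the causal LM objective; a small sketch, using one of the widget prompts above, that turns the loss into a per-text perplexity:
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained('Sakonii/distilgpt2-nepali')
+ model = AutoModelForCausalLM.from_pretrained('Sakonii/distilgpt2-nepali')
+
+ encoded = tokenizer("नेपालका धेरैजसो चाडपर्वहरूमध्ये", return_tensors='pt')
+ with torch.no_grad():
+     # passing the input ids as labels makes the model return the CLM loss
+     out = model(**encoded, labels=encoded['input_ids'])
+ print(torch.exp(out.loss).item())      # perplexity of this single prompt
+ ```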
+
+ ## Training data
+
+ This model is trained on the [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) language modeling dataset, which combines [OSCAR](https://huggingface.co/datasets/oscar), [cc100](https://huggingface.co/datasets/cc100), and a set of Nepali articles scraped from Wikipedia.
+ For training the language model, the texts are tokenized with a Sentence Piece Model (SPM) with a vocabulary size of 24,576 and grouped into blocks of 512 tokens.
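+
+ A rough sketch of this preprocessing, following the standard `transformers` causal-LM recipe (assuming the dataset exposes a `text` column; the exact training script may differ):
+
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+
+ block_size = 512
+ tokenizer = AutoTokenizer.from_pretrained('Sakonii/distilgpt2-nepali')
+ dataset = load_dataset('Sakonii/nepalitext-language-model-dataset')
+
+ def tokenize(examples):
+     return tokenizer(examples['text'])
+
+ def group_texts(examples):
+     # concatenate all tokenized texts, then split them into fixed blocks of 512 tokens
+     concatenated = {k: sum(examples[k], []) for k in examples.keys()}
+     total_length = (len(concatenated['input_ids']) // block_size) * block_size
+     return {
+         k: [t[i:i + block_size] for i in range(0, total_length, block_size)]
+         for k, t in concatenated.items()
+     }
+
+ tokenized = dataset.map(tokenize, batched=True, remove_columns=['text'])
+ lm_dataset = tokenized.map(group_texts, batched=True)
+ ```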
 
  ## Training procedure

+ The model is trained with the same configuration as the original [distilgpt2](https://huggingface.co/distilgpt2), but with 512 tokens per instance, 12 instances per batch, and around 472K training steps (94,395 steps per epoch over 5 epochs).
+
  ### Training hyperparameters

  The following hyperparameters were used during training:
 
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
+ - num_epochs: 5
  - mixed_precision_training: Native AMP
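+
+ These settings map roughly onto `transformers.TrainingArguments` as in the sketch below; values not listed above (e.g. the learning rate and evaluation batch size) are omitted, and `output_dir` is a placeholder:
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir='distilgpt2-nepali',     # placeholder output directory
+     seed=42,
+     per_device_train_batch_size=12,     # 12 instances per batch, as noted above
+     num_train_epochs=5,
+     lr_scheduler_type='linear',
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     fp16=True,                          # Native AMP mixed-precision training
+ )
+ ```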

  ### Training results

+ | Training Loss | Epoch | Step   | Validation Loss | Perplexity |
+ |:-------------:|:-----:|:------:|:---------------:|:----------:|
+ | 3.7645        | 1.0   | 94395  | 3.6291          | 37.6789    |
+ | 3.5857        | 2.0   | 188790 | 3.4442          | 31.3182    |
+ | 3.505         | 3.0   | 283185 | 3.3749          | 29.2214    |
+ | 3.4688        | 4.0   | 377580 | 3.3439          | 28.3294    |
+ | 3.3968        | 5.0   | 471975 | 3.2705          | 26.3245    |

  ### Framework versions