w11wo committed on
Commit 276a22e
1 Parent(s): 7dcbf41

Initial README update

Files changed (1)
  1. README.md +67 -1
README.md CHANGED
@@ -1 +1,67 @@
- GPT-2 English trained on Indonesian Wikipedia articles.
---
language: id
tags:
- indo-gpt2-small
license: mit
datasets:
- wikipedia
---

## Indo GPT-2 Small
Indo GPT-2 Small is a language model based on the [GPT-2 model](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). It was trained on the latest Indonesian Wikipedia articles (from a dump taken around late December).

The model started out as HuggingFace's pretrained [English GPT-2 model](https://huggingface.co/transformers/model_doc/gpt2.html) and was later fine-tuned on the Indonesian dataset. Many of the techniques used are based on a [notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb)/[blog](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) shared by [Pierre Guillou](https://medium.com/@pierre_guillou), in which he fine-tuned the English GPT-2 model on a Portuguese dataset.

Frameworks used include HuggingFace's [Transformers](https://huggingface.co/transformers) and fast.ai's [Deep Learning library](https://docs.fast.ai/). PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow.
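
As a minimal sketch of that TensorFlow compatibility (not part of the original card), the checkpoint can be loaded through the TensorFlow model class; this assumes TensorFlow is installed and that the repository hosts PyTorch weights, hence the `from_pt=True` conversion flag:

```python
from transformers import GPT2TokenizerFast, TFGPT2LMHeadModel

pretrained_name = "w11wo/indo-gpt2-small"
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)

# from_pt=True converts the PyTorch checkpoint on the fly in case
# no native TensorFlow weights are hosted in the repository
tf_model = TFGPT2LMHeadModel.from_pretrained(pretrained_name, from_pt=True)
```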

## Model
| Model             | #params | Arch.       | Training/Validation data (text)       |
|-------------------|---------|-------------|---------------------------------------|
| `indo-gpt2-small` | 124M    | GPT-2 Small | Indonesian Wikipedia (275 MB of text) |
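
The 124M figure is the standard GPT-2 Small parameter count; as a quick sanity-check sketch (not part of the original card), it can be reproduced from the loaded checkpoint:

```python
from transformers import GPT2LMHeadModel

# Count the parameters of the checkpoint (~124M for GPT-2 Small)
model = GPT2LMHeadModel.from_pretrained("w11wo/indo-gpt2-small")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")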

## Evaluation Results
The model was trained for only 1 epoch; the table below shows the final results at the end of training.

| epoch | train loss | valid loss | perplexity | total time |
|-------|------------|------------|------------|------------|
| 0     | 2.981      | 2.936      | 18.85      | 2:45:25    |
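
The reported perplexity is simply the exponential of the validation loss, which can be checked in one line (a small illustrative snippet, not part of the original card):

```python
import math

# perplexity = exp(validation loss); agrees with the tabulated 18.85 up to rounding of the loss
print(round(math.exp(2.936), 2))  # 18.84
```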

## How to Use (PyTorch)
### Load Model and Byte-level Tokenizer
```python
from transformers import GPT2TokenizerFast, GPT2LMHeadModel

pretrained_name = "w11wo/indo-gpt2-small"
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
tokenizer.model_max_length = 1024
model = GPT2LMHeadModel.from_pretrained(pretrained_name)
```

### Generate a Sequence
```python
# sample prompt
prompt = "Nama saya Budi, dari Indonesia"
input_ids = tokenizer.encode(prompt, return_tensors='pt')
model.eval()

# generate output using top-k sampling
sample_outputs = model.generate(input_ids,
                                pad_token_id=50256,
                                do_sample=True,
                                max_length=40,
                                min_length=40,
                                top_k=40,
                                num_return_sequences=1)

for i, sample_output in enumerate(sample_outputs):
    print(tokenizer.decode(sample_output.tolist()))
```
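
Alternatively, roughly the same generation can be done with the high-level `pipeline` helper. This is only a sketch of an equivalent call, not the card's documented usage, and the sampled output will of course differ from run to run:

```python
from transformers import pipeline

# The text-generation pipeline wraps the tokenizer and model shown above
generator = pipeline("text-generation", model="w11wo/indo-gpt2-small")
result = generator("Nama saya Budi, dari Indonesia",
                   max_length=40, do_sample=True, top_k=40)
print(result[0]["generated_text"])
```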

## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual text. Additionally, biases present in the Wikipedia articles may carry over into the model's outputs.

## Credits
Major thanks to Pierre Guillou for sharing his work, which not only enabled me to realize this project but also taught me tons of new, exciting stuff.

## Author
Indo GPT-2 Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development were done on Google Colaboratory using its free GPU access.