---
language: su
tags:
- sundanese-gpt2-base
license: mit
datasets:
- mc4
- cc100
- oscar
- wikipedia
widget:
- text: "Nami abdi Budi, ti Indonésia"
---

## Sundanese GPT-2 Base

Sundanese GPT-2 Base is a causal language model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. It was trained on four datasets: [OSCAR](https://hf.co/datasets/oscar)'s `unshuffled_deduplicated_su` subset, the Sundanese [mC4](https://hf.co/datasets/mc4) subset, the Sundanese [CC100](https://hf.co/datasets/cc100) subset, and Sundanese [Wikipedia](https://su.wikipedia.org/).

10% of the combined dataset was held out for evaluation. The model was trained from scratch and achieved an evaluation loss of 3.61 and an evaluation perplexity of 36.97.

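The perplexity here is simply the exponential of the evaluation cross-entropy loss, so the two numbers are consistent; a quick check in Python:

```python
import math

# Perplexity of a causal language model is the exponentiated cross-entropy loss
eval_loss = 3.61
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # 36.97
```
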
This model was trained using Hugging Face's Flax framework. All necessary training scripts can be found in the [Files and versions](https://hf.co/w11wo/sundanese-gpt2-base/tree/main) tab, as well as the [Training metrics](https://hf.co/w11wo/sundanese-gpt2-base/tensorboard) logged via TensorBoard.

## Model

| Model                 | #params | Arch. | Training/Validation data (text)       |
| --------------------- | ------- | ----- | ------------------------------------- |
| `sundanese-gpt2-base` | 124M    | GPT-2 | OSCAR, mC4, CC100, Wikipedia (758 MB) |

## Evaluation Results

The model was trained for 50 epochs; the final results at the end of training are shown below.

| train loss | valid loss | valid PPL | total time |
| ---------- | ---------- | --------- | ---------- |
| 2.436      | 3.61       | 36.97     | 7:01:54    |

## How to Use

### As a Causal Language Model

```python
from transformers import pipeline

pretrained_name = "w11wo/sundanese-gpt2-base"

# Load the model and tokenizer into a text-generation pipeline
nlp = pipeline(
    "text-generation",
    model=pretrained_name,
    tokenizer=pretrained_name
)

# Generate a continuation of the Sundanese prompt
nlp("Nami abdi Budi, ti Indonésia")
```

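The pipeline returns a list of dictionaries, each with a `generated_text` field. A minimal sketch of tweaking the generation settings (the parameter values below are illustrative, not taken from the model card):

```python
# Sample a longer continuation; these generation settings are illustrative
outputs = nlp(
    "Nami abdi Budi, ti Indonésia",
    max_length=50,
    do_sample=True,
    top_k=50,
)
print(outputs[0]["generated_text"])
```
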
### Feature Extraction in PyTorch

```python
from transformers import GPT2Model, GPT2TokenizerFast

pretrained_name = "w11wo/sundanese-gpt2-base"
model = GPT2Model.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)

# Tokenize the prompt and extract its hidden-state features
prompt = "Nami abdi Budi, ti Indonésia"
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```

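Continuing from the block above, the extracted features are in `output.last_hidden_state`, a tensor of shape `(batch_size, sequence_length, hidden_size)`. If a single vector per sentence is needed, one common (but not prescribed here) option is mean pooling over tokens:

```python
# Hidden states for every token: (batch_size, seq_len, 768) for GPT-2 base
features = output.last_hidden_state

# Mean-pool over the token dimension to get one sentence-level vector;
# this pooling choice is an illustrative assumption, not part of the model card
sentence_embedding = features.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```
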
## Disclaimer

Please consider the biases present in all four training datasets, which may carry over into this model's outputs.

## Author

Sundanese GPT-2 Base was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/).