w11wo committed on
Commit
02af4b3
1 Parent(s): 05f5321

Create README.md

Files changed (1)
  1. README.md +69 -0
README.md ADDED
---
language: th
tags:
- gpt2-base-thai
license: mit
datasets:
- oscar
widget:
- text: "สวัสดีตอนเช้า"
---

## GPT-2 Base Thai

GPT-2 Base Thai is a causal language model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. It was trained from scratch on the `unshuffled_deduplicated_th` subset of the [OSCAR](https://huggingface.co/datasets/oscar) dataset, reaching a final evaluation loss of 1.708 and an evaluation perplexity of 5.516.
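
For reference, the training subset can be loaded with the `datasets` library. This is a minimal sketch; streaming mode is an optional detail here, used only to avoid downloading the full corpus:

```python
from datasets import load_dataset

# Stream the Thai subset of OSCAR that the model was trained on
dataset = load_dataset("oscar", "unshuffled_deduplicated_th", split="train", streaming=True)

# Peek at the beginning of the first document
print(next(iter(dataset))["text"][:100])
```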

This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by HuggingFace. All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.

All of the scripts used for training can be found in the [Files and versions](https://hf.co/flax-community/gpt2-base-thai/tree/main) tab, and the [Training metrics](https://hf.co/flax-community/gpt2-base-thai/tensorboard) are logged via TensorBoard.

## Model

| Model            | #params | Arch. | Training/Validation data (text)      |
| ---------------- | ------- | ----- | ------------------------------------ |
| `gpt2-base-thai` | 124M    | GPT-2 | `unshuffled_deduplicated_th` Dataset |

## Evaluation Results

The model was trained for 3 epochs; the table below shows the final results once training ended.

| train loss | valid loss | valid PPL | total time |
| ---------- | ---------- | --------- | ---------- |
| 1.638      | 1.708      | 5.516     | 6:12:34    |
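
Note that the validation perplexity is simply the exponential of the validation loss: exp(1.708) ≈ 5.52, consistent with the reported 5.516 (the reported value comes from the unrounded loss).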

## How to Use

### As Causal Language Model

```python
from transformers import pipeline

pretrained_name = "flax-community/gpt2-base-thai"

# Build a text-generation pipeline backed by the model and its tokenizer
nlp = pipeline(
    "text-generation",
    model=pretrained_name,
    tokenizer=pretrained_name
)

# Generate a continuation of the Thai prompt "Good morning"
nlp("สวัสดีตอนเช้า")
```
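
The pipeline returns a list of dictionaries with a `generated_text` key. Decoding can be tuned through the usual `generate` keyword arguments; the sampling values below are illustrative, not tuned:

```python
# Sample two continuations of up to 50 tokens each (values are illustrative)
nlp("สวัสดีตอนเช้า", max_length=50, do_sample=True, top_k=50, num_return_sequences=2)
```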

### Feature Extraction in PyTorch

```python
from transformers import GPT2Model, GPT2TokenizerFast

pretrained_name = "flax-community/gpt2-base-thai"
model = GPT2Model.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)

prompt = "สวัสดีตอนเช้า"
encoded_input = tokenizer(prompt, return_tensors="pt")

# Forward pass; the token-level features are in output.last_hidden_state
output = model(**encoded_input)
```
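
For the base architecture, `output.last_hidden_state` is a tensor of shape `(batch_size, sequence_length, 768)`; pooling over the sequence dimension (e.g. taking the last token's hidden state) is one common way to obtain a single sentence vector.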

## Team Members

- Sakares Saengkaew ([@sakares](https://hf.co/sakares))
- Wilson Wongso ([@w11wo](https://hf.co/w11wo))