gabtan99 committed on
Commit
7d0c847
1 Parent(s): 2ae9d4d

initial commit: model files

README.md ADDED
@@ -0,0 +1,57 @@
+ ---
+ tags:
+ - conversational
+ - tagalog
+ - filipino
+
+ language:
+ - tl
+
+ inference: false
+ ---
+
+ # Tagalog DialoGPT
+
+ A DialoGPT model fine-tuned on Tagalog conversational data scraped from the web. This model is an output of research on BERT-based data augmentation for low-resource languages. We fine-tuned DialoGPT-medium as our base model.
+
+ # Latest release: July 25, 2021
+ * At the moment, the model can only condition its responses on the 3 most recent utterances of the conversation history before the context is cut off. This is a result of the scarce amount of Tagalog conversations in our dataset; a sketch of one way to keep the history within this window is included at the end of the Usage section below.
+
+
+ # Dataset and Scripts
+ The training data used was collected under the following categories:
+ * Food and Drinks
+ * Home and Garden
+ * Style and Fashion
+ * Travel and Leisure
+ * Visas and Immigration
+ * Health and Wellness
+ * Body and Fitness
+ * Small Talk
+
+ The Pinoy Exchange (PEx) Conversational Dataset will be released soon.
+
+ # Usage
+
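+ First, load the tokenizer and model. This is a minimal sketch assuming the standard Hugging Face Transformers API; the repository id below is an assumption, so substitute the actual Hub id of this model if it differs.
+
+ ```
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # assumed repository id, shown for illustration only
+ checkpoint = "gabtan99/dialogpt-tagalog-medium"
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(checkpoint)
+ ```
+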
+ Here is an example of using beam search as the decoding method for our model.
+ ```
+ for step in range(2):
+     # encode the new user input, add the eos_token, and return a tensor in PyTorch
+     new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
+
+     # append the new user input tokens to the chat history
+     bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
+
+     # we limit generation to 512 tokens; each utterance in training had a maximum of 128 tokens
+     chat_history_ids = model.generate(
+         bot_input_ids, max_length=512,
+         pad_token_id=tokenizer.eos_token_id,
+         num_beams=5,
+         no_repeat_ngram_size=3
+     )
+
+     # pretty print the last output tokens from the bot
+     print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
+ ```
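+
+ Since the model attends to at most the 3 most recent utterances (see the release note above), you may want to trim the accumulated history between turns. The helper below is only a sketch, not part of the original training or inference code; it reuses the `torch` and `tokenizer` objects loaded earlier and keeps the last few EOS-terminated utterances.
+
+ ```
+ def truncate_history(history_ids, tokenizer, max_utterances=3):
+     # split the flat token-id sequence into utterances on the EOS token
+     eos = tokenizer.eos_token_id
+     utterances, current = [], []
+     for token_id in history_ids[0].tolist():
+         current.append(token_id)
+         if token_id == eos:
+             utterances.append(current)
+             current = []
+     if current:
+         utterances.append(current)
+
+     # keep only the most recent utterances and re-join them into one tensor
+     kept = [token_id for utt in utterances[-max_utterances:] for token_id in utt]
+     return torch.tensor([kept], dtype=torch.long)
+
+ # for example, call it before generating:
+ # bot_input_ids = truncate_history(bot_input_ids, tokenizer)
+ ```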
+
+
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 1024,
+   "n_head": 16,
+   "n_layer": 24,
+   "n_positions": 1024,
+   "resid_pdrop": 0.1,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "conversational": {
+       "max_length": 1000
+     }
+   },
+   "vocab_size": 50257
+ }
eval_results.txt ADDED
@@ -0,0 +1 @@
+ perplexity = tensor(4.1206)
merges.txt ADDED
The diff for this file is too large to render. See raw diff
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f9b235d5f4062b2104bebedbbf8de1410a3615460f040f7f98ac5d62336fa53
+ size 1444581337
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4780f61cd642ebfbd3838e556ce3bc5525498cb9607cbf0f78d865ced16dac8c
+ size 1327
vocab.json ADDED
The diff for this file is too large to render. See raw diff