Supiri committed on
Commit f6eae83
1 Parent(s): 8fbbd40

Update README.md

Files changed (1):
  README.md +138 -0
README.md CHANGED
@@ -3,9 +3,147 @@ language: en
  datasets:
  - cornell_movie_dialog
  license: gpl-3.0
+ tags:
+ - NLP
+ - ChatBot
+ - Game AI
+ metrics:
+ - rouge
  widget:
  - text: "personality: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody.</s> inquiry: What's your name?"
    example_title: "Talk to Hinata"
  - text: "personality: Voldemort is a raging psychopath, devoid of the normal human responses to other people's suffering. He has no conscience, feels no remorse or empathy, and does not recognize the worth and humanity of anybody except himself.</s> inquiry: What's your name?"
    example_title: "Talk to Voldemort"
+ inference:
+   parameters:
+     num_beams: 6
+     diversity_penalty: 5.0
+     num_beam_groups: 2
  ---
+ # FreeIsland AI
+
+ With the advancement of the graphical processing power of computers and sophisticated algorithms like [Nanite](https://docs.unrealengine.com/5.0/en-US/RenderingFeatures/Nanite/), simulating lifelike sceneries in real time has never been easier. About a month ago Epic Games [showed off](https://www.youtube.com/watch?v=WU0gvPcc3jQ) the capabilities of its newest game engine by simulating an entire city, including population, traffic, and weather, running on a PlayStation 5. That made me think about what is missing from that simulation and how I could use my skills to improve it.
+
+ One of the main components separating our world from simulated worlds is people, and more importantly, the interactivity of the people in them. Last year a game called Cyberpunk 2077 was released, and it had an option to [talk to any person](https://www.youtube.com/watch?v=Z1OtYGzUoSo) in its city, but the problem was that all the responses from the non-player characters (NPCs) were hardcoded, which greatly reduces the immersion of the game.
+
+ So the goal of this project is to experiment with how advances in Natural Language Processing can make video game NPCs interactive and enhance immersion in video games.
+
+ # Usage
+ ```py
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ # Load the tokenizer along with the model.
+ tokenizer = AutoTokenizer.from_pretrained("Supiri/t5-base-conversation")
+ trained_model = AutoModelForSeq2SeqLM.from_pretrained("Supiri/t5-base-conversation")
+
+ prompt = "What's your name?"
+
+ contexts = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody."
+
+ input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids
+ outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)
+
+ print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True))
+
+ # Answer: My name is Hinata
+ ```
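+
+ The generation settings above mirror the `inference.parameters` in the front matter (diverse beam search: 6 beams in 2 groups). As a minimal illustrative sketch (not part of the original card), you can contrast them with the library's default greedy decoding, reusing `input_ids` from above:
+
+ ```py
+ # Illustrative only; `tokenizer`, `trained_model`, and `input_ids`
+ # are assumed to be defined as in the Usage example above.
+ greedy = trained_model.generate(input_ids)  # default greedy search
+ diverse = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)
+
+ print("Greedy :", tokenizer.decode(greedy[0], skip_special_tokens=True))
+ print("Diverse:", tokenizer.decode(diverse[0], skip_special_tokens=True))
+ ```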
+
+ # Evaluation
+
+ ## Test 1
+ For this test, I sampled an input from the test dataset. For this question, the ground-truth response is
+
+ > "It works a little."
+
+ But the model's response was
+
+ > "I don't want to flirt with you."
+
+ This reflects its bio, which was written by GPT-3:
+
+ > "He stands primarily to gain self-esteem, which he often receives through the submission of others"
+
+ In short, Dr. Greenbaum tried to tease Sebastian about his seductive traits, but the model's go-to response was to shut her down, since Sebastian's biography states that he often tries to assert his dominance over others.
+
+ ```py
+ # `dataset`, `tokenizer`, and `trained_model` are assumed to be loaded already.
+ prompt = dataset['test'][66]['request']
+ contexts = dataset['test'][66]['bio']
+
+ input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids
+ outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)
+
+ print("Input to the Model")
+ print("Bio:\t", contexts)
+ print("\nPrompt:\t", prompt)
+
+ print("\nGround truth response")
+ print("\t", dataset['test'][66]['response'])
+
+ print("\nModel's Prediction")
+ print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+
+ ```txt
+ Input to the Model
+ Bio:	 Sebastian is a very extreme representation of the trope of the "Confidence Man", and acts it out to a degree that is sometimes comedic but mostly frightening. He stands primarily to gain self-esteem, which he often receives through the submission of others or solely through his own perceptions. An artful seducer, his incredible charisma is both his greatest weapon and most intoxicating weakness.
+
+ Prompt: You think you can come in here with that cute little smirk on your face and try and flirt with me. It doesn't work, Sebastian.
+
+ Ground truth response
+ It works a little.
+
+ Model's Prediction
+ Answer: I don't want to flirt with you.
+ ```
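+
+ The card lists ROUGE under `metrics` in the front matter. As a hedged sketch (assuming Hugging Face's `evaluate` library, which is not used in the original card), the prediction above can be scored against the ground truth like this:
+
+ ```py
+ import evaluate  # assumption: the Hugging Face `evaluate` package is installed
+
+ rouge = evaluate.load("rouge")
+ scores = rouge.compute(
+     predictions=["I don't want to flirt with you."],
+     references=["It works a little."],
+ )
+ print(scores)  # keys such as 'rouge1', 'rouge2', 'rougeL'
+ ```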
+
+
+ ## Test 2
+
+ Hinata is a kind-hearted girl from the anime series Naruto. I took her bio from the [Personality Database](https://www.personality-database.com/profile/2790/hinata-hyga-naruto-shippden-mbti-personality-type) and asked a few questions about her.
+
+ Right off the bat, you can see the model understands the context: when I asked the model "**What's your name?**", it responded with the name given in the context.
+
+ Also, notice that when the same question is phrased differently (**"Who are you?"**), it still manages to answer it well.
+
+ ```py
+ prompts = ["What's your name?", "How are you feeling?", "Do you like Star Wars?", "Who are you?", "Coffee or tea?"]
+
+ contexts = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody."
+
+ print("Bio:\t", contexts, "\n")
+
+ for prompt in prompts:
+     input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids
+     outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)
+     print("Prompt:\t", prompt)
+     print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True), "\n")
+ ```
+
+ ```txt
+ Bio: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody.
+
+ Prompt: What's your name?
+ Answer: My name is Hinata
+
+ Prompt: How are you feeling?
+ Answer: I'm fine.
+
+ Prompt: Do you like Star Wars?
+ Answer: No, I don't.
+
+ Prompt: Who are you?
+ Answer: My name is Hinata
+
+ Prompt: Coffee or tea?
+ Answer: No, I don't drink much.
+ ```
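+
+ For convenience, the loop above can be wrapped in a small helper. This is only an illustrative sketch (the function name `reply` is mine, not from the original card); `tokenizer` and `trained_model` are assumed to be loaded as in the Usage section:
+
+ ```py
+ def reply(bio: str, prompt: str) -> str:
+     """Generate an in-character answer for a persona bio and an inquiry."""
+     input_ids = tokenizer(f"personality: {bio}", f"inquiry: {prompt}", return_tensors='pt').input_ids
+     outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2)
+     return tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+ print(reply(contexts, "What's your name?"))  # -> My name is Hinata
+ ```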
+
+
+ # Conclusion
+
+ After training the `t5-base` model for 5 epochs, it started adapting to the dataset, but there is still plenty of room for improvement.
+
+ 1. During dataset creation I had to limit the dataset to 200 unique characters out of the 9,035 present, due to **budget constraints**. If I manage to cover at least half of the dataset, this model should come up with far better responses.
+ 2. Both input size and batch size were severely constrained by the lack of GPU memory. A batch size of 64, as opposed to 8, would bring massive improvements in both training time and **generalization of the model** (see the sketch after this list).
+ 3. Using a bigger model like `t5-large` or `t5-3b` would certainly improve performance.
+ 4. One of the main downsides of this pre-trained model is that it was also trained on German, French, and Romanian, which consumed a chunk of the **vocabulary size and trainable parameters**. Retraining the model from scratch would help reduce both the needed parameter count and the training loss for this specific task.
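+
+ On point 2, one memory-friendly workaround is gradient accumulation: keep the per-device batch at 8 but accumulate gradients over 8 steps for an effective batch size of 64. The snippet below is a sketch under assumptions (the original training setup is not shown in this card), using `transformers`' `Seq2SeqTrainingArguments`:
+
+ ```py
+ from transformers import Seq2SeqTrainingArguments  # hypothetical training configuration
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="t5-base-conversation",   # illustrative output path
+     per_device_train_batch_size=8,       # what fits in GPU memory
+     gradient_accumulation_steps=8,       # 8 * 8 = effective batch size of 64
+     num_train_epochs=5,                  # matches the 5 epochs mentioned above
+ )
+ ```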