deepparag committed
Commit 6ada44f
1 Parent(s): c5f596b

Create README.md

Files changed (1):
  1. README.md +55 -0

README.md ADDED
---
thumbnail: https://images-ext-2.discordapp.net/external/IaDAOIgiVKpnDGgsqAsVEW5jgwIHprFc3dSmlW3U0Ro/%3Fsize%3D4096/https/cdn.discordapp.com/avatars/931226824753700934/51db9904887a38dca03238f9b3479594.png
tags:
- conversational
license: mit
---
A generative AI built using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).

# AEONA
Note: the AI is still learning, so expect very frequent updates!
A sample AIML project and an API will be released soon!
## Goals
The goal is to create an AI that works together with AIML to produce the most human-like responses possible.

#### Why not an AI on its own?
A neural model on its own cannot realistically learn facts about the user and store data about them, whereas an AIML layer can, and can even execute code!
The goal of the AI is therefore to generate responses wherever the AIML rules fail, as sketched below.

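A minimal sketch of that fallback layout, assuming the third-party `python-aiml` package and a hypothetical `generate_reply` function wrapping the model (neither is part of this repository):

```python
import aiml  # third-party python-aiml package (assumption)

kernel = aiml.Kernel()
kernel.learn("startup.xml")  # hypothetical AIML rule file

def reply(user_text, generate_reply):
    # Try the AIML rules first: they can store user facts and even run code.
    rule_reply = kernel.respond(user_text)
    if rule_reply:
        return rule_reply
    # No rule matched, so fall back to the neural model.
    return generate_reply(user_text)
```
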
Hence the goal becomes an AI with a wide variety of knowledge that is still as small as possible!
So we use three datasets:
1. [Movielines](https://www.kaggle.com/Cornell-University/movie-dialog-corpus): the movie lines encourage longer, more thought-out responses, but they can be very random. About 200k lines!
2. [Discord Messages](https://www.kaggle.com/jef1056/discord-data): messages on a wide variety of topics, filtered to remove spam. This makes the AI fairly random, but it gives it answers to everyday questions! About 120 million messages!
3. A custom dataset scraped from my own messages. These messages are very narrow in scope, so an AI trained on this dataset alone tends to apologize a lot when it receives a random reply!

## Training
The Discord Messages dataset simply dwarfs the other two, so the smaller datasets are repeated during training.
This leads to them covering each other's weaknesses!
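As a rough illustration of that repetition (a sketch only; the exact mixing recipe is not part of this README), the smaller corpora can simply be duplicated until each contributes a comparable number of examples:

```python
# Illustrative oversampling sketch, not the actual training script.
def oversample(datasets):
    largest = max(len(d) for d in datasets)
    mixed = []
    for d in datasets:
        repeats = max(1, largest // len(d))  # repeat small corpora
        mixed.extend(d * repeats)
    return mixed

# Toy usage with stand-in corpora of 10 and 100 lines:
print(len(oversample([list(range(10)), list(range(100))])))  # -> 200
```
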
## Usage
Example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepparag/DumBot")
model = AutoModelForCausalLM.from_pretrained("deepparag/DumBot")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response, sampling to keep the replies varied
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=4,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print the last output tokens from the bot
    print("DumBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```