deepparag committed
Commit 89473ac
1 Parent(s): f335a06

Update README.md

Files changed (1): README.md (+29 -10)
README.md CHANGED
@@ -4,17 +4,27 @@ tags:
 - conversational
 license: mit
 ---
-An generative AI made using [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium).
+
+# Aeona | Chatbot
+![Aeona Banner](https://github.com/deepsarda/Aeona/blob/master/dashboard/static/banner.png?raw=true)
+
+A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).
+
+Recommended for use alongside an [AIML chatbot](https://github.com/deepsarda/Aeona-Aiml) to reduce load, get better replies, and give your bot a name and personality.
+Using an AIML chatbot also allows you to hardcode some replies.
 
 # AEONA
 Aeona is a chatbot which hopes to be able to talk with humans as if it were a friend!
 Its main target platform is Discord.
+You can invite the bot [here](https://aeona.xyz).
 
+To learn more about this project and chat with the AI, you can use this [website](https://aeona.xyz/).
 
 Aeona works by using the context of the previous messages, guessing the personality of the human who is talking with it, and adapting its own personality to better talk with the user.
-contact: deep.p.sarda@gmail.com
+
 ## Goals
 The goal is to create an AI which will work with AIML in order to create the most human-like AI.
 
 
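To make the AIML recommendation in the hunk above concrete, here is a minimal sketch of a routing layer in which hardcoded AIML patterns answer first and only unmatched messages reach the neural model. The `python-aiml` package, the `startup.xml` rule file, and the checkpoint id are assumptions for illustration, not something this commit ships:

```python
# Hypothetical AIML-first router, per the recommendation above.
# Assumes the python-aiml package and an AIML rule set you supply.
import aiml
from transformers import AutoModelForCausalLM, AutoTokenizer

kernel = aiml.Kernel()
kernel.learn("startup.xml")  # assumed entry point for your AIML rules

tokenizer = AutoTokenizer.from_pretrained("deepparag/Aeona-Beta")  # assumed id
model = AutoModelForCausalLM.from_pretrained("deepparag/Aeona-Beta")

def reply(user_input):
    # Hardcoded AIML replies handle common patterns cheaply...
    matched = kernel.respond(user_input)
    if matched:
        return matched
    # ...and everything else falls through to the neural model.
    input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    output_ids = model.generate(
        input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id
    )
    return tokenizer.decode(
        output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True
    )
```

Because AIML answers never hit the model, a layer like this both reduces load and provides the fixed name/personality replies the README suggests.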
@@ -29,15 +39,22 @@ contact: deep.p.sarda@gmail.com
 3. A custom dataset scraped from my messages. These messages are very narrow; after training on this dataset, sending a random reply will make the AI say sorry loads of times!
 
 ## Training
-The Discord Messages Dataset simply dwarfs the other datasets, Hence the data sets are repeated.
-This leads to them covering each others issues!
+The Discord Messages Dataset simply dwarfs the other datasets, hence the smaller datasets are repeated (see the sketch after this hunk).
+This leads to them covering each other's issues!
 
+The AI has a context of 6 messages, which means it takes the conversation up to the user's 4th most recent message into account when replying.
+[Example](https://huggingface.co/deepparag/Aeona-Beta/discussions/1)
+
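Since the README does not spell out the exact repetition recipe, here is a minimal sketch of the idea, assuming simple oversampling of the smaller corpora until each source roughly matches the Discord dataset:

```python
import random

def balance(datasets):
    """Repeat each smaller dataset until every source roughly
    matches the largest one, then shuffle the combined mix."""
    target = max(len(rows) for rows in datasets.values())
    mixed = []
    for rows in datasets.values():
        repeats, remainder = divmod(target, len(rows))
        mixed += rows * repeats + random.sample(rows, remainder)
    random.shuffle(mixed)
    return mixed

# Toy stand-ins for the corpora described above.
mix = balance({
    "discord": [f"discord_{i}" for i in range(1000)],  # dominant dataset
    "persona": [f"persona_{i}" for i in range(100)],   # repeated ~10x
    "custom":  [f"custom_{i}"  for i in range(50)],    # repeated ~20x
})
```

Repeating the smaller sets this way is what lets them cover each other's issues: every batch still sees persona and custom examples even though the Discord data dominates.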
+## Hugging Face inference tips
+I recommend sending the user input
+plus the previous 3 AI and human responses (see the sketch below).
+
+Using more context than this will lead to useless responses; using less is alright, but the responses may be random.
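A minimal sketch of that tip, assuming a DialoGPT-style checkpoint (the `deepparag/Aeona-Beta` id is taken from the example link above) and the 6-message window described in the Training section:

```python
# Assemble model input from the current message plus recent turns,
# following the "user input + previous 3 AI and human responses" tip.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepparag/Aeona-Beta"  # assumed from the example link above
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

history = []  # alternating human/AI turns, oldest first

def chat(user_input):
    global history
    # Keep a 6-message window: the new input plus the last 5 turns.
    turns = (history + [user_input])[-6:]
    # DialoGPT-style models expect turns separated by the EOS token.
    prompt = tokenizer.eos_token.join(turns) + tokenizer.eos_token
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output_ids = model.generate(
        input_ids,
        max_length=input_ids.shape[-1] + 60,
        pad_token_id=tokenizer.eos_token_id,
    )
    reply = tokenizer.decode(
        output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True
    )
    history = (turns + [reply])[-6:]
    return reply
```

Trimming `history` to the same window on every call keeps the prompt from growing past the point where, as noted above, extra context stops helping.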
 ## Evaluation
-
 Below is a comparison of Aeona vs. other baselines on the mixed dataset given above using automatic evaluation metrics.
 
-| Model | Perplexity
-|---|---|---
+| Model | Perplexity |
+|---|---|
 | Seq2seq Baseline [3] | 29.8 |
 | Wolf et al. [5] | 16.3 |
 | GPT-2 baseline | 99.5 |
 
@@ -47,6 +64,8 @@ Below is a comparison of Aeona vs. other baselines on the mixed dataset given above
 | **Aeona** | **7.9** |
 
 ## Usage
+
+
 Example:
 ```python
 from transformers import AutoTokenizer, AutoModelWithLMHead
 
 )
 
 # pretty print last output tokens from bot
-print("DumBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
+print("Aeona: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
 ```
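The hunks above only show fragments of the usage snippet, so here is a complete, runnable version of the same DialoGPT-style chat loop for reference. It is a sketch under assumptions: the `deepparag/Aeona-Beta` id comes from the discussion link above, the sampling settings are illustrative rather than the README's exact values, and `AutoModelForCausalLM` stands in for the deprecated `AutoModelWithLMHead`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint; the diff elides the exact name used in the README.
tokenizer = AutoTokenizer.from_pretrained("deepparag/Aeona-Beta")
model = AutoModelForCausalLM.from_pretrained("deepparag/Aeona-Beta")

chat_history_ids = None
for step in range(4):
    # Encode the new user message, terminated by the EOS token.
    new_user_input_ids = tokenizer.encode(
        input(">> User: ") + tokenizer.eos_token, return_tensors="pt"
    )
    # Append it to the running conversation history.
    bot_input_ids = (
        torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
        if step > 0
        else new_user_input_ids
    )
    # Generate a reply; sampling keeps the responses varied.
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8,
    )
    # Pretty print only the newly generated tokens from the bot.
    print("Aeona: {}".format(
        tokenizer.decode(
            chat_history_ids[:, bot_input_ids.shape[-1]:][0],
            skip_special_tokens=True,
        )
    ))
```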