---
pipeline_tag: text2text-generation
---

# Aeona

<!-- Provide a quick summary of what the model is/does. -->
![Aeona Banner](https://www.aeona.xyz/banner.png)

# Model Details

A generative AI based on [Google FLAN-T5 base](https://huggingface.co/google/flan-t5-base).

It is recommended to use Aeona together with an [AIML Chatbot](https://github.com/deepsarda/Aeona-Aiml) to reduce load, get better replies, and add a name and personality to your bot.
Using an AIML chatbot also lets you hardcode some replies.
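
Since the model is published as a `text2text-generation` checkpoint, it can be loaded with standard Hugging Face Transformers. A minimal sketch, assuming the checkpoint id `deepparag/Aeona`:

```python
# Minimal usage sketch (not an official example): load the seq2seq checkpoint
# and generate a single reply.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "deepparag/Aeona"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you today?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```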

## Model Description

Aeona is a chatbot that hopes to talk with humans as if it were a friend!
Its main target platform is Discord.
You can invite the bot [here](https://aeona.xyz).

To learn more about this project and chat with the AI, you can use this [website](https://aeona.xyz/).

Aeona works by first sending a query to the AIML chatbot; if the AIML chatbot is unable to answer, the query comes over here to ***apply some real brains***, so to speak.
The main reason for using Google FLAN is its strong performance at logical reasoning.
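
A hypothetical sketch of that AIML-first flow (the real bot's code may differ). Here `kernel` is assumed to be a `python-aiml` kernel loaded with the Aeona-Aiml rules, and `tokenizer`/`model` are the Transformers objects shown above:

```python
# Hypothetical routing sketch: try the AIML rules first and fall back to the
# generative model only when AIML has no answer. Not Aeona's actual code.
def reply(user_message, kernel, tokenizer, model):
    aiml_answer = kernel.respond(user_message)  # cheap, deterministic AIML/hardcoded reply
    if aiml_answer and aiml_answer.strip():
        return aiml_answer
    # AIML could not answer, so "apply some real brains": use the seq2seq model.
    inputs = tokenizer(user_message, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```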

# Participate and help the AI improve, or just hang out, at [Hugging Face discussions](https://huggingface.co/deepparag/Aeona/discussions)

## Goals
The goal is to create an AI that works together with AIML in order to be as human-like as possible.

#### Why not an AI on its own?
It is not realistically possible for the AI alone to learn about the user and store data on them, whereas an AIML chatbot can even execute code!
The goal of the AI is to generate responses where the AIML fails.

Hence the goal becomes to make an AI that has a wide variety of knowledge, yet is as small as possible!
So we use three datasets:
1. [Movielines](https://www.kaggle.com/Cornell-University/movie-dialog-corpus): the movie lines promote longer, more thought-out responses, but they can be very random. About 200k lines!
2. [Discord Messages](https://www.kaggle.com/jef1056/discord-data): messages on a wide variety of topics, filtered to remove spam. This makes the AI highly random, but gives it answers to everyday questions. About 120 million messages!
3. A custom dataset scraped from my messages. These messages are very narrow, so teaching only this dataset would make the AI send random replies and say sorry loads of times!

## Training
The Discord Messages dataset simply dwarfs the other datasets, hence the smaller datasets are repeated.
This leads to them covering each other's issues!
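
As a rough illustration of that balancing idea, the smaller corpora can simply be oversampled; a hypothetical sketch (the repetition factors and argument names are made up):

```python
# Hypothetical sketch of "repeat the smaller datasets": oversample the small
# corpora so the huge Discord corpus does not drown them out.
import random

def balance(discord_msgs, movie_lines, custom_msgs, movie_repeat=10, custom_repeat=50):
    mixed = list(discord_msgs) + list(movie_lines) * movie_repeat + list(custom_msgs) * custom_repeat
    random.shuffle(mixed)  # interleave the repeated examples
    return mixed
```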

The AI has a context of 6 messages, which means it takes into account the conversation up to the 4th message from the user.
[Example](https://huggingface.co/deepparag/Aeona-Beta/discussions/1)
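
As a rough illustration (not Aeona's documented preprocessing), a rolling window like the one below could pack the last 6 messages into a single text2text input; the separator is an assumption:

```python
# Hypothetical context packing: keep only the most recent 6 messages and join
# them into one input string for the seq2seq model. The "</s>" separator and
# exact formatting are assumptions, not Aeona's documented format.
def build_input(history, max_turns=6):
    recent = history[-max_turns:]  # rolling window of the last 6 messages
    return " </s> ".join(recent)

history = ["hi", "Hello!", "how are you?", "good, you?", "great!", "Nice to hear!", "what's up?"]
print(build_input(history))  # the oldest message ("hi") falls outside the window
```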