Karneus committed on
Commit
144851f
1 Parent(s): 7b40bb0

Renamed to Enhanced Roleplay AI with Memory and Interactivity

Enhanced Roleplay AI with Memory and Interactivity ADDED
@@ -0,0 +1,32 @@
+ from transformers import pipeline, set_seed
+
+ # Initialize a text-generation pipeline (the 'conversational' task would instead expect a Conversation object)
+ set_seed(42)
+ roleplay_bot = pipeline('text-generation', model='microsoft/DialoGPT-medium')
+
+ # Memory to store past interactions
+ memory = []
+
+ def update_memory(user_input, bot_response):
+     memory.append({"user": user_input, "bot": bot_response})
+
+ def get_memory_context():
+     context = ""
+     for interaction in memory[-5:]:  # limit memory to the last 5 interactions for simplicity
+         context += f"User: {interaction['user']}\nBot: {interaction['bot']}\n"
+     return context
+
+ def interact(user_input):
+     context = get_memory_context()
+     input_with_context = context + f"User: {user_input}\n"
+     bot_response = roleplay_bot(input_with_context, max_new_tokens=50, return_full_text=False)[0]['generated_text'].split('\n')[0].strip()  # keep the first line of the newly generated text
+     update_memory(user_input, bot_response)
+     return bot_response
+
+ # Example interaction
+ user_input = "Hi! How are you?"
+ print("User:", user_input)
+ bot_response = interact(user_input)
+ print("Bot:", bot_response)
+
+ # Continue with more interactions as needed
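
For reference, a minimal sketch of how the `interact()` helper above could drive a longer chat session. The loop below is illustrative only and is not part of the committed file; the "quit" exit command is an assumed convention.

```python
# Illustrative only: a simple REPL built on interact() from the script above.
# Assumes the script has been run so roleplay_bot, memory and interact() exist;
# typing "quit" (an assumed convention) ends the chat.
while True:
    user_input = input("User: ")
    if user_input.strip().lower() == "quit":
        break
    print("Bot:", interact(user_input))
```
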
README.md DELETED
@@ -1,54 +0,0 @@
- ---
- thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
- tags:
- - conversational
- license: mit
- ---
-
- ## A State-of-the-Art Large-scale Pretrained Response Generation Model (DialoGPT)
-
- DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations.
- The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the responses generated by DialoGPT are comparable to human response quality under a single-turn conversation Turing test.
- The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
-
- * Multi-turn generation examples from an interactive environment:
-
- | Role | Response |
- |------|----------|
- | User | Does money buy happiness? |
- | Bot | Depends how much money you spend on it . |
- | User | What is the best way to buy happiness ? |
- | Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
- | User | This is so difficult ! |
- | Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |
-
- Please find information about preprocessing, training, and the full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT).
-
- ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
-
- ### How to use
-
- Now we are ready to try out how the model works as a chatting partner!
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
- import torch
-
-
- tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
- model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
-
- # Let's chat for 5 lines
- for step in range(5):
-     # encode the new user input, add the eos_token and return a tensor in PyTorch
-     new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
-
-     # append the new user input tokens to the chat history
-     bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
-
-     # generate a response while limiting the total chat history to 1000 tokens
-     chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
-
-     # pretty print the last output tokens from the bot
-     print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
- ```