bohmian commited on
Commit
4fd4ce2
1 Parent(s): bd36580

Upload 2 files

Browse files
2_LLM_Agent_Countries_ESG_Comparison.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
3_[Experimental] Three_Agents_Countries_ESG_Comparison.ipynb ADDED
@@ -0,0 +1,795 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {
6
+ "id": "UwPf_0rc4KZJ"
7
+ },
8
+ "source": [
9
+ "# Notebook which Uses 3 Agents to Come to a Conclusion for a Query Regarding 2 Countries\n",
10
+ "\n",
11
+ "This is just an experimental notebook to explore interacting agents for the above purpose; it is not implemented in the Streamlit app for the reasons below.\n",
12
+ "\n",
13
+ "Idea: 3 agents are used. The first 2 agents are ESG policy experts, one for each of the 2 countries that the user selects; they contribute to the conversation by using their RetrievalQA tools to look up the relevant ESG policies of their own countries. The 3rd agent is a moderator without access to any tool, which ideally uses the first 2 agents' input to come to a conclusion about the user's query/topic of interest.\n",
14
+ "\n",
15
+ "Result: However, this doesn't work very well. Each of the agents tends to continue simulating the conversation for the other parties after giving its own input, despite being prompted not to do so. The moderator also tends to hallucinate no matter which temperature is used. For now, too many things can go wrong in the middle of the conversation between the 2 agents, and too many prompts are needed to try to get everything right.\n",
16
+ "\n",
17
+ "On the other hand, even with the current free LLM API, it is clear from the second notebook that a single agent is able to reason out all the steps needed to answer a query regarding 2 countries. Hence the previous notebook's approach of a single agent equipped with the RetrievalQA tool is the one eventually used in the app.\n",
18
+ "\n",
19
+ "Perhaps multiple agents like these could instead be used for a debate about ESG policies, where each agent tries to convince the other of its views."
20
+ ]
21
+ },
22
+ {
23
+ "cell_type": "code",
24
+ "execution_count": 1,
25
+ "metadata": {},
26
+ "outputs": [],
27
+ "source": [
28
+ "import os\n",
29
+ "from langchain.llms import HuggingFaceHub # for calling HuggingFace Inference API (free for our use case)\n",
30
+ "from langchain.embeddings import HuggingFaceEmbeddings # to let program know what embeddings the vector store was embedded in earlier\n",
31
+ "\n",
32
+ "# to set up the agent and tools which will be used to answer questions later\n",
33
+ "from langchain.agents import initialize_agent \n",
34
+ "from langchain.agents import tool # decorator so each function will be recognized as a tool\n",
35
+ "from langchain.chains.retrieval_qa.base import RetrievalQA # to answer questions from vector store retriever\n",
36
+ "# from langchain.chains.question_answering import load_qa_chain # to further customize qa chain if needed\n",
37
+ "from langchain.vectorstores import Chroma # vector store for retriever\n",
38
+ "import ast # to parse user string input to list for one of the tools (agent tools do not support 2 inputs)\n",
39
+ "#from langchain.memory import ConversationBufferMemory # not used as of now\n",
40
+ "import pickle # for loading the bm25 retriever\n",
41
+ "from langchain.retrievers import EnsembleRetriever # to combine the chroma and bm25 retrievers and rerank their results\n",
42
+ "\n",
43
+ "# for defining a generic LLMChain as a generic chat tool (if needed)\n",
44
+ "from langchain.prompts import PromptTemplate \n",
45
+ "from langchain.chains import LLMChain\n",
46
+ "\n",
47
+ "from langchain.schema import (AIMessage, HumanMessage, SystemMessage,) # for dialogue agent messages\n",
48
+ "\n",
49
+ "import warnings\n",
50
+ "warnings.filterwarnings(\"ignore\", category=FutureWarning)\n",
51
+ "warnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n",
52
+ "\n",
53
+ "# os.environ['HUGGINGFACEHUB_API_TOKEN'] = 'your_api_key' # for using HuggingFace Inference API\n",
54
+ "# alternatively, set your HUGGINGFACEHUB_API_TOKEN environment variable as shown in the line above"
55
+ ]
56
+ },
57
+ {
58
+ "cell_type": "code",
59
+ "execution_count": 2,
60
+ "metadata": {
61
+ "id": "JxAiGlN74KZM"
62
+ },
63
+ "outputs": [],
64
+ "source": [
65
+ "from typing import Callable, List\n",
66
+ "\n",
67
+ "from langchain.memory import ConversationBufferMemory"
68
+ ]
69
+ },
70
+ {
71
+ "cell_type": "markdown",
72
+ "metadata": {
73
+ "id": "9dJtw15E4KZN"
74
+ },
75
+ "source": [
76
+ "# Configuration"
77
+ ]
78
+ },
79
+ {
80
+ "cell_type": "code",
81
+ "execution_count": 3,
82
+ "metadata": {
83
+ "id": "skvoIwYU4KZO"
84
+ },
85
+ "outputs": [],
86
+ "source": [
87
+ "chunk_size = 1000 # choose one of [500, 600, 700, 800, 900, 1000, 1250, 1500, 1750, 2000, 2250, 2500, 2750, 3000]\n",
88
+ "chunk_overlap = 100 # choose one of [50, 100, 150, 200]\n",
89
+ "\n",
90
+ "chroma_n_similar_documents = 6 # number of chunks returned by chroma vector store retriever\n",
91
+ "bm25_n_similar_documents = 6 # number of chunks returned by bm25 retriever\n",
92
+ "\n",
93
+ "# llm for HuggingFace Inference API\n",
94
+ "# model = \"mistralai/Mistral-7B-Instruct-v0.2\"\n",
95
+ "model = \"mistralai/Mixtral-8x7B-Instruct-v0.1\"\n",
96
+ "\n",
97
+ "# which retrievers to use; if ensemble is chosen, both semantic and keyword similarity documents will be used and the results reranked\n",
98
+ "retriever_config = 'ensemble' # choose one of ['semantic', 'keyword', 'ensemble'] # ensemble means both\n",
99
+ "keyword_retriever_weight = 0.3 # choose between 0 and 1\n",
100
+ "semantic_retriever_weight = 1 - keyword_retriever_weight\n",
101
+ "\n",
102
+ "# llm configuration\n",
103
+ "temperature = 0.1\n",
104
+ "max_new_tokens = 500\n",
105
+ "\n",
106
+ "# in main app, add configuration for user to scrape new data from DuckDuckGo\n",
107
+ "# in main app, add configuration for user to upload PDF to override country's existing policies in vectorstore"
108
+ ]
109
+ },
110
+ {
111
+ "cell_type": "markdown",
112
+ "metadata": {},
113
+ "source": [
114
+ "# Conversation Topic\n",
115
+ "\n",
116
+ "Ideally this will be updated to match the user's query whenever a query is asked."
117
+ ]
118
+ },
119
+ {
120
+ "cell_type": "code",
121
+ "execution_count": 4,
122
+ "metadata": {},
123
+ "outputs": [],
124
+ "source": [
125
+ "topic = \"Compare the carbon emissions policies between Singapore and Malaysia.\"\n",
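+ "# One possible (purely illustrative) way to derive these from a free-text user query in the main app:\n",
+ "# topic = user_query\n",
+ "# countries_in_query = [c for c in countries if c.lower() in user_query.lower()] # 'countries' list is defined below\n",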
126
+ "countries_in_query = ['Singapore', 'Malaysia']"
127
+ ]
128
+ },
129
+ {
130
+ "cell_type": "markdown",
131
+ "metadata": {},
132
+ "source": [
133
+ "# Countries of Interest\n",
134
+ "\n",
135
+ "- This is the list of countries present in the vector store. Since the vector store was prepared in the previous Scrape_Web_and_Process_Data_for_Countries_ESG_Policies notebook, it should be the same list of countries as in that notebook. \n",
136
+ "- This list is used later in the RetrievalQA tool to validate the input: even if the country given to the tool is not in the vector store, the retriever would still filter the vector store on that country and return an empty result instead of raising an error, so we have to return the error manually to let the agent using the tool know.\n",
137
+ "- The countries were reduced to just 6 as generating the embeddings for all the chunks takes too long. However, having more countries **will not affect** the quality of the answers when comparing 2 countries in the RAG application, as the RAG only picks out document chunks for the 2 countries of interest.\n",
138
+ "\n",
139
+ "- In the main app, the user can choose which countries to scrape data for, and this list is adjusted accordingly."
140
+ ]
141
+ },
142
+ {
143
+ "cell_type": "code",
144
+ "execution_count": 5,
145
+ "metadata": {},
146
+ "outputs": [],
147
+ "source": [
148
+ "countries = [\n",
149
+ " \"Australia\",\n",
150
+ " \"China\",\n",
151
+ " \"Japan\",\n",
152
+ " \"Malaysia\",\n",
153
+ " \"Singapore\",\n",
154
+ " \"Germany\",\n",
155
+ " ]"
156
+ ]
157
+ },
158
+ {
159
+ "cell_type": "markdown",
160
+ "metadata": {},
161
+ "source": [
162
+ "# LLM from HuggingFace Inference Endpoint API"
163
+ ]
164
+ },
165
+ {
166
+ "cell_type": "code",
167
+ "execution_count": 6,
168
+ "metadata": {},
169
+ "outputs": [],
170
+ "source": [
171
+ "# This uses an inference endpoint API from huggingface; the model is not run locally, it is run on huggingface's servers\n",
172
+ "# It is a free API that is well suited for deploying online for quick testing, without users having to deploy a local LLM\n",
173
+ "llm = HuggingFaceHub(repo_id=model,model_kwargs={'temperature':temperature,\"max_new_tokens\":max_new_tokens})"
174
+ ]
175
+ },
176
+ {
177
+ "cell_type": "markdown",
178
+ "metadata": {},
179
+ "source": [
180
+ "# Initialize Chroma Vector Store\n",
181
+ "\n",
182
+ "As described in the Scrape_Web_and_Process_Data_for_Countries_ESG_Policies notebook, Chromadb vector stores have **already been pre-created** for the countries above for each of the different chunk sizes and overlaps, to save time when experimenting, since the embeddings take a long time to generate. When using the existing vector stores, we just need to change the name of the persist directory to select a different chunk size and overlap.\n",
183
+ "\n",
184
+ "Not in this notebook: Later in the main app, if the **user chooses to scrape new data or override with their own PDF, a new chromadb directory will be created** and the chroma_db parameter will be changed accordingly."
185
+ ]
186
+ },
187
+ {
188
+ "cell_type": "code",
189
+ "execution_count": 7,
190
+ "metadata": {},
191
+ "outputs": [
192
+ {
193
+ "name": "stderr",
194
+ "output_type": "stream",
195
+ "text": [
196
+ "D:\\Machine Learning Projects\\m\\llm\\venv\\lib\\site-packages\\transformers\\utils\\generic.py:311: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.\n",
197
+ " torch.utils._pytree._register_pytree_node(\n"
198
+ ]
199
+ }
200
+ ],
201
+ "source": [
202
+ "# We use HuggingFaceEmbeddings() as it is open source and free to use.\n",
203
+ "# Initialize the default hf model for embedding the tokenized texts into vectors with semantic meanings\n",
204
+ "hf_embeddings = HuggingFaceEmbeddings()\n",
205
+ "persist_directory = f\"chromadb/chromadb_esg_countries_chunk_{chunk_size}_overlap_{chunk_overlap}\"\n",
206
+ "\n",
207
+ "# in the main app, this chroma_db persist directory will be changed according to the user's configuration\n",
208
+ "chroma_db = Chroma(persist_directory=persist_directory,embedding_function=hf_embeddings)"
209
+ ]
210
+ },
211
+ {
212
+ "cell_type": "markdown",
213
+ "metadata": {},
214
+ "source": [
215
+ "# Initialize BM25 Retriever\n",
216
+ "\n",
217
+ "- BM25 is a keyword-based algorithm that performs well on queries containing keywords, without capturing the semantic meaning of the query terms. Hence there is no need to embed the text with HuggingFaceEmbeddings, and it is relatively faster to set up. Retrievers for the different chunk sizes, overlaps and countries were saved as pickle files in the earlier notebook.\n",
218
+ "\n",
219
+ "- We need to initialize one BM25Retriever for each country so that the search results later in the main app can be limited to just a particular country. (Chroma DB gives an option to filter metadata for a particular country during the retrieval process, but BM25 does not because it makes use of an external ranking library.) A separate retriever was saved for each country in the earlier notebook."
220
+ ]
221
+ },
222
+ {
223
+ "cell_type": "code",
224
+ "execution_count": 8,
225
+ "metadata": {},
226
+ "outputs": [],
227
+ "source": [
228
+ "bm25_retrievers = {}\n",
229
+ "\n",
230
+ "for country in countries:\n",
231
+ " bm25_filename = f\"bm25/bm25_esg_countries_{country}_chunk_{chunk_size}_overlap_{chunk_overlap}.pickle\"\n",
232
+ " with open(bm25_filename, 'rb') as handle:\n",
233
+ " bm25_retriever = pickle.load(handle) \n",
234
+ " bm25_retriever.k = bm25_n_similar_documents\n",
235
+ " bm25_retrievers[country] = bm25_retriever"
236
+ ]
237
+ },
238
+ {
239
+ "cell_type": "markdown",
240
+ "metadata": {},
241
+ "source": [
242
+ "# Potential Tools for LLM Agent to Use\n",
243
+ "\n",
244
+ "Please see comments above each tool. The first tool is the most important one."
245
+ ]
246
+ },
247
+ {
248
+ "cell_type": "code",
249
+ "execution_count": 9,
250
+ "metadata": {},
251
+ "outputs": [],
252
+ "source": [
253
+ "# The most important tool is the first one, which uses a RetrievalQA chain to answer a question about a specific country's ESG policies, \n",
254
+ "# e.g. carbon emissions policy of Singapore. \n",
255
+ "# By calling this tool multiple times, the agent is able to look at the responses from this tool for both countries and compare them. \n",
256
+ "# This is far better than just retrieving relevant chunks for the user's query and throwing everything to a single RetrievalQA chain to process\n",
257
+ "# Multi input tools are not available, hence we have to prompt the agent to give an input list as a string\n",
258
+ "# then use ast.literal_eval to convert it back into a list\n",
259
+ "@tool \n",
260
+ "def retrieve_answer_for_country(query_and_country: str) -> str: # TODO: experiment with different chain types, which give different versions of answers\n",
261
+ " \"\"\"Gives answer to a query about a single country's public ESG policy.\n",
262
+ " The input list should be of the following format:\n",
263
+ " [query, country]\n",
264
+ " The first element of the list is the user query, surrounded by double quotes.\n",
265
+ " The second element is the full name of the country involved, surrounded by double quotes, for example \"Singapore\".\n",
266
+ " The 2 inputs are separated by a comma. Do not write a list comprehension.\n",
267
+ " The 2 inputs, together, are surrounded by square brackets as it is a list.\n",
268
+ " Do not put multiple countries into the input at once. Instead use this tool multiple times, one time for each country.\n",
269
+ " If you have multiple queries to ask about a country, break the query into separate parts and use this tool multiple times, one for each query.\n",
270
+ " \"\"\"\n",
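+ " # For illustration (matching the format described in the docstring above): the agent is expected to pass\n",
+ " # a Python list literal as a single string, e.g. '[\"What are the carbon emissions policies?\", \"Singapore\"]',\n",
+ " # which ast.literal_eval below converts back into a [query, country] list.\n",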
271
+ " try:\n",
272
+ " query_and_country_list = ast.literal_eval(query_and_country)\n",
273
+ " query = query_and_country_list[0]\n",
274
+ " country = query_and_country_list[1].capitalize() # in case LLM did not capitalize first letter as filtering for metadata is case sensitive\n",
275
+ " if not country in countries:\n",
276
+ " return \"\"\"The country that you input into the tool cannot be found. \n",
277
+ " If you did not make a mistake and the country that you input is indeed what the user asked,\n",
278
+ " then there is no record for the country and no answer can be obtained.\"\"\"\n",
279
+ "\n",
280
+ " # different retrievers\n",
281
+ " bm = bm25_retrievers[country] # keyword based\n",
282
+ " bm.k = bm25_n_similar_documents \n",
283
+ " chroma = chroma_db.as_retriever(search_kwargs={'filter': {'country':country}, 'k': chroma_n_similar_documents}) # semantic\n",
284
+ " ensemble = EnsembleRetriever(retrievers=[bm, chroma], weights=[keyword_retriever_weight, semantic_retriever_weight]) # ensemble reranks results from both retrievers\n",
285
+ " retrievers = {'ensemble': ensemble, 'semantic': chroma, 'keyword': bm} \n",
286
+ " \n",
287
+ " qa = RetrievalQA.from_chain_type(\n",
288
+ " llm=llm,\n",
289
+ " chain_type='stuff',\n",
290
+ " retriever=retrievers[retriever_config], # selected retriever based on user config\n",
291
+ " )\n",
292
+ " result = qa.run(query)\n",
293
+ " return result\n",
294
+ " except Exception as e:\n",
295
+ " return f\"\"\"There is an error using this tool: {e}. Check if you have input anything wrongly and try again. \n",
296
+ " If you have already tried 2 times, do not try anymore, there is no answer for your input. \n",
297
+ " Move on to the next step of your plan.\"\"\"\n",
298
+ "\n",
299
+ "# if a user tries to casually chat with the agent chatbot, the LLM will be able to use this tool to reply instead\n",
300
+ "# this is optional, better to let user's know the chatbot is not for casual chatting\n",
301
+ "@tool\n",
302
+ "def generic_chat_llm(query: str) -> str:\n",
303
+ " \"\"\"Use this tool for general queries and casual chat. Forward the user input directly into this tool, do not come up with your own input.\n",
304
+ " This tool IS NOT FOR MAKING COMPARISONS of anything.\n",
305
+ " This tool IS NOT FOR FINDING ESG POLICY of any country!\n",
306
+ " It is only for casual chat! Do not use this tool unnecessarily!\n",
307
+ " \"\"\"\n",
308
+ " try:\n",
309
+ " # Second Generic Tool\n",
310
+ " prompt = PromptTemplate(\n",
311
+ " input_variables=[\"query\"],\n",
312
+ " template=\"{query}\"\n",
313
+ " )\n",
314
+ " \n",
315
+ " llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
316
+ " return llm_chain.run(query)\n",
317
+ "\n",
318
+ " except Exception as e:\n",
319
+ " return f\"\"\"There is an error using this tool: {e}. Check if you have input anything wrongly and try again. \n",
320
+ " If you have already tried 2 times, do not try anymore, there is no response for your input.\n",
321
+ " Move on to the next step of your plan.\"\"\"\n",
322
+ "\n",
323
+ "# sometimes the agent will suddenly ask for a 'compare' tool even though it was not given this tool\n",
324
+ "# hence I have decided to give it this tool that gives a prompt to remind it to look at past information\n",
325
+ "# and decide whether it is time to draw a conclusion\n",
326
+ "# tools cannot have no input, hence I let the agent input a 'query' parameter even though it is not used\n",
327
+ "# having the query as input lets the LLM 'recall' what is being asked \n",
328
+ "# instead of it being lost all the way at the start of the ReAct process\n",
329
+ "# this is optional\n",
330
+ "@tool\n",
331
+ "def compare(query:str) -> str: \n",
332
+ " \"\"\"Use this tool to give you hints and instructions on how you can compare between policies of countries.\n",
333
+ " Use this tool only at one of your final steps, do not use it at the start.\n",
334
+ " When putting the query into this tool, look at the entire query that the user has asked at the start,\n",
335
+ " do not leave any details in the query out.\n",
336
+ " \"\"\"\n",
337
+ " return f\"\"\"Once again, check through all your previous observations to answer the user query. \n",
338
+ " Make sure every part of the query is addressed by the context, or that you have at least tried to do so.\n",
339
+ " Make sure you have not forgotten to address anything in the query.\n",
340
+ " If you still need more details, you can use another tool to find out more if you have not tried using the same tool with the necessary input earlier. \n",
341
+ " If you have enough information, use your reasoning to answer them to the best of your ability. \n",
342
+ " Give as much elaboration in your answer as possible but they MUST be from the earlier context.\n",
343
+ " Do not give details that cannot be found in the earlier context.\"\"\"\n",
344
+ "\n",
345
+ "\n",
346
+ "tools = [retrieve_answer_for_country] # tools that the dialogueAgentWithTools below will use"
347
+ ]
348
+ },
349
+ {
350
+ "cell_type": "markdown",
351
+ "metadata": {
352
+ "id": "bIRlTdk84KZO"
353
+ },
354
+ "source": [
355
+ "## DialogueAgent, DialogueSimulator and DialogueAgentWithTools classes\n",
356
+ "\n",
357
+ "These are agents which will speak to one another. These are adapted from the Langchain cookbook [Multi-Player Authoritarian Speaker Selection](https://python.langchain.com/en/latest/use_cases/agent_simulations/multiagent_authoritarian.html).\n",
358
+ "\n",
359
+ "The DialogueAgent class has 2 main methods:\n",
360
+ "\n",
361
+ "- send(): applies the chat model to the message history and returns the message string generated by the LLM.\n",
362
+ "- receive(name, message): adds the message spoken by name to the message history.\n",
363
+ "\n",
364
+ "\n",
365
+ "The DialogueSimulator class takes a list of agents. At each step, it performs the following:\n",
366
+ "\n",
367
+ "- Selects the next speaker\n",
368
+ "- Calls the next speaker to send a message\n",
369
+ "- Broadcasts the message to all other agents\n",
370
+ "- Updates the step counter"
371
+ ]
372
+ },
373
+ {
374
+ "cell_type": "code",
375
+ "execution_count": 10,
376
+ "metadata": {
377
+ "id": "wl-goiWJ4KZO"
378
+ },
379
+ "outputs": [],
380
+ "source": [
381
+ "class DialogueAgent:\n",
382
+ " def __init__(\n",
383
+ " self,\n",
384
+ " name: str,\n",
385
+ " system_message: SystemMessage,\n",
386
+ " model: llm,\n",
387
+ " ) -> None:\n",
388
+ " self.name = name\n",
389
+ " self.system_message = system_message\n",
390
+ " self.model = model\n",
391
+ " self.prefix = f\"{self.name}: \"\n",
392
+ " self.reset()\n",
393
+ "\n",
394
+ " def reset(self):\n",
395
+ " self.message_history = [\"Here is the conversation so far.\"]\n",
396
+ "\n",
397
+ " def send(self) -> str:\n",
398
+ " \"\"\"\n",
399
+ " Applies the chatmodel to the message history\n",
400
+ " and returns the message string\n",
401
+ " \"\"\"\n",
402
+ " message = self.model(\n",
403
+ " \"\\n\".join([self.system_message.content] + self.message_history + [self.prefix]))\n",
404
+ " return message\n",
405
+ "\n",
406
+ " def receive(self, name: str, message: str) -> None:\n",
407
+ " \"\"\"\n",
408
+ " Concatenates {message} spoken by {name} into message history\n",
409
+ " \"\"\"\n",
410
+ " self.message_history.append(f\"{name}: {message}\")\n",
411
+ "\n",
412
+ "\n",
413
+ "class DialogueSimulator:\n",
414
+ " def __init__(\n",
415
+ " self,\n",
416
+ " agents: List[DialogueAgent],\n",
417
+ " selection_function: Callable[[int, List[DialogueAgent]], int],\n",
418
+ " ) -> None:\n",
419
+ " self.agents = agents\n",
420
+ " self._step = 0\n",
421
+ " self.select_next_speaker = selection_function\n",
422
+ "\n",
423
+ " def reset(self):\n",
424
+ " for agent in self.agents:\n",
425
+ " agent.reset()\n",
426
+ "\n",
427
+ " def inject(self, name: str, message: str):\n",
428
+ " \"\"\"\n",
429
+ " Initiates the conversation with a {message} from {name}\n",
430
+ " \"\"\"\n",
431
+ " for agent in self.agents:\n",
432
+ " agent.receive(name, message)\n",
433
+ "\n",
434
+ " # increment time\n",
435
+ " self._step += 1\n",
436
+ "\n",
437
+ " def step(self) -> tuple[str, str]:\n",
438
+ " # 1. choose the next speaker\n",
439
+ " speaker_idx = self.select_next_speaker(self._step, self.agents)\n",
440
+ " speaker = self.agents[speaker_idx]\n",
441
+ " print(f\"\\n\\n############ {speaker.name} is thinking.....\")\n",
442
+ "\n",
443
+ " # 2. next speaker sends message\n",
444
+ " message = speaker.send()\n",
445
+ "\n",
446
+ " # 3. everyone receives message\n",
447
+ " for receiver in self.agents:\n",
448
+ " receiver.receive(speaker.name, message)\n",
449
+ "\n",
450
+ " # 4. increment time\n",
451
+ " self._step += 1\n",
452
+ "\n",
453
+ " return speaker.name, message\n",
454
+ "\n",
455
+ "\n",
456
+ "class DialogueAgentWithTools(DialogueAgent): # inherit from DialogueAgent class but with tools\n",
457
+ " def __init__(\n",
458
+ " self,\n",
459
+ " name: str,\n",
460
+ " system_message: SystemMessage,\n",
461
+ " #model: MyHuggingFaceModel(),\n",
462
+ " model: llm,\n",
463
+ " tool_names: List[str],\n",
464
+ " **tool_kwargs,\n",
465
+ " ) -> None:\n",
466
+ " super().__init__(name, system_message, model)\n",
467
+ " #self.tools = load_tools(tool_names, **tool_kwargs)\n",
468
+ " self.tools = tool_names\n",
469
+ "\n",
470
+ " def send(self) -> str:\n",
471
+ " \"\"\"\n",
472
+ " Applies the chatmodel to the message history\n",
473
+ " and returns the message string\n",
474
+ " \"\"\"\n",
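+ " # Note: a fresh zero-shot ReAct agent is initialized on every send(); the system message plus the full\n",
+ " # message history is passed in as a single input string for the agent to reason over with its tools.\n",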
475
+ " agent_chain = initialize_agent(\n",
476
+ " self.tools,\n",
477
+ " self.model,\n",
478
+ " agent = \"zero-shot-react-description\",\n",
479
+ " #agent = \"chat-zero-shot-react-description\",\n",
480
+ " verbose=True,\n",
481
+ " # memory=ConversationBufferMemory(\n",
482
+ " # memory_key=\"chat_history\", return_messages=True\n",
483
+ " # ),\n",
484
+ " #max_iterations=2\n",
485
+ " )\n",
486
+ " message = AIMessage(\n",
487
+ " content=agent_chain.run(\n",
488
+ " input=\"\\n\".join(\n",
489
+ " [self.system_message.content] + self.message_history + [self.prefix]\n",
490
+ " )\n",
491
+ " )\n",
492
+ " )\n",
493
+ "\n",
494
+ " return message.content"
495
+ ]
496
+ },
497
+ {
498
+ "cell_type": "markdown",
499
+ "metadata": {
500
+ "id": "M26YKubf4KZP"
501
+ },
502
+ "source": [
503
+ "# Define Names of Each Agent\n",
504
+ "\n",
505
+ "Each speaker agent is the expert on its own country's ESG policies and has access to a tool to find out more about that country's policies. The moderator has no access to any tool."
506
+ ]
507
+ },
508
+ {
509
+ "cell_type": "code",
510
+ "execution_count": 11,
511
+ "metadata": {
512
+ "id": "ILbRwM1a4KZP"
513
+ },
514
+ "outputs": [],
515
+ "source": [
516
+ "# speaker agents names\n",
517
+ "speaker_names = [f\"{country} Public ESG Policies Searcher\" for country in countries_in_query]\n",
518
+ "moderator_name = \"Moderator\""
519
+ ]
520
+ },
521
+ {
522
+ "cell_type": "markdown",
523
+ "metadata": {
524
+ "id": "JMbpuV3F4KZR"
525
+ },
526
+ "source": [
527
+ "# Generate System Messages (Role Descriptions for Each Agent)\n",
528
+ "\n",
529
+ "This is used to describe the role of each agent. Despite the descriptions and prompts below, the agents still tend to simulate the conversation for other members in the chat or try to draw conclusions on their own."
530
+ ]
531
+ },
532
+ {
533
+ "cell_type": "code",
534
+ "execution_count": 12,
535
+ "metadata": {
536
+ "id": "jt4rhtEt4KZR"
537
+ },
538
+ "outputs": [],
539
+ "source": [
540
+ "conversation_description = f\"\"\"Here is the topic of conversation: {topic}\n",
541
+ "The participants are: {(', ').join(speaker_names)}\"\"\"\n",
542
+ "\n",
543
+ "def generate_system_message(name, country):\n",
544
+ " return f\"\"\"Here is the topic of conversation: {topic}\n",
545
+ "\n",
546
+ " Your name is {name}. \n",
547
+ " You ONLY give information about this country: {country}.\n",
548
+ " \n",
549
+ " Your description is as follows:\n",
550
+ " \n",
551
+ " Your goal is to contribute meaningfully to an ESG discussion \n",
552
+ " and inform the other parties about the relevant ESG policies of your country, NOTHING MORE.\n",
553
+ " You only give information about your country. You can only obtain information for your country by looking them up with your tool.\n",
554
+ " If no information is returned by your tool, say you do not know and cannot obtain the information. \n",
555
+ " Do not come up with information yourself without using a tool.\n",
556
+ " Do not try to look up information of another country.\n",
557
+ " Do not speak about another country.\n",
558
+ " Stop speaking once you have arrived at a final answer.\n",
559
+ " Do not come up with what other speakers will say.\n",
560
+ " Do not draw any comparisons or conclusions yourself.\n",
561
+ " Stop speaking the moment you finish speaking from your perspective. \n",
562
+ " \"\"\"\n",
563
+ "\n",
564
+ "agent_system_messages = {\n",
565
+ " name: generate_system_message(name, country)\n",
566
+ " for name, country in zip(speaker_names, countries_in_query)\n",
567
+ "}"
568
+ ]
569
+ },
570
+ {
571
+ "cell_type": "markdown",
572
+ "metadata": {},
573
+ "source": [
574
+ "### Role Description for Moderator"
575
+ ]
576
+ },
577
+ {
578
+ "cell_type": "code",
579
+ "execution_count": 13,
580
+ "metadata": {
581
+ "colab": {
582
+ "base_uri": "https://localhost:8080/"
583
+ },
584
+ "id": "bMjiwBaM4KZS",
585
+ "outputId": "3d833bf5-511b-44fa-f745-de7058a44f28"
586
+ },
587
+ "outputs": [],
588
+ "source": [
589
+ "moderator_system_message = f\"\"\"{conversation_description}\n",
590
+ " You are a moderator moderating a conversation described above.\n",
591
+ " You can speak directly to each of the participants: {(', ').join(speaker_names)}.\n",
592
+ " All your knowledge and information comes from the context above.\n",
593
+ " You do not have any knowledge yourself, so DO NOT add in any knowledge without referencing the message history.\n",
594
+ " Do not simulate any conversations between speakers.\n",
595
+ " \n",
596
+ " When you have enough information FROM THE MESSAGE HISTORY to conclude about the topic, \n",
597
+ " give a conclusion about the topic and then inform all speakers to stop speaking by ending specifically with \"No further elaboration is needed\".\n",
598
+ " If you need more information, wait for other participants to speak.\n",
599
+ " \n",
600
+ " Stop speaking the moment you finish speaking from your perspective. \n",
601
+ " Do not say anything else after that. \n",
602
+ " Do not speak for any other speakers. \n",
603
+ " Once again, do not simulate any conversations between speakers.\n",
604
+ " \"\"\""
605
+ ]
606
+ },
607
+ {
608
+ "cell_type": "markdown",
609
+ "metadata": {
610
+ "id": "syQ6Mpg74KZS"
611
+ },
612
+ "source": [
613
+ "# Set Up the 3 Agents"
614
+ ]
615
+ },
616
+ {
617
+ "cell_type": "code",
618
+ "execution_count": 14,
619
+ "metadata": {
620
+ "id": "22I4D9Lb4KZS"
621
+ },
622
+ "outputs": [],
623
+ "source": [
624
+ "agents = [\n",
625
+ " DialogueAgentWithTools(\n",
626
+ " name=name,\n",
627
+ " system_message=SystemMessage(content=system_message),\n",
628
+ " model=llm,\n",
629
+ " tool_names=tools,\n",
630
+ " )\n",
631
+ " for name, system_message in agent_system_messages.items()\n",
632
+ "]\n",
633
+ "\n",
634
+ "agents += [\n",
635
+ " DialogueAgent(\n",
636
+ " name=\"Moderator\",\n",
637
+ " system_message=SystemMessage(content=moderator_system_message),\n",
638
+ " model=llm,\n",
639
+ " )\n",
640
+ "]"
641
+ ]
642
+ },
643
+ {
644
+ "cell_type": "code",
645
+ "execution_count": 15,
646
+ "metadata": {
647
+ "id": "y2LfHH4h4KZS"
648
+ },
649
+ "outputs": [],
650
+ "source": [
651
+ "# logic to select the next agent to speak\n",
652
+ "def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int:\n",
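+ " # simple round robin over the agents list: country1 agent, country2 agent, moderator, then repeat\n",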
653
+ " idx = (step) % len(agents)\n",
654
+ " return idx"
655
+ ]
656
+ },
657
+ {
658
+ "cell_type": "markdown",
659
+ "metadata": {},
660
+ "source": [
661
+ "## Start the Conversation\n",
662
+ "\n",
663
+ "The speaking sequence is: country1 agent, country2 agent, then the moderator, and repeat. \n",
664
+ "\n",
665
+ "The moderator is told to end with \"No further elaboration is needed\" once it has enough information to draw a conclusion about the topic; upon seeing this phrase, the while loop below stops.\n",
666
+ "\n",
667
+ "The maximum number of iterations is 9, which gives each agent the chance to speak 3 times before the moderator concludes."
668
+ ]
669
+ },
670
+ {
671
+ "cell_type": "code",
672
+ "execution_count": 16,
673
+ "metadata": {
674
+ "colab": {
675
+ "base_uri": "https://localhost:8080/",
676
+ "height": 1000
677
+ },
678
+ "id": "7mJGufEU4KZS",
679
+ "outputId": "1320046c-d243-4338-a582-f6310084a179"
680
+ },
681
+ "outputs": [
682
+ {
683
+ "name": "stdout",
684
+ "output_type": "stream",
685
+ "text": [
686
+ "\n",
687
+ "\n",
688
+ "\n",
689
+ "\n",
690
+ "############ Singapore Public ESG Policies Searcher is thinking.....\n",
691
+ "\n",
692
+ "\n",
693
+ "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
694
+ "\u001b[32;1m\u001b[1;3m I need to find out about Singapore's carbon emissions policies.\n",
695
+ "Action: retrieve_answer_for_country\n",
696
+ "Action Input: [\"What are the carbon emissions policies in Singapore?\", \"Singapore\"]\u001b[0m\n",
697
+ "Observation: \u001b[36;1m\u001b[1;3m Singapore has implemented various policies to address carbon emissions. The country has updated its Nationally Determined Contribution (NDC) to reduce emissions to around 60 million tonnes of carbon dioxide equivalent (MtCO2e) in 2030 after peaking emissions earlier. Singapore has also raised its national climate target to achieve net zero by 2050. The Ministry of Trade & Industry launched Singapore’s National Hydrogen Strategy to provide a roadmap on utilizing hydrogen as a decarbonization pathway. The Monetary Authority of Singapore established the Sustainable Finance Advisory Panel to help guide MAS on its strategies and initiatives to build a credible and vibrant sustainable finance ecosystem. Manufacturing sectors like electronics, precision engineering, energy and chemicals, aerospace, and logistics aim to grow value-added manufacturing by 50% from 2020 to 2030. Singapore has also signed the Green Economy Agreement with Australia to foster common rules and standards to promote trade and investment in green goods and services. The public sector has committed to achieving net zero emissions around 2045, five years ahead of Singapore’s national target of net zero by 2050, reducing energy and water use by 10% and waste disposed of by 30% respectively in 2030. The public sector has put in measures to green buildings and vehicle fleets, reduce waste, and increase solar energy adoption.\u001b[0m\n",
698
+ "Thought:\u001b[32;1m\u001b[1;3m I now know the carbon emissions policies in Singapore.\n",
699
+ "Final Answer: Singapore has implemented various policies to address carbon emissions. The country has updated its Nationally Determined Contribution (NDC) to reduce emissions to around 60 million tonnes of carbon dioxide equivalent (MtCO2e) in 2030 after peaking emissions earlier. Singapore has also raised its national climate target to achieve net zero by 2050. The Ministry of Trade & Industry launched Singapore’s National Hydrogen Strategy to provide a roadmap on utilizing hydrogen as a decarbonization pathway. The Monetary Authority of Singapore established the Sustainable Finance Advisory Panel to help guide MAS on its strategies and initiatives to build a credible and vibrant sustainable finance ecosystem. Manufacturing sectors like electronics, precision engineering, energy and chemicals, aerospace, and logistics aim to grow value-added manufacturing by 50% from 2020 to 2030. Singapore has also signed the Green Economy Agreement with Australia to foster common rules and standards to promote trade and investment in green goods and services. The public sector has committed to achieving net zero emissions around 2045, five years ahead of Singapore’s national target of net zero by 2050, reducing energy and water use by 10% and waste disposed of by 30% respectively in 2030. The public sector has put in measures to green buildings and vehicle fleets, reduce waste, and increase solar energy adoption.\u001b[0m\n",
700
+ "\n",
701
+ "\u001b[1m> Finished chain.\u001b[0m\n",
702
+ "Singapore Public ESG Policies Searcher Speaks: Singapore has implemented various policies to address carbon emissions. The country has updated its Nationally Determined Contribution (NDC) to reduce emissions to around 60 million tonnes of carbon dioxide equivalent (MtCO2e) in 2030 after peaking emissions earlier. Singapore has also raised its national climate target to achieve net zero by 2050. The Ministry of Trade & Industry launched Singapore’s National Hydrogen Strategy to provide a roadmap on utilizing hydrogen as a decarbonization pathway. The Monetary Authority of Singapore established the Sustainable Finance Advisory Panel to help guide MAS on its strategies and initiatives to build a credible and vibrant sustainable finance ecosystem. Manufacturing sectors like electronics, precision engineering, energy and chemicals, aerospace, and logistics aim to grow value-added manufacturing by 50% from 2020 to 2030. Singapore has also signed the Green Economy Agreement with Australia to foster common rules and standards to promote trade and investment in green goods and services. The public sector has committed to achieving net zero emissions around 2045, five years ahead of Singapore’s national target of net zero by 2050, reducing energy and water use by 10% and waste disposed of by 30% respectively in 2030. The public sector has put in measures to green buildings and vehicle fleets, reduce waste, and increase solar energy adoption.\n",
703
+ "\n",
704
+ "\n",
705
+ "\n",
706
+ "\n",
707
+ "############ Malaysia Public ESG Policies Searcher is thinking.....\n",
708
+ "\n",
709
+ "\n",
710
+ "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
711
+ "\u001b[32;1m\u001b[1;3m I need to find out about Malaysia's carbon emissions policies.\n",
712
+ "Action: retrieve_answer_for_country\n",
713
+ "Action Input: [\"What are Malaysia's carbon emissions policies?\", \"Malaysia\"]\u001b[0m\n",
714
+ "Observation: \u001b[36;1m\u001b[1;3m Malaysia has committed to reducing greenhouse gas emissions to 45% of GDP by 2030 in line with the Paris Agreement. The country also aims to achieve carbon neutrality by 2050 and has proposed a tax deduction of up to RM300,000 for companies that invest in measurement reporting and verification (MRV) related to the development of carbon projects. These expenses can be deducted from the sale of carbon credits traded on the Bursa Carbon Exchange (BCX). Additionally, Malaysia has emphasized waste management practices, learning from global trends and experiences, to further enhance its ESG efforts.\u001b[0m\n",
715
+ "Thought:\u001b[32;1m\u001b[1;3m I now know Malaysia's carbon emissions policies.\n",
716
+ "Final Answer: Malaysia has committed to reducing greenhouse gas emissions to 45% of GDP by 2030 in line with the Paris Agreement. The country also aims to achieve carbon neutrality by 2050 and has proposed a tax deduction of up to RM300,000 for companies that invest in measurement reporting and verification (MRV) related to the development of carbon projects. These expenses can be deducted from the sale of carbon credits traded on the Bursa Carbon Exchange (BCX). Additionally, Malaysia has emphasized waste management practices, learning from global trends and experiences, to further enhance its ESG efforts.\u001b[0m\n",
717
+ "\n",
718
+ "\u001b[1m> Finished chain.\u001b[0m\n",
719
+ "Malaysia Public ESG Policies Searcher Speaks: Malaysia has committed to reducing greenhouse gas emissions to 45% of GDP by 2030 in line with the Paris Agreement. The country also aims to achieve carbon neutrality by 2050 and has proposed a tax deduction of up to RM300,000 for companies that invest in measurement reporting and verification (MRV) related to the development of carbon projects. These expenses can be deducted from the sale of carbon credits traded on the Bursa Carbon Exchange (BCX). Additionally, Malaysia has emphasized waste management practices, learning from global trends and experiences, to further enhance its ESG efforts.\n",
720
+ "\n",
721
+ "\n",
722
+ "\n",
723
+ "\n",
724
+ "############ Moderator is thinking.....\n",
725
+ "Moderator Speaks: \n",
726
+ "Singapore Public ESG Policies Searcher, Malaysia Public ESG Policies Searcher, thank you for sharing your knowledge on the carbon emissions policies of your respective countries.\n",
727
+ "Based on the information provided, it appears that both Singapore and Malaysia have committed to reducing greenhouse gas emissions and achieving carbon neutrality by 2050.\n",
728
+ "Singapore has implemented various policies to address carbon emissions, including updating its NDC, establishing a National Hydrogen Strategy, and launching initiatives to build a sustainable finance ecosystem. The public sector in Singapore has also committed to achieving net zero emissions around 2045, reducing energy and water use, and increasing solar energy adoption.\n",
729
+ "On the other hand, Malaysia has committed to reducing greenhouse gas emissions to 45% of GDP by 2030 and achieving carbon neutrality by 2050. The country has also proposed a tax deduction for companies that invest in MRV related to the development of carbon projects and has emphasized waste management practices.\n",
730
+ "In conclusion, both Singapore and Malaysia have taken significant steps to address carbon emissions and achieve carbon neutrality by 2050. However, the specific policies and initiatives implemented by each country differ.\n",
731
+ "No further elaboration is needed.\n"
732
+ ]
733
+ }
734
+ ],
735
+ "source": [
736
+ "# sequence of speaking is country1 agent, country2 agent then moderator and repeat\n",
737
+ "max_iters = 9 # provides 3 rounds of conversation for all 3 parties, each round ending with the moderator\n",
738
+ "n = 0\n",
739
+ "\n",
740
+ "simulator = DialogueSimulator(agents=agents, selection_function=select_next_speaker)\n",
741
+ "simulator.reset()\n",
742
+ "print(\"\\n\")\n",
743
+ "\n",
744
+ "while n < max_iters:\n",
745
+ " name, message = simulator.step()\n",
746
+ " print(f\"{name} Speaks: {message}\")\n",
747
+ " #The moderator is told to end off with \"No further elaboration is needed\" if it has enough information to make a conclusion about the topic.\n",
748
+ " if \"No further elaboration is needed\" in message:\n",
749
+ " break\n",
750
+ " print(\"\\n\")\n",
751
+ " n += 1"
752
+ ]
753
+ },
754
+ {
755
+ "cell_type": "code",
756
+ "execution_count": null,
757
+ "metadata": {
758
+ "id": "WqOhge7cyQpC"
759
+ },
760
+ "outputs": [],
761
+ "source": []
762
+ },
763
+ {
764
+ "cell_type": "code",
765
+ "execution_count": null,
766
+ "metadata": {},
767
+ "outputs": [],
768
+ "source": []
769
+ }
770
+ ],
771
+ "metadata": {
772
+ "colab": {
773
+ "provenance": []
774
+ },
775
+ "kernelspec": {
776
+ "display_name": "Python 3 (ipykernel)",
777
+ "language": "python",
778
+ "name": "python3"
779
+ },
780
+ "language_info": {
781
+ "codemirror_mode": {
782
+ "name": "ipython",
783
+ "version": 3
784
+ },
785
+ "file_extension": ".py",
786
+ "mimetype": "text/x-python",
787
+ "name": "python",
788
+ "nbconvert_exporter": "python",
789
+ "pygments_lexer": "ipython3",
790
+ "version": "3.12.1"
791
+ }
792
+ },
793
+ "nbformat": 4,
794
+ "nbformat_minor": 4
795
+ }