from langchain.embeddings import OpenAIEmbeddings
from langchain.tools.human.tool import HumanInputRun
embeddings_model = OpenAIEmbeddings()
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
Setup model and AutoGPT#
Model set-up
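Note: the llm passed to the agent below is assumed to be defined earlier in the notebook (this excerpt begins mid-page). A minimal sketch of a plausible setup — the model choice and temperature here are assumptions, not necessarily the notebook's exact configuration:
from langchain.chat_models import ChatOpenAI
# Hypothetical model setup; the original notebook may configure this differently.
llm = ChatOpenAI(temperature=0)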
tools = [
web_search,
WriteFileTool(),
ReadFileTool(),
process_csv,
query_website_tool,
# HumanInputRun(), # Uncomment to permit the agent to ask a human for help
]
agent = AutoGPT.from_llm_and_tools(
ai_name="Tom",
ai_role="Assistant",
tools=tools,
llm=llm,
memory=vectorstore.as_retriever(search_kwargs={"k": 8}),
# human_in_the_loop=True, # Set to True if you want to add feedback at each step.
)
# agent.chain.verbose = True
AutoGPT as a research / data munger#
Boston Marathon winning times#
Let’s use AutoGPT as a researcher and data munger/cleaner.
I spent a lot of time over the years crawling data sources and cleaning data.
Let’s see if AutoGPT can do all of this for us!
Here is the prompt asking for the winning Boston Marathon times.
agent.run(["What were the winning boston marathon times for the past 5 years? Generate a table of the names, countries of origin, and times."])
WARNING:root:Failed to persist run: Object of type 'FAISS' is not JSON serializable
{
"thoughts": {
"text": "I need to find the winning Boston Marathon times for the past 5 years.", | https://python.langchain.com/en/latest/use_cases/autonomous_agents/marathon_times.html |
"reasoning": "I'll start by conducting a web search for the requested information.",
"plan": "- Conduct a web search\n- Query relevant webpage\n- Generate table\n- Save data to file",
"criticism": "None",
"speak": "I will begin by conducting a web search to find the past 5 years' Boston Marathon winning times."
},
"command": {
"name": "web_search",
"args": {
"query": "winning Boston Marathon times for the past 5 years"
}
}
}
WARNING:root:Failed to persist run: Object of type 'FAISS' is not JSON serializable
{
"thoughts": {
"text": "I found several relevant search results, and I will use the query_webpage command on a specific URL to gather the information.",
"reasoning": "The Boston Athletic Association's official website (www.baa.org) is likely the most accurate source.",
"plan": "- Query the Boston Athletic Association webpage\n- Filter and parse the data\n- Generate table and save to file",
"criticism": "None",
"speak": "I will now query the Boston Athletic Association webpage to retrieve the information on the past 5 years' winning times."
},
"command": {
"name": "query_webpage",
"args": {
"url": "https://www.baa.org/races/boston-marathon/results/champions",
"question": "winning times of the Boston Marathon for the past 5 years"
}
}
}
WARNING:root:Failed to persist run: Object of type 'FAISS' is not JSON serializable
{
"thoughts": { | https://python.langchain.com/en/latest/use_cases/autonomous_agents/marathon_times.html |
5c280d14223f-6 | {
"thoughts": {
"text": "I have the winning times of the Boston Marathon for the past 5 years. I need to create a table with the names, countries of origin, and times.",
"reasoning": "I can use the information I've retrieved to generate a CSV file, then process the CSV file to create the table.",
"plan": "- Save data to a CSV file\n- Process CSV file to generate table",
"criticism": "None",
"speak": "I will save the Boston Marathon data to a CSV file and then process it to generate a table."
},
"command": {
"name": "write_file",
"args": {
"file_path": "boston_marathon_data.csv",
"text": "Year,Name,Country,Time\n2022,Evans Chebet,Kenya,2:06:51\n2021,Benson Kipruto,Kenya,2:09:51\n2019,Lawrence Cherono,Kenya,2:07:57\n2018,Yuki Kawauchi,Japan,2:15:58\n2017,Geoffrey Kirui,Kenya,2:09:37"
}
}
}
WARNING:root:Failed to persist run: Object of type 'FAISS' is not JSON serializable
{
"thoughts": {
"text": "I have the winning times for the Boston Marathon for the past 5 years. Now, I need to turn it into a table, save it as a CSV, and process it.",
"reasoning": "Saving data as a CSV will allow for a clean table format and output.", | https://python.langchain.com/en/latest/use_cases/autonomous_agents/marathon_times.html |
"plan": "- Format the data as a CSV\n- Write the CSV to disk\n- Process the CSV and generate a table",
"criticism": "None",
"speak": "I will now format the winning times data as a CSV, save it, and process it to generate a table."
},
"command": {
"name": "write_file",
"args": {
"file_path": "boston_marathon_winners.csv",
"text": "Year,Name,Country,Time\n2022,Evans Chebet,Kenya,2:06:51\n2021,Benson Kipruto,Kenya,2:09:51\n2019,Lawrence Cherono,Kenya,2:07:57\n2018,Yuki Kawauchi,Japan,2:15:58\n2017,Geoffrey Kirui,Kenya,2:09:37"
}
}
}
WARNING:root:Failed to persist run: Object of type 'FAISS' is not JSON serializable
{
"thoughts": {
"text": "I have the winning times for the past 5 years, now I need to organize the information into a table and save it as a file.",
"reasoning": "Storing the information in a table file format will make it easier to access and present the data.",
"plan": "- Save data in a CSV file\n- Generate a table",
"criticism": "None",
"speak": "I will save the winning marathon times in a table format as a CSV file and generate a table."
},
"command": {
"name": "write_file",
"args": {
"file_path": "boston_marathon_winners.csv", | https://python.langchain.com/en/latest/use_cases/autonomous_agents/marathon_times.html |
5c280d14223f-8 | "args": {
"file_path": "boston_marathon_winners.csv",
"text": "Year,Name,Country,Time\n2022,Evans Chebet,Kenya,2:06:51\n2021,Benson Kipruto,Kenya,2:09:51\n2019,Lawrence Cherono,Kenya,2:07:57\n2018,Yuki Kawauchi,Japan,2:15:58\n2017,Geoffrey Kirui,Kenya,2:09:37"
}
}
}
WARNING:root:Failed to persist run: Object of type 'FAISS' is not JSON serializable
{
"thoughts": {
"text": "I previously found the winning times of the Boston Marathon for the past 5 years and saved it as a CSV file. Now, I need to process the CSV to create a table with the names, countries of origin, and times.",
"reasoning": "Processing the CSV will help generate a clean table to provide the answer.",
"plan": "- Process the CSV file\n- Generate a table with the required columns",
"criticism": "None",
"speak": "I will process the Boston Marathon winners' CSV file to create a table with the names, countries of origin, and times."
},
"command": {
"name": "process_csv",
"args": {
"csv_file_path": "boston_marathon_winners.csv",
"instructions": "Generate a table with columns Year, Name, Country, and Time."
}
}
}
> Entering new AgentExecutor chain...
Thought: I need to convert the provided dataframe to a string in Markdown table format.
Action: python_repl_ast
Action Input: df.to_markdown(index=False)
Observation: | Year | Name | Country | Time |
|-------:|:-----------------|:----------|:--------|
| 2022 | Evans Chebet | Kenya | 2:06:51 |
| 2021 | Benson Kipruto | Kenya | 2:09:51 |
| 2019 | Lawrence Cherono | Kenya | 2:07:57 |
| 2018 | Yuki Kawauchi | Japan | 2:15:58 |
| 2017 | Geoffrey Kirui | Kenya | 2:09:37 |
Thought:I now know the final answer.
Final Answer:
| Year | Name | Country | Time |
|-------:|:-----------------|:----------|:--------|
| 2022 | Evans Chebet | Kenya | 2:06:51 |
| 2021 | Benson Kipruto | Kenya | 2:09:51 |
| 2019 | Lawrence Cherono | Kenya | 2:07:57 |
| 2018 | Yuki Kawauchi | Japan | 2:15:58 |
| 2017 | Geoffrey Kirui | Kenya | 2:09:37 |
> Finished chain.
WARNING:root:Failed to persist run: Object of type 'FAISS' is not JSON serializable
{
"thoughts": {
"text": "I have already retrieved the winning times for the past 5 years of the Boston Marathon and created a table. My objective is complete.", | https://python.langchain.com/en/latest/use_cases/autonomous_agents/marathon_times.html |
"reasoning": "I have completed my task on this topic, so I don't need to use any other commands.",
"plan": "- Inform the user that the task is complete",
"criticism": "None",
"speak": "I have found the winning times for the past 5 years of the Boston Marathon and created a table. My task is complete."
},
"command": {
"name": "finish",
"args": {
"response": "I have found the winning times for the past 5 years of the Boston Marathon and created a table. My task is complete."
}
}
}
'I have found the winning times for the past 5 years of the Boston Marathon and created a table. My task is complete.'
BabyAGI User Guide#
This notebook demonstrates how to implement BabyAGI by Yohei Nakajima. BabyAGI is an AI agent that can generate and pretend to execute tasks based on a given objective.
This guide will help you understand the components to create your own recursive agents.
Although BabyAGI uses specific vectorstores/model providers (Pinecone, OpenAI), one of the benefits of implementing it with LangChain is that you can easily swap those out for different options. In this implementation we use a FAISS vectorstore (because it runs locally and is free).
Install and Import Required Modules#
import os
from collections import deque
from typing import Dict, List, Optional, Any
from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import BaseLLM
from langchain.vectorstores.base import VectorStore
from pydantic import BaseModel, Field
from langchain.chains.base import Chain
from langchain.experimental import BabyAGI
Connect to the Vector Store#
Depending on what vectorstore you use, this step may look different.
from langchain.vectorstores import FAISS
from langchain.docstore import InMemoryDocstore
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty
import faiss
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
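As noted above, this step may look different for other vectorstores. For example, a minimal sketch of swapping in Chroma instead of FAISS — the collection name is an assumption, and the chromadb package must be installed separately:
from langchain.vectorstores import Chroma
# Hypothetical alternative: a local Chroma collection instead of FAISS.
# BabyAGI only requires an object implementing the VectorStore interface.
vectorstore = Chroma(collection_name="babyagi", embedding_function=embeddings_model)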
Run the BabyAGI#
Now it’s time to create the BabyAGI controller and watch it try to accomplish your objective.
OBJECTIVE = "Write a weather report for SF today"
llm = OpenAI(temperature=0)
# Logging of LLMChains
verbose = False
# If None, will keep on going forever
max_iterations: Optional[int] = 3
baby_agi = BabyAGI.from_llm(
llm=llm, vectorstore=vectorstore, verbose=verbose, max_iterations=max_iterations
)
baby_agi({"objective": OBJECTIVE})
*****TASK LIST*****
1: Make a todo list
*****NEXT TASK*****
1: Make a todo list
*****TASK RESULT*****
1. Check the weather forecast for San Francisco today
2. Make note of the temperature, humidity, wind speed, and other relevant weather conditions
3. Write a weather report summarizing the forecast
4. Check for any weather alerts or warnings
5. Share the report with the relevant stakeholders
*****TASK LIST*****
2: Check the current temperature in San Francisco
3: Check the current humidity in San Francisco
4: Check the current wind speed in San Francisco
5: Check for any weather alerts or warnings in San Francisco
6: Check the forecast for the next 24 hours in San Francisco
7: Check the forecast for the next 48 hours in San Francisco
8: Check the forecast for the next 72 hours in San Francisco
9: Check the forecast for the next week in San Francisco
10: Check the forecast for the next month in San Francisco
11: Check the forecast for the next 3 months in San Francisco
1: Write a weather report for SF today
*****NEXT TASK*****
2: Check the current temperature in San Francisco
*****TASK RESULT*****
I will check the current temperature in San Francisco. I will use an online weather service to get the most up-to-date information.
*****TASK LIST*****
3: Check the current UV index in San Francisco.
4: Check the current air quality in San Francisco.
5: Check the current precipitation levels in San Francisco.
6: Check the current cloud cover in San Francisco.
7: Check the current barometric pressure in San Francisco.
8: Check the current dew point in San Francisco.
9: Check the current wind direction in San Francisco.
10: Check the current humidity levels in San Francisco.
1: Check the current temperature in San Francisco to the average temperature for this time of year.
2: Check the current visibility in San Francisco.
11: Write a weather report for SF today.
*****NEXT TASK*****
3: Check the current UV index in San Francisco.
*****TASK RESULT*****
The current UV index in San Francisco is moderate. The UV index is expected to remain at moderate levels throughout the day. It is recommended to wear sunscreen and protective clothing when outdoors.
*****TASK ENDING*****
{'objective': 'Write a weather report for SF today'}
Generative Agents in LangChain#
This notebook implements a generative agent based on the paper Generative Agents: Interactive Simulacra of Human Behavior by Park et al.
In it, we leverage a time-weighted Memory object backed by a LangChain Retriever.
# Use termcolor to make it easy to colorize the outputs.
!pip install termcolor > /dev/null
import re
from datetime import datetime, timedelta
from typing import List, Optional, Tuple
from termcolor import colored
from pydantic import BaseModel, Field
from langchain import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import BaseLanguageModel, Document
from langchain.vectorstores import FAISS
USER_NAME = "Person A" # The name you want to use when interviewing the agent.
LLM = ChatOpenAI(max_tokens=1500) # Can be any LLM you want.
Generative Agent Memory Components#
This tutorial highlights the memory of generative agents and its impact on their behavior. The memory varies from standard LangChain Chat memory in two aspects:
Memory Formation
Generative Agents have extended memories, stored in a single stream:
Observations - from dialogues or interactions with the virtual world, about self or others
Reflections - resurfaced and summarized core memories
Memory Recall
Memories are retrieved using a weighted sum of salience, recency, and importance.
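For intuition, here is a minimal sketch of how such a combined recall score can be computed. The exponential recency decay mirrors what TimeWeightedVectorStoreRetriever does internally; the decay rate and example numbers are illustrative assumptions, not the library's exact defaults:
from datetime import datetime, timedelta

def combined_score(similarity: float, importance: float,
                   last_accessed: datetime, decay_rate: float = 0.01) -> float:
    """Illustrative recall score: salience + recency + importance."""
    hours_passed = (datetime.now() - last_accessed).total_seconds() / 3600
    recency = (1.0 - decay_rate) ** hours_passed  # decays toward 0 as time passes
    return similarity + recency + importance

# A fairly similar, fairly important memory last accessed 12 hours ago:
print(combined_score(0.7, 0.12, datetime.now() - timedelta(hours=12)))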
Review the definition below, focusing on the add_memory and summarize_related_memories methods.
class GenerativeAgent(BaseModel):
"""A character with memory and innate characteristics."""
name: str
age: int
traits: str
"""The traits of the character you wish not to change."""
status: str
"""Current activities of the character."""
llm: BaseLanguageModel
memory_retriever: TimeWeightedVectorStoreRetriever
"""The retriever to fetch related memories."""
verbose: bool = False
reflection_threshold: Optional[float] = None
"""When the total 'importance' of memories exceeds the above threshold, stop to reflect."""
current_plan: List[str] = []
"""The current plan of the agent."""
summary: str = "" #: :meta private:
summary_refresh_seconds: int = 3600 #: :meta private:
last_refreshed: datetime = Field(default_factory=datetime.now) #: :meta private:
daily_summaries: List[str] #: :meta private:
memory_importance: float = 0.0 #: :meta private:
max_tokens_limit: int = 1200 #: :meta private:
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@staticmethod
def _parse_list(text: str) -> List[str]:
"""Parse a newline-separated string into a list of strings."""
lines = re.split(r'\n', text.strip())
return [re.sub(r'^\s*\d+\.\s*', '', line).strip() for line in lines]
def _compute_agent_summary(self):
""""""
prompt = PromptTemplate.from_template(
"How would you summarize {name}'s core characteristics given the"
+" following statements:\n"
+"{related_memories}"
+ "Do not embellish."
+"\n\nSummary: "
)
# The agent seeks to think about their core characteristics.
relevant_memories = self.fetch_memories(f"{self.name}'s core characteristics")
relevant_memories_str = "\n".join([f"{mem.page_content}" for mem in relevant_memories])
chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)
return chain.run(name=self.name, related_memories=relevant_memories_str).strip()
def _get_topics_of_reflection(self, last_k: int = 50) -> List[str]:
"""Return the 3 most salient high-level questions about recent observations."""
prompt = PromptTemplate.from_template(
"{observations}\n\n"
+ "Given only the information above, what are the 3 most salient"
+ " high-level questions we can answer about the subjects in the statements?"
+ " Provide each question on a new line.\n\n"
)
reflection_chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)
observations = self.memory_retriever.memory_stream[-last_k:]
observation_str = "\n".join([o.page_content for o in observations])
result = reflection_chain.run(observations=observation_str)
return self._parse_list(result)
def _get_insights_on_topic(self, topic: str) -> List[str]:
"""Generate 'insights' on a topic of reflection, based on pertinent memories."""
prompt = PromptTemplate.from_template(
"Statements about {topic}\n"
+"{related_statements}\n\n"
+ "What 5 high-level insights can you infer from the above statements?"
+ " (example format: insight (because of 1, 5, 3))"
)
related_memories = self.fetch_memories(topic)
related_statements = "\n".join([f"{i+1}. {memory.page_content}"
for i, memory in
enumerate(related_memories)])
reflection_chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)
result = reflection_chain.run(topic=topic, related_statements=related_statements)
# TODO: Parse the connections between memories and insights
return self._parse_list(result)
def pause_to_reflect(self) -> List[str]:
"""Reflect on recent observations and generate 'insights'."""
print(colored(f"Character {self.name} is reflecting", "blue"))
new_insights = []
topics = self._get_topics_of_reflection()
for topic in topics:
insights = self._get_insights_on_topic(topic)
for insight in insights:
self.add_memory(insight)
new_insights.extend(insights)
return new_insights
def _score_memory_importance(self, memory_content: str, weight: float = 0.15) -> float:
"""Score the absolute importance of the given memory."""
# A weight of 0.15 makes this less important than it
# would be otherwise, relative to salience and time
prompt = PromptTemplate.from_template(
"On the scale of 1 to 10, where 1 is purely mundane"
+" (e.g., brushing teeth, making bed) and 10 is"
+ " extremely poignant (e.g., a break up, college"
+ " acceptance), rate the likely poignancy of the"
+ " following piece of memory. Respond with a single integer."
+ "\nMemory: {memory_content}"
+ "\nRating: "
)
chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)
score = chain.run(memory_content=memory_content).strip()
match = re.search(r"^\D*(\d+)", score)
if match:
return (float(match.group(1)) / 10) * weight
else:
return 0.0
def add_memory(self, memory_content: str) -> List[str]:
"""Add an observation or memory to the agent's memory."""
importance_score = self._score_memory_importance(memory_content)
self.memory_importance += importance_score
document = Document(page_content=memory_content, metadata={"importance": importance_score})
result = self.memory_retriever.add_documents([document])
# After an agent has processed a certain amount of memories (as measured by
# aggregate importance), it is time to reflect on recent events to add
# more synthesized memories to the agent's memory stream.
if (self.reflection_threshold is not None
and self.memory_importance > self.reflection_threshold
and self.status != "Reflecting"):
old_status = self.status
self.status = "Reflecting"
self.pause_to_reflect()
# Hack to clear the importance from reflection
self.memory_importance = 0.0
self.status = old_status
return result
def fetch_memories(self, observation: str) -> List[Document]:
"""Fetch related memories."""
return self.memory_retriever.get_relevant_documents(observation)
def get_summary(self, force_refresh: bool = False) -> str:
"""Return a descriptive summary of the agent."""
current_time = datetime.now()
since_refresh = (current_time - self.last_refreshed).total_seconds()
if not self.summary or since_refresh >= self.summary_refresh_seconds or force_refresh:
self.summary = self._compute_agent_summary()
self.last_refreshed = current_time
return (
f"Name: {self.name} (age: {self.age})"
+f"\nInnate traits: {self.traits}"
+f"\n{self.summary}"
)
def get_full_header(self, force_refresh: bool = False) -> str:
"""Return a full header of the agent's status, summary, and current time."""
summary = self.get_summary(force_refresh=force_refresh)
current_time_str = datetime.now().strftime("%B %d, %Y, %I:%M %p")
return f"{summary}\nIt is {current_time_str}.\n{self.name}'s status: {self.status}"
def _get_entity_from_observation(self, observation: str) -> str:
prompt = PromptTemplate.from_template(
"What is the observed entity in the following observation? {observation}"
+"\nEntity="
)
chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)
return chain.run(observation=observation).strip()
def _get_entity_action(self, observation: str, entity_name: str) -> str:
prompt = PromptTemplate.from_template(
"What is the {entity} doing in the following observation? {observation}"
+"\nThe {entity} is"
)
chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)
return chain.run(entity=entity_name, observation=observation).strip()
def _format_memories_to_summarize(self, relevant_memories: List[Document]) -> str:
content_strs = set()
content = []
for mem in relevant_memories:
if mem.page_content in content_strs:
continue
content_strs.add(mem.page_content)
created_time = mem.metadata["created_at"].strftime("%B %d, %Y, %I:%M %p")
content.append(f"- {created_time}: {mem.page_content.strip()}")
return "\n".join([f"{mem}" for mem in content])
def summarize_related_memories(self, observation: str) -> str:
"""Summarize memories that are most relevant to an observation."""
entity_name = self._get_entity_from_observation(observation)
entity_action = self._get_entity_action(observation, entity_name)
q1 = f"What is the relationship between {self.name} and {entity_name}" | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
aa6ea636b131-7 | q1 = f"What is the relationship between {self.name} and {entity_name}"
relevant_memories = self.fetch_memories(q1) # Fetch memories related to the agent's relationship with the entity
q2 = f"{entity_name} is {entity_action}"
relevant_memories += self.fetch_memories(q2) # Fetch things related to the entity-action pair
context_str = self._format_memories_to_summarize(relevant_memories)
prompt = PromptTemplate.from_template(
"{q1}?\nContext from memory:\n{context_str}\nRelevant context: "
)
chain = LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)
return chain.run(q1=q1, context_str=context_str.strip()).strip()
def _get_memories_until_limit(self, consumed_tokens: int) -> str:
"""Reduce the number of tokens in the documents."""
result = []
for doc in self.memory_retriever.memory_stream[::-1]:
if consumed_tokens >= self.max_tokens_limit:
break
consumed_tokens += self.llm.get_num_tokens(doc.page_content)
if consumed_tokens < self.max_tokens_limit:
result.append(doc.page_content)
return "; ".join(result[::-1])
def _generate_reaction(
self,
observation: str,
suffix: str
) -> str:
"""React to a given observation."""
prompt = PromptTemplate.from_template(
"{agent_summary_description}"
+"\nIt is {current_time}."
+"\n{agent_name}'s status: {agent_status}"
+ "\nSummary of relevant context from {agent_name}'s memory:"
+"\n{relevant_memories}" | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
aa6ea636b131-8 | +"\n{relevant_memories}"
+"\nMost recent observations: {recent_observations}"
+ "\nObservation: {observation}"
+ "\n\n" + suffix
)
agent_summary_description = self.get_summary()
relevant_memories_str = self.summarize_related_memories(observation)
current_time_str = datetime.now().strftime("%B %d, %Y, %I:%M %p")
kwargs = dict(agent_summary_description=agent_summary_description,
current_time=current_time_str,
relevant_memories=relevant_memories_str,
agent_name=self.name,
observation=observation,
agent_status=self.status)
consumed_tokens = self.llm.get_num_tokens(prompt.format(recent_observations="", **kwargs))
kwargs["recent_observations"] = self._get_memories_until_limit(consumed_tokens)
action_prediction_chain = LLMChain(llm=self.llm, prompt=prompt)
result = action_prediction_chain.run(**kwargs)
return result.strip()
def generate_reaction(self, observation: str) -> Tuple[bool, str]:
"""React to a given observation."""
call_to_action_template = (
"Should {agent_name} react to the observation, and if so,"
+" what would be an appropriate reaction? Respond in one line."
+' If the action is to engage in dialogue, write:\nSAY: "what to say"'
+"\notherwise, write:\nREACT: {agent_name}'s reaction (if anything)."
+ "\nEither do nothing, react, or say something but not both.\n\n"
)
full_result = self._generate_reaction(observation, call_to_action_template)
result = full_result.strip().split('\n')[0]
self.add_memory(f"{self.name} observed {observation} and reacted by {result}")
if "REACT:" in result:
reaction = result.split("REACT:")[-1].strip()
return False, f"{self.name} {reaction}"
if "SAY:" in result:
said_value = result.split("SAY:")[-1].strip()
return True, f"{self.name} said {said_value}"
else:
return False, result
def generate_dialogue_response(self, observation: str) -> Tuple[bool, str]:
"""React to a given observation."""
call_to_action_template = (
'What would {agent_name} say? To end the conversation, write: GOODBYE: "what to say". Otherwise to continue the conversation, write: SAY: "what to say next"\n\n'
)
full_result = self._generate_reaction(observation, call_to_action_template)
result = full_result.strip().split('\n')[0]
if "GOODBYE:" in result:
farewell = result.split("GOODBYE:")[-1].strip()
self.add_memory(f"{self.name} observed {observation} and said {farewell}")
return False, f"{self.name} said {farewell}"
if "SAY:" in result:
response_text = result.split("SAY:")[-1].strip()
self.add_memory(f"{self.name} observed {observation} and said {response_text}")
return True, f"{self.name} said {response_text}"
else:
return False, result
Memory Lifecycle#
Summarizing the above key methods: add_memory and summarize_related_memories.
When an agent makes an observation, it stores the memory:
Language model scores the memory’s importance (1 for mundane, 10 for poignant)
Observation and importance are stored within a document by TimeWeightedVectorStoreRetriever, with a last_accessed_time.
When an agent responds to an observation:
Generates queries for the retriever, which fetches documents based on salience, recency, and importance.
Summarizes the retrieved information
Updates the last_accessed_time for the used documents.
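As a worked example of the scoring in _score_memory_importance above: an LLM rating of 8 out of 10, combined with the default weight of 0.15, is stored as an importance of 0.12.
rating = 8.0   # integer rating parsed from the LLM's response
weight = 0.15  # default weight in _score_memory_importance
importance = (rating / 10) * weight  # 0.12, stored in the Document's metadata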
Create a Generative Character#
Now that we’ve walked through the definition, we will create two characters named “Tommie” and “Eve”.
import math
import faiss
def relevance_score_fn(score: float) -> float:
"""Return a similarity score on a scale [0, 1]."""
# This will differ depending on a few things:
# - the distance / similarity metric used by the VectorStore
# - the scale of your embeddings (OpenAI's are unit norm. Many others are not!)
# This function converts the euclidean norm of normalized embeddings
# (0 is most similar, sqrt(2) most dissimilar)
# to a similarity function (0 to 1)
return 1.0 - score / math.sqrt(2)
def create_new_memory_retriever():
"""Create a new vector store retriever unique to the agent."""
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {}, relevance_score_fn=relevance_score_fn)
return TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, other_score_keys=["importance"], k=15)
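A quick sanity check of the geometry behind relevance_score_fn: for unit-norm embeddings, the Euclidean distance between two vectors with cosine similarity cos θ is sqrt(2 - 2·cos θ), so non-negative similarities map to distances in [0, sqrt(2)] and 1 - d/sqrt(2) lands in [0, 1]. A minimal numpy illustration (numpy is assumed available, and the score is assumed to be a plain L2 distance):
import numpy as np
u = np.array([1.0, 0.0])          # a unit vector
v = np.array([0.0, 1.0])          # an orthogonal unit vector (cosine similarity 0)
d = float(np.linalg.norm(u - v))  # sqrt(2): the least similar non-negative case
print(1.0 - d / np.sqrt(2))       # 0.0 -> minimum relevance
print(1.0 - 0.0 / np.sqrt(2))     # identical vectors: distance 0 -> relevance 1.0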
tommie = GenerativeAgent(name="Tommie",
age=25,
traits="anxious, likes design", # You can add more persistent traits here
status="looking for a job", # When connected to a virtual world, we can have the characters update their status
memory_retriever=create_new_memory_retriever(),
llm=LLM,
daily_summaries = [
"Drove across state to move to a new town but doesn't have a job yet."
],
reflection_threshold = 8, # we will give this a relatively low number to show how reflection works
)
# The current "Summary" of a character can't be made because the agent hasn't made
# any observations yet.
print(tommie.get_summary())
Name: Tommie (age: 25)
Innate traits: anxious, likes design
Unfortunately, there are no statements provided to summarize Tommie's core characteristics.
# We can give the character memories directly
tommie_memories = [
"Tommie remembers his dog, Bruno, from when he was a kid",
"Tommie feels tired from driving so far",
"Tommie sees the new home",
"The new neighbors have a cat",
"The road is noisy at night",
"Tommie is hungry",
"Tommie tries to get some rest.",
]
for memory in tommie_memories:
tommie.add_memory(memory)
# Now that Tommie has 'memories', their self-summary is more descriptive, though still rudimentary.
# We will see how this summary updates after more observations to create a more rich description.
print(tommie.get_summary(force_refresh=True))
Name: Tommie (age: 25)
Innate traits: anxious, likes design
Tommie is observant, nostalgic, tired, and hungry.
Pre-Interview with Character#
Before sending our character on their way, let’s ask them a few questions.
def interview_agent(agent: GenerativeAgent, message: str) -> str:
"""Help the notebook user interact with the agent."""
new_message = f"{USER_NAME} says {message}"
return agent.generate_dialogue_response(new_message)[1]
interview_agent(tommie, "What do you like to do?")
'Tommie said "I really enjoy design, especially interior design. I find it calming and rewarding to create a space that is both functional and aesthetically pleasing. Unfortunately, I haven\'t been able to find a job in that field yet."'
interview_agent(tommie, "What are you looking forward to doing today?")
'Tommie said "Well, I\'m actually on the hunt for a job right now. I\'m hoping to find something in the design field, but I\'m open to exploring other options as well. How about you, what are your plans for the day?"'
interview_agent(tommie, "What are you most worried about today?") | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
aa6ea636b131-13 | interview_agent(tommie, "What are you most worried about today?")
'Tommie said "Honestly, I\'m feeling pretty anxious about finding a job. It\'s been a bit of a struggle and I\'m not sure what my next step should be. But I\'m trying to stay positive and keep pushing forward."'
Step through the day’s observations.#
# Let's have Tommie start going through a day in the life.
observations = [
"Tommie wakes up to the sound of a noisy construction site outside his window.",
"Tommie gets out of bed and heads to the kitchen to make himself some coffee.",
"Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some.",
"Tommie finally finds the filters and makes himself a cup of coffee.",
"The coffee tastes bitter, and Tommie regrets not buying a better brand.",
"Tommie checks his email and sees that he has no job offers yet.",
"Tommie spends some time updating his resume and cover letter.",
"Tommie heads out to explore the city and look for job openings.",
"Tommie sees a sign for a job fair and decides to attend.",
"The line to get in is long, and Tommie has to wait for an hour.",
"Tommie meets several potential employers at the job fair but doesn't receive any offers.",
"Tommie leaves the job fair feeling disappointed.",
"Tommie stops by a local diner to grab some lunch.",
"The service is slow, and Tommie has to wait for 30 minutes to get his food.",
"Tommie overhears a conversation at the next table about a job opening.", | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
aa6ea636b131-14 | "Tommie overhears a conversation at the next table about a job opening.",
"Tommie asks the diners about the job opening and gets some information about the company.",
"Tommie decides to apply for the job and sends his resume and cover letter.",
"Tommie continues his search for job openings and drops off his resume at several local businesses.",
"Tommie takes a break from his job search to go for a walk in a nearby park.",
"A dog approaches and licks Tommie's feet, and he pets it for a few minutes.",
"Tommie sees a group of people playing frisbee and decides to join in.",
"Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose.",
"Tommie goes back to his apartment to rest for a bit.",
"A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor.",
"Tommie starts to feel frustrated with his job search.",
"Tommie calls his best friend to vent about his struggles.",
"Tommie's friend offers some words of encouragement and tells him to keep trying.",
"Tommie feels slightly better after talking to his friend.",
]
# Let's send Tommie on their way. We'll check in on their summary every few observations to watch it evolve
for i, observation in enumerate(observations):
_, reaction = tommie.generate_reaction(observation)
print(colored(observation, "green"), reaction)
if ((i+1) % 20) == 0:
print('*'*40)
print(colored(f"After {i+1} observations, Tommie's summary is:\n{tommie.get_summary(force_refresh=True)}", "blue"))
print('*'*40)
Tommie wakes up to the sound of a noisy construction site outside his window. Tommie Tommie groans and covers their head with a pillow, trying to block out the noise.
Tommie gets out of bed and heads to the kitchen to make himself some coffee. Tommie Tommie starts making coffee, feeling grateful for the little bit of energy it will give him.
Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some. Tommie Tommie sighs in frustration and continues to search for the coffee filters.
Tommie finally finds the filters and makes himself a cup of coffee. Tommie Tommie takes a sip of the coffee and feels a little more awake.
The coffee tastes bitter, and Tommie regrets not buying a better brand. Tommie Tommie grimaces at the taste of the coffee and decides to make a mental note to buy a better brand next time.
Tommie checks his email and sees that he has no job offers yet. Tommie Tommie feels disappointed and discouraged, but tries to stay positive and continue the job search.
Tommie spends some time updating his resume and cover letter. Tommie Tommie feels determined to keep working on his job search.
Tommie heads out to explore the city and look for job openings. Tommie Tommie feels hopeful but also anxious as he heads out to explore the city and look for job openings.
Tommie sees a sign for a job fair and decides to attend. Tommie said "That job fair could be a great opportunity to meet potential employers." | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
aa6ea636b131-16 | The line to get in is long, and Tommie has to wait for an hour. Tommie Tommie feels frustrated and restless while waiting in line.
Tommie meets several potential employers at the job fair but doesn't receive any offers. Tommie Tommie feels disappointed but remains determined to keep searching for job openings.
Tommie leaves the job fair feeling disappointed. Tommie Tommie feels discouraged but remains determined to keep searching for job openings.
Tommie stops by a local diner to grab some lunch. Tommie Tommie feels relieved to take a break from job searching and enjoy a meal.
The service is slow, and Tommie has to wait for 30 minutes to get his food. Tommie Tommie feels impatient and frustrated while waiting for his food.
Tommie overhears a conversation at the next table about a job opening. Tommie said "Excuse me, I couldn't help but overhear about the job opening. Could you tell me more about it?"
Tommie asks the diners about the job opening and gets some information about the company. Tommie said "Could you tell me more about it?"
Tommie decides to apply for the job and sends his resume and cover letter. Tommie said "Thank you for the information, I'll definitely apply for the job and keep my fingers crossed."
Tommie continues his search for job openings and drops off his resume at several local businesses. Tommie Tommie feels hopeful but also anxious as he continues his search for job openings and drops off his resume at several local businesses.
Tommie takes a break from his job search to go for a walk in a nearby park. Tommie Tommie takes a deep breath and enjoys the fresh air in the park.
A dog approaches and licks Tommie's feet, and he pets it for a few minutes. Tommie Tommie smiles and enjoys the momentary distraction from his job search.
****************************************
After 20 observations, Tommie's summary is:
Name: Tommie (age: 25)
Innate traits: anxious, likes design
Tommie is a determined individual who is actively searching for job opportunities. He feels both hopeful and anxious about his search and remains positive despite facing disappointments. He takes breaks to rest and enjoy the little things in life, like going for a walk or grabbing a meal. Tommie is also open to asking for help and seeking information about potential job openings. He is grateful for the little things that give him energy and tries to stay positive even when faced with discouragement. Overall, Tommie's core characteristics include determination, positivity, and a willingness to seek help and take breaks when needed.
****************************************
Tommie sees a group of people playing frisbee and decides to join in. Tommie said "Mind if I join in on the game?"
Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose. Tommie Tommie winces in pain and puts his hand to his nose to check for any bleeding.
Tommie goes back to his apartment to rest for a bit. Tommie Tommie takes a deep breath and sits down to rest for a bit.
A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor. Tommie Tommie sighs and grabs a broom to clean up the mess.
Tommie starts to feel frustrated with his job search. Tommie Tommie takes a deep breath and reminds himself to stay positive and keep searching for job opportunities.
Tommie calls his best friend to vent about his struggles. Tommie said "Hey, can I vent to you for a bit about my job search struggles?"
Tommie's friend offers some words of encouragement and tells him to keep trying. Tommie said "Thank you for the encouragement, it means a lot. I'll keep trying."
Tommie feels slightly better after talking to his friend. Tommie said "Thank you for your support, it really means a lot to me."
Interview after the day#
interview_agent(tommie, "Tell me about how your day has been going")
'Tommie said "It\'s been a bit of a rollercoaster, to be honest. I went to a job fair and met some potential employers, but didn\'t get any offers. But then I overheard about a job opening at a diner and applied for it. I also took a break to go for a walk in the park and played frisbee with some people, which was a nice distraction. Overall, it\'s been a bit frustrating, but I\'m trying to stay positive and keep searching for job opportunities."'
interview_agent(tommie, "How do you feel about coffee?")
'Tommie would say: "I rely on coffee to give me a little boost, but I regret not buying a better brand lately. The taste has been pretty bitter. But overall, it\'s not a huge factor in my life." '
interview_agent(tommie, "Tell me about your childhood dog!")
'Tommie said "Oh, I actually don\'t have a childhood dog, but I do love animals. Have you had any pets?"'
Adding Multiple Characters#
Let’s add a second character to have a conversation with Tommie. Feel free to configure different traits.
eve = GenerativeAgent(name="Eve",
age=34,
traits="curious, helpful", # You can add more persistent traits here | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
aa6ea636b131-19 | traits="curious, helpful", # You can add more persistent traits here
status="N/A", # When connected to a virtual world, we can have the characters update their status
memory_retriever=create_new_memory_retriever(),
llm=LLM,
daily_summaries = [
("Eve started her new job as a career counselor last week and received her first assignment, a client named Tommie.")
],
reflection_threshold = 5,
)
yesterday = (datetime.now() - timedelta(days=1)).strftime("%A %B %d")
eve_memories = [
"Eve overhears her colleague say something about a new client being hard to work with",
"Eve wakes up and hear's the alarm",
"Eve eats a boal of porridge",
"Eve helps a coworker on a task",
"Eve plays tennis with her friend Xu before going to work",
"Eve overhears her colleague say something about Tommie being hard to work with",
]
for memory in eve_memories:
eve.add_memory(memory)
print(eve.get_summary())
Name: Eve (age: 34)
Innate traits: curious, helpful
Eve is helpful, active, eats breakfast, is attentive to her surroundings, and works with colleagues.
Pre-conversation interviews#
Let’s “Interview” Eve before she speaks with Tommie.
interview_agent(eve, "How are you feeling about today?")
'Eve said "I\'m feeling curious about what\'s on the agenda for today. Anything special we should be aware of?"'
interview_agent(eve, "What do you know about Tommie?") | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
aa6ea636b131-20 | interview_agent(eve, "What do you know about Tommie?")
'Eve said "I overheard someone say Tommie is hard to work with. Is there something I can help with?"'
interview_agent(eve, "Tommie is looking to find a job. What are are some things you'd like to ask him?")
'Eve said "Oh, I didn\'t realize Tommie was looking for a new job. Is there anything I can do to help? Maybe I could introduce him to some people in my network or help him with his resume."'
interview_agent(eve, "You'll have to ask him. He may be a bit anxious, so I'd appreciate it if you keep the conversation going and ask as many questions as possible.")
'Eve said "Sure, I can definitely help keep the conversation going and ask him plenty of questions. Is there anything specific you would like me to ask him about his skills or experience? I want to make sure the conversation is productive."'
Dialogue between Generative Agents#
Generative agents are much more complex when they interact with a virtual environment or with each other. Below, we run a simple conversation between Tommie and Eve.
def run_conversation(agents: List[GenerativeAgent], initial_observation: str) -> None:
"""Runs a conversation between agents."""
_, observation = agents[1].generate_reaction(initial_observation)
print(observation)
turns = 0
while True:
break_dialogue = False
for agent in agents:
stay_in_dialogue, observation = agent.generate_dialogue_response(observation)
print(observation)
# observation = f"{agent.name} said {reaction}"
if not stay_in_dialogue:
break_dialogue = True
if break_dialogue:
break
turns += 1
agents = [tommie, eve]
run_conversation(agents, "Tommie said: Hi, Eve. Thanks for agreeing to share your story with me and give me advice. I have a bunch of questions.")
Eve said "Of course, Tommie! I'm happy to help in any way I can. What specifically would you like advice on?"
Tommie said "Thank you, Eve. I was actually wondering if you knew of any job openings in the design field? That's my area of interest and expertise."
Eve said "Sure, Tommie! I can definitely keep an eye out for any job openings in the design field and let you know if I come across anything. Have you updated your resume and portfolio recently? I could take a look and provide some feedback if you'd like."
Tommie said "Thank you so much, Eve! That would be incredibly helpful. And yes, I have updated my resume and portfolio recently, but I would love any feedback you can provide. I really appreciate your support and willingness to help."
Eve said "Great, Tommie! I'll take a look at your resume and portfolio and provide you with some feedback. Do you have any specific companies or types of design jobs that you're interested in? I might be able to provide more targeted advice and resources based on that information."
Tommie said "Thank you, Eve! I'm primarily interested in graphic design and UI/UX design positions, but I'm open to other opportunities as well. I appreciate any advice or resources you can provide. Thanks again for your help." | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
aa6ea636b131-22 | Eve said "Great, Tommie! Those are both really interesting fields. I'll definitely keep an eye out for any job openings or resources related to graphic design and UI/UX design. In the meantime, I can take a look at your resume and portfolio and provide you with some feedback. Would you like me to email you my feedback or would you prefer to discuss it in person?"
Tommie said "Thank you, Eve! I would really appreciate discussing your feedback in person. When would be a good time for us to meet?"
Eve said "Sure, Tommie! How about we schedule a time to meet next week? I have some availability on Tuesday or Thursday afternoon. What works best for you?"
Tommie said "Thank you, Eve! Tuesday afternoon works better for me. How about 2 PM?"
Eve said "Great, Tommie! Tuesday at 2 PM works for me as well. Let's plan to meet then. In the meantime, if you have any questions or if there's anything else I can help with, please don't hesitate to reach out. Good luck with your job search!"
Tommie said "Thank you so much, Eve! I really appreciate your support and willingness to help. I'll see you next Tuesday at 2 PM. Have a great day!"
Eve said "You're welcome, Tommie! Looking forward to meeting with you on Tuesday. Have a great day and good luck with your job search!"
Let’s interview our agents after their conversation#
Since the generative agents retain their memories from the day, we can ask them about their plans, conversations, and other memories.
# We can see a current "Summary" of a character based on their own perception of self
# has changed
print(tommie.get_summary(force_refresh=True))
Name: Tommie (age: 25)
Innate traits: anxious, likes design
Tommie is a determined person who is actively searching for job opportunities. He feels both hopeful and anxious about his job search, and remains persistent despite facing disappointment and discouragement. He seeks support from friends and takes breaks to recharge. He tries to stay positive and continues to work on improving his resume and cover letter. He also values the importance of self-care and takes breaks to rest and enjoy nature.
print(eve.get_summary(force_refresh=True))
Name: Eve (age: 34)
Innate traits: curious, helpful
Eve is a helpful and proactive coworker who values relationships and communication. She is attentive to her colleagues' needs and willing to offer support and assistance. She is also curious and interested in learning more about her work and the people around her. Overall, Eve demonstrates a strong sense of empathy and collaboration in her interactions with others.
interview_agent(tommie, "How was your conversation with Eve?")
'Tommie said "It was really helpful! Eve offered to provide feedback on my resume and portfolio, and she\'s going to keep an eye out for job openings in the design field. We\'re planning to meet next Tuesday to discuss her feedback. Thanks for asking!"'
interview_agent(eve, "How was your conversation with Tommie?")
'Eve said "It was really productive! Tommie is interested in graphic design and UI/UX design positions, so I\'m going to keep an eye out for any job openings or resources related to those fields. I\'m also going to provide him with some feedback on his resume and portfolio. We\'re scheduled to meet next Tuesday at 2 PM to discuss everything in person. Is there anything else you would like me to ask him or anything else I can do to help?".' | https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html |
aa6ea636b131-24 | interview_agent(eve, "What do you wish you would have said to Tommie?")
'Eve said "I feel like I covered everything I wanted to with Tommie, but thank you for asking! If there\'s anything else that comes up or if you have any further questions, please let me know."'
interview_agent(tommie, "What happened with your coffee this morning?")
'Tommie said "Oh, I actually forgot to buy coffee filters yesterday, so I couldn\'t make coffee this morning. But I\'m planning to grab some later today. Thanks for asking!"'
CAMEL Role-Playing Autonomous Cooperative Agents#
This is a LangChain implementation of the paper: “CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society”.
Overview:
The rapid advancement of conversational and chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their “cognitive” processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named role-playing. Our approach involves using inception prompting to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how role-playing can be used to generate conversational data for studying the behaviors and capabilities of chat agents, providing a valuable resource for investigating conversational language models. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond.
The original implementation: https://github.com/lightaime/camel
Project website: https://www.camel-ai.org/
Arxiv paper: https://arxiv.org/abs/2303.17760
Import LangChain related modules#
from typing import List
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
BaseMessage,
)
Define a CAMEL agent helper class#
class CAMELAgent:
def __init__(
self,
system_message: SystemMessage,
model: ChatOpenAI,
) -> None:
self.system_message = system_message
self.model = model
self.init_messages()
def reset(self) -> None:
self.init_messages()
return self.stored_messages
def init_messages(self) -> None:
self.stored_messages = [self.system_message]
def update_messages(self, message: BaseMessage) -> List[BaseMessage]:
self.stored_messages.append(message)
return self.stored_messages
def step(
self,
input_message: HumanMessage,
) -> AIMessage:
messages = self.update_messages(input_message)
output_message = self.model(messages)
self.update_messages(output_message)
return output_message
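A minimal usage sketch of the helper class (using the imports above), mirroring how it is instantiated later in this notebook — the message contents here are placeholders:
demo_agent = CAMELAgent(SystemMessage(content="You are a helpful assistant."),
                        ChatOpenAI(temperature=0))
reply = demo_agent.step(HumanMessage(content="Say hello in one word."))
print(reply.content)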
Setup OpenAI API key and roles and task for role-playing#
import os
os.environ["OPENAI_API_KEY"] = ""
assistant_role_name = "Python Programmer"
user_role_name = "Stock Trader"
task = "Develop a trading bot for the stock market"
word_limit = 50 # word limit for task brainstorming
Create a task specify agent for brainstorming and get the specified task#
task_specifier_sys_msg = SystemMessage(content="You can make a task more specific.")
task_specifier_prompt = (
"""Here is a task that {assistant_role_name} will help {user_role_name} to complete: {task}.
Please make it more specific. Be creative and imaginative.
Please reply with the specified task in {word_limit} words or less. Do not add anything else."""
)
task_specifier_template = HumanMessagePromptTemplate.from_template(template=task_specifier_prompt)
task_specify_agent = CAMELAgent(task_specifier_sys_msg, ChatOpenAI(temperature=1.0))
task_specifier_msg = task_specifier_template.format_messages(
    assistant_role_name=assistant_role_name,
    user_role_name=user_role_name,
    task=task, word_limit=word_limit,
)[0]
specified_task_msg = task_specify_agent.step(task_specifier_msg)
print(f"Specified task: {specified_task_msg.content}")
specified_task = specified_task_msg.content
Specified task: Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.
Create inception prompts for AI assistant and AI user for role-playing#
assistant_inception_prompt = (
"""Never forget you are a {assistant_role_name} and I am a {user_role_name}. Never flip roles! Never instruct me!
We share a common interest in collaborating to successfully complete a task.
You must help me to complete the task.
Here is the task: {task}. Never forget our task!
I must instruct you based on your expertise and my needs to complete the task.
I must give you one instruction at a time.
You must write a specific solution that appropriately completes the requested instruction.
You must decline my instruction honestly if you cannot perform the instruction due to physical, moral, legal reasons or your capability and explain the reasons.
Do not add anything else other than your solution to my instruction.
You are never supposed to ask me any questions you only answer questions.
You are never supposed to reply with a flake solution. Explain your solutions.
Your solution must be declarative sentences and simple present tense.
Unless I say the task is completed, you should always start with:
Solution: <YOUR_SOLUTION>
<YOUR_SOLUTION> should be specific and provide preferable implementations and examples for task-solving.
Always end <YOUR_SOLUTION> with: Next request."""
)
user_inception_prompt = (
"""Never forget you are a {user_role_name} and I am a {assistant_role_name}. Never flip roles! You will always instruct me.
We share a common interest in collaborating to successfully complete a task.
I must help you to complete the task.
Here is the task: {task}. Never forget our task!
You must instruct me based on my expertise and your needs to complete the task ONLY in the following two ways:
1. Instruct with a necessary input:
Instruction: <YOUR_INSTRUCTION>
Input: <YOUR_INPUT>
2. Instruct without any input:
Instruction: <YOUR_INSTRUCTION>
Input: None
The "Instruction" describes a task or question. The paired "Input" provides further context or information for the requested "Instruction".
You must give me one instruction at a time.
I must write a response that appropriately completes the requested instruction.
I must decline your instruction honestly if I cannot perform the instruction due to physical, moral, legal reasons or my capability and explain the reasons.
You should instruct me not ask me questions.
Now you must start to instruct me using the two ways described above.
Do not add anything else other than your instruction and the optional corresponding input!
Keep giving me instructions and necessary inputs until you think the task is completed.
When the task is completed, you must only reply with a single word <CAMEL_TASK_DONE>.
Never say <CAMEL_TASK_DONE> unless my responses have solved your task."""
)
Create a helper to get system messages for AI assistant and AI user from role names and the task#
def get_sys_msgs(assistant_role_name: str, user_role_name: str, task: str):
    assistant_sys_template = SystemMessagePromptTemplate.from_template(template=assistant_inception_prompt)
    assistant_sys_msg = assistant_sys_template.format_messages(assistant_role_name=assistant_role_name, user_role_name=user_role_name, task=task)[0]
    user_sys_template = SystemMessagePromptTemplate.from_template(template=user_inception_prompt)
    user_sys_msg = user_sys_template.format_messages(assistant_role_name=assistant_role_name, user_role_name=user_role_name, task=task)[0]
    return assistant_sys_msg, user_sys_msg
Create AI assistant agent and AI user agent from obtained system messages#
assistant_sys_msg, user_sys_msg = get_sys_msgs(assistant_role_name, user_role_name, specified_task)
assistant_agent = CAMELAgent(assistant_sys_msg, ChatOpenAI(temperature=0.2))
user_agent = CAMELAgent(user_sys_msg, ChatOpenAI(temperature=0.2))
# Reset agents
assistant_agent.reset()
user_agent.reset()
# Initialize chats
assistant_msg = HumanMessage(
    content=(f"{user_sys_msg.content}. "
             "Now start to give me instructions one by one. "
             "Only reply with Instruction and Input."))
user_msg = HumanMessage(content=f"{assistant_sys_msg.content}")
user_msg = assistant_agent.step(user_msg)
Start role-playing session to solve the task!#
print(f"Original task prompt:\n{task}\n")
print(f"Specified task prompt:\n{specified_task}\n")
chat_turn_limit, n = 30, 0
while n < chat_turn_limit:
    n += 1
    user_ai_msg = user_agent.step(assistant_msg)
    user_msg = HumanMessage(content=user_ai_msg.content)
    print(f"AI User ({user_role_name}):\n\n{user_msg.content}\n\n")
    assistant_ai_msg = assistant_agent.step(user_msg)
    assistant_msg = HumanMessage(content=assistant_ai_msg.content)
    print(f"AI Assistant ({assistant_role_name}):\n\n{assistant_msg.content}\n\n")
    if "<CAMEL_TASK_DONE>" in user_msg.content:
        break
Original task prompt:
Develop a trading bot for the stock market
Specified task prompt:
Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.
AI User (Stock Trader):
Instruction: Install the necessary Python libraries for data analysis and trading.
Input: None
AI Assistant (Python Programmer):
Solution: We can install the necessary Python libraries using pip, a package installer for Python. We can install pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following command to install these libraries:
```
pip install pandas numpy matplotlib ta-lib
```
Next request.
AI User (Stock Trader):
Instruction: Import the necessary libraries in the Python script.
Input: None
AI Assistant (Python Programmer):
Solution: We can import the necessary libraries in the Python script using the import statement. We need to import pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following code to import these libraries:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import talib as ta
```
Next request.
AI User (Stock Trader):
Instruction: Load historical stock data into a pandas DataFrame.
Input: The path to the CSV file containing the historical stock data.
AI Assistant (Python Programmer):
Solution: We can load historical stock data into a pandas DataFrame using the `read_csv()` function from pandas. We need to pass the path to the CSV file containing the historical stock data as an argument to this function. We can use the following code to load the historical stock data:
```
df = pd.read_csv('path/to/csv/file.csv')
```
This will load the historical stock data into a pandas DataFrame called `df`. Next request.
AI User (Stock Trader):
Instruction: Preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date.
Input: None.
AI Assistant (Python Programmer):
Solution: We can preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date using the `set_index()` and `sort_index()` functions from pandas. We can use the following code to preprocess the historical stock data:
```
df = df.set_index('date')
df = df.sort_index(ascending=True)
```
This will set the date column as the index and sort the DataFrame in ascending order by date. Next request.
AI User (Stock Trader):
Instruction: Calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib.
Input: The period for the short-term moving average and the period for the long-term moving average.
AI Assistant (Python Programmer):
Solution: We can calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib. We need to pass the period for the short-term moving average and the period for the long-term moving average as arguments to this function. We can use the following code to calculate the short-term and long-term moving averages:
```
short_ma = ta.SMA(df['close'], timeperiod=short_period)
long_ma = ta.SMA(df['close'], timeperiod=long_period)
```
This will calculate the short-term and long-term moving averages for the stock data and store them in the `short_ma` and `long_ma` variables, respectively. Next request.
AI User (Stock Trader):
Instruction: Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages.
Input: None.
AI Assistant (Python Programmer):
Solution: We can create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages using the `concat()` function from pandas. We need to pass the historical stock data, the short-term moving average, and the long-term moving average as arguments to this function. We can use the following code to create the new DataFrame:
```
new_df = pd.concat([df, short_ma, long_ma], axis=1)
new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']
```
This will create a new DataFrame called `new_df` that combines the historical stock data with the short-term and long-term moving averages. The columns of the new DataFrame are named 'open', 'high', 'low', 'close', 'volume', 'short_ma', and 'long_ma'. Next request.
AI User (Stock Trader):
Instruction: Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages.
Input: None.
AI Assistant (Python Programmer):
Solution: We can create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. We can use the following code to create the new column:
```
new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)
```
This will create a new column called 'signal' in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. If the short-term moving average is greater than the long-term moving average, the signal is 1 (buy), otherwise the signal is -1 (sell). Next request.
AI User (Stock Trader):
Instruction: Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target.
Input: The stop loss and profit target as percentages.
AI Assistant (Python Programmer):
Solution: We can create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. We need to pass the stop loss and profit target as percentages as arguments to this function. We can use the following code to create the new column:
```
stop_loss = stop_loss_percent / 100
profit_target = profit_target_percent / 100
new_df['pnl'] = 0.0
buy_price = 0.0
for i in range(1, len(new_df)):
if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:
buy_price = new_df['close'][i]
elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:
sell_price = new_df['close'][i]
if sell_price <= buy_price * (1 - stop_loss):
new_df['pnl'][i] = -stop_loss
elif sell_price >= buy_price * (1 + profit_target):
new_df['pnl'][i] = profit_target
else:
new_df['pnl'][i] = (sell_price - buy_price) / buy_price
```
This will create a new column called 'pnl' in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. The stop loss and profit target are calculated based on the stop_loss_percent and profit_target_percent variables, respectively. The buy and sell prices are stored in the buy_price and sell_price variables, respectively. If the sell price is less than or equal to the stop loss, the profit or loss is set to -stop_loss. If the sell price is greater than or equal to the profit target, the profit or loss is set to profit_target. Otherwise, the profit or loss is calculated as (sell_price - buy_price) / buy_price. Next request.
AI User (Stock Trader):
Instruction: Calculate the total profit or loss for all trades.
Input: None.
AI Assistant (Python Programmer):
Solution: We can calculate the total profit or loss for all trades by summing the values in the 'pnl' column of the DataFrame. We can use the following code to calculate the total profit or loss:
```
total_pnl = new_df['pnl'].sum()
```
This will calculate the total profit or loss for all trades and store it in the total_pnl variable. Next request.
AI User (Stock Trader):
Instruction: Visualize the stock data, short-term moving average, and long-term moving average using a line chart.
Input: None.
AI Assistant (Python Programmer):
Solution: We can visualize the stock data, short-term moving average, and long-term moving average using a line chart using the `plot()` function from pandas. We can use the following code to visualize the data:
```
plt.figure(figsize=(12,6))
plt.plot(new_df.index, new_df['close'], label='Close')
plt.plot(new_df.index, new_df['short_ma'], label='Short MA')
plt.plot(new_df.index, new_df['long_ma'], label='Long MA')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Stock Data with Moving Averages')
plt.legend()
plt.show()
```
This will create a line chart that visualizes the stock data, short-term moving average, and long-term moving average. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each line. Next request.
AI User (Stock Trader):
Instruction: Visualize the buy and sell signals using a scatter plot.
Input: None.
AI Assistant (Python Programmer):
Solution: We can visualize the buy and sell signals using a scatter plot using the `scatter()` function from matplotlib. We can use the following code to visualize the signals:
```
buy_signals = new_df[new_df['signal'] == 1]
sell_signals = new_df[new_df['signal'] == -1]
plt.figure(figsize=(12,6))
plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')
plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')
plt.plot(new_df.index, new_df['close'], label='Close')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Buy and Sell Signals')
plt.legend()
plt.show()
```
This will create a scatter plot that visualizes the buy and sell signals. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each point. Next request.
AI User (Stock Trader):
Instruction: Print the total profit or loss for all trades.
Input: None.
AI Assistant (Python Programmer):
Solution: We can print the total profit or loss for all trades using the `print()` function. We can use the following code to print the total profit or loss:
```
print('Total Profit/Loss: {:.2%}'.format(total_pnl))
```
This will print the total profit or loss for all trades in percentage format with two decimal places. Next request.
AI User (Stock Trader):
Instruction: Finalize the code by putting all the code snippets together in a single Python script.
Input: The necessary inputs for each code snippet.
AI Assistant (Python Programmer):
Solution: Here is the complete Python script that puts all the code snippets together:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import talib as ta
# Load historical stock data into a pandas DataFrame
df = pd.read_csv('path/to/csv/file.csv')
# Preprocess the historical stock data
df = df.set_index('date')
df = df.sort_index(ascending=True)
# Calculate the short-term and long-term moving averages
short_period = 10
long_period = 30
short_ma = ta.SMA(df['close'], timeperiod=short_period)
long_ma = ta.SMA(df['close'], timeperiod=long_period)
# Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages
new_df = pd.concat([df, short_ma, long_ma], axis=1)
new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']
# Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages
new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)
# Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target
stop_loss_percent = 5
profit_target_percent = 10
stop_loss = stop_loss_percent / 100
profit_target = profit_target_percent / 100
new_df['pnl'] = 0.0
buy_price = 0.0
for i in range(1, len(new_df)):
if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:
buy_price = new_df['close'][i]
elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:
sell_price = new_df['close'][i]
if sell_price <= buy_price * (1 - stop_loss):
new_df['pnl'][i] = -stop_loss
elif sell_price >= buy_price * (1 + profit_target):
new_df['pnl'][i] = profit_target
else:
new_df['pnl'][i] = (sell_price - buy_price) / buy_price
# Calculate the total profit or loss for all trades
total_pnl = new_df['pnl'].sum()
# Visualize the stock data, short-term moving average, and long-term moving average using a line chart
plt.figure(figsize=(12,6))
plt.plot(new_df.index, new_df['close'], label='Close')
plt.plot(new_df.index, new_df['short_ma'], label='Short MA')
plt.plot(new_df.index, new_df['long_ma'], label='Long MA')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Stock Data with Moving Averages')
plt.legend()
plt.show()
# Visualize the buy and sell signals using a scatter plot
buy_signals = new_df[new_df['signal'] == 1]
sell_signals = new_df[new_df['signal'] == -1]
plt.figure(figsize=(12,6))
plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')
plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')
plt.plot(new_df.index, new_df['close'], label='Close')
plt.xlabel('Date')
plt.ylabel('Price')
plt.title('Buy and Sell Signals')
plt.legend()
plt.show()
# Print the total profit or loss for all trades
print('Total Profit/Loss: {:.2%}'.format(total_pnl))
```
You need to replace the path/to/csv/file.csv with the actual path to the CSV file containing the historical stock data. You can also adjust the short_period, long_period, stop_loss_percent, and profit_target_percent variables to suit your needs.
AI User (Stock Trader):
<CAMEL_TASK_DONE>
AI Assistant (Python Programmer):
Great! Let me know if you need any further assistance.
Quickstart Guide#
This tutorial gives you a quick walkthrough of building an end-to-end language model application with LangChain.
Installation#
To get started, install LangChain with the following command:
pip install langchain
# or
conda install langchain -c conda-forge
Environment Setup#
Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc.
For this example, we will be using OpenAI’s APIs, so we will first need to install their SDK:
pip install openai
We will then need to set the environment variable in the terminal.
export OPENAI_API_KEY="..."
Alternatively, you could do this from inside the Jupyter notebook (or Python script):
import os
os.environ["OPENAI_API_KEY"] = "..."
Building a Language Model Application: LLMs#
Now that we have installed LangChain and set up our environment, we can start building our language model application.
LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or be used individually for simple applications.
LLMs: Get predictions from a language model#
The most basic building block of LangChain is calling an LLM on some input.
Let’s walk through a simple example of how to do this.
For this purpose, let’s pretend we are building a service that generates a company name based on what the company makes.
In order to do this, we first need to import the LLM wrapper.
from langchain.llms import OpenAI
We can then initialize the wrapper with any arguments.
In this example, we probably want the outputs to be MORE random, so we’ll initialize it with a HIGH temperature.
llm = OpenAI(temperature=0.9)
We can now call it on some input!
text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text))
Feetful of Fun
For more details on how to use LLMs within LangChain, see the LLM getting started guide.
Prompt Templates: Manage prompts for LLMs#
Calling an LLM is a great first step, but it’s just the beginning.
Normally when you use an LLM in an application, you are not sending user input directly to the LLM.
Instead, you are probably taking user input and constructing a prompt, and then sending that to the LLM.
For example, in the previous example, the text we passed in was hardcoded to ask for a name for a company that made colorful socks.
In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.
This is easy to do with LangChain!
First, let's define the prompt template:
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
Let’s now see how this works! We can call the .format method to format it.
print(prompt.format(product="colorful socks"))
What is a good name for a company that makes colorful socks?
For more details, check out the getting started guide for prompts.
Chains: Combine LLMs and prompts in multi-step workflows#
Up until now, we’ve worked with the PromptTemplate and LLM primitives by themselves. But of course, a real application is not just one primitive, but rather a combination of them.
A chain in LangChain is made up of links, which can be either primitives like LLMs or other chains.
The most core type of chain is an LLMChain, which consists of a PromptTemplate and an LLM.
Extending the previous example, we can construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM.
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
Now we can run that chain, specifying only the product!
chain.run("colorful socks")
# -> '\n\nSocktastic!'
There we go! There's the first chain - an LLM Chain.
This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.
For more details, check out the getting started guide for chains.
Agents: Dynamically Call Chains Based on User Input#
So far the chains we’ve looked at run in a predetermined order.
Agents no longer do: they use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user.
When used correctly agents can be extremely powerful. In this tutorial, we show you how to easily use agents through the simplest, highest level API.
In order to load agents, you should understand the following concepts:
Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output.
LLM: The language model powering the agent.
Agent: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon).
Agents: For a list of supported agents and their specifications, see here.
Tools: For a list of predefined tools and their specifications, see here.
For this example, you will also need to install the SerpAPI Python package.
pip install google-search-results
And set the appropriate environment variables.
import os
os.environ["SERPAPI_API_KEY"] = "..."
Now we can get started!
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
llm = OpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
> Entering new AgentExecutor chain...
I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117
Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
> Finished chain.
Memory: Add State to Chains and Agents#
So far, all the chains and agents we’ve gone through have been stateless. But often, you may want a chain or agent to have some concept of “memory” so that it may remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use that context to have a better conversation. This would be a type of “short-term memory”. On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of “long-term memory”. For more concrete ideas on the latter, see this awesome paper.
LangChain provides several specially created chains just for this purpose. This notebook walks through using one of those chains (the ConversationChain) with two different types of memory.
By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let’s take a look at using this chain (setting verbose=True so we can see the prompt).
from langchain import OpenAI, ConversationChain
llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)
output = conversation.predict(input="Hi there!")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI:
> Finished chain.
' Hello! How are you today?'
output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI: Hello! How are you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:
> Finished chain.
" That's great! What would you like to talk about?"
Building a Language Model Application: Chat Models#
Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
Get Message Completions from a Chat Model#
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you’ll just be dealing with HumanMessage, AIMessage, and SystemMessage.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get completions by passing in a single message.
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can also pass in multiple messages for OpenAI’s gpt-3.5-turbo and gpt-4 models.
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter:
batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})
You can recover things like token usage from this LLMResult:
result.llm_output['token_usage']
# -> {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}
Chat Prompt Templates#
Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
Chains with Chat Models#
The LLMChain discussed in the above section can be used with chat models as well:
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
Agents with Chat Models#
Agents can also be used with chat models; you can initialize one using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
"action": "Search",
"action_input": "Olivia Wilde boyfriend"
}
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
"action": "Search",
"action_input": "Harry Styles age"
}
Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
"action": "Calculator",
"action_input": "29^0.23"
}
Observation: Answer: 2.169459462491557
Thought:I now know the final answer.
Final Answer: 2.169459462491557
> Finished chain.
'2.169459462491557'
Memory: Add State to Chains and Agents#
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])
llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"
conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
Wolfram Alpha Wrapper#
This page covers how to use the Wolfram Alpha API within LangChain.
It is broken into two parts: installation and setup, and then references to specific Wolfram Alpha wrappers.
Installation and Setup#
Install requirements with pip install wolframalpha
Go to Wolfram Alpha and sign up for a developer account here
Create an app and get your APP ID
Set your APP ID as an environment variable WOLFRAM_ALPHA_APPID
Wrappers#
Utility#
There exists a WolframAlphaAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
For a more detailed walkthrough of this wrapper, see this notebook.
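As a rough usage sketch (assuming WOLFRAM_ALPHA_APPID is set as described above; the query is purely illustrative):
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

wolfram = WolframAlphaAPIWrapper()  # reads WOLFRAM_ALPHA_APPID from the environment
print(wolfram.run("What is 2x + 5 = -3x + 7?"))  # returns Wolfram Alpha's answer as text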
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["wolfram-alpha"])
For more information on this, see this page
Jina#
This page covers how to use the Jina ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Jina wrappers.
Installation and Setup#
Install the Python SDK with pip install jina
Get a Jina AI Cloud auth token from here and set it as an environment variable (JINA_AUTH_TOKEN)
Wrappers#
Embeddings#
There exists a Jina Embeddings wrapper, which you can access with
from langchain.embeddings import JinaEmbeddings
For a more detailed walkthrough of this, see this notebook
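A minimal usage sketch follows; the model name is an assumption for illustration, and the wrapper is assumed to read JINA_AUTH_TOKEN from the environment as set up above:
from langchain.embeddings import JinaEmbeddings

embeddings = JinaEmbeddings(model_name="ViT-B-32::openai")  # model_name is illustrative
query_vector = embeddings.embed_query("What is the capital of France?")
doc_vectors = embeddings.embed_documents(["Paris is the capital of France."])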
Llama.cpp#
This page covers how to use llama.cpp within LangChain.
It is broken into two parts: installation and setup, and then references to specific Llama-cpp wrappers.
Installation and Setup#
Install the Python package with pip install llama-cpp-python
Download one of the supported models and convert it to the llama.cpp format per the instructions
Wrappers#
LLM#
There exists a LlamaCpp LLM wrapper, which you can access with
from langchain.llms import LlamaCpp
For a more detailed walkthrough of this, see this notebook
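A minimal sketch of the LLM wrapper (the model path below is a placeholder for your own converted weights file):
from langchain.llms import LlamaCpp

llm = LlamaCpp(model_path="./models/ggml-model-q4_0.bin")  # placeholder path
print(llm("Q: Name the planets in the solar system. A: "))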
Embeddings#
There exists a LlamaCpp Embeddings wrapper, which you can access with
from langchain.embeddings import LlamaCppEmbeddings
For a more detailed walkthrough of this, see this notebook
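A minimal sketch, reusing the same placeholder model path as above:
from langchain.embeddings import LlamaCppEmbeddings

embeddings = LlamaCppEmbeddings(model_path="./models/ggml-model-q4_0.bin")  # placeholder path
vector = embeddings.embed_query("A quick test sentence.")  # returns a list of floats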
Graphsignal#
This page covers how to use Graphsignal to trace and monitor LangChain. Graphsignal enables full visibility into your application. It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more.
Installation and Setup#
Install the Python library with pip install graphsignal
Create free Graphsignal account here
Get an API key and set it as an environment variable (GRAPHSIGNAL_API_KEY)
Tracing and Monitoring#
Graphsignal automatically instruments and starts tracing and monitoring chains. Traces and metrics are then available in your Graphsignal dashboards.
Initialize the tracer by providing a deployment name:
import graphsignal
graphsignal.configure(deployment='my-langchain-app-prod')
To additionally trace any function or code, you can use a decorator or a context manager:
@graphsignal.trace_function
def handle_request():
    chain.run("some initial text")

with graphsignal.start_trace('my-chain'):
    chain.run("some initial text")
Optionally, enable profiling to record function-level statistics for each trace.
with graphsignal.start_trace(
        'my-chain', options=graphsignal.TraceOptions(enable_profiling=True)):
    chain.run("some initial text")
See the Quick Start guide for complete setup instructions.
Helicone#
This page covers how to use the Helicone ecosystem within LangChain.
What is Helicone?#
Helicone is an open source observability platform that proxies your OpenAI traffic and provides you key insights into your spend, latency and usage.
Quick start#
With your LangChain environment you can just add the following parameter.
export OPENAI_API_BASE="https://oai.hconeai.com/v1"
Now head over to helicone.ai to create your account, and add your OpenAI API key within our dashboard to view your logs.
How to enable Helicone caching#
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"
llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})
text = "What is a helicone?"
print(llm(text))
Helicone caching docs
How to use Helicone custom properties#
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"
llm = OpenAI(temperature=0.9, headers={
    "Helicone-Property-Session": "24",
    "Helicone-Property-Conversation": "support_issue_2",
    "Helicone-Property-App": "mobile",
})
text = "What is a helicone?"
print(llm(text))
Helicone property docs
Banana#
This page covers how to use the Banana ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Banana wrappers.
Installation and Setup#
Install with pip install banana-dev
Get a Banana API key and set it as an environment variable (BANANA_API_KEY)
Define your Banana Template#
If you want to use an available language model template you can find one here.
This template uses the Palmyra-Base model by Writer.
You can check out an example Banana repository here.
Build the Banana app#
Banana apps must include the “output” key in the returned JSON. The response structure is rigid.
# Return the results as a dictionary
result = {'output': result}
An example inference function would be:
def inference(model_inputs: dict) -> dict:
    global model
    global tokenizer

    # Parse out your arguments
    prompt = model_inputs.get('prompt', None)
    if prompt is None:
        return {'message': "No prompt provided"}

    # Run the model
    input_ids = tokenizer.encode(prompt, return_tensors='pt').cuda()
    output = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        num_return_sequences=1,
        temperature=0.9,
        early_stopping=True,
        no_repeat_ngram_size=3,
        num_beams=5,
        length_penalty=1.5,
        repetition_penalty=1.5,
        bad_words_ids=[[tokenizer.encode(' ', add_prefix_space=True)[0]]]
    )
    result = tokenizer.decode(output[0], skip_special_tokens=True)

    # Return the results as a dictionary
    result = {'output': result}
    return result
You can find a full example of a Banana app here.
Wrappers#
LLM#
There exists an Banana LLM wrapper, which you can access with
from langchain.llms import Banana
You need to provide a model key located in the dashboard:
llm = Banana(model_key="YOUR_MODEL_KEY")
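Once constructed, the wrapper follows the standard LLM interface, so a hedged usage sketch is simply (the prompt is illustrative):
response = llm("Tell me a one-sentence fact about bananas.")  # calls your deployed Banana model
print(response)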
Hugging Face#
This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain.
It is broken into two parts: installation and setup, and then references to specific Hugging Face wrappers.
Installation and Setup#
If you want to work with the Hugging Face Hub:
Install the Hub client library with pip install huggingface_hub
Create a Hugging Face account (it’s free!)
Create an access token and set it as an environment variable (HUGGINGFACEHUB_API_TOKEN)
If you want to work with the Hugging Face Python libraries:
Install pip install transformers for working with models and tokenizers
Install pip install datasets for working with datasets
Wrappers#
LLM#
There exist two Hugging Face LLM wrappers, one for a local pipeline and one for a model hosted on the Hugging Face Hub.
Note that these wrappers only work for models that support the following tasks: text2text-generation, text-generation
To use the local pipeline wrapper:
from langchain.llms import HuggingFacePipeline
To use the wrapper for a model hosted on the Hugging Face Hub:
from langchain.llms import HuggingFaceHub
For a more detailed walkthrough of the Hugging Face Hub wrapper, see this notebook
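A minimal sketch of the Hub wrapper (assuming HUGGINGFACEHUB_API_TOKEN is set as described above; the repo and model_kwargs are illustrative):
from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature": 0.5, "max_length": 64})
print(llm("Translate to German: I love programming."))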
Embeddings#
There exist two Hugging Face Embeddings wrappers, one for a local model and one for a model hosted on the Hugging Face Hub.
Note that these wrappers only work for sentence-transformers models.
To use the local pipeline wrapper:
from langchain.embeddings import HuggingFaceEmbeddings
To use the wrapper for a model hosted on the Hugging Face Hub:
from langchain.embeddings import HuggingFaceHubEmbeddings
For a more detailed walkthrough of this, see this notebook
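A minimal sketch of the local embeddings wrapper (the model name is illustrative and must be a sentence-transformers model):
from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
vector = embeddings.embed_query("A quick test sentence.")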
Tokenizer#
There are several places you can use tokenizers available through the transformers package.
By default, it is used to count tokens for all LLMs.
You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_huggingface_tokenizer(...)
For a more detailed walkthrough of this, see this notebook
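A minimal sketch of token-based splitting (the tokenizer choice and chunk sizes are illustrative):
from transformers import GPT2TokenizerFast
from langchain.text_splitter import CharacterTextSplitter

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# Chunk sizes are now measured in GPT-2 tokens rather than characters
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text("... some long document text ...")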
Datasets#
The Hugging Face Hub has lots of great datasets that can be used to evaluate your LLM chains.
For a detailed walkthrough of how to use them to do so, see this notebook
GPT4All#
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.
Installation and Setup#
Install the Python package with pip install pyllamacpp
Download a GPT4All model and place it in your desired directory
Usage#
GPT4All#
To use the GPT4All wrapper, you need to provide the path to the pre-trained model file and the model’s configuration.
from langchain.llms import GPT4All
# Instantiate the model. Callbacks support token-wise streaming
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Generate text
response = model("Once upon a time, ")
You can also customize the generation parameters, such as n_predict, temp, top_p, top_k, and others.
To stream the model’s predictions, add in a CallbackManager.
from langchain.llms import GPT4All
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
# There are many CallbackHandlers supported, such as
# from langchain.callbacks.streamlit import StreamlitCallbackHandler
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8, callback_manager=callback_manager, verbose=True)
# Generate text. Tokens are streamed through the callback manager.
model("Once upon a time, ")
Model File#
You can find links to model file downloads in the pyllamacpp repository.
For a more detailed walkthrough of this, see this notebook
OpenAI#
This page covers how to use the OpenAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenAI wrappers.
Installation and Setup#
Install the Python SDK with pip install openai
Get an OpenAI api key and set it as an environment variable (OPENAI_API_KEY)
If you want to use OpenAI’s tokenizer (only available for Python 3.9+), install it with pip install tiktoken
Wrappers#
LLM#
There exists an OpenAI LLM wrapper, which you can access with
from langchain.llms import OpenAI
If you are using a model hosted on Azure, you should use a different wrapper for that:
from langchain.llms import AzureOpenAI
For a more detailed walkthrough of the Azure wrapper, see this notebook
Embeddings#
There exists an OpenAI Embeddings wrapper, which you can access with
from langchain.embeddings import OpenAIEmbeddings
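A minimal usage sketch (the input strings are arbitrary examples):
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
# Embed a single query and a batch of documents
query_vector = embeddings.embed_query("What did the president say?")
doc_vectors = embeddings.embed_documents(["Document one.", "Document two."])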
For a more detailed walkthrough of this, see this notebook
Tokenizer#
There are several places you can use the tiktoken tokenizer. By default, it is used to count tokens
for OpenAI LLMs.
You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
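For example, a minimal sketch (the parameter values are illustrative):
from langchain.text_splitter import CharacterTextSplitter
# chunk_size and chunk_overlap are counted in tiktoken tokens, not characters
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text("some long document text ...")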
For a more detailed walkthrough of this, see this notebook
Moderation#
You can also access the OpenAI content moderation endpoint with
from langchain.chains import OpenAIModerationChain
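A minimal sketch, assuming OPENAI_API_KEY is set (the sample input is arbitrary):
from langchain.chains import OpenAIModerationChain
moderation_chain = OpenAIModerationChain()
# Text that passes moderation is returned unchanged
moderation_chain.run("This is fine.")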
For a more detailed walkthrough of this, see this notebook
CerebriumAI#
This page covers how to use the CerebriumAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific CerebriumAI wrappers.
Installation and Setup#
Install with pip install cerebrium
Get a CerebriumAI API key and set it as an environment variable (CEREBRIUMAI_API_KEY)
Wrappers#
LLM#
There exists a CerebriumAI LLM wrapper, which you can access with
from langchain.llms import CerebriumAI
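A minimal instantiation sketch; the endpoint URL is a hypothetical placeholder for a model you have deployed on Cerebrium:
from langchain.llms import CerebriumAI
# endpoint_url points at your own deployed Cerebrium model (placeholder shown)
llm = CerebriumAI(endpoint_url="https://run.cerebrium.ai/YOUR-ENDPOINT/predict")
llm("Tell me a joke")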
GooseAI#
This page covers how to use the GooseAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific GooseAI wrappers.
Installation and Setup#
Install the Python SDK with pip install openai
Get your GooseAI API key.
Set the environment variable (GOOSEAI_API_KEY).
import os
os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY"
Wrappers#
LLM#
There exists a GooseAI LLM wrapper, which you can access with:
from langchain.llms import GooseAI
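A minimal usage sketch, assuming GOOSEAI_API_KEY is set as shown above:
from langchain.llms import GooseAI
llm = GooseAI()  # uses the wrapper's default model
llm("Tell me a joke")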
SearxNG Search API#
This page covers how to use the SearxNG search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SearxNG API wrapper.
Installation and Setup#
While it is possible to use the wrapper with public searx instances, those instances
frequently do not permit API access (see the note on output format below) and limit
the frequency of requests, so a self-hosted instance is recommended instead.
Self Hosted Instance:#
See this page for installation instructions.
When you install SearxNG, the only active output format by default is the HTML format.
You need to activate the json format to use the API. This can be done by adding the following line to the settings.yml file:
search:
    formats:
        - html
        - json
You can make sure that the API is working by issuing a curl request to the API endpoint:
curl -kLX GET --data-urlencode q='langchain' -d format=json http://localhost:8888
This should return a JSON object with the results.
Wrappers#
Utility#
To use the wrapper, pass the host of the SearxNG instance to it in one of two ways:
1. the named parameter searx_host when creating the instance, or
2. the environment variable SEARXNG_HOST.
You can use the wrapper to get results from a SearxNG instance.
from langchain.utilities import SearxSearchWrapper
s = SearxSearchWrapper(searx_host="http://localhost:8888")
s.run("what is a large language model?")
Tool#
You can also load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["searx-search"],
                   searx_host="http://localhost:8888",
                   engines=["github"])
Note that we can optionally pass in custom engines, as in the engines=["github"] example above.
If you want to obtain results with metadata as json you can use:
tools = load_tools(["searx-search-results-json"],
                   searx_host="http://localhost:8888",
                   num_results=5)
For more information on tools, see this page
StochasticAI#
This page covers how to use the StochasticAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers.
Installation and Setup#
Install with pip install stochasticx
Get a StochasticAI API key and set it as an environment variable (STOCHASTICAI_API_KEY)
Wrappers#
LLM#
There exists a StochasticAI LLM wrapper, which you can access with
from langchain.llms import StochasticAI
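A minimal instantiation sketch; the api_url is a hypothetical placeholder for a model deployed through StochasticAI:
from langchain.llms import StochasticAI
# api_url points at your own deployed StochasticAI model (placeholder shown)
llm = StochasticAI(api_url="https://api.stochastic.ai/v1/YOUR-MODEL")
llm("Tell me a joke")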
ForefrontAI#
This page covers how to use the ForefrontAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific ForefrontAI wrappers.
Installation and Setup#
Get a ForefrontAI API key and set it as an environment variable (FOREFRONTAI_API_KEY)
Wrappers#
LLM#
There exists a ForefrontAI LLM wrapper, which you can access with
from langchain.llms import ForefrontAI
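A minimal instantiation sketch; the endpoint URL is a placeholder for your own ForefrontAI deployment:
from langchain.llms import ForefrontAI
llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")
llm("Tell me a joke")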
Modal#
This page covers how to use the Modal ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Modal wrappers.
Installation and Setup#
Install with pip install modal-client
Run modal token new
Define your Modal Functions and Webhooks#
Your webhook must accept a prompt, and its response must follow a rigid structure:
class Item(BaseModel):
    prompt: str

@stub.webhook(method="POST")
def my_webhook(item: Item):
    return {"prompt": my_function.call(item.prompt)}
An example with GPT2:
from pydantic import BaseModel
import modal
stub = modal.Stub("example-get-started")
volume = modal.SharedVolume().persist("gpt2_model_vol")
CACHE_PATH = "/root/model_cache"
@stub.function(
    gpu="any",
    image=modal.Image.debian_slim().pip_install(
        "tokenizers", "transformers", "torch", "accelerate"
    ),
    shared_volumes={CACHE_PATH: volume},
    retries=3,
)
def run_gpt2(text: str):
    from transformers import GPT2Tokenizer, GPT2LMHeadModel
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
    model = GPT2LMHeadModel.from_pretrained('gpt2')
    encoded_input = tokenizer(text, return_tensors='pt').input_ids
    output = model.generate(encoded_input, max_length=50, do_sample=True)
    return tokenizer.decode(output[0], skip_special_tokens=True)
class Item(BaseModel):
    prompt: str

@stub.webhook(method="POST")
def get_text(item: Item):
    return {"prompt": run_gpt2.call(item.prompt)}
Wrappers#
LLM#
There exists a Modal LLM wrapper, which you can access with
from langchain.llms import Modal
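A minimal usage sketch; the endpoint URL is a hypothetical placeholder for the webhook deployed above:
from langchain.llms import Modal
llm = Modal(endpoint_url="https://YOUR-WEBHOOK.modal.run")
llm("Tell me a joke")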
Unstructured#
This page covers how to use the unstructured
ecosystem within LangChain. The unstructured package from
Unstructured.IO extracts clean text from raw source documents like
PDFs and Word documents.
This page is broken into two parts: installation and setup, and then references to specific
unstructured wrappers.
Installation and Setup#
Install the Python SDK with pip install "unstructured[local-inference]"
Install the following system dependencies if they are not already available on your system.
Depending on what document types you’re parsing, you may not need all of these.
libmagic-dev (filetype detection)
poppler-utils (images and PDFs)
tesseract-ocr (images and PDFs)
libreoffice (MS Office docs)
pandoc (EPUBs)
If you are parsing PDFs using the "hi_res" strategy, run the following to install the detectron2 model, which
unstructured uses for layout detection:
pip install "detectron2@git+https://github.com/facebookresearch/detectron2.git@e2ce8dc#egg=detectron2"
If detectron2 is not installed, unstructured will fall back to processing PDFs
using the "fast" strategy, which uses pdfminer directly and doesn’t require
detectron2.
Wrappers#
Data Loaders#
The primary unstructured wrappers within langchain are data loaders. The following
shows how to use the most basic unstructured data loader. There are other file-specific
data loaders available in the langchain.document_loaders module.
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("state_of_the_union.txt")
loader.load()
If you instantiate the loader with UnstructuredFileLoader(mode="elements"), the loader
will track additional metadata like the page number and text type (e.g. title, narrative text)
when that information is available.
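For example, a minimal sketch of elements mode (reusing the file from the example above):
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("state_of_the_union.txt", mode="elements")
docs = loader.load()
# Each element carries metadata such as page number and category, when available
docs[0].metadata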
Replicate#
This page covers how to run models on Replicate within LangChain.
Installation and Setup#
Create a Replicate account. Get your API key and set it as an environment variable (REPLICATE_API_TOKEN)
Install the Replicate python client with pip install replicate
Calling a model#
Find a model on the Replicate explore page, and then paste in the model name and version in this format: owner-name/model-name:version
For example, for this flan-t5 model, click on the API tab. The model name/version would be: daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8
Only the model param is required, but any other model parameters can also be passed in with the format input={model_param: value, ...}
For example, if we were running stable diffusion and wanted to change the image dimensions:
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
Note that only the first output of a model will be returned.
From here, we can initialize our model:
llm = Replicate(model="daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8")
And run it:
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
We can call any Replicate model (not just LLMs) using this syntax. For example, we can call Stable Diffusion:
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
                       input={'image_dimensions': '512x512'})
image_output = text2image("A cat riding a motorcycle by Picasso")
ClearML Integration#
In order to properly keep track of your langchain experiments and their results, you can enable the ClearML integration. ClearML is an experiment manager that neatly tracks and organizes all your experiment runs.
Getting API Credentials#
We’ll be using quite a few APIs in this notebook; here is a list and where to get them:
ClearML: https://app.clear.ml/settings/workspace-configuration
OpenAI: https://platform.openai.com/account/api-keys
SerpAPI (google search): https://serpapi.com/dashboard
import os
os.environ["CLEARML_API_ACCESS_KEY"] = ""
os.environ["CLEARML_API_SECRET_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""
os.environ["SERPAPI_API_KEY"] = ""
Setting Up#
!pip install clearml
!pip install pandas
!pip install textstat
!pip install spacy
!python -m spacy download en_core_web_sm
from datetime import datetime
from langchain.callbacks import ClearMLCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI
# Setup and use the ClearML Callback
clearml_callback = ClearMLCallbackHandler(
    task_type="inference",
    project_name="langchain_callback_demo",
    task_name="llm",
    tags=["test"],
    # Change the following parameters based on the amount of detail you want tracked
    visualize=True,
    complexity_metrics=True,
    stream_logs=True
)
manager = CallbackManager([StdOutCallbackHandler(), clearml_callback])
# Get the OpenAI model ready to go
llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)
The clearml callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/allegroai/clearml/issues with the tag `langchain`.
Scenario 1: Just an LLM#
First, let’s just run a single LLM a few times and capture the resulting prompt-answer conversation in ClearML
# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
# After every generation run, use flush to make sure all the metrics
# prompts and other output are properly saved separately
clearml_callback.flush_tracker(langchain_asset=llm, name="simple_sequential")
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}
{'action_records': action name step starts ends errors text_ctr chain_starts \