0 on_llm_start OpenAI 1 1 0 0 0 0
1 on_llm_start OpenAI 1 1 0 0 0 0
2 on_llm_start OpenAI 1 1 0 0 0 0
3 on_llm_start OpenAI 1 1 0 0 0 0
4 on_llm_start OpenAI 1 1 0 0 0 0
5 on_llm_start OpenAI 1 1 0 0 0 0
6 on_llm_end NaN 2 1 1 0 0 0
7 on_llm_end NaN 2 1 1 0 0 0
8 on_llm_end NaN 2 1 1 0 0 0
9 on_llm_end NaN 2 1 1 0 0 0
10 on_llm_end NaN 2 1 1 0 0 0
11 on_llm_end NaN 2 1 1 0 0 0
12 on_llm_start OpenAI 3 2 1 0 0 0
13 on_llm_start OpenAI 3 2 1 0 0 0
14 on_llm_start OpenAI 3 2 1 0 0 0
15 on_llm_start OpenAI 3 2 1 0 0 0
16 on_llm_start OpenAI 3 2 1 0 0 0
17 on_llm_start OpenAI 3 2 1 0 0 0
18 on_llm_end NaN 4 2 2 0 0 0
19 on_llm_end NaN 4 2 2 0 0 0
20 on_llm_end NaN 4 2 2 0 0 0
21 on_llm_end NaN 4 2 2 0 0 0
22 on_llm_end NaN 4 2 2 0 0 0
23 on_llm_end NaN 4 2 2 0 0 0
chain_ends llm_starts ... difficult_words linsear_write_formula \
0 0 1 ... NaN NaN
1 0 1 ... NaN NaN
2 0 1 ... NaN NaN
3 0 1 ... NaN NaN
4 0 1 ... NaN NaN
5 0 1 ... NaN NaN
6 0 1 ... 0.0 5.5
7 0 1 ... 2.0 6.5
8 0 1 ... 0.0 5.5
9 0 1 ... 2.0 6.5
10 0 1 ... 0.0 5.5
11 0 1 ... 2.0 6.5
12 0 2 ... NaN NaN
13 0 2 ... NaN NaN
14 0 2 ... NaN NaN
15 0 2 ... NaN NaN
16 0 2 ... NaN NaN
17 0 2 ... NaN NaN
18 0 2 ... 0.0 5.5
19 0 2 ... 2.0 6.5
20 0 2 ... 0.0 5.5
21 0 2 ... 2.0 6.5
22 0 2 ... 0.0 5.5
23 0 2 ... 2.0 6.5
gunning_fog text_standard fernandez_huerta szigriszt_pazos \
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
6 5.20 5th and 6th grade 133.58 131.54
7 8.28 6th and 7th grade 115.58 112.37
8 5.20 5th and 6th grade 133.58 131.54
9 8.28 6th and 7th grade 115.58 112.37
10 5.20 5th and 6th grade 133.58 131.54
11 8.28 6th and 7th grade 115.58 112.37
12 NaN NaN NaN NaN
13 NaN NaN NaN NaN
14 NaN NaN NaN NaN
15 NaN NaN NaN NaN
16 NaN NaN NaN NaN
17 NaN NaN NaN NaN
18 5.20 5th and 6th grade 133.58 131.54
19 8.28 6th and 7th grade 115.58 112.37
20 5.20 5th and 6th grade 133.58 131.54
21 8.28 6th and 7th grade 115.58 112.37
22 5.20 5th and 6th grade 133.58 131.54
23 8.28 6th and 7th grade 115.58 112.37
gutierrez_polini crawford gulpease_index osman
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
6 62.30 -0.2 79.8 116.91
7 54.83 1.4 72.1 100.17
8 62.30 -0.2 79.8 116.91
9 54.83 1.4 72.1 100.17
10 62.30 -0.2 79.8 116.91
11 54.83 1.4 72.1 100.17
12 NaN NaN NaN NaN
13 NaN NaN NaN NaN
14 NaN NaN NaN NaN
15 NaN NaN NaN NaN
16 NaN NaN NaN NaN
17 NaN NaN NaN NaN
18 62.30 -0.2 79.8 116.91
19 54.83 1.4 72.1 100.17
20 62.30 -0.2 79.8 116.91
21 54.83 1.4 72.1 100.17
22 62.30 -0.2 79.8 116.91
23 54.83 1.4 72.1 100.17
[24 rows x 39 columns], 'session_analysis': prompt_step prompts name output_step \
0 1 Tell me a joke OpenAI 2
1 1 Tell me a poem OpenAI 2
2 1 Tell me a joke OpenAI 2
3 1 Tell me a poem OpenAI 2
4 1 Tell me a joke OpenAI 2
5 1 Tell me a poem OpenAI 2
6 3 Tell me a joke OpenAI 4
7 3 Tell me a poem OpenAI 4
8 3 Tell me a joke OpenAI 4
9 3 Tell me a poem OpenAI 4
10 3 Tell me a joke OpenAI 4
11 3 Tell me a poem OpenAI 4
output \
0 \n\nQ: What did the fish say when it hit the w...
1 \n\nRoses are red,\nViolets are blue,\nSugar i...
2 \n\nQ: What did the fish say when it hit the w...
3 \n\nRoses are red,\nViolets are blue,\nSugar i...
4 \n\nQ: What did the fish say when it hit the w...
5 \n\nRoses are red,\nViolets are blue,\nSugar i...
6 \n\nQ: What did the fish say when it hit the w...
7 \n\nRoses are red,\nViolets are blue,\nSugar i...
8 \n\nQ: What did the fish say when it hit the w...
9 \n\nRoses are red,\nViolets are blue,\nSugar i...
10 \n\nQ: What did the fish say when it hit the w...
11 \n\nRoses are red,\nViolets are blue,\nSugar i...
token_usage_total_tokens token_usage_prompt_tokens \
0 162 24
1 162 24
2 162 24
3 162 24
4 162 24
5 162 24
6 162 24
7 162 24
8 162 24
9 162 24
10 162 24
11 162 24
token_usage_completion_tokens flesch_reading_ease flesch_kincaid_grade \
0 138 109.04 1.3
1 138 83.66 4.8
2 138 109.04 1.3
3 138 83.66 4.8
4 138 109.04 1.3
5 138 83.66 4.8
6 138 109.04 1.3
7 138 83.66 4.8
8 138 109.04 1.3
9 138 83.66 4.8
10 138 109.04 1.3
11 138 83.66 4.8
... difficult_words linsear_write_formula gunning_fog \
0 ... 0 5.5 5.20
1 ... 2 6.5 8.28
2 ... 0 5.5 5.20
3 ... 2 6.5 8.28
4 ... 0 5.5 5.20
5 ... 2 6.5 8.28
6 ... 0 5.5 5.20
7 ... 2 6.5 8.28
8 ... 0 5.5 5.20
9 ... 2 6.5 8.28
10 ... 0 5.5 5.20
11 ... 2 6.5 8.28
text_standard fernandez_huerta szigriszt_pazos gutierrez_polini \
0 5th and 6th grade 133.58 131.54 62.30
1 6th and 7th grade 115.58 112.37 54.83
2 5th and 6th grade 133.58 131.54 62.30
3 6th and 7th grade 115.58 112.37 54.83
4 5th and 6th grade 133.58 131.54 62.30
5 6th and 7th grade 115.58 112.37 54.83
6 5th and 6th grade 133.58 131.54 62.30
7 6th and 7th grade 115.58 112.37 54.83
8 5th and 6th grade 133.58 131.54 62.30
9 6th and 7th grade 115.58 112.37 54.83
10 5th and 6th grade 133.58 131.54 62.30
11 6th and 7th grade 115.58 112.37 54.83
crawford gulpease_index osman
0 -0.2 79.8 116.91
1 1.4 72.1 100.17
2 -0.2 79.8 116.91
3 1.4 72.1 100.17
4 -0.2 79.8 116.91
5 1.4 72.1 100.17
6 -0.2 79.8 116.91
7 1.4 72.1 100.17
8 -0.2 79.8 116.91
9 1.4 72.1 100.17
10 -0.2 79.8 116.91
11 1.4 72.1 100.17
[12 rows x 24 columns]}
2023-03-29 14:00:25,948 - clearml.Task - INFO - Completed model upload to https://files.clear.ml/langchain_callback_demo/llm.988bd727b0e94a29a3ac0ee526813545/models/simple_sequential
At this point you can already go to https://app.clear.ml and take a look at the resulting ClearML Task that was created.
Among other things, you should see that this notebook is saved along with any git information. The model JSON that contains the used parameters is saved as an artifact; there are also console logs, and under the plots section you'll find tables that represent the flow of the chain.
Finally, if you enabled visualizations, these are stored as HTML files under debug samples.
Scenario 2: Creating an agent with tools#
To show a more advanced workflow, let's create an agent with access to tools. The way ClearML tracks the results is no different, though the table will look slightly different, since other types of actions are taken compared to the earlier, simpler example.
You can now also see the use of the finish=True keyword, which will fully close the ClearML Task, instead of just resetting the parameters and prompts for a new conversation.
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
# SCENARIO 2 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callback_manager=manager)
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
callback_manager=manager,
verbose=True,
)
agent.run(
"Who is the wife of the person who sang summer of 69?"
)
clearml_callback.flush_tracker(langchain_asset=agent, name="Agent with Tools", finish=True)
> Entering new AgentExecutor chain...
{'action': 'on_chain_start', 'name': 'AgentExecutor', 'step': 1, 'starts': 1, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 0, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'input': 'Who is the wife of the person who sang summer of 69?'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 2, 'starts': 2, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is the wife of the person who sang summer of 69?\nThought:'}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 189, 'token_usage_completion_tokens': 34, 'token_usage_total_tokens': 223, 'model_name': 'text-davinci-003', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': ' I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 91.61, 'flesch_kincaid_grade': 3.8, 'smog_index': 0.0, 'coleman_liau_index': 3.41, 'automated_readability_index': 3.5, 'dale_chall_readability_score': 6.06, 'difficult_words': 2, 'linsear_write_formula': 5.75, 'gunning_fog': 5.4, 'text_standard': '3rd and 4th grade', 'fernandez_huerta': 121.07, 'szigriszt_pazos': 119.5, 'gutierrez_polini': 54.91, 'crawford': 0.9, 'gulpease_index': 72.7, 'osman': 92.16}
I need to find out who sang summer of 69 and then find out who their wife is.
Action: Search
Action Input: "Who sang summer of 69"{'action': 'on_agent_action', 'tool': 'Search', 'tool_input': 'Who sang summer of 69', 'log': ' I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"', 'step': 4, 'starts': 3, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 1, 'tool_ends': 0, 'agent_ends': 0}
{'action': 'on_tool_start', 'input_str': 'Who sang summer of 69', 'name': 'Search', 'description': 'A search engine. Useful for when you need to answer questions about current events. Input should be a search query.', 'step': 5, 'starts': 4, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 0, 'agent_ends': 0}
Observation: Bryan Adams - Summer Of 69 (Official Music Video).
Thought:{'action': 'on_tool_end', 'output': 'Bryan Adams - Summer Of 69 (Official Music Video).', 'step': 6, 'starts': 4, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 7, 'starts': 5, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is the wife of the person who sang summer of 69?\nThought: I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"\nObservation: Bryan Adams - Summer Of 69 (Official Music Video).\nThought:'}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 242, 'token_usage_completion_tokens': 28, 'token_usage_total_tokens': 270, 'model_name': 'text-davinci-003', 'step': 8, 'starts': 5, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0, 'text': ' I need to find out who Bryan Adams is married to.\nAction: Search\nAction Input: "Who is Bryan Adams married to"', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 94.66, 'flesch_kincaid_grade': 2.7, 'smog_index': 0.0, 'coleman_liau_index': 4.73, 'automated_readability_index': 4.0, 'dale_chall_readability_score': 7.16, 'difficult_words': 2, 'linsear_write_formula': 4.25, 'gunning_fog': 4.2, 'text_standard': '4th and 5th grade', 'fernandez_huerta': 124.13, 'szigriszt_pazos': 119.2, 'gutierrez_polini': 52.26, 'crawford': 0.7, 'gulpease_index': 74.7, 'osman': 84.2}
I need to find out who Bryan Adams is married to.
Action: Search
Action Input: "Who is Bryan Adams married to"{'action': 'on_agent_action', 'tool': 'Search', 'tool_input': 'Who is Bryan Adams married to', 'log': ' I need to find out who Bryan Adams is married to.\nAction: Search\nAction Input: "Who is Bryan Adams married to"', 'step': 9, 'starts': 6, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 3, 'tool_ends': 1, 'agent_ends': 0}
{'action': 'on_tool_start', 'input_str': 'Who is Bryan Adams married to', 'name': 'Search', 'description': 'A search engine. Useful for when you need to answer questions about current events. Input should be a search query.', 'step': 10, 'starts': 7, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 1, 'agent_ends': 0}
Observation: Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...
Thought:{'action': 'on_tool_end', 'output': 'Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...', 'step': 11, 'starts': 7, 'ends': 4, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 12, 'starts': 8, 'ends': 4, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is the wife of the person who sang summer of 69?\nThought: I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"\nObservation: Bryan Adams - Summer Of 69 (Official Music Video).\nThought: I need to find out who Bryan Adams is married to.\nAction: Search\nAction Input: "Who is Bryan Adams married to"\nObservation: Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...\nThought:'}
c1079d245415-31 | {'action': 'on_llm_end', 'token_usage_prompt_tokens': 314, 'token_usage_completion_tokens': 18, 'token_usage_total_tokens': 332, 'model_name': 'text-davinci-003', 'step': 13, 'starts': 8, 'ends': 5, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0, 'text': ' I now know the final answer.\nFinal Answer: Bryan Adams has never been married.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 81.29, 'flesch_kincaid_grade': 3.7, 'smog_index': 0.0, 'coleman_liau_index': 5.75, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 7.37, 'difficult_words': 1, 'linsear_write_formula': 2.5, 'gunning_fog': 2.8, 'text_standard': '3rd and 4th grade', 'fernandez_huerta': 115.7, 'szigriszt_pazos': 110.84, 'gutierrez_polini': 49.79, 'crawford': 0.7, 'gulpease_index': 85.4, 'osman': 83.14}
I now know the final answer.
Final Answer: Bryan Adams has never been married.
{'action': 'on_agent_finish', 'output': 'Bryan Adams has never been married.', 'log': ' I now know the final answer.\nFinal Answer: Bryan Adams has never been married.', 'step': 14, 'starts': 8, 'ends': 6, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 1}
> Finished chain.
{'action': 'on_chain_end', 'outputs': 'Bryan Adams has never been married.', 'step': 15, 'starts': 8, 'ends': 7, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 1, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 1}
{'action_records': action name step starts ends errors text_ctr \
0 on_llm_start OpenAI 1 1 0 0 0
1 on_llm_start OpenAI 1 1 0 0 0
2 on_llm_start OpenAI 1 1 0 0 0
3 on_llm_start OpenAI 1 1 0 0 0
4 on_llm_start OpenAI 1 1 0 0 0
.. ... ... ... ... ... ... ...
66 on_tool_end NaN 11 7 4 0 0
67 on_llm_start OpenAI 12 8 4 0 0
68 on_llm_end NaN 13 8 5 0 0
69 on_agent_finish NaN 14 8 6 0 0
70 on_chain_end NaN 15 8 7 0 0
chain_starts chain_ends llm_starts ... gulpease_index osman input \
0 0 0 1 ... NaN NaN NaN
1 0 0 1 ... NaN NaN NaN
2 0 0 1 ... NaN NaN NaN
3 0 0 1 ... NaN NaN NaN
4 0 0 1 ... NaN NaN NaN
.. ... ... ... ... ... ... ...
66 1 0 2 ... NaN NaN NaN
67 1 0 3 ... NaN NaN NaN
68 1 0 3 ... 85.4 83.14 NaN
69 1 0 3 ... NaN NaN NaN
70 1 1 3 ... NaN NaN NaN
tool tool_input log \
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
.. ... ... ...
66 NaN NaN NaN
67 NaN NaN NaN
68 NaN NaN NaN
69 NaN NaN I now know the final answer.\nFinal Answer: B...
70 NaN NaN NaN
input_str description output \
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
.. ... ... ...
66 NaN NaN Bryan Adams has never married. In the 1990s, h...
67 NaN NaN NaN
68 NaN NaN NaN
69 NaN NaN Bryan Adams has never been married.
70 NaN NaN NaN
outputs
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
.. ...
66 NaN
67 NaN
68 NaN
69 NaN
70 Bryan Adams has never been married.
[71 rows x 47 columns], 'session_analysis': prompt_step prompts name \
0 2 Answer the following questions as best you can... OpenAI
1 7 Answer the following questions as best you can... OpenAI
2 12 Answer the following questions as best you can... OpenAI
output_step output \
0 3 I need to find out who sang summer of 69 and ...
1 8 I need to find out who Bryan Adams is married...
2 13 I now know the final answer.\nFinal Answer: B...
token_usage_total_tokens token_usage_prompt_tokens \
0 223 189
1 270 242
2 332 314
token_usage_completion_tokens flesch_reading_ease flesch_kincaid_grade \
0 34 91.61 3.8
1 28 94.66 2.7
2 18 81.29 3.7
... difficult_words linsear_write_formula gunning_fog \
0 ... 2 5.75 5.4
1 ... 2 4.25 4.2
2 ... 1 2.50 2.8
text_standard fernandez_huerta szigriszt_pazos gutierrez_polini \
0 3rd and 4th grade 121.07 119.50 54.91
1 4th and 5th grade 124.13 119.20 52.26
2 3rd and 4th grade 115.70 110.84 49.79
crawford gulpease_index osman
0 0.9 72.7 92.16
1 0.7 74.7 84.20
2 0.7 85.4 83.14
[3 rows x 24 columns]}
Could not update last created model in Task 988bd727b0e94a29a3ac0ee526813545, Task status 'completed' cannot be updated
Tips and Next Steps#
Make sure you always use a unique name argument for the clearml_callback.flush_tracker function. If not, the model parameters used for a run will overwrite the previous run!
If you close the ClearML callback using clearml_callback.flush_tracker(..., finish=True), the callback can no longer be used. Make a new one if you want to keep logging.
Check out the rest of the open-source ClearML ecosystem; it includes a data version manager, a remote execution agent, automated pipelines, and much more!
SerpAPI#
This page covers how to use the SerpAPI search APIs within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SerpAPI wrapper.
Installation and Setup#
Install requirements with pip install google-search-results
Get a SerpAPI api key and set it as an environment variable (SERPAPI_API_KEY)
Wrappers#
Utility#
There exists a SerpAPI utility which wraps this API. To import this utility:
from langchain.utilities import SerpAPIWrapper
For a more detailed walkthrough of this wrapper, see this notebook.
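As a minimal sketch of how the utility can be used (assuming SERPAPI_API_KEY is already set in your environment; the query is just an example):
from langchain.utilities import SerpAPIWrapper
search = SerpAPIWrapper()
# run() sends the query to SerpAPI and returns the top answer as a string.
print(search.run("What is the capital of France?"))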
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["serpapi"])
For more information on this, see this page.
Google Search Wrapper#
This page covers how to use the Google Search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific Google Search wrapper.
Installation and Setup#
Install requirements with pip install google-api-python-client
Set up a Custom Search Engine, following these instructions
Get an API Key and Custom Search Engine ID from the previous step, and set them as environment variables GOOGLE_API_KEY and GOOGLE_CSE_ID respectively
Wrappers#
Utility#
There exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities import GoogleSearchAPIWrapper
For a more detailed walkthrough of this wrapper, see this notebook.
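As a minimal sketch (assuming GOOGLE_API_KEY and GOOGLE_CSE_ID are set as described above; the query is illustrative):
from langchain.utilities import GoogleSearchAPIWrapper
search = GoogleSearchAPIWrapper()
# run() returns a text summary of the top results for the query.
print(search.run("LangChain documentation"))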
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["google-search"])
For more information on this, see this page.
DeepInfra#
This page covers how to use the DeepInfra ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.
Installation and Setup#
Get your DeepInfra api key from this link here.
Set it as an environment variable (DEEPINFRA_API_TOKEN).
Wrappers#
LLM#
There exists a DeepInfra LLM wrapper, which you can access with
from langchain.llms import DeepInfra
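A minimal sketch of using the wrapper follows; the model_id below is an assumption for illustration, so substitute any text model hosted on DeepInfra (assumes DEEPINFRA_API_TOKEN is set):
from langchain.llms import DeepInfra
# model_id is a placeholder; any DeepInfra-hosted text model works here.
llm = DeepInfra(model_id="google/flan-t5-xl")
print(llm("What is the capital of France?"))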
Aim#
Aim makes it super easy to visualize and debug LangChain executions. Aim tracks inputs and outputs of LLMs and tools, as well as actions of agents.
With Aim, you can easily debug and examine an individual execution:
Additionally, you have the option to compare multiple executions side by side:
Aim is fully open source, learn more about Aim on GitHub.
Letβs move forward and see how to enable and configure Aim callback.
Tracking LangChain Executions with Aim
In this notebook we will explore three usage scenarios. To start off, we will install the necessary packages and import certain modules. Subsequently, we will configure two environment variables that can be established either within the Python script or through the terminal.
!pip install aim
!pip install langchain
!pip install openai
!pip install google-search-results
import os
from datetime import datetime
from langchain.llms import OpenAI
from langchain.callbacks.base import CallbackManager
from langchain.callbacks import AimCallbackHandler, StdOutCallbackHandler
Our examples use a GPT model as the LLM, and OpenAI offers an API for this purpose. You can obtain the key from the following link: https://platform.openai.com/account/api-keys.
We will use the SerpApi to retrieve search results from Google. To acquire the SerpApi key, please go to https://serpapi.com/manage-api-key.
os.environ["OPENAI_API_KEY"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."
The event methods of AimCallbackHandler accept the LangChain module or agent as input and log at least the prompts and generated results, as well as the serialized version of the LangChain module, to the designated Aim run.
session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")
aim_callback = AimCallbackHandler(
repo=".",
experiment_name="scenario 1: OpenAI LLM",
)
manager = CallbackManager([StdOutCallbackHandler(), aim_callback])
llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)
The flush_tracker function is used to record LangChain assets on Aim. By default, the session is reset rather than being terminated outright.
Scenario 1
In the first scenario, we will use the OpenAI LLM.
# scenario 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
aim_callback.flush_tracker(
langchain_asset=llm,
experiment_name="scenario 2: Chain with multiple SubChains on multiple generations",
)
Scenario 2
Scenario two involves chaining with multiple SubChains across multiple generations.
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# scenario 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)
test_prompts = [
{"title": "documentary about good video games that push the boundary of game design"},
{"title": "the phenomenon behind the remarkable speed of cheetahs"},
{"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts)
aim_callback.flush_tracker(
langchain_asset=synopsis_chain, experiment_name="scenario 3: Agent with Tools"
)
Scenario 3
The third scenario involves an agent with tools.
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
# scenario 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callback_manager=manager)
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
callback_manager=manager,
verbose=True,
)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
aim_callback.flush_tracker(langchain_asset=agent, reset=False, finish=True)
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: Leonardo DiCaprio seemed to prove a long-held theory about his love life right after splitting from girlfriend Camila Morrone just months ...
Thought: I need to find out Camila Morrone's age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought: I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Observation: Answer: 3.991298452658078
Thought: I now know the final answer
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.
> Finished chain.
Apify#
This page covers how to use Apify within LangChain.
Overview#
Apify is a cloud platform for web scraping and data extraction,
which provides an ecosystem of more than a thousand
ready-made apps called Actors for various scraping, crawling, and extraction use cases.
This integration enables you to run Actors on the Apify platform and load their results into LangChain to feed your vector
indexes with documents and data from the web, e.g. to generate answers from websites with documentation,
blogs, or knowledge bases.
Installation and Setup#
Install the Apify API client for Python with pip install apify-client
Get your Apify API token and either set it as
an environment variable (APIFY_API_TOKEN) or pass it to the ApifyWrapper as apify_api_token in the constructor.
Wrappers#
Utility#
You can use the ApifyWrapper to run Actors on the Apify platform.
from langchain.utilities import ApifyWrapper
For a more detailed walkthrough of this wrapper, see this notebook.
Loader#
You can also use our ApifyDatasetLoader to get data from an Apify dataset.
from langchain.document_loaders import ApifyDatasetLoader
For a more detailed walkthrough of this loader, see this notebook.
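As a hedged sketch of loading Documents from an existing Apify dataset (the dataset ID and the item fields in the mapping function are placeholders for your own data):
from langchain.document_loaders import ApifyDatasetLoader
from langchain.schema import Document
# dataset_mapping_function turns each dataset item (a dict) into a Document.
loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",  # placeholder
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"], metadata={"source": item["url"]}
    ),
)
documents = loader.load()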
Cohere#
This page covers how to use the Cohere ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Cohere wrappers.
Installation and Setup#
Install the Python SDK with pip install cohere
Get a Cohere api key and set it as an environment variable (COHERE_API_KEY)
Wrappers#
LLM#
There exists a Cohere LLM wrapper, which you can access with
from langchain.llms import Cohere
Embeddings#
There exists a Cohere Embeddings wrapper, which you can access with
from langchain.embeddings import CohereEmbeddings
For a more detailed walkthrough of this, see this notebook.
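As a minimal sketch of both wrappers (assuming COHERE_API_KEY is set in the environment):
from langchain.llms import Cohere
from langchain.embeddings import CohereEmbeddings
llm = Cohere()
print(llm("Say hello in French"))
embeddings = CohereEmbeddings()
# embed_query returns the embedding vector (a list of floats) for the text.
vector = embeddings.embed_query("Hello world")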
Google Serper Wrapper#
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
It is broken into two parts: setup, and then references to the specific Google Serper wrapper.
Setup#
Go to serper.dev to sign up for a free account
Get the api key and set it as an environment variable (SERPER_API_KEY)
Wrappers#
Utility#
There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities import GoogleSerperAPIWrapper
You can use it as part of a Self Ask chain:
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
import os
os.environ["SERPER_API_KEY"] = ""
os.environ['OPENAI_API_KEY'] = ""
llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
Tool(
name="Intermediate Answer",
func=search.run,
description="useful for when you need to ask with search"
)
]
self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
Output#
Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain
> Finished chain.
'El Palmar, Spain'
For a more detailed walkthrough of this wrapper, see this notebook.
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["google-serper"])
For more information on this, see this page.
AtlasDB#
This page covers how to use Nomic's Atlas ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Atlas wrappers.
Installation and Setup#
Install the Python package with pip install nomic
Nomic is also included in LangChain's poetry extras: poetry install -E all
Wrappers#
VectorStore#
There exists a wrapper around the Atlas neural database, allowing you to use it as a vectorstore.
This vectorstore also gives you full access to the underlying AtlasProject object, which will allow you to use the full range of Atlas map interactions, such as bulk tagging and automatic topic modeling.
Please see the Atlas docs for more detailed information.
To import this vectorstore:
from langchain.vectorstores import AtlasDB
For a more detailed walkthrough of the AtlasDB wrapper, see this notebook.
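A hedged sketch of creating a store from raw texts; from_texts follows the common LangChain vectorstore pattern, and the project name and API key below are placeholders:
from langchain.vectorstores import AtlasDB
# name and api_key are placeholders for your Nomic Atlas project and key.
db = AtlasDB.from_texts(
    texts=["harrison worked at kensho", "langchain is a framework"],
    name="my_project",
    api_key="...",
)
docs = db.similarity_search("Where did harrison work?")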
AI21 Labs#
This page covers how to use the AI21 ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific AI21 wrappers.
Installation and Setup#
Get an AI21 api key and set it as an environment variable (AI21_API_KEY)
Wrappers#
LLM#
There exists an AI21 LLM wrapper, which you can access with
from langchain.llms import AI21
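A minimal sketch of calling the wrapper (assuming AI21_API_KEY is set in the environment; the prompt is just an example):
from langchain.llms import AI21
llm = AI21()
print(llm("Tell me a joke"))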
PGVector#
This page covers how to use the Postgres PGVector ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific PGVector wrappers.
Installation#
Install the Python package with pip install pgvector
Setup#
The first step is to create a database with the pgvector extension installed.
Follow the steps at PGVector Installation Steps to install the database and the extension. The docker image is the easiest way to get started.
Wrappers#
VectorStore#
There exists a wrapper around Postgres vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores.pgvector import PGVector
Usage#
For a more detailed walkthrough of the PGVector wrapper, see this notebook.
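A hedged sketch of the usual flow; the connection string and credentials are placeholders, and OpenAIEmbeddings is just one possible embedding model:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector
# Placeholder connection string for a local Postgres with pgvector enabled.
CONNECTION_STRING = "postgresql+psycopg2://postgres:password@localhost:5432/vectordb"
db = PGVector.from_texts(
    texts=["harrison worked at kensho"],
    embedding=OpenAIEmbeddings(),
    collection_name="my_collection",
    connection_string=CONNECTION_STRING,
)
docs = db.similarity_search("Where did harrison work?")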
Petals#
This page covers how to use the Petals ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Petals wrappers.
Installation and Setup#
Install with pip install petals
Get a Hugging Face api key and set it as an environment variable (HUGGINGFACE_API_KEY)
Wrappers#
LLM#
There exists a Petals LLM wrapper, which you can access with
from langchain.llms import Petals
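A minimal sketch (assuming HUGGINGFACE_API_KEY is set; bigscience/bloom-petals is the model commonly used in Petals examples, and generation runs on the distributed swarm):
from langchain.llms import Petals
llm = Petals(model_name="bigscience/bloom-petals")
print(llm("What is the capital of France?"))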
OpenSearch#
This page covers how to use the OpenSearch ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenSearch wrappers.
Installation and Setup#
Install the Python package with pip install opensearch-py
Wrappers#
VectorStore#
There exists a wrapper around OpenSearch vector databases, allowing you to use it as a vectorstore
for semantic search using approximate vector search powered by lucene, nmslib and faiss engines
or using painless scripting and script scoring functions for bruteforce vector search.
To import this vectorstore:
from langchain.vectorstores import OpenSearchVectorSearch
For a more detailed walkthrough of the OpenSearch wrapper, see this notebook.
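A hedged sketch; the opensearch_url points at a local instance and OpenAIEmbeddings is just one embedding choice:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch
# Assumes an OpenSearch instance is reachable at this URL.
docsearch = OpenSearchVectorSearch.from_texts(
    texts=["harrison worked at kensho"],
    embedding=OpenAIEmbeddings(),
    opensearch_url="http://localhost:9200",
)
docs = docsearch.similarity_search("Where did harrison work?")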
Runhouse#
This page covers how to use the Runhouse ecosystem within LangChain.
It is broken into three parts: installation and setup, LLMs, and Embeddings.
Installation and Setup#
Install the Python SDK with pip install runhouse
If you'd like to use an on-demand cluster, check your cloud credentials with sky check
Self-hosted LLMs#
For a basic self-hosted LLM, you can use the SelfHostedHuggingFaceLLM class. For more
custom LLMs, you can use the SelfHostedPipeline parent class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
For a more detailed walkthrough of the Self-hosted LLMs, see this notebook.
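A hedged sketch of serving a small model on an on-demand GPU cluster; the cluster name, instance type, and model_id are assumptions to adapt to your own cloud setup:
import runhouse as rh
from langchain.llms import SelfHostedHuggingFaceLLM
# On-demand cluster; the instance type is a placeholder for your cloud/GPU.
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
llm = SelfHostedHuggingFaceLLM(
    model_id="gpt2",
    hardware=gpu,
    model_reqs=["transformers", "torch"],
)
print(llm("What is the capital of France?"))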
Self-hosted Embeddings#
There are several ways to use self-hosted embeddings with LangChain via Runhouse.
For a basic self-hosted embedding from a Hugging Face Transformers model, you can use
the SelfHostedEmbeddings class.
from langchain.embeddings import SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings
For a more detailed walkthrough of the Self-hosted Embeddings, see this notebook.
PromptLayer#
This page covers how to use PromptLayer within LangChain.
It is broken into two parts: installation and setup, and then references to specific PromptLayer wrappers.
Installation and Setup#
If you want to work with PromptLayer:
Install the promptlayer python library pip install promptlayer
Create a PromptLayer account
Create an api token and set it as an environment variable (PROMPTLAYER_API_KEY)
Wrappers#
LLM#
There exists a PromptLayer OpenAI LLM wrapper, which you can access with
from langchain.llms import PromptLayerOpenAI
To tag your requests, use the argument pl_tags when instantiating the LLM
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
To get the PromptLayer request id, use the argument return_pl_id when instantiating the LLM
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(return_pl_id=True)
This will add the PromptLayer request ID in the generation_info field of the Generation returned when using .generate or .agenerate
For example:
llm_results = llm.generate(["hello world"])
for res in llm_results.generations:
print("pl request id: ", res[0].generation_info["pl_request_id"])
You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. Read more about it here.
This LLM is identical to the OpenAI LLM, except that
all your requests will be logged to your PromptLayer account
you can add pl_tags when instantiating to tag your requests on PromptLayer
you can add return_pl_id when instantiating to return a PromptLayer request id to use while tracking requests.
PromptLayer also provides native wrappers for PromptLayerChatOpenAI and PromptLayerOpenAIChat
Pinecone#
This page covers how to use the Pinecone ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.
Installation and Setup#
Install the Python SDK with pip install pinecone-client
Wrappers#
VectorStore#
There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Pinecone
For a more detailed walkthrough of the Pinecone wrapper, see this notebook.
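A hedged sketch; the API key, environment, and index name are placeholders, and the index must already exist with a dimension matching the embedding model:
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
pinecone.init(api_key="...", environment="us-west1-gcp")  # placeholders
docsearch = Pinecone.from_texts(
    texts=["harrison worked at kensho"],
    embedding=OpenAIEmbeddings(),
    index_name="langchain-demo",  # an existing index
)
docs = docsearch.similarity_search("Where did harrison work?")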
Hazy Research#
This page covers how to use the Hazy Research ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Hazy Research wrappers.
Installation and Setup#
To use the manifest, install it with pip install manifest-ml
Wrappers#
LLM#
There exists an LLM wrapper around Hazy Research's manifest library.
manifest is a python library which is itself a wrapper around many model providers, and adds in caching, history, and more.
To use this wrapper:
from langchain.llms.manifest import ManifestWrapper
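A hedged sketch of wiring manifest to the wrapper; the client_name and connection values are placeholders for whichever provider you configure:
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper
# Manifest routes calls to a provider; these values are placeholders.
manifest = Manifest(client_name="openai", client_connection="sk-...")
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.0})
print(llm("What is the capital of France?"))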
Deep Lake#
This page covers how to use the Deep Lake ecosystem within LangChain.
Why Deep Lake?#
More than just a (multi-modal) vector store. You can later use the dataset to fine-tune your own LLM models.
Not only stores embeddings, but also the original data with automatic version control.
Truly serverless. Doesn't require another service and can be used with major cloud providers (AWS S3, GCS, etc.)
More Resources#
Ultimate Guide to LangChain & Deep Lake: Build ChatGPT to Answer Questions on Your Financial Data
Twitter the-algorithm codebase analysis with Deep Lake
Here are the whitepaper and academic paper for Deep Lake.
Here is a set of additional resources available for review: Deep Lake, Getting Started and Tutorials.
Installation and Setup#
Install the Python package with pip install deeplake
Wrappers#
VectorStore#
There exists a wrapper around Deep Lake, a data lake for Deep Learning applications, allowing you to use it as a vector store (for now), whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import DeepLake
For a more detailed walkthrough of the Deep Lake wrapper, see this notebook.
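A hedged sketch; dataset_path can be a local folder as shown or a cloud path such as hub://<org>/<dataset>, and OpenAIEmbeddings is just one embedding choice:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake
db = DeepLake.from_texts(
    texts=["harrison worked at kensho"],
    embedding=OpenAIEmbeddings(),
    dataset_path="./my_deeplake",  # placeholder local path
)
docs = db.similarity_search("Where did harrison work?")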
Zilliz#
This page covers how to use the Zilliz Cloud ecosystem within LangChain.
Zilliz uses the Milvus integration.
It is broken into two parts: installation and setup, and then references to specific Milvus wrappers.
Installation and Setup#
Install the Python SDK with pip install pymilvus
Wrappers#
VectorStore#
There exists a wrapper around Zilliz indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Milvus
For a more detailed walkthrough of the Milvus wrapper, see this notebook.
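A hedged sketch of pointing the Milvus wrapper at a Zilliz Cloud cluster; every value in connection_args is a placeholder for your own endpoint and credentials:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Milvus
vector_db = Milvus.from_texts(
    texts=["harrison worked at kensho"],
    embedding=OpenAIEmbeddings(),
    connection_args={
        "host": "your-zilliz-endpoint",  # placeholder
        "port": "19530",
        "user": "...",
        "password": "...",
        "secure": True,
    },
)
docs = vector_db.similarity_search("Where did harrison work?")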
Yeager.ai#
This page covers how to use Yeager.ai to generate LangChain tools and agents.
What is Yeager.ai?#
Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools.
It features yAgents, a No-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications.
yAgents#
yAgents is a low-code generative agent designed to help you build, prototype, and deploy LangChain tools with ease.
How to use?#
pip install yeagerai-agent
yeagerai-agent
Go to http://127.0.0.1:7860
This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the "Settings" tab.
OPENAI_API_KEY=<your_openai_api_key_here>
We recommend using GPT-4. However, the tool can also work with GPT-3 if the problem is broken down sufficiently.
Creating and Executing Tools with yAgents#
yAgents makes it easy to create and execute AI-powered tools. Here's a brief overview of the process:
Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool's purpose and functionality. For example:
create a tool that returns the n-th prime number
Load the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example:
load the tool that you just created into your toolkit
Execute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example:
generate the 50th prime number
You can see a video of how it works here.
As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity.
For more information, see yAgents' GitHub or our docs.
Weights & Biases#
This notebook goes over how to track your LangChain experiments into one centralized Weights and Biases dashboard. To learn more about prompt engineering and the callback please refer to this Report which explains both alongside the resultant dashboards you can expect to see.
Run in Colab: https://colab.research.google.com/drive/1DXH4beT4HFaRKy_Vm4PoxhXVDRf7Ym8L?usp=sharing
View Report: https://wandb.ai/a-sh0ts/langchain_callback_demo/reports/Prompt-Engineering-LLMs-with-LangChain-and-W-B--VmlldzozNjk1NTUw
!pip install wandb
!pip install pandas
!pip install textstat
!pip install spacy
!python -m spacy download en_core_web_sm
import os
os.environ["WANDB_API_KEY"] = ""
# os.environ["OPENAI_API_KEY"] = ""
# os.environ["SERPAPI_API_KEY"] = ""
from datetime import datetime
from langchain.callbacks import WandbCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI
Callback Handler that logs to Weights and Biases.
Parameters:
job_type (str): The type of job.
project (str): The project to log to.
entity (str): The entity to log to.
tags (list): The tags to log.
group (str): The group to log to.
name (str): The name of the run.
notes (str): The notes to log.
visualize (bool): Whether to visualize the run.
complexity_metrics (bool): Whether to log complexity metrics.
stream_logs (bool): Whether to stream callback actions to W&B
Default values for WandbCallbackHandler(...)
visualize: bool = False,
complexity_metrics: bool = False,
stream_logs: bool = False,
NOTE: For beta workflows we have made the default analysis based on textstat and the visualizations based on spacy
"""Main function.
This function is used to try the callback handler.
Scenarios:
1. OpenAI LLM
2. Chain with multiple SubChains on multiple generations
3. Agent with Tools
"""
session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")
wandb_callback = WandbCallbackHandler(
job_type="inference",
project="langchain_callback_demo",
group=f"minimal_{session_group}",
name="llm",
tags=["test"],
)
manager = CallbackManager([StdOutCallbackHandler(), wandb_callback])
llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)
wandb: Currently logged in as: harrison-chase. Use `wandb login --relogin` to force relogin
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150408-e47j1914
Syncing run llm to Weights & Biases (docs)
View project at https://wandb.ai/harrison-chase/langchain_callback_demo
View run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914
wandb: WARNING The wandb callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/wandb/wandb/issues with the tag `langchain`.
# Defaults for WandbCallbackHandler.flush_tracker(...)
reset: bool = True,
finish: bool = False,
The flush_tracker function is used to log LangChain sessions to Weights & Biases. It takes in the LangChain module or agent, and logs at minimum the prompts and generations alongside the serialized form of the LangChain module to the specified Weights & Biases project. By default we reset the session as opposed to concluding the session outright.
# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
wandb_callback.flush_tracker(llm, name="simple_sequential")
Waiting for W&B process to finish... (success).
View run llm at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914
Synced 5 W&B file(s), 2 media file(s), 5 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20230318_150408-e47j1914/logs
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150534-jyxma7hu
Syncing run simple_sequential to Weights & Biases (docs)
View project at https://wandb.ai/harrison-chase/langchain_callback_demo
View run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# SCENARIO 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)
test_prompts = [
{
"title": "documentary about good video games that push the boundary of game design"
},
{"title": "cocaine bear vs heroin wolf"},
{"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts) | https://python.langchain.com/en/latest/ecosystem/wandb_tracking.html |
wandb_callback.flush_tracker(synopsis_chain, name="agent")
Waiting for W&B process to finish... (success). View run simple_sequential at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu. Synced 4 W&B file(s), 2 media file(s), 6 artifact file(s) and 0 other file(s). Find logs at: ./wandb/run-20230318_150534-jyxma7hu/logs. Tracking run with wandb version 0.14.0. Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150550-wzy59zjq. Syncing run agent to Weights & Biases (docs). View project at https://wandb.ai/harrison-chase/langchain_callback_demo View run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
# SCENARIO 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callback_manager=manager)
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
callback_manager=manager,
verbose=True,
)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
wandb_callback.flush_tracker(agent, reset=False, finish=True)
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search | https://python.langchain.com/en/latest/ecosystem/wandb_tracking.html |
Action Input: "Leo DiCaprio girlfriend"
Observation: DiCaprio had a steady girlfriend in Camila Morrone. He had been with the model turned actress for nearly five years, as they were first said to be dating at the end of 2017. And the now 26-year-old Morrone is no stranger to Hollywood.
Thought: I need to calculate her age raised to the 0.43 power.
Action: Calculator
Action Input: 26^0.43
Observation: Answer: 4.059182145592686
Thought: I now know the final answer.
Final Answer: Leo DiCaprio's girlfriend is Camila Morrone and her current age raised to the 0.43 power is 4.059182145592686.
> Finished chain.
Waiting for W&B process to finish... (success). View run agent at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq. Synced 5 W&B file(s), 2 media file(s), 7 artifact file(s) and 0 other file(s). Find logs at: ./wandb/run-20230318_150550-wzy59zjq/logs
Writer#
This page covers how to use the Writer ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Writer wrappers.
Installation and Setup#
Get a Writer API key and set it as an environment variable (WRITER_API_KEY)
Wrappers#
LLM#
There exists a Writer LLM wrapper, which you can access with
from langchain.llms import Writer
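A minimal usage sketch (assuming WRITER_API_KEY is already exported; the prompt is illustrative, not from this page):
from langchain.llms import Writer

# Writer() reads WRITER_API_KEY from the environment (assumption)
llm = Writer()
print(llm("Write a tagline for an ice cream shop."))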
Comet#
In this guide we will demonstrate how to track your LangChain experiments, evaluation metrics, and LLM sessions with Comet.
Example Project: Comet with LangChain
Install Comet and Dependencies#
%pip install comet_ml langchain openai google-search-results spacy textstat pandas
import sys
!{sys.executable} -m spacy download en_core_web_sm
Initialize Comet and Set your Credentials#
You can grab your Comet API Key here or click the link after initializing Comet
import comet_ml
comet_ml.init(project_name="comet-example-langchain")
Set OpenAI and SerpAPI credentials#
You will need an OpenAI API Key and a SerpAPI API Key to run the following examples
import os
os.environ["OPENAI_API_KEY"] = "..."
#os.environ["OPENAI_ORGANIZATION"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."
Scenario 1: Using just an LLM#
from datetime import datetime
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI
comet_callback = CometCallbackHandler(
project_name="comet-example-langchain",
complexity_metrics=True,
stream_logs=True,
tags=["llm"],
visualizations=["dep"],
)
manager = CallbackManager([StdOutCallbackHandler(), comet_callback]) | https://python.langchain.com/en/latest/ecosystem/comet_tracking.html |
llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)
llm_result = llm.generate(["Tell me a joke", "Tell me a poem", "Tell me a fact"] * 3)
print("LLM result", llm_result)
comet_callback.flush_tracker(llm, finish=True)
Scenario 2: Using an LLM in a Chain#
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
comet_callback = CometCallbackHandler(
complexity_metrics=True,
project_name="comet-example-langchain",
stream_logs=True,
tags=["synopsis-chain"],
)
manager = CallbackManager([StdOutCallbackHandler(), comet_callback])
llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)
test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]
print(synopsis_chain.apply(test_prompts))
comet_callback.flush_tracker(synopsis_chain, finish=True)
Scenario 3: Using An Agent with Tools#
from langchain.agents import initialize_agent, load_tools
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler | https://python.langchain.com/en/latest/ecosystem/comet_tracking.html |
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI
comet_callback = CometCallbackHandler(
project_name="comet-example-langchain",
complexity_metrics=True,
stream_logs=True,
tags=["agent"],
)
manager = CallbackManager([StdOutCallbackHandler(), comet_callback])
llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)
tools = load_tools(["serpapi", "llm-math"], llm=llm, callback_manager=manager)
agent = initialize_agent(
tools,
llm,
agent="zero-shot-react-description",
callback_manager=manager,
verbose=True,
)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
comet_callback.flush_tracker(agent, finish=True)
Scenario 4: Using Custom Evaluation Metrics#
The CometCallbackHandler also allows you to define and use Custom Evaluation Metrics to assess generated outputs from your model. Let's take a look at how this works.
In the snippet below, we will use the ROUGE metric to evaluate the quality of a generated summary of an input prompt.
%pip install rouge-score
from rouge_score import rouge_scorer
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
class Rouge:
def __init__(self, reference):
self.reference = reference
self.scorer = rouge_scorer.RougeScorer(["rougeLsum"], use_stemmer=True) | https://python.langchain.com/en/latest/ecosystem/comet_tracking.html |
def compute_metric(self, generation, prompt_idx, gen_idx):
prediction = generation.text
results = self.scorer.score(target=self.reference, prediction=prediction)
return {
"rougeLsum_score": results["rougeLsum"].fmeasure,
"reference": self.reference,
}
reference = """
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building.
It was the first structure to reach a height of 300 metres.
It is now taller than the Chrysler Building in New York City by 5.2 metres (17 ft)
Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France.
"""
rouge_score = Rouge(reference=reference)
template = """Given the following article, it is your job to write a summary.
Article:
{article}
Summary: This is the summary for the above article:"""
prompt_template = PromptTemplate(input_variables=["article"], template=template)
comet_callback = CometCallbackHandler(
project_name="comet-example-langchain",
complexity_metrics=False,
stream_logs=True,
tags=["custom_metrics"],
custom_metrics=rouge_score.compute_metric,
)
manager = CallbackManager([StdOutCallbackHandler(), comet_callback])
llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)
test_prompts = [
{
"article": """
The tower is 324 metres (1,063 ft) tall, about the same height as
an 81-storey building, and the tallest structure in Paris. Its base is square, | https://python.langchain.com/en/latest/ecosystem/comet_tracking.html |
measuring 125 metres (410 ft) on each side.
During its construction, the Eiffel Tower surpassed the
Washington Monument to become the tallest man-made structure in the world,
a title it held for 41 years until the Chrysler Building
in New York City was finished in 1930.
It was the first structure to reach a height of 300 metres.
Due to the addition of a broadcasting aerial at the top of the tower in 1957,
it is now taller than the Chrysler Building by 5.2 metres (17 ft).
Excluding transmitters, the Eiffel Tower is the second tallest
free-standing structure in France after the Millau Viaduct.
"""
}
]
print(synopsis_chain.apply(test_prompts))
comet_callback.flush_tracker(synopsis_chain, finish=True)
Databerry#
This page covers how to use Databerry within LangChain.
What is Databerry?#
Databerry is an open-source document retrieval platform that helps connect your personal data with Large Language Models.
Quick start#
Retrieving documents stored in Databerry from LangChain is very easy!
from langchain.retrievers import DataberryRetriever
retriever = DataberryRetriever(
datastore_url="https://api.databerry.ai/query/clg1xg2h80000l708dymr0fxc",
# api_key="DATABERRY_API_KEY", # optional if datastore is public
# top_k=10 # optional
)
docs = retriever.get_relevant_documents("What's Databerry?")
Milvus#
This page covers how to use the Milvus ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Milvus wrappers.
Installation and Setup#
Install the Python SDK with pip install pymilvus
Wrappers#
VectorStore#
There exists a wrapper around Milvus indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Milvus
For a more detailed walkthrough of the Milvus wrapper, see this notebook
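As a quick, hedged sketch (assuming a local Milvus instance listening on the default port 19530 and an OpenAI key in the environment; the texts are illustrative):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Milvus

embeddings = OpenAIEmbeddings()
# connection_args points at a locally running Milvus (assumption)
vector_db = Milvus.from_texts(
    ["Milvus stores embedding vectors", "LangChain wraps many vector stores"],
    embeddings,
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
docs = vector_db.similarity_search("What stores embedding vectors?", k=1)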
Chroma#
This page covers how to use the Chroma ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Chroma wrappers.
Installation and Setup#
Install the Python package with pip install chromadb
Wrappers#
VectorStore#
There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Chroma
For a more detailed walkthrough of the Chroma wrapper, see this notebook
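For a quick sketch of the wrapper (assuming an OpenAI key in the environment; the texts and collection name are illustrative):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings()
# builds an ephemeral in-memory collection and runs a semantic query
db = Chroma.from_texts(
    ["Chroma is an embedding database", "LangChain provides vectorstore wrappers"],
    embeddings,
    collection_name="demo",
)
docs = db.similarity_search("What is Chroma?", k=1)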
RWKV-4#
This page covers how to use the RWKV-4 wrapper within LangChain.
It is broken into two parts: installation and setup, and then usage with an example.
Installation and Setup#
Install the Python package with pip install rwkv
Install the tokenizer Python package with pip install tokenizer
Download a RWKV model and place it in your desired directory
Download the tokens file
Usage#
RWKV#
To use the RWKV wrapper, you need to provide the path to the pre-trained model file and the tokenizer's configuration.
from langchain.llms import RWKV
# Test the model
def generate_prompt(instruction, input=None):
if input:
return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
# Instruction:
{instruction}
# Input:
{input}
# Response:
"""
else:
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
# Instruction:
{instruction}
# Response:
"""
model = RWKV(model="./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth", strategy="cpu fp32", tokens_path="./rwkv/20B_tokenizer.json")
response = model(generate_prompt("Once upon a time, "))
Model File#
You can find links to model file downloads at the RWKV-4-Raven repository.
Rwkv-4 models -> recommended VRAM#
RWKV VRAM
Model | 8bit  | bf16/fp16 | fp32
14B   | 16GB  | 28GB      | >50GB
7B    | 8GB   | 14GB      | 28GB
3B    | 2.8GB | 6GB       | 12GB
1b5   | 1.3GB | 3GB       | 6GB
See the rwkv pip page for more information about strategies, including streaming and cuda support.
Weaviate#
This page covers how to use the Weaviate ecosystem within LangChain.
What is Weaviate?
Weaviate in a nutshell:
Weaviate is an open-source database of the type "vector search engine".
Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space.
Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities.
Weaviate has a GraphQL-API to access your data easily.
We aim to bring your vector search setup to production to query in mere milliseconds (check our open source benchmarks to see if Weaviate fits your use case).
Get to know Weaviate in the basics getting started guide in under five minutes.
Weaviate in detail:
Weaviate is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.
Installation and Setup#
Install the Python SDK with pip install weaviate-client
Wrappers#
VectorStore#
There exists a wrapper around Weaviate indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Weaviate | https://python.langchain.com/en/latest/ecosystem/weaviate.html |
6a2c8c13d56f-1 | To import this vectorstore:
from langchain.vectorstores import Weaviate
For a more detailed walkthrough of the Weaviate wrapper, see this notebook
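A minimal sketch of the constructor form (assuming a Weaviate instance at http://localhost:8080 whose class "LangChain" has a "text" property and a configured vectorizer module; all of these names are illustrative assumptions):
import weaviate
from langchain.vectorstores import Weaviate

client = weaviate.Client("http://localhost:8080")
# index_name and text_key must match an existing Weaviate class (assumption)
db = Weaviate(client, index_name="LangChain", text_key="text")
docs = db.similarity_search("vector search engines", k=1)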
Qdrant#
This page covers how to use the Qdrant ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Qdrant wrappers.
Installation and Setup#
Install the Python SDK with pip install qdrant-client
Wrappers#
VectorStore#
There exists a wrapper around Qdrant indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Qdrant
For a more detailed walkthrough of the Qdrant wrapper, see this notebook
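A minimal sketch (assuming a Qdrant server reachable on localhost and an OpenAI key in the environment; the collection name and text are illustrative):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

embeddings = OpenAIEmbeddings()
# host is forwarded to the underlying qdrant-client (assumption: local server)
db = Qdrant.from_texts(
    ["Qdrant is a vector similarity search engine"],
    embeddings,
    host="localhost",
    collection_name="demo",
)
docs = db.similarity_search("What is Qdrant?", k=1)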
NLPCloud#
This page covers how to use the NLPCloud ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific NLPCloud wrappers.
Installation and Setup#
Install the Python SDK with pip install nlpcloud
Get an NLPCloud api key and set it as an environment variable (NLPCLOUD_API_KEY)
Wrappers#
LLM#
There exists an NLPCloud LLM wrapper, which you can access with
from langchain.llms import NLPCloud
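A minimal usage sketch (assuming NLPCLOUD_API_KEY is exported; the prompt is illustrative):
from langchain.llms import NLPCloud

# NLPCloud() reads NLPCLOUD_API_KEY from the environment (assumption)
llm = NLPCloud()
print(llm("Summarize: LangChain integrates many model providers."))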
Installation#
Official Releases#
LangChain is available on PyPI, so it is easily installable with:
pip install langchain
That will install the bare minimum requirements of LangChain.
A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc.
By default, the dependencies needed to do that are NOT installed.
However, there are two other ways to install LangChain that do bring in those dependencies.
To install modules needed for the common LLM providers, run:
pip install langchain[llms]
To install all modules needed for all integrations, run:
pip install langchain[all]
Note that if you are using zsh, you'll need to quote square brackets when passing them as an argument to a command, for example:
pip install 'langchain[all]'
Installing from source#
If you want to install from source, you can do so by cloning the repo and running:
pip install -e .
Integrations#
Besides the installation of this python package, you will also need to install packages and set environment variables depending on which chains you want to use.
Note: the reason these packages are not included in the dependencies by default is that as we imagine scaling this package, we do not want to force dependencies that are not needed.
The following use cases require specific installs and api keys:
OpenAI:
Install requirements with pip install openai
Get an OpenAI api key and either set it as an environment variable (OPENAI_API_KEY) or pass it to the LLM constructor as openai_api_key.
Cohere:
Install requirements with pip install cohere
Get a Cohere api key and either set it as an environment variable (COHERE_API_KEY) or pass it to the LLM constructor as cohere_api_key.
GooseAI:
Install requirements with pip install openai
Get a GooseAI api key and either set it as an environment variable (GOOSEAI_API_KEY) or pass it to the LLM constructor as gooseai_api_key.
Hugging Face Hub:
Install requirements with pip install huggingface_hub
Get a Hugging Face Hub api token and either set it as an environment variable (HUGGINGFACEHUB_API_TOKEN) or pass it to the LLM constructor as huggingfacehub_api_token.
Petals:
Install requirements with pip install petals
Get a Hugging Face api key and either set it as an environment variable (HUGGINGFACE_API_KEY) or pass it to the LLM constructor as huggingface_api_key.
CerebriumAI:
Install requirements with pip install cerebrium
Get a Cerebrium api key and either set it as an environment variable (CEREBRIUMAI_API_KEY) or pass it to the LLM constructor as cerebriumai_api_key. | https://python.langchain.com/en/latest/reference/integrations.html |
PromptLayer:
Install requirements with pip install promptlayer (be sure to be on version 0.1.62 or higher)
Get an API key from promptlayer.com and set it using promptlayer.api_key=<API KEY>
SerpAPI:
Install requirements with pip install google-search-results
Get a SerpAPI api key and either set it as an environment variable (SERPAPI_API_KEY) or pass it to the LLM constructor as serpapi_api_key.
GoogleSearchAPI:
Install requirements with pip install google-api-python-client
Get a Google api key and either set it as an environment variable (GOOGLE_API_KEY) or pass it to the LLM constructor as google_api_key. You will also need to set the GOOGLE_CSE_ID environment variable to your custom search engine id. You can pass it to the LLM constructor as google_cse_id as well.
WolframAlphaAPI:
Install requirements with pip install wolframalpha
Get a Wolfram Alpha api key and either set it as an environment variable (WOLFRAM_ALPHA_APPID) or pass it to the LLM constructor as wolfram_alpha_appid.
NatBot:
Install requirements with pip install playwright
Wikipedia:
Install requirements with pip install wikipedia
Elasticsearch:
Install requirements with pip install elasticsearch
Set up an Elasticsearch backend. If you want to run it locally, this is a good guide.
FAISS:
Install requirements with pip install faiss for Python 3.7 and pip install faiss-cpu for Python 3.10+.
Manifest:
Install requirements with pip install manifest-ml (Note: this is only available in Python 3.8+ currently).
OpenSearch:
Install requirements with pip install opensearch-py
If you want to set up OpenSearch locally, see here
DeepLake:
Install requirements with pip install deeplake | https://python.langchain.com/en/latest/reference/integrations.html |
LlamaCpp:
Install requirements with pip install llama-cpp-python
Download model and convert following llama.cpp instructions
Milvus:
Install requirements with pip install pymilvus
In order to set up a local cluster, take a look here.
Zilliz:
Install requirements with pip install pymilvus
To get up and running, take a look here.
If you are using the NLTKTextSplitter or the SpacyTextSplitter, you will also need to install the appropriate models. For example, if you want to use the SpacyTextSplitter, you will need to install the en_core_web_sm model with python -m spacy download en_core_web_sm. Similarly, if you want to use the NLTKTextSplitter, you will need to install the punkt model with python -m nltk.downloader punkt.
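For example, once en_core_web_sm is installed, a hedged sketch of using the SpacyTextSplitter looks like this (the sample text and chunk size are illustrative):
from langchain.text_splitter import SpacyTextSplitter

long_text = "LangChain splits long documents into smaller chunks. " * 200
# requires `python -m spacy download en_core_web_sm` beforehand
splitter = SpacyTextSplitter(chunk_size=1000)
chunks = splitter.split_text(long_text)
print(len(chunks))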
Utilities#
There are a lot of different utilities that LangChain provides integrations for.
These guides go over how to use them.
These can largely be grouped into two categories: generic utilities, and then utilities for working with larger text documents.
Generic Utilities
Python REPL
SerpAPI
SearxNG Search
Utilities for working with Documents
Docstore
Text Splitter
Embeddings
VectorStores
Prompts#
The reference guides here all relate to objects for working with Prompts.
PromptTemplates
Example Selector
VectorStores#
Wrappers on top of vector stores.
class langchain.vectorstores.Annoy(embedding_function: Callable, index: Any, metric: str, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: Dict[int, str])[source]#
Wrapper around Annoy vector database.
To use, you should have the annoy python package installed.
Example
from langchain import Annoy
db = Annoy(embedding_function, index, docstore, index_to_docstore_id)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = -1, **kwargs: Any) → langchain.vectorstores.annoy.Annoy[source]#
Construct Annoy wrapper from embeddings.
Parameters
text_embeddings – List of tuples of (text, embedding)
embedding – Embedding function to use.
metadatas – List of metadata dictionaries to associate with documents.
metric – Metric to use for indexing. Defaults to 'angular'.
trees – Number of trees to use for indexing. Defaults to 100.
n_jobs – Number of jobs to use for indexing. Defaults to -1.
This is a user-friendly interface that:
Creates an in memory docstore with provided embeddings
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
db = Annoy.from_embeddings(text_embedding_pairs, embeddings)
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = -1, **kwargs: Any) → langchain.vectorstores.annoy.Annoy[source]#
Construct Annoy wrapper from raw documents.
Parameters
texts – List of documents to index.
embedding – Embedding function to use.
metadatas – List of metadata dictionaries to associate with documents.
metric – Metric to use for indexing. Defaults to 'angular'.
trees – Number of trees to use for indexing. Defaults to 100.
n_jobs – Number of jobs to use for indexing. Defaults to -1.
This is a user-friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
index = Annoy.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings) → langchain.vectorstores.annoy.Annoy[source]#
Load Annoy index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path – folder path to load index, docstore, and index_to_docstore_id from.
embeddings – Embeddings to use when generating queries.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
process_index_results(idxs: List[int], dists: List[float]) → List[Tuple[langchain.schema.Document, float]][source]#
Turns annoy results into a list of documents and scores.
Parameters
idxs – List of indices of the documents in the index.
dists – List of distances of the documents in the index.
Returns
List of Documents and scores.
save_local(folder_path: str, prefault: bool = False) → None[source]#
Save Annoy index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path – folder path to save index, docstore, and index_to_docstore_id to.
prefault – Whether to pre-load the index into memory.
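As a hedged sketch of the save/load round trip (assuming an OpenAI key in the environment; the texts and folder name are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Annoy

embeddings = OpenAIEmbeddings()
db = Annoy.from_texts(["hello world", "goodbye world"], embeddings)
db.save_local("annoy_index")            # writes index, docstore, and id map
reloaded = Annoy.load_local("annoy_index", embeddings)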
similarity_search(query: str, k: int = 4, search_k: int = -1, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes, which defaults to n_trees * n if not provided
Returns
List of Documents most similar to the query.
similarity_search_by_index(docstore_index: int, k: int = 4, search_k: int = -1, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to docstore_index.
Parameters
docstore_index – Index of document in docstore
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes, which defaults to n_trees * n if not provided
Returns
List of Documents most similar to the embedding.
similarity_search_by_vector(embedding: List[float], k: int = 4, search_k: int = -1, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes, which defaults to n_trees * n if not provided
Returns
List of Documents most similar to the embedding.
similarity_search_with_score(query: str, k: int = 4, search_k: int = -1) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes, which defaults to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_index(docstore_index: int, k: int = 4, search_k: int = -1) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to the document at docstore_index.
Parameters
docstore_index – Index of document in docstore
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes, which defaults to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, search_k: int = -1) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to the embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes, which defaults to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.AtlasDB(name: str, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False)[source]#
Wrapper around Atlas: Nomic's neural database and rhizomatic instrument.
To use, you should have the nomic python package installed.
Example
from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = AtlasDB("my_project", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh: bool = True, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]]) – An optional list of ids.
refresh (bool) – Whether or not to refresh indices with the updated data. Default True.
Returns
List of IDs of the added texts.
Return type
List[str]
create_index(**kwargs: Any) → Any[source]#
Creates an index in your project.
See https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index for full detail.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: Optional[langchain.embeddings.base.Embeddings] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, persist_directory: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from a list of documents.
Parameters
name (str) – Name of the collection to create.
api_key (str) – Your nomic API key.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None, ids will be auto created.
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible. True by default.
reset_project_if_exists (bool) – Whether to reset this project if it already exists. Default False. Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation. See https://docs.nomic.ai/atlas_api.html
Returns
Nomic's neural database and finest rhizomatic instrument
Return type
AtlasDB
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from raw documents.
Parameters
texts (List[str]) – The list of texts to ingest.
name (str) – Name of the project to create.
api_key (str) – Your nomic API key.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None, ids will be auto created.
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible. True by default.
reset_project_if_exists (bool) – Whether to reset this project if it already exists. Default False. Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation. See https://docs.nomic.ai/atlas_api.html
Returns
Nomic's neural database and finest rhizomatic instrument
Return type
AtlasDB
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with AtlasDB
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
Returns
List of documents most similar to the query text.
Return type
List[Document]
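A hedged end-to-end sketch of the from_texts path (the project name and key below are placeholders, not real values):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import AtlasDB

embeddings = OpenAIEmbeddings()
db = AtlasDB.from_texts(
    ["Atlas stores both embeddings and source data"],
    embedding=embeddings,
    name="demo_project",           # placeholder project name
    api_key="YOUR_NOMIC_API_KEY",  # placeholder key
)
docs = db.similarity_search("what does Atlas store?", k=1)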
class langchain.vectorstores.Chroma(collection_name: str = 'langchain', embedding_function: Optional[Embeddings] = None, persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, collection_metadata: Optional[Dict] = None, client: Optional[chromadb.Client] = None)[source]#
Wrapper around ChromaDB embeddings platform.
To use, you should have the chromadb python package installed.
Example
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete_collection() → None[source]#
Delete the collection.
classmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there. Otherwise, the data will be ephemeral in-memory.
Parameters
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from raw documents.
If a persist_directory is specified, the collection will be persisted there. Otherwise, the data will be ephemeral in-memory.
Parameters
texts (List[str]) – List of texts to add to the collection.
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents selected by maximal marginal relevance.
persist() → None[source]#
Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
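A hedged sketch of persisting and reloading a collection (the directory name and text are illustrative):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings()
db = Chroma.from_texts(
    ["a document to persist"],
    embeddings,
    persist_directory="./chroma_db",
)
db.persist()  # flush the collection to disk explicitly
# later, reopen the persisted collection with the same embedding function
reloaded = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)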
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with Chroma.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Chroma with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text, with distance as a float.
Return type
List[Tuple[Document, float]]
class langchain.vectorstores.DeepLake(dataset_path: str = 'mem://langchain', token: Optional[str] = None, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, read_only: Optional[bool] = False, ingestion_batch_size: int = 1024, num_workers: int = 4, **kwargs: Any)[source]#
Wrapper around Deep Lake, a data lake for deep learning applications.
We implement naive similarity search and filtering for fast prototyping, but it can be extended with Tensor Query Language (TQL) for production use cases over billions of rows.
Why Deep Lake?
Not only stores embeddings, but also the original data with version control.
Serverless; doesn't require another service and can be used with major cloud providers (S3, GCS, etc.)
More than just a multi-modal vector store. You can use the dataset to fine-tune your own LLM models.
To use, you should have the deeplake python package installed.
Example
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete(ids: Any[List[str], None] = None, filter: Any[Dict[str, str], None] = None, delete_all: Any[bool, None] = None) → bool[source]#
Delete the entities in the dataset
Parameters
ids (Optional[List[str]], optional) – The document_ids to delete. Defaults to None.
filter (Optional[Dict[str, str]], optional) – The filter to delete by. Defaults to None.
delete_all (Optional[bool], optional) – Whether to drop the dataset. Defaults to None.
delete_dataset() → None[source]#
Delete the collection.
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = 'mem://langchain', **kwargs: Any) → langchain.vectorstores.deeplake.DeepLake[source]#
Create a Deep Lake dataset from raw documents.
If a dataset_path is specified, the dataset will be persisted there. Otherwise, the data will be ephemeral in-memory.
Parameters
path (str, pathlib.Path) – The full path to the dataset. Can be:
Deep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use 'activeloop login' from the command line).
AWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment.
Google Cloud Storage path of the form gcs://bucketname/path/to/dataset. Credentials are required in either the environment.
Local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset.
In-memory path of the form mem://path/to/dataset, which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist.
documents (List[Document]) – List of documents to add.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
Returns
Deep Lake dataset.
Return type
DeepLake
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
persist() → None[source]#
Persist the collection.
search(query: Any[str, None] = None, embedding: Any[float, None] = None, k: int = 4, distance_metric: str = 'L2', use_maximal_marginal_relevance: Optional[bool] = False, fetch_k: Optional[int] = 20, filter: Optional[Any[Dict[str, str], Callable, str]] = None, return_score: Optional[bool] = False, **kwargs: Any) → Any[List[Document], List[Tuple[Document, float]]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
embedding – Embedding function to use. Defaults to None.
k – Number of Documents to return. Defaults to 4.
distance_metric – 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity distance, 'cos' for cosine similarity, 'dot' for dot product. Defaults to 'L2'.
filter – Attribute filter by metadata, for example {'key': 'value'}. It can also take a Deep Lake filter (see https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake.core.dataset.Dataset.filter). Defaults to None.
use_maximal_marginal_relevance – Whether to use maximal marginal relevance. Defaults to False.
fetch_k – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20.
return_score – Whether to return the score. Defaults to False.
Returns
List of Documents selected by the specified distance metric;
if return_score is True, a list of (Document, score) tuples.
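A hedged sketch of the search method against an in-memory dataset (the dataset path, texts, and query are illustrative; an OpenAI key is assumed in the environment):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

embeddings = OpenAIEmbeddings()
# mem:// keeps the dataset in memory, for testing only (per the docs above)
db = DeepLake(dataset_path="mem://langchain-demo", embedding_function=embeddings)
db.add_texts(["Deep Lake stores embeddings", "and the original data with version control"])
results = db.search(
    query="what does Deep Lake store?",
    k=2,
    distance_metric="cos",
    return_score=True,  # returns (Document, score) tuples
)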
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to embed and look up documents similar to.
k – Number of Documents to return. Defaults to 4.
embedding – Embedding function to use. Defaults to None.
distance_metric – 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity distance, 'cos' for cosine similarity, 'dot' for dot product. Defaults to 'L2'.
filter – Attribute filter by metadata, for example {'key': 'value'}. Defaults to None.
use_maximal_marginal_relevance – Whether to use maximal marginal relevance. Defaults to False.
fetch_k – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20.
return_score – Whether to return the score. Defaults to False.
Returns
List of Documents most similar to the query vector.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, distance_metric: str = 'L2', k: int = 4, filter: Optional[Dict[str, str]] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Deep Lake with distance returned.
Parameters
query (str) – Query text to search for.
distance_metric – 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity distance, 'cos' for cosine similarity, 'dot' for dot product. Defaults to 'L2'.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text, with distance as a float.
Return type
List[Tuple[Document, float]]
class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: langchain.embeddings.base.Embeddings)[source]#
Wrapper around Elasticsearch as a vector database.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch( | https://python.langchain.com/en/latest/reference/modules/vectorstore.html |
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the "Deployments" page.
To obtain your Elastic Cloud password for the default "elastic" user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to "Security" > "Users"
Locate the "elastic" user and click "Edit"
Click "Reset password"
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
Parameters
elasticsearch_url (str) – The URL for the Elasticsearch instance.
index_name (str) – The name of the Elasticsearch index for the embeddings.
embedding (Embeddings) – An object that provides the ability to embed text.
It should be an instance of a class that subclasses the Embeddings
abstract base class, such as OpenAIEmbeddings()
Raises
ValueError – If the elasticsearch python package is not installed.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, refresh_indices: bool = True, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
refresh_indices – Whether to refresh the Elasticsearch indices after adding
the texts, making them immediately searchable. Defaults to True.
Returns
List of ids from adding the texts into the vectorstore.
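A sketch of indexing a few texts with metadata, assuming elastic_vector_search was constructed as in the examples above; the texts shown are illustrative:
ids = elastic_vector_search.add_texts(
    texts=["Elasticsearch can serve as a vector database.",
           "Embeddings map text into vector space."],
    metadatas=[{"source": "notes"}, {"source": "notes"}],
    refresh_indices=True,  # refresh so the documents are searchable right away
)
print(ids)  # ids assigned to the newly indexed texts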
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.elastic_vector_search.ElasticVectorSearch[source]#
Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Elasticsearch instance.
Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
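And a sketch of querying the index built above:
docs = elastic_vector_search.similarity_search("How do vector databases work?", k=4)
for doc in docs:
    print(doc.page_content)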
class langchain.vectorstores.FAISS(embedding_function: typing.Callable, index: typing.Any, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: typing.Dict[int, str], relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = <function _default_relevance_score_fn>)[source]#
Wrapper around FAISS vector database.
To use, you should have the faiss python package installed.
Example
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
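Assembling the index, docstore, and id mapping by hand is rarely necessary; a sketch using the from_texts convenience constructor instead, assuming the faiss python package and OpenAI credentials are available (the texts are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
faiss_store = FAISS.from_texts(
    ["FAISS keeps embeddings in an in-memory index."],
    embeddings,
)
docs = faiss_store.similarity_search("What does FAISS do?", k=1)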
add_embeddings(text_embeddings: Iterable[Tuple[str, List[float]]], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
text_embeddings – Iterable of (text, embedding) pairs to
add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
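A sketch of adding precomputed (text, embedding) pairs, assuming the faiss_store and embeddings objects from the sketch above; this avoids re-embedding vectors you already have:
text = "A cached document whose vector was computed earlier."
vector = embeddings.embed_documents([text])[0]  # precomputed embedding for the text
ids = faiss_store.add_embeddings([(text, vector)], metadatas=[{"source": "cache"}])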
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore. | https://python.langchain.com/en/latest/reference/modules/vectorstore.html |
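Finally, a sketch of appending plain texts, which are embedded with the store's embedding function before being indexed (names reused from the sketches above):
new_ids = faiss_store.add_texts(
    ["A later addition to the index."],
    metadatas=[{"source": "follow-up"}],
)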