Columns: id (string, length 14-16), source (string, length 49-117), text (string, length 16-2.73k)
99b2553660fa-88
https://python.langchain.com/en/latest/genindex.html
top_k_docs_for_context (langchain.chains.ChatVectorDBChain attribute) top_k_results (langchain.utilities.ArxivAPIWrapper attribute) (langchain.utilities.GooglePlacesAPIWrapper attribute) (langchain.utilities.PubMedAPIWrapper attribute) (langchain.utilities.WikipediaAPIWrapper attribute) top_n (langchain.retrievers.docu...
99b2553660fa-89
https://python.langchain.com/en/latest/genindex.html
ts_type_from_python() (langchain.tools.APIOperation static method) ttl (langchain.memory.RedisEntityStore attribute) tuned_model_name (langchain.llms.VertexAI attribute) TwitterTweetLoader (class in langchain.document_loaders) type (langchain.utilities.GoogleSerperAPIWrapper attribute) Typesense (class in langchain.vec...
99b2553660fa-90
https://python.langchain.com/en/latest/genindex.html
(langchain.llms.Anyscale class method) (langchain.llms.AzureOpenAI class method) (langchain.llms.Banana class method) (langchain.llms.Beam class method) (langchain.llms.Bedrock class method) (langchain.llms.CerebriumAI class method) (langchain.llms.Cohere class method) (langchain.llms.CTransformers class method) (langc...
99b2553660fa-91
https://python.langchain.com/en/latest/genindex.html
(langchain.llms.SagemakerEndpoint class method) (langchain.llms.SelfHostedHuggingFaceLLM class method) (langchain.llms.SelfHostedPipeline class method) (langchain.llms.StochasticAI class method) (langchain.llms.VertexAI class method) (langchain.llms.Writer class method) upsert_messages() (langchain.memory.CosmosDBChatM...
99b2553660fa-92
https://python.langchain.com/en/latest/genindex.html
vectorizer (langchain.retrievers.TFIDFRetriever attribute) VectorStore (class in langchain.vectorstores) vectorstore (langchain.agents.agent_toolkits.VectorStoreInfo attribute) (langchain.chains.ChatVectorDBChain attribute) (langchain.chains.VectorDBQA attribute) (langchain.chains.VectorDBQAWithSourcesChain attribute) ...
99b2553660fa-93
https://python.langchain.com/en/latest/genindex.html
(langchain.llms.HumanInputLLM attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.Modal attribute) (langchain.llms.MosaicML attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.OpenAI attribute) (langchain.llms.OpenAIChat attribute) (langchain.llms.OpenLM attribute) (langchain.llms.Petals attribute...
99b2553660fa-94
https://python.langchain.com/en/latest/genindex.html
web_path (langchain.document_loaders.WebBaseLoader property) web_paths (langchain.document_loaders.WebBaseLoader attribute) WebBaseLoader (class in langchain.document_loaders) WhatsAppChatLoader (class in langchain.document_loaders) Wikipedia (class in langchain.docstore) WikipediaLoader (class in langchain.document_lo...
4ed3a21b83d7-0
https://python.langchain.com/en/latest/search.html
By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 04, 2023.
1fd387a86ce7-0
https://python.langchain.com/en/latest/ecosystem/deployments.html
Deployments Contents Streamlit Gradio (on Hugging Face) Chainlit Beam Vercel FastAPI + Vercel Kinsta Fly.io Digitalocean App Platform Google Cloud Run SteamShip Langchain-serve BentoML Databutton Deployments# So, you’ve created a really cool chain - now what? How do you deploy it and make it easily shareable...
1fd387a86ce7-1
https://python.langchain.com/en/latest/ecosystem/deployments.html
This repo serves as a template for how to deploy a LangChain app with Beam. It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API. Vercel# A minimal example of how to run LangChain on Vercel using Flask. FastAPI + Vercel# A minimal example of how to run LangChain on Ve...
1fd387a86ce7-2
https://python.langchain.com/en/latest/ecosystem/deployments.html
These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include a Chatbot interface with conversational memory, a Person...
ef68ed4865a9-0
https://python.langchain.com/en/latest/tracing/local_installation.html
Locally Hosted Setup Contents Installation Environment Setup Locally Hosted Setup# This page contains instructions for installing and then setting up the environment to use the locally hosted version of tracing. Installation# Ensure you have Docker installed (see Get Docker) and that it’s running. Install th...
df5c460206ef-0
https://python.langchain.com/en/latest/tracing/hosted_installation.html
Cloud Hosted Setup Contents Installation Environment Setup Cloud Hosted Setup# We offer a hosted version of tracing at langchainplus.vercel.app. You can use this to view traces from your run without having to run the server locally. Note: we are currently only offering this to a limited number of users. The ...
df5c460206ef-1
https://python.langchain.com/en/latest/tracing/hosted_installation.html
os.environ["LANGCHAIN_API_KEY"] = "my_api_key" # Don't commit this to your repo! Better to set it in your terminal.
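The codebase reads credentials like this via a dict-or-env lookup helper; the sketch below reimplements that pattern in plain Python. The function name mirrors `get_from_dict_or_env`, but treat the exact signature here as illustrative rather than the library's API.

```python
import os

def get_from_dict_or_env(values: dict, key: str, env_key: str) -> str:
    """Return values[key] if present, else fall back to the environment variable."""
    if key in values and values[key]:
        return values[key]
    if env_key in os.environ and os.environ[env_key]:
        return os.environ[env_key]
    raise ValueError(f"Missing {key}: pass it explicitly or set {env_key}.")

# Prefer an explicit value; fall back to the environment.
os.environ["LANGCHAIN_API_KEY"] = "my_api_key"  # for demonstration only
api_key = get_from_dict_or_env({}, "api_key", "LANGCHAIN_API_KEY")
```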
27b211f430d6-0
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
Tracing Walkthrough Contents [Beta] Tracing V2 Tracing Walkthrough# There are two recommended ways to trace your LangChains: Setting the LANGCHAIN_TRACING environment variable to “true”. Using a context manager with tracing_enabled() to trace a particular block of code. Note if the environment variable is...
27b211f430d6-1
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
I need to use a calculator to solve this. Action: Calculator Action Input: 2^.123243 Observation: Answer: 1.0891804557407723 Thought: I now know the final answer. Final Answer: 1.0891804557407723 > Finished chain. '1.0891804557407723' # Agent run with tracing using a chat model agent = initialize_agent( tools, Chat...
27b211f430d6-2
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
Observation: Answer: 1.2193914912400514 Thought:I now know the answer to the question. Final Answer: 1.2193914912400514 > Finished chain. # Now, we unset the environment variable and use a context manager. if "LANGCHAIN_TRACING" in os.environ: del os.environ["LANGCHAIN_TRACING"] # here, we are writing traces to "m...
27b211f430d6-3
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
task = asyncio.create_task(agent.arun(questions[0])) # this should not be traced with tracing_enabled() as session: assert session tasks = [agent.arun(q) for q in questions[1:3]] # these should be traced await asyncio.gather(*tasks) await task > Entering new AgentExecutor chain... > Entering new AgentExec...
27b211f430d6-4
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
# os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus" # Uncomment this line if you want to use the hosted version # os.environ["LANGCHAIN_API_KEY"] = "<YOUR-LANGCHAINPLUS-API-KEY>" # Uncomment this line if you want to use the hosted version. import langchain from langchain.agents import Tool, initialize_a...
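The walkthrough above relies on an environment variable that is set globally or scoped with a context manager. A minimal sketch of that scoping pattern, using only the standard library (this is the general mechanism, not LangChain's actual `tracing_enabled` implementation):

```python
import os
from contextlib import contextmanager

@contextmanager
def env_var_enabled(name: str, value: str = "true"):
    """Temporarily set an environment variable, restoring the old state on exit.

    Mirrors the pattern described in the walkthrough: the variable traces
    everything while set; a context manager scopes it to one block of code.
    """
    previous = os.environ.get(name)
    os.environ[name] = value
    try:
        yield
    finally:
        if previous is None:
            os.environ.pop(name, None)
        else:
            os.environ[name] = previous

os.environ.pop("LANGCHAIN_TRACING", None)  # start from a clean state
with env_var_enabled("LANGCHAIN_TRACING"):
    inside = os.environ.get("LANGCHAIN_TRACING")
outside = os.environ.get("LANGCHAIN_TRACING")
```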
b099ae02276a-0
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
Source code for langchain.text_splitter """Functionality for splitting text.""" from __future__ import annotations import copy import logging import re from abc import ABC, abstractmethod from enum import Enum from typing import ( AbstractSet, Any, Callable, Collection, Iterable, List, Liter...
b099ae02276a-1
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
keep_separator: bool = False, ): """Create a new TextSplitter. Args: chunk_size: Maximum size of chunks to return chunk_overlap: Overlap in characters between chunks length_function: Function that measures the length of given chunks keep_separator: Whe...
b099ae02276a-2
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
def _join_docs(self, docs: List[str], separator: str) -> Optional[str]: text = separator.join(docs) text = text.strip() if text == "": return None else: return text def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]: # We now w...
b099ae02276a-3
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
current_doc = current_doc[1:] current_doc.append(d) total += _len + (separator_len if len(current_doc) > 1 else 0) doc = self._join_docs(current_doc, separator) if doc is not None: docs.append(doc) return docs [docs] @classmethod def from_huggingface_to...
b099ae02276a-4
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
"This is needed in order to calculate max_tokens_for_prompt. " "Please install it with `pip install tiktoken`." ) if model_name is not None: enc = tiktoken.encoding_for_model(model_name) else: enc = tiktoken.get_encoding(encoding_name) def _tik...
b099ae02276a-5
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
[docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" # First we naively split the large input into a bunch of smaller ones. splits = _split_text(text, self._separator, self._keep_separator) _separator = "" if self._keep_separator else self....
b099ae02276a-6
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
disallowed_special=self._disallowed_special, ) start_idx = 0 cur_idx = min(start_idx + self._chunk_size, len(input_ids)) chunk_ids = input_ids[start_idx:cur_idx] while start_idx < len(input_ids): splits.append(self._tokenizer.decode(chunk_ids)) start_idx +...
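The token-window loop in this excerpt can be sketched without tiktoken: take `chunk_size` tokens, then advance by `chunk_size - chunk_overlap` so consecutive chunks overlap. A simplified stand-in operating on plain integer ids:

```python
def chunk_ids(input_ids, chunk_size, chunk_overlap):
    """Sketch of the token-window loop in the excerpt: emit a window of
    chunk_size ids, then slide forward so windows overlap by chunk_overlap."""
    chunks = []
    start_idx = 0
    while start_idx < len(input_ids):
        cur_idx = min(start_idx + chunk_size, len(input_ids))
        chunks.append(input_ids[start_idx:cur_idx])
        start_idx += chunk_size - chunk_overlap
    return chunks

windows = chunk_ids(list(range(10)), chunk_size=4, chunk_overlap=1)
```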
b099ae02276a-7
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
def _split_text(self, text: str, separators: List[str]) -> List[str]: """Split incoming text and return chunks.""" final_chunks = [] # Get appropriate separator to use separator = separators[-1] new_separators = None for i, _s in enumerate(separators): if _s =...
b099ae02276a-8
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
separators = cls.get_separators_for_language(language) return cls(separators=separators, **kwargs) [docs] @staticmethod def get_separators_for_language(language: Language) -> List[str]: if language == Language.CPP: return [ # Split along class definitions ...
b099ae02276a-9
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
" ", "", ] elif language == Language.JS: return [ # Split along function definitions "\nfunction ", "\nconst ", "\nlet ", "\nvar ", "\nclass ", # Split along co...
b099ae02276a-10
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
"\nclass ", "\ndef ", "\n\tdef ", # Now split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.RST: return [ # Split along section...
b099ae02276a-11
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
# Split along method definitions "\ndef ", "\nval ", "\nvar ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\nmatch ", "\ncase ", # Spl...
b099ae02276a-12
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
"\n", " ", "", ] elif language == Language.LATEX: return [ # First, try to split along Latex sections "\n\\chapter{", "\n\\section{", "\n\\subsection{", "\n\\subsubsection{", ...
b099ae02276a-13
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
f"Please choose from {list(Language)}" ) [docs]class NLTKTextSplitter(TextSplitter): """Implementation of splitting text that looks at sentences using NLTK.""" def __init__(self, separator: str = "\n\n", **kwargs: Any): """Initialize the NLTK splitter.""" super().__init__(**kwargs) ...
b099ae02276a-14
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
splits = (str(s) for s in self._tokenizer(text).sents) return self._merge_splits(splits, self._separator) # For backwards compatibility [docs]class PythonCodeTextSplitter(RecursiveCharacterTextSplitter): """Attempts to split the text along Python syntax.""" def __init__(self, **kwargs: Any): """...
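The recursive splitting strategy these excerpts describe, including the per-language separator lists, boils down to: try the coarsest separator first, and re-split any oversized piece with the remaining, finer separators. A simplified sketch (not the library's actual implementation, which also merges small pieces back up to the chunk size):

```python
from typing import List

def recursive_split(text: str, separators: List[str], chunk_size: int) -> List[str]:
    """Simplified sketch of recursive splitting: split on the first separator;
    any piece still longer than chunk_size is re-split with the remaining,
    finer separators ("" means fall back to individual characters)."""
    sep = separators[0]
    rest = separators[1:]
    pieces = text.split(sep) if sep else list(text)
    chunks: List[str] = []
    for piece in pieces:
        if len(piece) <= chunk_size or not rest:
            if piece:
                chunks.append(piece)
        else:
            chunks.extend(recursive_split(piece, rest, chunk_size))
    return chunks

chunks = recursive_split("aaa\n\nbbbb bb\n\ncc", ["\n\n", " ", ""], 5)
```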
81b8eb375b52-0
https://python.langchain.com/en/latest/_modules/langchain/requests.html
Source code for langchain.requests """Lightweight wrapper around requests library, with async support.""" from contextlib import asynccontextmanager from typing import Any, AsyncGenerator, Dict, Optional import aiohttp import requests from pydantic import BaseModel, Extra class Requests(BaseModel): """Wrapper aroun...
81b8eb375b52-1
https://python.langchain.com/en/latest/_modules/langchain/requests.html
"""DELETE the URL and return the text.""" return requests.delete(url, headers=self.headers, **kwargs) @asynccontextmanager async def _arequest( self, method: str, url: str, **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: """Make an async request.""" if not se...
81b8eb375b52-2
https://python.langchain.com/en/latest/_modules/langchain/requests.html
@asynccontextmanager async def aput( self, url: str, data: Dict[str, Any], **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: """PUT the URL and return the text asynchronously.""" async with self._arequest("PUT", url, **kwargs) as response: yield response @a...
81b8eb375b52-3
https://python.langchain.com/en/latest/_modules/langchain/requests.html
[docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """PATCH the URL and return the text.""" return self.requests.patch(url, data, **kwargs).text [docs] def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """PUT the URL and return the text.""" ...
81b8eb375b52-4
https://python.langchain.com/en/latest/_modules/langchain/requests.html
return await response.text() [docs] async def adelete(self, url: str, **kwargs: Any) -> str: """DELETE the URL and return the text asynchronously.""" async with self.requests.adelete(url, **kwargs) as response: return await response.text() # For backwards compatibility RequestsWrapper = T...
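The wrapper pattern in these excerpts holds default headers and delegates each HTTP verb to an underlying client, returning response text. A sketch of that shape; the client is injected here so the example stays offline (the real class delegates to the `requests` library, and the class and attribute names below are illustrative):

```python
from typing import Any, Dict, Optional

class RequestsTextWrapper:
    """Illustrative sketch: hold default headers, delegate each HTTP verb to
    an underlying client, and return the response body as text."""

    def __init__(self, client: Any, headers: Optional[Dict[str, str]] = None):
        self.client = client
        self.headers = headers or {}

    def get(self, url: str, **kwargs: Any) -> str:
        return self.client.get(url, headers=self.headers, **kwargs).text

class _FakeResponse:
    def __init__(self, text: str):
        self.text = text

class _FakeClient:
    """Stands in for the requests library so the sketch runs offline."""
    def get(self, url, headers=None, **kwargs):
        return _FakeResponse(f"GET {url} with {sorted((headers or {}).items())}")

wrapper = RequestsTextWrapper(_FakeClient(), headers={"Authorization": "Bearer x"})
body = wrapper.get("https://example.com")
```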
b1e543a584e5-0
https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html
Source code for langchain.document_transformers """Transform documents""" from typing import Any, Callable, List, Sequence import numpy as np from pydantic import BaseModel, Field from langchain.embeddings.base import Embeddings from langchain.math_utils import cosine_similarity from langchain.schema import BaseDocumen...
b1e543a584e5-1
https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html
if first_idx in included_idxs and second_idx in included_idxs: # Default to dropping the second document of any highly similar pair. included_idxs.remove(second_idx) return list(sorted(included_idxs)) def _get_embeddings_from_stateful_docs( embeddings: Embeddings, documents: Sequence[_Do...
b1e543a584e5-2
https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html
included_idxs = _filter_similar_embeddings( embedded_documents, self.similarity_fn, self.similarity_threshold ) return [stateful_documents[i] for i in sorted(included_idxs)] [docs] async def atransform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence...
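The redundancy filter in these excerpts compares document embeddings pairwise and, per the comment in the source, defaults to dropping the second document of any highly similar pair. A plain-Python sketch of that rule (the library computes similarities with vectorized numpy instead of a double loop):

```python
import math
from typing import List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filter_similar(embeddings: List[List[float]], threshold: float) -> List[int]:
    """For every pair above the similarity threshold, drop the second document.
    Returns the surviving indices, sorted."""
    included = set(range(len(embeddings)))
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if i in included and j in included:
                if cosine_similarity(embeddings[i], embeddings[j]) > threshold:
                    included.remove(j)
    return sorted(included)

# Documents 0 and 1 point in nearly the same direction, so 1 is dropped.
kept = filter_similar([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]], threshold=0.95)
```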
5b2703e2b6f1-0
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html
Source code for langchain.experimental.autonomous_agents.baby_agi.baby_agi """BabyAGI agent.""" from collections import deque from typing import Any, Dict, List, Optional from pydantic import BaseModel, Field from langchain.base_language import BaseLanguageModel from langchain.callbacks.manager import CallbackManagerFo...
5b2703e2b6f1-1
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html
def print_next_task(self, task: Dict) -> None: print("\033[92m\033[1m" + "\n*****NEXT TASK*****\n" + "\033[0m\033[0m") print(str(task["task_id"]) + ": " + task["task_name"]) def print_task_result(self, result: str) -> None: print("\033[93m\033[1m" + "\n*****TASK RESULT*****\n" + "\033[0m\033...
5b2703e2b6f1-2
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html
next_task_id=str(next_task_id), objective=objective, ) new_tasks = response.split("\n") prioritized_task_list = [] for task_string in new_tasks: if not task_string.strip(): continue task_parts = task_string.strip().split(".", 1) ...
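The parsing loop in this excerpt splits the LLM's response on newlines, skips blanks, and splits each "1. task name" line once on "." into an id and a name. A runnable sketch of that loop:

```python
from typing import Dict, List

def parse_prioritized_tasks(response: str) -> List[Dict[str, str]]:
    """Sketch of the BabyAGI parsing loop: one task per non-blank line,
    each split once on '.' into task_id and task_name."""
    tasks = []
    for task_string in response.split("\n"):
        if not task_string.strip():
            continue
        task_parts = task_string.strip().split(".", 1)
        if len(task_parts) == 2:
            task_id = task_parts[0].strip()
            task_name = task_parts[1].strip()
            tasks.append({"task_id": task_id, "task_name": task_name})
    return tasks

tasks = parse_prioritized_tasks("1. Write summary\n\n2. Review summary\n")
```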
5b2703e2b6f1-3
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html
if self.task_list: self.print_task_list() # Step 1: Pull the first task task = self.task_list.popleft() self.print_next_task(task) # Step 2: Execute the task result = self.execute_task(objective, task["task_name"]) ...
5b2703e2b6f1-4
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html
"""Initialize the BabyAGI Controller.""" task_creation_chain = TaskCreationChain.from_llm(llm, verbose=verbose) task_prioritization_chain = TaskPrioritizationChain.from_llm( llm, verbose=verbose ) if task_execution_chain is None: execution_chain: Chain = TaskExecu...
64fa5b911a50-0
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html
Source code for langchain.experimental.autonomous_agents.autogpt.agent from __future__ import annotations from typing import List, Optional from pydantic import ValidationError from langchain.chains.llm import LLMChain from langchain.chat_models.base import BaseChatModel from langchain.experimental.autonomous_agents.au...
64fa5b911a50-1
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html
tools: List[BaseTool], llm: BaseChatModel, human_in_the_loop: bool = False, output_parser: Optional[BaseAutoGPTOutputParser] = None, ) -> AutoGPT: prompt = AutoGPTPrompt( ai_name=ai_name, ai_role=ai_role, tools=tools, input_variables=["...
64fa5b911a50-2
https://python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html
tools = {t.name: t for t in self.tools} if action.name == FINISH_NAME: return action.args["response"] if action.name in tools: tool = tools[action.name] try: observation = tool.run(action.args) except ValidationE...
ace0ca23347e-0
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
Source code for langchain.experimental.generative_agents.generative_agent import re from datetime import datetime from typing import Any, Dict, List, Optional, Tuple from pydantic import BaseModel, Field from langchain import LLMChain from langchain.base_language import BaseLanguageModel from langchain.experimental.gen...
ace0ca23347e-1
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
def _parse_list(text: str) -> List[str]: """Parse a newline-separated string into a list of strings.""" lines = re.split(r"\n", text.strip()) return [re.sub(r"^\s*\d+\.\s*", "", line).strip() for line in lines] def chain(self, prompt: PromptTemplate) -> LLMChain: return LLMChain( ...
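The `_parse_list` helper shown above is small enough to reproduce runnably: split on newlines, then strip any leading "N. " numbering from each line with a regex.

```python
import re
from typing import List

def parse_list(text: str) -> List[str]:
    """The excerpt's newline list parser: one item per line, with leading
    'N. ' numbering removed."""
    lines = re.split(r"\n", text.strip())
    return [re.sub(r"^\s*\d+\.\s*", "", line).strip() for line in lines]

items = parse_list("1. wake up\n2. eat breakfast\n3. go to work")
```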
ace0ca23347e-2
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
q2 = f"{entity_name} is {entity_action}" return self.chain(prompt=prompt).run(q1=q1, queries=[q1, q2]).strip() def _generate_reaction( self, observation: str, suffix: str, now: Optional[datetime] = None ) -> str: """React to a given observation or dialogue act.""" prompt = Prompt...
ace0ca23347e-3
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
return self.chain(prompt=prompt).run(**kwargs).strip() def _clean_response(self, text: str) -> str: return re.sub(f"^{self.name} ", "", text.strip()).strip() [docs] def generate_reaction( self, observation: str, now: Optional[datetime] = None ) -> Tuple[bool, str]: """React to a given...
ace0ca23347e-4
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
[docs] def generate_dialogue_response( self, observation: str, now: Optional[datetime] = None ) -> Tuple[bool, str]: """React to a given observation.""" call_to_action_template = ( "What would {agent_name} say? To end the conversation, write:" ' GOODBYE: "what to s...
ace0ca23347e-5
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
# updated periodically through probing its memories # ###################################################### def _compute_agent_summary(self) -> str: """""" prompt = PromptTemplate.from_template( "How would you summarize {name}'s core characteristics given the" + " follo...
ace0ca23347e-6
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html
summary = self.get_summary(force_refresh=force_refresh, now=now) current_time_str = now.strftime("%B %d, %Y, %I:%M %p") return ( f"{summary}\nIt is {current_time_str}.\n{self.name}'s status: {self.status}" )
4d820bc16959-0
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
Source code for langchain.experimental.generative_agents.memory import logging import re from datetime import datetime from typing import Any, Dict, List, Optional from langchain import LLMChain from langchain.base_language import BaseLanguageModel from langchain.prompts import PromptTemplate from langchain.retrievers ...
4d820bc16959-1
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
relevant_memories_simple_key: str = "relevant_memories_simple" most_recent_memories_key: str = "most_recent_memories" now_key: str = "now" reflecting: bool = False def chain(self, prompt: PromptTemplate) -> LLMChain: return LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose) @staticm...
4d820bc16959-2
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
"""Generate 'insights' on a topic of reflection, based on pertinent memories.""" prompt = PromptTemplate.from_template( "Statements relevant to: '{topic}'\n" "---\n" "{related_statements}\n" "---\n" "What 5 high-level novel insights can you infer from ...
4d820bc16959-3
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
new_insights.extend(insights) return new_insights def _score_memory_importance(self, memory_content: str) -> float: """Score the absolute importance of the given memory.""" prompt = PromptTemplate.from_template( "On the scale of 1 to 10, where 1 is purely mundane" + "...
4d820bc16959-4
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
# more synthesized memories to the agent's memory stream. if ( self.reflection_threshold is not None and self.aggregate_importance > self.reflection_threshold and not self.reflecting ): self.reflecting = True self.pause_to_reflect(now=now) ...
4d820bc16959-5
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
for doc in self.memory_retriever.memory_stream[::-1]: if consumed_tokens >= self.max_tokens_limit: break consumed_tokens += self.llm.get_num_tokens(doc.page_content) if consumed_tokens < self.max_tokens_limit: result.append(doc) return self.for...
4d820bc16959-6
https://python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html
self.add_memory(mem, now=now) [docs] def clear(self) -> None: """Clear memory contents.""" # TODO
257ac82b613a-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html
Source code for langchain.retrievers.time_weighted_retriever """Retriever that combines embedding similarity with recency in retrieving values.""" import datetime from copy import deepcopy from typing import Any, Dict, List, Optional, Tuple from pydantic import BaseModel, Field from langchain.schema import BaseRetrieve...
257ac82b613a-1
https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html
"""Configuration for this pydantic object.""" arbitrary_types_allowed = True def _get_combined_score( self, document: Document, vector_relevance: Optional[float], current_time: datetime.datetime, ) -> float: """Return the combined score for a document.""" ...
257ac82b613a-2
https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html
# If a doc is considered salient, update the salience score docs_and_scores.update(self.get_salient_docs(query)) rescored_docs = [ (doc, self._get_combined_score(doc, relevance, current_time)) for doc, relevance in docs_and_scores.values() ] rescored_docs.sort(key...
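The combined score these excerpts compute adds semantic relevance to a recency term that decays exponentially with the hours since the document was last accessed. A hedged sketch of that combination (the real retriever can also add further weighted metadata fields, omitted here):

```python
from datetime import datetime, timedelta

def combined_score(vector_relevance: float, last_accessed: datetime,
                   current_time: datetime, decay_rate: float) -> float:
    """Sketch: recency = (1 - decay_rate) ** hours_passed, added to the
    vector-store relevance, so fresher documents outrank equally relevant
    stale ones."""
    hours_passed = (current_time - last_accessed).total_seconds() / 3600
    recency = (1.0 - decay_rate) ** hours_passed
    return recency + vector_relevance

now = datetime(2023, 6, 4, 12, 0)
fresh = combined_score(0.5, now - timedelta(hours=1), now, decay_rate=0.01)
stale = combined_score(0.5, now - timedelta(hours=100), now, decay_rate=0.01)
```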
257ac82b613a-3
https://python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html
return self.vectorstore.add_documents(dup_docs, **kwargs) [docs] async def aadd_documents( self, documents: List[Document], **kwargs: Any ) -> List[str]: """Add documents to vectorstore.""" current_time = kwargs.get("current_time") if current_time is None: current_time...
e8ee33bc64a0-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html
Source code for langchain.retrievers.pinecone_hybrid_search """Taken from: https://docs.pinecone.io/docs/hybrid-search""" import hashlib from typing import Any, Dict, List, Optional from pydantic import BaseModel, Extra, root_validator from langchain.embeddings.base import Embeddings from langchain.schema import BaseRe...
e8ee33bc64a0-1
https://python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html
# create sparse vectors sparse_embeds = sparse_encoder.encode_documents(context_batch) for s in sparse_embeds: s["values"] = [float(s1) for s1 in s["values"]] vectors = [] # loop through the data and create dictionaries for upserts for doc_id, sparse, dense, metadata ...
e8ee33bc64a0-2
https://python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html
from pinecone_text.sparse.base_sparse_encoder import ( BaseSparseEncoder, # noqa:F401 ) except ImportError: raise ValueError( "Could not import pinecone_text python package. " "Please install it with `pip install pinecone_text`." ...
f62ea7d3967c-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html
Source code for langchain.retrievers.vespa_retriever """Wrapper for retrieving documents from Vespa.""" from __future__ import annotations import json from typing import TYPE_CHECKING, Any, Dict, List, Literal, Optional, Sequence, Union from langchain.schema import BaseRetriever, Document if TYPE_CHECKING: from ves...
f62ea7d3967c-1
https://python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html
[docs] def get_relevant_documents(self, query: str) -> List[Document]: body = self._query_body.copy() body["query"] = query return self._query(body) [docs] async def aget_relevant_documents(self, query: str) -> List[Document]: raise NotImplementedError [docs] def get_relevant_do...
f62ea7d3967c-2
https://python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html
_filter (Optional[str]): Document filter condition expressed in YQL. Defaults to None. yql (Optional[str]): Full YQL query to be used. Should not be specified if _filter or sources are specified. Defaults to None. kwargs (Any): Keyword arguments added to query bod...
1b8739524fbb-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/tfidf.html
Source code for langchain.retrievers.tfidf """TF-IDF Retriever. Largely based on https://github.com/asvskartheek/Text-Retrieval/blob/master/TF-IDF%20Search%20Engine%20(SKLEARN).ipynb""" from __future__ import annotations from typing import Any, Dict, Iterable, List, Optional from pydantic import BaseModel from langchai...
1b8739524fbb-1
https://python.langchain.com/en/latest/_modules/langchain/retrievers/tfidf.html
return cls(vectorizer=vectorizer, docs=docs, tfidf_array=tfidf_array, **kwargs) [docs] @classmethod def from_documents( cls, documents: Iterable[Document], *, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any, ) -> TFIDFRetriever: texts, metadatas = ...
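The retriever above delegates to scikit-learn's TfidfVectorizer; the scoring idea can be sketched in plain Python as summing tf x idf over the query's terms per document. This toy version uses one smoothed-idf formula for illustration, not scikit-learn's exact weighting:

```python
import math
from collections import Counter
from typing import List

def tfidf_rank(docs: List[str], query: str) -> List[int]:
    """Toy TF-IDF ranking sketch: score each document by summing
    term_frequency * inverse_document_frequency over the query's terms,
    and return document indices, best first."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)

    def idf(term: str) -> float:
        df = sum(1 for toks in tokenized if term in toks)
        return math.log((n + 1) / (df + 1)) + 1.0  # smoothed idf

    scores = []
    for toks in tokenized:
        counts = Counter(toks)
        score = sum(counts[t] / len(toks) * idf(t) for t in query.lower().split())
        scores.append(score)
    return sorted(range(n), key=lambda i: -scores[i])

# The document mentioning "cat" most densely ranks first.
order = tfidf_rank(["the cat sat", "dogs bark loudly", "cat and cat again"], "cat")
```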
1193e524cfea-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html
Source code for langchain.retrievers.svm """SVM Retriever. Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb""" from __future__ import annotations import concurrent.futures from typing import Any, List, Optional import numpy as np from pydantic import BaseModel from langchain.embedding...
1193e524cfea-1
https://python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html
class_weight="balanced", verbose=False, max_iter=10000, tol=1e-6, C=0.1 ) clf.fit(x, y) similarities = clf.decision_function(x) sorted_ix = np.argsort(-similarities) # svm.LinearSVC in scikit-learn is non-deterministic. # if a text is the same as a query, there is no guar...
1c494575cc58-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/wikipedia.html
Source code for langchain.retrievers.wikipedia from typing import List from langchain.schema import BaseRetriever, Document from langchain.utilities.wikipedia import WikipediaAPIWrapper [docs]class WikipediaRetriever(BaseRetriever, WikipediaAPIWrapper): """ It is effectively a wrapper for WikipediaAPIWrapper. ...
2252610a7a0a-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html
Source code for langchain.retrievers.azure_cognitive_search """Retriever wrapper for Azure Cognitive Search.""" from __future__ import annotations import json from typing import Dict, List, Optional import aiohttp import requests from pydantic import BaseModel, Extra, root_validator from langchain.schema import BaseRet...
2252610a7a0a-1
https://python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html
values["api_key"] = get_from_dict_or_env( values, "api_key", "AZURE_COGNITIVE_SEARCH_API_KEY" ) return values def _build_search_url(self, query: str) -> str: base_url = f"https://{self.service_name}.search.windows.net/" endpoint_path = f"indexes/{self.index_name}/docs?api...
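The URL-building step shown here assembles the search endpoint from the service name and index name with f-strings. A sketch following the shape in the excerpt; the `api_version` default below is an assumption, not taken from the source:

```python
def build_search_url(service_name: str, index_name: str, query: str,
                     api_version: str = "2020-06-30") -> str:
    """Sketch of assembling an Azure Cognitive Search request URL from the
    service name, index name, and query string (api_version is illustrative)."""
    base_url = f"https://{service_name}.search.windows.net/"
    endpoint_path = f"indexes/{index_name}/docs?api-version={api_version}"
    return base_url + endpoint_path + f"&search={query}"

url = build_search_url("my-service", "my-index", "langchain")
```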
2252610a7a0a-2
https://python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html
Document(page_content=result.pop(self.content_key), metadata=result) for result in search_results ] [docs] async def aget_relevant_documents(self, query: str) -> List[Document]: search_results = await self._asearch(query) return [ Document(page_content=result.pop(self....
5eed9d96f4e3-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html
Source code for langchain.retrievers.elastic_search_bm25 """Wrapper around Elasticsearch vector database.""" from __future__ import annotations import uuid from typing import Any, Iterable, List from langchain.docstore.document import Document from langchain.schema import BaseRetriever [docs]class ElasticSearchBM25Retr...
5eed9d96f4e3-1
https://python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html
cls, elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75 ) -> ElasticSearchBM25Retriever: from elasticsearch import Elasticsearch # Create an Elasticsearch client instance es = Elasticsearch(elasticsearch_url) # Define the index settings and mappings set...
        ids = []
        for i, text in enumerate(texts):
            _id = str(uuid.uuid4())
            request = {
                "_op_type": "index",
                "_index": self.index_name,
                "content": text,
                "_id": _id,
            }
            ids.append(_id)
            requests.append...
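The loop above builds one bulk-index action per text, each tagged with a fresh UUID, so that the whole batch can be handed to the Elasticsearch bulk helper in one call. A self-contained sketch of just that request-building step, using plain dicts (the `demo-index` name is an illustrative assumption):

```python
import uuid
from typing import Iterable, List, Tuple

def build_bulk_actions(index_name: str, texts: Iterable[str]) -> Tuple[List[dict], List[str]]:
    """Build `_op_type: index` actions in the shape the bulk helper expects."""
    requests: List[dict] = []
    ids: List[str] = []
    for text in texts:
        _id = str(uuid.uuid4())  # one stable id per document, returned to the caller
        requests.append({
            "_op_type": "index",
            "_index": index_name,
            "content": text,
            "_id": _id,
        })
        ids.append(_id)
    return requests, ids

actions, ids = build_bulk_actions("demo-index", ["foo", "bar"])
```

Returning the generated ids alongside the actions mirrors how the retriever can later reference the indexed documents.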
89b4c588bb2b-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/knn.html
Source code for langchain.retrievers.knn

"""KNN Retriever.

Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb"""
from __future__ import annotations

import concurrent.futures
from typing import Any, List, Optional

import numpy as np
from pydantic import BaseModel

from langchain.embedding...
        sorted_ix = np.argsort(-similarities)

        denominator = np.max(similarities) - np.min(similarities) + 1e-6
        normalized_similarities = (similarities - np.min(similarities)) / denominator

        top_k_results = [
            Document(page_content=self.texts[row])
            for row in sorted_ix[0 : self.k]
        ...
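The ranking step above sorts by raw similarity (negating before `argsort` gives descending order) and min-max normalizes the scores, with the `1e-6` term guarding against division by zero when all similarities are equal. A self-contained numpy sketch of the same arithmetic, using illustrative similarity values:

```python
import numpy as np

similarities = np.array([0.9, 0.1, 0.5])

# Indices sorted by descending similarity: argsort of the negated array.
sorted_ix = np.argsort(-similarities)

# Min-max normalize into [0, 1); the epsilon avoids 0/0 for constant scores.
denominator = np.max(similarities) - np.min(similarities) + 1e-6
normalized = (similarities - np.min(similarities)) / denominator

k = 2
top_k = [int(row) for row in sorted_ix[0:k]]
```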
01cf0126d83c-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/remote_retriever.html
Source code for langchain.retrievers.remote_retriever

from typing import List, Optional

import aiohttp
import requests
from pydantic import BaseModel

from langchain.schema import BaseRetriever, Document


class RemoteLangChainRetriever(BaseRetriever, BaseModel):
    url: str
    headers: Optional[dict] = None
    i...
https://python.langchain.com/en/latest/_modules/langchain/retrievers/zep.html
Source code for langchain.retrievers.zep

from __future__ import annotations

from typing import TYPE_CHECKING, List, Optional

from langchain.schema import BaseRetriever, Document

if TYPE_CHECKING:
    from zep_python import SearchResult


class ZepRetriever(BaseRetriever):
    """A Retriever implementation for the Z...
    def get_relevant_documents(self, query: str) -> List[Document]:
        from zep_python import SearchPayload

        payload: SearchPayload = SearchPayload(text=query)
        results: List[SearchResult] = self.zep_client.search_memory(
            self.session_id, payload, limit=self.top_k
        )
        ...
76cae3620686-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/contextual_compression.html
Source code for langchain.retrievers.contextual_compression

"""Retriever that wraps a base retriever and filters the results."""
from typing import List

from pydantic import BaseModel, Extra

from langchain.retrievers.document_compressors.base import (
    BaseDocumentCompressor,
)
from langchain.schema import BaseRetri...
© Copyright 2023, Harrison Chase. Last updated on Jun 04, 2023.
cf8c73c7312a-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html
Source code for langchain.retrievers.weaviate_hybrid_search

"""Wrapper around weaviate vector database."""
from __future__ import annotations

from typing import Any, Dict, List, Optional
from uuid import uuid4

from pydantic import Extra

from langchain.docstore.document import Document
from langchain.schema import BaseR...
            "vectorizer": "text2vec-openai",
        }
        if not self._client.schema.exists(self._index_name):
            self._client.schema.create_class(class_obj)

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    # added te...
            raise ValueError(f"Error during query: {result['errors']}")

        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            docs.append(Document(page_content=text, metadata=res))
        return docs

    async def aget_relevant_documents(
        ...
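The loop above splits each Weaviate hit in two: the configured text field becomes the document's `page_content`, and every remaining field rides along as metadata. A self-contained sketch of that transform using plain dicts in place of `Document` objects (the `text` key and sample hit are illustrative assumptions):

```python
from typing import List

def hits_to_docs(hits: List[dict], text_key: str) -> List[dict]:
    """Split each hit into page_content plus leftover metadata."""
    docs = []
    for res in hits:
        res = dict(res)           # copy, to avoid mutating the caller's data
        text = res.pop(text_key)  # every key left after the pop becomes metadata
        docs.append({"page_content": text, "metadata": res})
    return docs

docs = hits_to_docs([{"text": "hello", "score": 0.9}], text_key="text")
```

Note the copy before `pop`: the original module mutates the result dict in place, which is fine for throwaway response data but worth avoiding in a reusable helper.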
5627c3dda3ad-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/databerry.html
Source code for langchain.retrievers.databerry

from typing import List, Optional

import aiohttp
import requests

from langchain.schema import BaseRetriever, Document


class DataberryRetriever(BaseRetriever):
    datastore_url: str
    top_k: Optional[int]
    api_key: Optional[str]

    def __init__(
        self,
        ...
                **({"topK": self.top_k} if self.top_k is not None else {}),
            },
            headers={
                "Content-Type": "application/json",
                **(
                    {"Authorization": f"Bearer {self.api_key}"}
                    if self.api_key is not None
                    ...
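The request body and headers above use dict unpacking so that `topK` and the `Authorization` header are included only when the corresponding optional field is set; unpacking an empty dict adds nothing. A self-contained sketch of that conditional construction (the field names match the snippet, the sample values are illustrative):

```python
from typing import Optional, Tuple

def build_request(query: str, top_k: Optional[int],
                  api_key: Optional[str]) -> Tuple[dict, dict]:
    """Build a JSON body and headers, omitting unset optional fields."""
    body = {
        "query": query,
        # `**{}` adds no keys, so topK appears only when top_k is set.
        **({"topK": top_k} if top_k is not None else {}),
    }
    headers = {
        "Content-Type": "application/json",
        **({"Authorization": f"Bearer {api_key}"} if api_key is not None else {}),
    }
    return body, headers

body, headers = build_request("hello", top_k=3, api_key=None)
```

This pattern keeps the payload free of `null` fields that some APIs would reject, without any if/else statements around the dict literals.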
ebaf8b5f22fb-0
https://python.langchain.com/en/latest/_modules/langchain/retrievers/chatgpt_plugin_retriever.html
Source code for langchain.retrievers.chatgpt_plugin_retriever

from __future__ import annotations

from typing import List, Optional

import aiohttp
import requests
from pydantic import BaseModel

from langchain.schema import BaseRetriever, Document


class ChatGPTPluginRetriever(BaseRetriever, BaseModel):
    url: str...
            docs.append(Document(page_content=content, metadata=d))
        return docs

    def _create_request(self, query: str) -> tuple[str, dict, dict]:
        url = f"{self.url}/query"
        json = {
            "queries": [
                {
                    "query": query,
                    "filter": self.filter,
                    ...
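`_create_request` above assembles the plugin's `/query` endpoint URL and a JSON body whose `queries` list carries the query and filter. A minimal self-contained sketch of that request shape (the base URL, `top_k` field, and sample values are illustrative assumptions, since the snippet is truncated):

```python
from typing import Optional, Tuple

def create_request(base_url: str, query: str,
                   filter: Optional[dict], top_k: int) -> Tuple[str, dict]:
    """Return (url, json_body) for a ChatGPT-plugin-style /query call."""
    url = f"{base_url}/query"
    json_body = {
        # The body is a list of query objects, so one request
        # could in principle carry several queries at once.
        "queries": [
            {
                "query": query,
                "filter": filter,
                "top_k": top_k,
            }
        ]
    }
    return url, json_body

url, body = create_request("http://localhost:8000", "hello", None, 3)
```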