langchain.output_parsers.retry.RetryWithErrorOutputParser¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
parse(completion: str) → T[source]¶
Parse a single string model output into some structure.
Parameters
completion – String output of a language model.
Returns
Structured output.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt_value: PromptValue) → T[source]¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt_value – Input PromptValue.
Returns
Structured output.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using RetryWithErrorOutputParser¶
Retry parser
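Example (a minimal sketch of parse_with_prompt, assuming an OpenAI API key is configured; the Action model, prompt template, and query below are illustrative placeholders, not part of this class):

from pydantic import BaseModel
from langchain.llms import OpenAI
from langchain.output_parsers import PydanticOutputParser, RetryWithErrorOutputParser
from langchain.prompts import PromptTemplate

class Action(BaseModel):  # hypothetical target structure
    action: str
    action_input: str

parser = PydanticOutputParser(pydantic_object=Action)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
prompt_value = prompt.format_prompt(query="What should I search for?")

retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))
# parse() alone would fail on this incomplete completion; parse_with_prompt()
# re-asks the LLM with the original prompt, the bad output, and the parse
# error for context.
bad_response = '{"action": "search"}'
action = retry_parser.parse_with_prompt(bad_response, prompt_value)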
langchain.text_splitter.NLTKTextSplitter¶
class langchain.text_splitter.NLTKTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]¶
Splitting text using the NLTK package.
Initialize the NLTK splitter.
Methods
__init__([separator])
Initialize the NLTK splitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split incoming text and return chunks.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(separator: str = '\n\n', **kwargs: Any) → None[source]¶
Initialize the NLTK splitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str][source]¶
Split incoming text and return chunks.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
Examples using NLTKTextSplitter¶
Split by tokens
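Example (a short usage sketch; it assumes the nltk package and its punkt sentence data are installed, and the input file is a hypothetical placeholder):

from langchain.text_splitter import NLTKTextSplitter

text_splitter = NLTKTextSplitter(chunk_size=1000)
with open("state_of_the_union.txt") as f:  # hypothetical input file
    document = f.read()
texts = text_splitter.split_text(document)
print(texts[0])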
langchain.text_splitter.MarkdownHeaderTextSplitter¶
class langchain.text_splitter.MarkdownHeaderTextSplitter(headers_to_split_on: List[Tuple[str, str]], return_each_line: bool = False)[source]¶
Splitting markdown files based on specified headers.
Create a new MarkdownHeaderTextSplitter.
Parameters
headers_to_split_on – Headers we want to track
return_each_line – Return each line with its associated headers
Methods
__init__(headers_to_split_on[, return_each_line])
Create a new MarkdownHeaderTextSplitter.
aggregate_lines_to_chunks(lines)
Combine lines with common metadata into chunks.
split_text(text)
Split a markdown file.
__init__(headers_to_split_on: List[Tuple[str, str]], return_each_line: bool = False)[source]¶
Create a new MarkdownHeaderTextSplitter.
Parameters
headers_to_split_on – Headers we want to track
return_each_line – Return each line with its associated headers
aggregate_lines_to_chunks(lines: List[LineType]) → List[Document][source]¶
Combine lines with common metadata into chunks.
Parameters
lines – Lines of text with associated header metadata.
split_text(text: str) → List[Document][source]¶
Split a markdown file.
Parameters
text – Markdown file content.
Examples using MarkdownHeaderTextSplitter¶
Context aware text splitting and QA / Chat
Perform context-aware text splitting
MarkdownHeaderTextSplitter
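Example (a short sketch; the markdown snippet and header tuples are illustrative):

from langchain.text_splitter import MarkdownHeaderTextSplitter

markdown_document = "# Intro\n\nHi this is Jim.\n\n## Section\n\nHi this is Joe."
headers_to_split_on = [
    ("#", "Header 1"),   # header marker -> metadata key
    ("##", "Header 2"),
]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
docs = splitter.split_text(markdown_document)
# Each returned Document carries the headers it falls under in its metadata,
# e.g. {"Header 1": "Intro", "Header 2": "Section"}.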
langchain.text_splitter.CharacterTextSplitter¶
class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', is_separator_regex: bool = False, **kwargs: Any)[source]¶
Splits text by looking at characters.
Create a new TextSplitter.
Methods
__init__([separator, is_separator_regex])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split incoming text and return chunks.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(separator: str = '\n\n', is_separator_regex: bool = False, **kwargs: Any) → None[source]¶
Create a new TextSplitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str][source]¶
Split incoming text and return chunks.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
Examples using CharacterTextSplitter¶
Hugging Face
OpenAI
Vectara Text Generation
Document Comparison
Vectorstore Agent
LanceDB
Weaviate
Activeloop’s Deep Lake
Vectara
Redis
PGVector
Rockset
Zilliz
SingleStoreDB
Annoy
Typesense
Tair
Chroma
Alibaba Cloud OpenSearch
StarRocks
Clarifai
scikit-learn
DocArrayHnswSearch
MyScale
ClickHouse Vector Search
Qdrant
Tigris
AwaDB
Supabase (Postgres)
OpenSearch
Pinecone
Azure Cognitive Search
Cassandra
Milvus
ElasticSearch
Marqo
DocArrayInMemorySearch
pg_embedding
FAISS
AnalyticDB
Hologres
MongoDB Atlas
Meilisearch
Figma
Psychic
Manifest
Caching integrations
Data Augmented Question Answering
Question answering over a group chat messages using Activeloop’s DeepLake
Retrieve from vector stores directly
Improve document indexing with HyDE
Structure answers with OpenAI functions
QA using Activeloop’s DeepLake
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop’s Deep Lake
Use LangChain, GPT and Activeloop’s Deep Lake to work with code base
SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base
Split by tokens
How to add memory to a Multi-Input Chain
Combine agents and vector stores
Loading from LangChainHub
Retrieval QA using OpenAI functions
Vector store-augmented text generation
Hypothetical Document Embeddings
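Example (a minimal sketch; the input string and chunk sizes are placeholders):

from langchain.text_splitter import CharacterTextSplitter

long_text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."  # placeholder
text_splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
)
chunks = text_splitter.split_text(long_text)        # List[str]
docs = text_splitter.create_documents([long_text])  # List[Document]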
langchain.text_splitter.MarkdownTextSplitter¶
class langchain.text_splitter.MarkdownTextSplitter(**kwargs: Any)[source]¶
Attempts to split the text along Markdown-formatted headings.
Initialize a MarkdownTextSplitter.
Methods
__init__(**kwargs)
Initialize a MarkdownTextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_language(language, **kwargs)
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
get_separators_for_language(language)
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(**kwargs: Any) → None[source]¶
Initialize a MarkdownTextSplitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_language(language: Language, **kwargs: Any) → RecursiveCharacterTextSplitter¶
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
static get_separators_for_language(language: Language) → List[str]¶
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str]¶
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
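Example (a minimal sketch; the markdown text and chunk size are placeholders):

from langchain.text_splitter import MarkdownTextSplitter

markdown_text = "# Heading\n\nSome intro text.\n\n## Subheading\n\nMore text here."
splitter = MarkdownTextSplitter(chunk_size=100, chunk_overlap=0)
docs = splitter.create_documents([markdown_text])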
langchain.text_splitter.Language¶
class langchain.text_splitter.Language(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Enum of the programming languages.
CPP = 'cpp'¶
GO = 'go'¶
JAVA = 'java'¶
JS = 'js'¶
PHP = 'php'¶
PROTO = 'proto'¶
PYTHON = 'python'¶
RST = 'rst'¶
RUBY = 'ruby'¶
RUST = 'rust'¶
SCALA = 'scala'¶
SWIFT = 'swift'¶
MARKDOWN = 'markdown'¶
LATEX = 'latex'¶
HTML = 'html'¶
SOL = 'sol'¶
Examples using Language¶
Source Code
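The enum is typically passed to RecursiveCharacterTextSplitter.from_language. Example (a minimal sketch; the code string and chunk size are placeholders):

from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

python_code = "def hello():\n    print('hello')\n\nclass Foo:\n    pass\n"
python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=60, chunk_overlap=0
)
docs = python_splitter.create_documents([python_code])
# Inspect the separators a given language maps to:
print(RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON))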
langchain.text_splitter.LineType¶
class langchain.text_splitter.LineType[source]¶
Line type as typed dict.
metadata: Dict[str, str]¶
content: str¶
langchain.text_splitter.SpacyTextSplitter¶
class langchain.text_splitter.SpacyTextSplitter(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', **kwargs: Any)[source]¶
Splitting text using the spaCy package.
By default, spaCy's en_core_web_sm model is used. For faster but
potentially less accurate splitting, you can use pipeline='sentencizer'.
Initialize the spacy text splitter.
Methods
__init__([separator, pipeline])
Initialize the spacy text splitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split incoming text and return chunks.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', **kwargs: Any) → None[source]¶
Initialize the spacy text splitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str][source]¶
Split incoming text and return chunks.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
Examples using SpacyTextSplitter¶
spaCy
Atlas
Split by tokens
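Example (a short sketch; it assumes spacy and the en_core_web_sm model are installed, and the input text is a placeholder):

from langchain.text_splitter import SpacyTextSplitter

text = "This is the first sentence. This is the second sentence."  # placeholder
text_splitter = SpacyTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(text)
# Or trade some accuracy for speed with the lightweight sentencizer:
fast_splitter = SpacyTextSplitter(pipeline="sentencizer")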
langchain.text_splitter.HeaderType¶
class langchain.text_splitter.HeaderType[source]¶
Header type as typed dict.
level: int¶
name: str¶
data: str¶
langchain.text_splitter.RecursiveCharacterTextSplitter¶
class langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, keep_separator: bool = True, is_separator_regex: bool = False, **kwargs: Any)[source]¶
Splits text by recursively looking at characters.
Recursively tries to split by different characters to find one
that works.
Create a new TextSplitter.
Methods
__init__([separators, keep_separator, ...])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_language(language, **kwargs)
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
get_separators_for_language(language)
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(separators: Optional[List[str]] = None, keep_separator: bool = True, is_separator_regex: bool = False, **kwargs: Any) → None[source]¶
Create a new TextSplitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_language(language: Language, **kwargs: Any) → RecursiveCharacterTextSplitter[source]¶
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
static get_separators_for_language(language: Language) → List[str][source]¶
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str][source]¶
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
Examples using RecursiveCharacterTextSplitter¶
Cohere Reranker
Loading documents from a YouTube url
Source Code
!pip install bs4
Context aware text splitting and QA / Chat
QA over Documents
Running LLMs locally
Question answering over a group chat messages using Activeloop’s DeepLake
Perform context-aware text splitting
Use local LLMs
QA using Activeloop’s DeepLake
MultiQueryRetriever
MarkdownHeaderTextSplitter
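Example (a minimal sketch; the input text and sizes are placeholders):

from langchain.text_splitter import RecursiveCharacterTextSplitter

long_text = "One paragraph.\n\nAnother paragraph with more words in it."  # placeholder
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
    add_start_index=True,  # record each chunk's start offset in metadata
)
docs = text_splitter.create_documents([long_text])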
langchain.text_splitter.Tokenizer¶
class langchain.text_splitter.Tokenizer(chunk_overlap: 'int', tokens_per_chunk: 'int', decode: 'Callable[[list[int]], str]', encode: 'Callable[[str], List[int]]')[source]¶
Attributes
chunk_overlap
tokens_per_chunk
decode
encode
Methods
__init__(chunk_overlap, tokens_per_chunk, ...)
__init__(chunk_overlap: int, tokens_per_chunk: int, decode: Callable[[list[int]], str], encode: Callable[[str], List[int]]) → None¶
langchain.text_splitter.PythonCodeTextSplitter¶
class langchain.text_splitter.PythonCodeTextSplitter(**kwargs: Any)[source]¶
Attempts to split the text along Python syntax.
Initialize a PythonCodeTextSplitter.
Methods
__init__(**kwargs)
Initialize a PythonCodeTextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_language(language, **kwargs)
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
get_separators_for_language(language)
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(**kwargs: Any) → None[source]¶
Initialize a PythonCodeTextSplitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_language(language: Language, **kwargs: Any) → RecursiveCharacterTextSplitter¶
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
static get_separators_for_language(language: Language) → List[str]¶
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str]¶
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
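Example (a minimal sketch; the code string and chunk size are placeholders):

from langchain.text_splitter import PythonCodeTextSplitter

python_code = """
def hello_world():
    print("Hello, World!")

class HelloWorld:
    def greet(self):
        hello_world()
"""
splitter = PythonCodeTextSplitter(chunk_size=60, chunk_overlap=0)
docs = splitter.create_documents([python_code])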
langchain.text_splitter.LatexTextSplitter¶
class langchain.text_splitter.LatexTextSplitter(**kwargs: Any)[source]¶
Attempts to split the text along Latex-formatted layout elements.
Initialize a LatexTextSplitter.
Methods
__init__(**kwargs)
Initialize a LatexTextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_language(language, **kwargs)
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
get_separators_for_language(language)
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(**kwargs: Any) → None[source]¶
Initialize a LatexTextSplitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_language(language: Language, **kwargs: Any) → RecursiveCharacterTextSplitter¶
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
static get_separators_for_language(language: Language) → List[str]¶
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str]¶
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
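Example (a minimal sketch; the LaTeX source and chunk size are placeholders):

from langchain.text_splitter import LatexTextSplitter

latex_text = r"""
\documentclass{article}
\begin{document}
\section{Introduction}
Large language models are trained on massive corpora.
\subsection{History}
Early work focused on n-gram models.
\end{document}
"""
splitter = LatexTextSplitter(chunk_size=120, chunk_overlap=0)
docs = splitter.create_documents([latex_text])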
langchain.text_splitter.SentenceTransformersTokenTextSplitter¶
class langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any)[source]¶
Splitting text into tokens using a sentence-transformers model tokenizer.
Create a new TextSplitter.
Methods
__init__([chunk_overlap, model_name, ...])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
count_tokens(*, text)
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any) → None[source]¶
Create a new TextSplitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
count_tokens(*, text: str) → int[source]¶
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str][source]¶
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
Examples using SentenceTransformersTokenTextSplitter¶
Split by tokens
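Example (a short sketch; it assumes the sentence-transformers package is installed — the default model is downloaded on first use — and the input text is a placeholder):

from langchain.text_splitter import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0)
text = "Lorem ipsum dolor sit amet. " * 20  # placeholder
print(splitter.count_tokens(text=text))  # length as counted by the model tokenizer
chunks = splitter.split_text(text=text)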
langchain.text_splitter.split_text_on_tokens¶
langchain.text_splitter.split_text_on_tokens(*, text: str, tokenizer: Tokenizer) → List[str][source]¶
Split incoming text and return chunks using tokenizer.
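Example (a brief sketch pairing this helper with the Tokenizer dataclass above; it assumes the tiktoken package is installed, and the input text is a placeholder):

import tiktoken
from langchain.text_splitter import Tokenizer, split_text_on_tokens

enc = tiktoken.get_encoding("gpt2")
tokenizer = Tokenizer(
    chunk_overlap=10,
    tokens_per_chunk=100,
    decode=enc.decode,                     # List[int] -> str
    encode=lambda text: enc.encode(text),  # str -> List[int]
)
chunks = split_text_on_tokens(text="Some long placeholder text...", tokenizer=tokenizer)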
langchain.text_splitter.TokenTextSplitter¶
class langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)[source]¶
Splitting text into tokens using a model tokenizer.
Create a new TextSplitter.
Methods
__init__([encoding_name, model_name, ...])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → None[source]¶
Create a new TextSplitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str][source]¶
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
Examples using TokenTextSplitter¶
StarRocks
Split by tokens
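Example (a minimal sketch; it assumes the tiktoken package is installed, and the text and chunk size are placeholders):

from langchain.text_splitter import TokenTextSplitter

text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text("Some placeholder text to split by gpt2 tokens.")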
langchain.text_splitter.TextSplitter¶
class langchain.text_splitter.TextSplitter(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: ~typing.Callable[[str], int] = <built-in function len>, keep_separator: bool = False, add_start_index: bool = False)[source]¶
Interface for splitting text into chunks.
Create a new TextSplitter.
Parameters
chunk_size – Maximum size of chunks to return
chunk_overlap – Overlap in characters between chunks
length_function – Function that measures the length of given chunks
keep_separator – Whether to keep the separator in the chunks
add_start_index – If True, includes chunk’s start index in metadata
Methods
__init__([chunk_size, chunk_overlap, ...])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: ~typing.Callable[[str], int] = <built-in function len>, keep_separator: bool = False, add_start_index: bool = False) → None[source]¶
Create a new TextSplitter.
Parameters
chunk_size – Maximum size of chunks to return
chunk_overlap – Overlap in characters between chunks
length_function – Function that measures the length of given chunks
keep_separator – Whether to keep the separator in the chunks
add_start_index – If True, includes chunk’s start index in metadata
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document][source]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter[source]¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS[source]¶
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document][source]¶
Split documents.
abstract split_text(text: str) → List[str][source]¶
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Transform sequence of documents by splitting them.
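TextSplitter is abstract: subclasses supply split_text and inherit document creation and the transform methods. Example (a minimal sketch of a hypothetical custom splitter; the paragraph rule is illustrative):

from typing import List
from langchain.text_splitter import TextSplitter

class ParagraphSplitter(TextSplitter):
    """Hypothetical splitter treating blank-line-separated paragraphs as chunks."""
    def split_text(self, text: str) -> List[str]:
        return [p.strip() for p in text.split("\n\n") if p.strip()]

splitter = ParagraphSplitter(chunk_size=1000, chunk_overlap=0)
docs = splitter.create_documents(["First paragraph.\n\nSecond paragraph."])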
langchain.smith.evaluation.config.RunEvalConfig¶
class langchain.smith.evaluation.config.RunEvalConfig[source]¶
Bases: BaseModel
Configuration for a run evaluation.
Parameters
evaluators (List[Union[EvaluatorType, EvalConfig]]) – Configurations for which evaluators to apply to the dataset run.
Each can be an EvaluatorType member (such as EvaluatorType.QA), its
string value ("qa"), or a configuration for a given evaluator
(e.g., RunEvalConfig.QA).
custom_evaluators (Optional[List[Union[RunEvaluator, StringEvaluator]]]) – Custom evaluators to apply to the dataset run.
reference_key (Optional[str]) – The key in the dataset run to use as the reference string.
If not provided, it will be inferred automatically.
prediction_key (Optional[str]) – The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
input_key (Optional[str]) – The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
eval_llm (Optional[BaseLanguageModel]) – The language model to pass to any evaluators that use a language model.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param custom_evaluators: Optional[List[Union[langsmith.evaluation.evaluator.RunEvaluator, langchain.evaluation.schema.StringEvaluator]]] = None¶
Custom evaluators to apply to the dataset run.
param eval_llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
The language model to pass to any evaluators that require one.
param evaluators: List[Union[langchain.evaluation.schema.EvaluatorType, langchain.smith.evaluation.config.EvalConfig]] [Optional]¶
Configurations for which evaluators to apply to the dataset run.
Each can be an EvaluatorType member (such as EvaluatorType.QA), its
string value ("qa"), or a configuration for a given evaluator
(e.g., RunEvalConfig.QA).
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
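Example (a brief configuration sketch; the criteria and keys are placeholders, and the config is typically passed to langchain.smith.run_on_dataset as the evaluation argument):

from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[
        "qa",                                            # an EvaluatorType string
        RunEvalConfig.Criteria(criteria="helpfulness"),  # a configured evaluator
    ],
    input_key="question",     # optional; inferred if omitted
    prediction_key="output",  # optional; inferred if omitted
)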
class CoTQA[source]¶
Bases: EvalConfig
Configuration for a chain-of-thought ("CoT") QA evaluator.
Parameters
prompt (Optional[BasePromptTemplate]) – The prompt template to use for generating the question.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.COT_QA¶
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
param prompt: Optional[langchain.schema.prompt_template.BasePromptTemplate] = None¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class ContextQA[source]¶
Bases: EvalConfig
Configuration for a context-based QA evaluator.
Parameters
prompt (Optional[BasePromptTemplate]) – The prompt template to use for generating the question.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.CONTEXT_QA¶
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
param prompt: Optional[langchain.schema.prompt_template.BasePromptTemplate] = None¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class Criteria[source]¶
Bases: EvalConfig
Configuration for a reference-free criteria evaluator.
Parameters
criteria (Optional[CRITERIA_TYPE]) – The criteria to evaluate.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param criteria: Optional[Union[Mapping[str, str], langchain.evaluation.criteria.eval_chain.Criteria, langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]] = None¶
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.CRITERIA¶
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class EmbeddingDistance[source]¶
Bases: EvalConfig
Configuration for an embedding distance evaluator.
Parameters
embeddings (Optional[Embeddings]) – The embeddings to use for computing the distance.
distance_metric (Optional[EmbeddingDistanceEnum]) – The distance metric to use for computing the distance.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param distance_metric: Optional[langchain.evaluation.embedding_distance.base.EmbeddingDistance] = None¶
param embeddings: Optional[langchain.embeddings.base.Embeddings] = None¶
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.EMBEDDING_DISTANCE¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class LabeledCriteria[source]¶
Bases: EvalConfig
Configuration for a labeled (with references) criteria evaluator.
Parameters
criteria (Optional[CRITERIA_TYPE]) – The criteria to evaluate.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param criteria: Optional[Union[Mapping[str, str], langchain.evaluation.criteria.eval_chain.Criteria, langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]] = None¶
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.LABELED_CRITERIA¶
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class QA[source]¶
Bases: EvalConfig
Configuration for a QA evaluator.
Parameters
prompt (Optional[BasePromptTemplate]) – The prompt template to use for generating the question.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.QA¶
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
param prompt: Optional[langchain.schema.prompt_template.BasePromptTemplate] = None¶
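A minimal sketch of using this config with an explicit grading model; both fields are optional, as declared above, and the ChatOpenAI import follows the examples used elsewhere on this page.
from langchain.chat_models import ChatOpenAI
from langchain.smith import RunEvalConfig

# Sketch: QA grading with a deterministic judge model; both kwargs are optional.
eval_config = RunEvalConfig(
    evaluators=[RunEvalConfig.QA(llm=ChatOpenAI(temperature=0))]
)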
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class StringDistance[source]¶
Bases: EvalConfig
Configuration for a string distance evaluator.
Parameters
distance (Optional[StringDistanceEnum]) – The string distance metric to use.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param distance: Optional[langchain.evaluation.string_distance.base.StringDistance] = None¶
The string distance metric to use.
damerau_levenshtein: The Damerau-Levenshtein distance.
levenshtein: The Levenshtein distance.
jaro: The Jaro distance.
jaro_winkler: The Jaro-Winkler distance.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.STRING_DISTANCE¶
param normalize_score: bool = True¶
Whether to normalize the distance to between 0 and 1.
Applies only to the Levenshtein and Damerau-Levenshtein distances.
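A minimal sketch, importing the metric enum from the module path shown in the distance field above:
from langchain.evaluation.string_distance.base import StringDistance
from langchain.smith import RunEvalConfig

# Sketch: grade by normalized Levenshtein distance (normalize_score defaults to True).
eval_config = RunEvalConfig(
    evaluators=[
        RunEvalConfig.StringDistance(distance=StringDistance.LEVENSHTEIN)
    ]
)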
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using RunEvalConfig¶
LangSmith Walkthrough
langchain.smith.evaluation.runner_utils.arun_on_dataset¶
async langchain.smith.evaluation.runner_utils.arun_on_dataset(client: Client, dataset_name: str, llm_or_chain_factory: Union[Callable[[], Union[Chain, Runnable]], BaseLanguageModel, Callable[[dict], Any], Runnable, Chain], *, evaluation: Optional[RunEvalConfig] = None, concurrency_level: int = 5, num_repetitions: int = 1, project_name: Optional[str] = None, verbose: bool = False, tags: Optional[List[str]] = None, input_mapper: Optional[Callable[[Dict], Any]] = None) → Dict[str, Any][source]¶
Asynchronously run the Chain or language model on a dataset
and store traces to the specified project name.
Parameters
client – LangSmith client to use to read the dataset, and to
log feedback and run traces.
dataset_name – Name of the dataset to run the chain on.
llm_or_chain_factory – Language model or Chain constructor to run
over the dataset. The Chain constructor is used to permit
independent calls on each example without carrying over state.
evaluation – Optional evaluation configuration to use when evaluating the results.
concurrency_level – The number of async tasks to run concurrently.
num_repetitions – Number of times to run the model on each example.
This is useful when testing success rates or generating confidence
intervals.
project_name – Name of the project to store the traces in.
Defaults to {dataset_name}-{chain class name}-{datetime}.
verbose – Whether to print progress.
tags – Tags to add to each run in the project.
input_mapper – A function to map to the inputs dictionary from an Example
to the format expected by the model to be evaluated. This is useful if
your model needs to deserialize more complex schema or if your dataset
has inputs with keys that differ from what is expected by your chain
or agent.
Returns
A dictionary containing the run’s project name and the
resulting model outputs.
For the synchronous version, see run_on_dataset().
Examples
from langsmith import Client
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, arun_on_dataset
# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
llm = ChatOpenAI(temperature=0)
chain = LLMChain.from_string(
llm,
"What's the answer to {your_input_key}"
)
return chain
# Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
evaluation_config = RunEvalConfig(
evaluators=[
"qa", # "Correctness" against a reference answer
"embedding_distance",
RunEvalConfig.Criteria("helpfulness"),
RunEvalConfig.Criteria({
"fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
}),
]
)
client = Client()
await arun_on_dataset(
client,
"<my_dataset_name>",
construct_chain,
evaluation=evaluation_config,
)
You can also create custom evaluators by subclassing the
StringEvaluator
or LangSmith’s RunEvaluator classes.
from typing import Optional
from langchain.evaluation import StringEvaluator
class MyStringEvaluator(StringEvaluator):
@property
def requires_input(self) -> bool:
return False
@property
def requires_reference(self) -> bool:
return True
@property
def evaluation_name(self) -> str:
return "exact_match"
def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
return {"score": prediction == reference}
evaluation_config = RunEvalConfig(
custom_evaluators = [MyStringEvaluator()],
)
await arun_on_dataset(
client,
"<my_dataset_name>",
construct_chain,
evaluation=evaluation_config,
)
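The await form above assumes an already-running event loop (for example, a notebook). In a plain script you would typically drive the coroutine yourself, for example:
import asyncio

# Sketch: run the async evaluation from a synchronous entry point.
asyncio.run(
    arun_on_dataset(
        client,
        "<my_dataset_name>",
        construct_chain,
        evaluation=evaluation_config,
    )
)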
Examples using arun_on_dataset¶
LangSmith Walkthrough
langchain.smith.evaluation.string_run_evaluator.ChainStringRunMapper¶
class langchain.smith.evaluation.string_run_evaluator.ChainStringRunMapper[source]¶
Bases: StringRunMapper
Extract items to evaluate from the run object of a chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_key: Optional[str] = None¶
The key from the model Run’s inputs to use as the eval input.
If not provided, will use the only input key or raise an
error if there are multiple.
param prediction_key: Optional[str] = None¶
The key from the model Run’s outputs to use as the eval prediction.
If not provided, will use the only output key or raise an error
if there are multiple.
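A minimal sketch of how these keys disambiguate a multi-key chain run; the key names "question" and "answer" are illustrative, not fixed by the API:
from langchain.smith.evaluation.string_run_evaluator import ChainStringRunMapper

# Sketch: pick one input and one output key from a run with several of each.
mapper = ChainStringRunMapper(input_key="question", prediction_key="answer")
# mapper(run) -> {"input": run.inputs["question"], "prediction": run.outputs["answer"]}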
__call__(run: Run) → Dict[str, str]¶
Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
map(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
The keys to extract from the run.
langchain.smith.evaluation.runner_utils.run_on_dataset¶
langchain.smith.evaluation.runner_utils.run_on_dataset(client: Client, dataset_name: str, llm_or_chain_factory: Union[Callable[[], Union[Chain, Runnable]], BaseLanguageModel, Callable[[dict], Any], Runnable, Chain], *, evaluation: Optional[RunEvalConfig] = None, num_repetitions: int = 1, concurrency_level: int = 5, project_name: Optional[str] = None, verbose: bool = False, tags: Optional[List[str]] = None, input_mapper: Optional[Callable[[Dict], Any]] = None) → Dict[str, Any][source]¶
Run the Chain or language model on a dataset and store traces
to the specified project name.
Parameters
client – LangSmith client to use to access the dataset and to
log feedback and run traces.
dataset_name – Name of the dataset to run the chain on.
llm_or_chain_factory – Language model or Chain constructor to run
over the dataset. The Chain constructor is used to permit
independent calls on each example without carrying over state.
evaluation – Configuration for evaluators to run on the
results of the chain.
concurrency_level – The number of async tasks to run concurrently.
num_repetitions – Number of times to run the model on each example.
This is useful when testing success rates or generating confidence
intervals.
project_name – Name of the project to store the traces in.
Defaults to {dataset_name}-{chain class name}-{datetime}.
verbose – Whether to print progress.
tags – Tags to add to each run in the project.
input_mapper – A function to map to the inputs dictionary from an Example
to the format expected by the model to be evaluated. This is useful if
your model needs to deserialize more complex schema or if your dataset
has inputs with keys that differ from what is expected by your chain
or agent.
Returns
A dictionary containing the run’s project name and the resulting model outputs.
For the (usually faster) async version of this function, see arun_on_dataset().
Examples
from langsmith import Client
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, run_on_dataset
# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
llm = ChatOpenAI(temperature=0)
chain = LLMChain.from_string(
llm,
"What's the answer to {your_input_key}"
)
return chain
# Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
evaluation_config = RunEvalConfig(
evaluators=[
"qa", # "Correctness" against a reference answer
"embedding_distance",
RunEvalConfig.Criteria("helpfulness"),
RunEvalConfig.Criteria({
"fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
}),
]
)
client = Client()
run_on_dataset(
client,
"<my_dataset_name>",
construct_chain,
evaluation=evaluation_config,
)
You can also create custom evaluators by subclassing the
StringEvaluator
or LangSmith’s RunEvaluator classes.
from typing import Optional
from langchain.evaluation import StringEvaluator
class MyStringEvaluator(StringEvaluator):
@property
def requires_input(self) -> bool:
return False
@property
def requires_reference(self) -> bool:
return True
@property
def evaluation_name(self) -> str:
return "exact_match"
def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
return {"score": prediction == reference}
evaluation_config = RunEvalConfig(
custom_evaluators = [MyStringEvaluator()],
)
run_on_dataset(
client,
"<my_dataset_name>",
construct_chain,
evaluation=evaluation_config,
)
Examples using run_on_dataset¶
LangSmith Walkthrough
langchain.smith.evaluation.string_run_evaluator.LLMStringRunMapper¶
class langchain.smith.evaluation.string_run_evaluator.LLMStringRunMapper[source]¶
Bases: StringRunMapper
Extract items to evaluate from the run object.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
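A minimal sketch; since the mapper has no configurable fields, it is constructed bare and applied to an LLM run:
from langchain.smith.evaluation.string_run_evaluator import LLMStringRunMapper

# Sketch: serialize an LLM run's prompt and completion into eval strings.
mapper = LLMStringRunMapper()
# mapper(llm_run) -> {"input": <serialized prompt>, "prediction": <serialized completion>}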
__call__(run: Run) → Dict[str, str]¶
Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
map(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
serialize_chat_messages(messages: List[Dict]) → str[source]¶
Extract the input messages from the run.
serialize_inputs(inputs: Dict) → str[source]¶
serialize_outputs(outputs: Dict) → str[source]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
The keys to extract from the run.
langchain.smith.evaluation.config.EvalConfig¶
class langchain.smith.evaluation.config.EvalConfig[source]¶
Bases: BaseModel
Configuration for a given run evaluator.
Parameters
evaluator_type (EvaluatorType) – The type of evaluator to use.
get_kwargs()[source]¶
Get the keyword arguments for the evaluator configuration.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType [Required]¶
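A minimal sketch of the intended contract, using the QA subclass documented earlier on this page; the exact kwargs returned depend on which optional fields are set (unset or None fields are typically omitted):
from langchain.smith import RunEvalConfig

# Sketch: all fields other than evaluator_type are forwarded to load_evaluator().
config = RunEvalConfig.QA()
kwargs = config.get_kwargs()  # e.g. {} while llm and prompt are left unset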
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any][source]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain.smith.evaluation.string_run_evaluator.StringRunEvaluatorChain¶
class langchain.smith.evaluation.string_run_evaluator.StringRunEvaluatorChain[source]¶
Bases: Chain, RunEvaluator
Evaluate Run and optional examples.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param example_mapper: Optional[StringExampleMapper] = None¶
Maps the Example (dataset row) to a dictionary
with a ‘reference’ string.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param name: str [Required]¶
The name of the evaluation metric.
param run_mapper: StringRunMapper [Required]¶
Maps the Run to a dictionary with ‘input’ and ‘prediction’ strings.
param string_evaluator: StringEvaluator [Required]¶
The evaluation chain.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aevaluate_run(run: Run, example: Optional[Example] = None) → EvaluationResult[source]¶
Evaluate an example.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
evaluate_run(run: Run, example: Optional[Example] = None) → EvaluationResult[source]¶
Evaluate an example.
classmethod from_orm(obj: Any) → Model¶
classmethod from_run_and_data_type(evaluator: StringEvaluator, run_type: str, data_type: DataType, input_key: Optional[str] = None, prediction_key: Optional[str] = None, reference_key: Optional[str] = None, tags: Optional[List[str]] = None) → StringRunEvaluatorChain[source]¶
Create a StringRunEvaluatorChain from an evaluator and the run and dataset types.
This method provides an easy way to instantiate a StringRunEvaluatorChain, by
taking an evaluator and information about the type of run and the data.
The method supports LLM and chain runs.
Parameters
evaluator (StringEvaluator) – The string evaluator to use.
run_type (str) – The type of run being evaluated.
Supported types are LLM and Chain.
data_type (DataType) – The type of dataset used in the run.
input_key (str, optional) – The key used to map the input from the run.
prediction_key (str, optional) – The key used to map the prediction from the run.
reference_key (str, optional) – The key used to map the reference from the dataset.
tags (List[str], optional) – List of tags to attach to the evaluation chain.
Returns
The instantiated evaluation chain.
Return type
StringRunEvaluatorChain
Raises
ValueError – If the run type is not supported, or if the evaluator requires a
reference from the dataset but the reference key is not provided.
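A minimal sketch, assuming DataType is importable from langsmith.schemas and that load_evaluator("qa") returns a StringEvaluator; the key names are illustrative:
from langsmith.schemas import DataType
from langchain.evaluation import load_evaluator
from langchain.smith.evaluation.string_run_evaluator import StringRunEvaluatorChain

# Sketch: wrap an off-the-shelf string evaluator so it can grade chain runs.
run_evaluator = StringRunEvaluatorChain.from_run_and_data_type(
    load_evaluator("qa"),
    run_type="chain",       # "llm" and "chain" runs are supported
    data_type=DataType.kv,  # dataset rows are key-value dictionaries
    input_key="question",   # illustrative key names
    prediction_key="answer",
    reference_key="answer", # the QA evaluator compares against a dataset reference
)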
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
property input_keys: List[str]¶
Keys expected to be in the chain input.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Keys expected to be in the chain output.
langchain.smith.evaluation.string_run_evaluator.ToolStringRunMapper¶
class langchain.smith.evaluation.string_run_evaluator.ToolStringRunMapper[source]¶
Bases: StringRunMapper
Map an input to the tool.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
__call__(run: Run) → Dict[str, str]¶
Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
map(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
The keys to extract from the run.
langchain.smith.evaluation.string_run_evaluator.StringRunMapper¶
class langchain.smith.evaluation.string_run_evaluator.StringRunMapper[source]¶
Bases: Serializable
Extract items to evaluate from the run object.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
__call__(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
abstract map(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
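Because map is abstract, StringRunMapper is used by subclassing; a minimal sketch follows, where the FirstValueRunMapper class is illustrative and its "input"/"prediction" keys are chosen to match what the built-in mappers emit:
from typing import Dict

from langsmith.schemas import Run

from langchain.smith.evaluation.string_run_evaluator import StringRunMapper

class FirstValueRunMapper(StringRunMapper):
    """Illustrative mapper: evaluate the first input and first output value."""

    def map(self, run: Run) -> Dict[str, str]:
        if not run.outputs:
            raise ValueError(f"Run {run.id} has no outputs to evaluate.")
        return {
            "input": str(next(iter(run.inputs.values()))),
            "prediction": str(next(iter(run.outputs.values()))),
        }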
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
The keys to extract from the run.
langchain.smith.evaluation.runner_utils.InputFormatError¶
class langchain.smith.evaluation.runner_utils.InputFormatError[source]¶
Raised when the input format is invalid.
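A hedged sketch of catching this exception around dataset validation; the guard function itself is illustrative, not a documented helper:
from langchain.smith.evaluation.runner_utils import InputFormatError

def require_input_key(example_inputs: dict) -> None:
    # Illustrative check mirroring how the runner utilities signal dataset
    # examples that cannot be matched to the chain's expected input schema.
    if "input" not in example_inputs:
        raise InputFormatError(f"Expected an 'input' key, got {list(example_inputs)}")

try:
    require_input_key({"question": "What is 2 + 2?"})
except InputFormatError as err:
    print(f"Skipping malformed example: {err}")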
langchain.smith.evaluation.string_run_evaluator.StringExampleMapper¶
class langchain.smith.evaluation.string_run_evaluator.StringExampleMapper[source]¶
Bases: Serializable
Map an example, or row in the dataset, to the inputs of an evaluation.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param reference_key: Optional[str] = None¶
__call__(example: Example) → Dict[str, str][source]¶
Maps the Example to a dictionary.
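A minimal sketch, where `example` is an assumed langsmith Example whose outputs contain an "answer" field; reference_key selects which output value becomes the reference string, and when left unset the example is expected to have a single unambiguous output:
from langchain.smith.evaluation.string_run_evaluator import StringExampleMapper

mapper = StringExampleMapper(reference_key="answer")
# With an example whose outputs are e.g. {"answer": "Paris"}, the call returns
# a dict like {"reference": "Paris"} for the downstream string evaluator.
reference = mapper(example)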
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
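For instance, copy with update can produce a variant of the mapper pointing at a different dataset column; the column name below is illustrative:
from langchain.smith.evaluation.string_run_evaluator import StringExampleMapper

mapper = StringExampleMapper(reference_key="answer")
# update values bypass validation (standard pydantic copy semantics), so only
# pass trusted data; deep=True also clones any nested objects.
variant = mapper.copy(update={"reference_key": "expected_output"}, deep=True)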
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
map(example: Example) → Dict[str, str][source]¶
Maps the Example, or dataset row, to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
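As with any pydantic model, the generated JSON schema can be inspected directly, which is a quick way to see the configurable reference_key field:
from langchain.smith.evaluation.string_run_evaluator import StringExampleMapper

# The "properties" section of the printed schema includes "reference_key".
print(StringExampleMapper.schema_json(indent=2))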
serialize_chat_messages(messages: List[Dict]) → str[source]¶
Serialize a list of chat message dicts into a single string.
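A hedged sketch of the expected call shape; the {"type": ..., "data": {"content": ...}} dicts below are an assumption based on LangChain's standard serialized-message format found in run traces:
from langchain.smith.evaluation.string_run_evaluator import StringExampleMapper

mapper = StringExampleMapper()
# Assumed serialized-message shape; real trace dicts may carry extra fields.
messages = [
    {"type": "human", "data": {"content": "What is the capital of France?"}},
    {"type": "ai", "data": {"content": "Paris"}},
]
transcript = mapper.serialize_chat_messages(messages)
# -> a single buffer string, e.g. "Human: What is the capital of France?\nAI: Paris"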
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
The keys to extract from the run.