cd405779bdbb-29
Return type Runnable[Input, Output] with_structured_output(schema: Union[Dict, Type[BaseModel]], **kwargs: Any) → Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]¶ [Beta] Implement this if there is a way of steering the model to generate responses that match a given schema. Notes Parameters schema (Union[Dict, Type[BaseModel]]) – kwargs (Any) – Return type Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]] with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. Parameters input_type (Optional[Type[Input]]) – output_type (Optional[Type[Output]]) – Return type Runnable[Input, Output] property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the output type for this runnable. property config_specs: List[ConfigurableFieldSpec]¶ List configurable fields for this runnable. property default_params: Dict[str, Any]¶ property input_schema: Type[BaseModel]¶ The type of input this runnable accepts, specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids.
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.together.Together.html
cd405779bdbb-30
A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"} name: Optional[str] = None¶ The name of the runnable. Used for debugging and tracing. property output_schema: Type[BaseModel]¶ The type of output this runnable produces, specified as a pydantic model.
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.together.Together.html
4d3b72010fd2-0
langchain_google_vertexai.llms.VertexAI¶ class langchain_google_vertexai.llms.VertexAI[source]¶ Bases: _VertexAICommon, BaseLLM Google Vertex AI large language models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Union[BaseCache, bool, None] = None¶ Whether to cache the response. If true, will use the global cache. If false, will not use a cache. If None, will use the global cache if it's set, otherwise no cache. If instance of BaseCache, will use the provided cache. Caching is not currently supported for streaming methods of models. param callback_manager: Optional[BaseCallbackManager] = None¶ [DEPRECATED] param callbacks: Callbacks = None¶ Callbacks to add to the run trace. param credentials: Any = None¶ The default custom credentials (google.auth.credentials.Credentials) to use. param location: str = 'us-central1'¶ The default location to use when making API calls. param max_output_tokens: Optional[int] = None¶ Token limit determines the maximum amount of text output from one prompt. param max_retries: int = 6¶ The maximum number of retries to make when generating. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_name: str = 'text-bison'¶ The name of the Vertex AI large language model. param n: int = 1¶ How many completions to generate for each prompt. param project: Optional[str] = None¶ The default GCP project to use when making Vertex API calls. param request_parallelism: int = 5¶
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-1
param request_parallelism: int = 5¶ The amount of parallelism allowed for requests issued to VertexAI models. param safety_settings: Optional[Dict[HarmCategory, HarmBlockThreshold]] = None¶ The default safety settings to use for all generations. For example:

from langchain_google_vertexai import HarmBlockThreshold, HarmCategory

safety_settings = {
    HarmCategory.HARM_CATEGORY_UNSPECIFIED: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
}

param stop: Optional[List[str]] = None¶ Optional list of stop words to use when generating. param streaming: bool = False¶ Whether to stream the results or not. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: Optional[float] = None¶ Sampling temperature; it controls the degree of randomness in token selection. param top_k: Optional[int] = None¶ How the model selects tokens for output: the next token is selected from among the top-k most probable tokens. param top_p: Optional[float] = None¶ Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value. param tuned_model_name: Optional[str] = None¶ The name of a tuned model. If provided, model_name is ignored. param verbose: bool [Optional]¶ Whether to print out response text.
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
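To make the parameters above concrete, here is a minimal sketch of constructing a VertexAI instance with generation and safety settings. It assumes the langchain-google-vertexai package is installed and that GCP application-default credentials are available; the project id is a placeholder.

from langchain_google_vertexai import HarmBlockThreshold, HarmCategory, VertexAI

# Block only high-probability hate speech; leave other categories at their defaults.
safety_settings = {
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}

llm = VertexAI(
    model_name="text-bison",    # the default model from the param list above
    project="my-gcp-project",   # placeholder: substitute your GCP project id
    location="us-central1",     # the default location
    temperature=0.2,            # low randomness in token selection
    max_output_tokens=256,      # cap on generated text per prompt
    safety_settings=safety_settings,
)

print(llm.invoke("Say hello in one short sentence."))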
4d3b72010fd2-2
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ [Deprecated] Check Cache and run the LLM on the given prompt and input. Notes Deprecated since version 0.1.7: Use invoke instead. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – tags (Optional[List[str]]) – metadata (Optional[Dict[str, Any]]) – kwargs (Any) – Return type str async abatch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. Parameters inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Any) – Return type List[str]
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-3
kwargs (Any) – Return type List[str] async abatch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → AsyncIterator[Tuple[int, Union[Output, Exception]]]¶ Run ainvoke in parallel on a list of inputs, yielding results as they complete. Parameters inputs (List[Input]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Optional[Any]) – Return type AsyncIterator[Tuple[int, Union[Output, Exception]]] async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts to a model and return generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[str]) – List of string prompts.
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
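Since agenerate batches prompts in one call, a common pattern is gathering generations for several prompts at once. A minimal sketch, assuming GCP credentials are configured and an asyncio entry point:

import asyncio

from langchain_google_vertexai import VertexAI


async def main() -> None:
    llm = VertexAI(model_name="text-bison")
    prompts = ["Define RAM in one line.", "Define CPU in one line."]

    # One batched call; each completion is cut at the first stop sequence.
    result = await llm.agenerate(prompts, stop=["\n\n"])

    # LLMResult.generations holds one list of candidate Generations per prompt.
    for prompt, candidates in zip(prompts, result.generations):
        print(prompt, "->", candidates[0].text)


asyncio.run(main())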
4d3b72010fd2-4
Parameters prompts (List[str]) – List of string prompts. stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. tags (Optional[Union[List[str], List[List[str]]]]) – metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) – run_name (Optional[Union[str, List[str]]]) – run_id (Optional[Union[UUID, List[Optional[UUID]]]]) – **kwargs – Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-5
Parameters prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult async ainvoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type str async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-6
[Deprecated] Notes Deprecated since version 0.1.7: Use ainvoke instead. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ [Deprecated] Notes Deprecated since version 0.1.7: Use ainvoke instead. Parameters messages (List[BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type BaseMessage assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶ Assigns new fields to the dict output of this runnable. Returns a new runnable.

from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])

chain: Runnable = prompt | llm | {"str": StrOutputParser()}

chain_with_assign = chain.assign(hello=itemgetter("str") | llm)

print(chain_with_assign.input_schema.schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
# {'question': {'title': 'Question', 'type': 'string'}}}
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-7
print(chain_with_assign.output_schema.schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
# {'str': {'title': 'Str', 'type': 'string'},
#  'hello': {'title': 'Hello', 'type': 'string'}}}

Parameters kwargs (Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) – Return type RunnableSerializable[Any, Any] async astream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type AsyncIterator[str]
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-8
kwargs (Any) – Return type AsyncIterator[str] astream_events(input: Any, config: Optional[RunnableConfig] = None, *, version: Literal['v1'], include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → AsyncIterator[StreamEvent]¶ [Beta] Generate a stream of events. Use to create an iterator over StreamEvents that provide real-time information about the progress of the runnable, including StreamEvents from intermediate results. A StreamEvent is a dictionary with the following schema: event: str - Event names are of the format: on_[runnable_type]_(start|stream|end). name: str - The name of the runnable that generated the event. run_id: str - randomly generated ID associated with the given execution of the runnable that emitted the event. A child runnable that gets invoked as part of the execution of a parent runnable is assigned its own unique ID. tags: Optional[List[str]] - The tags of the runnable that generated the event. metadata: Optional[Dict[str, Any]] - The metadata of the runnable that generated the event. data: Dict[str, Any] Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

event | name | chunk | input | output
on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} |
on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | |
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-9
on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | |
on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...}
on_llm_start | [model name] | | {'input': 'hello'} |
on_llm_stream | [model name] | 'Hello' | |
on_llm_end | [model name] | | 'Hello' | 'Hello human!'
on_chain_start | format_docs | | |
on_chain_stream | format_docs | "hello world!, goodbye world!" | |
on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!"
on_tool_start | some_tool | | {"x": 1, "y": "2"} |
on_tool_stream | some_tool | {"x": 1, "y": "2"} | |
on_tool_end | some_tool | | | {"x": 1, "y": "2"}
on_retriever_start | [retriever name] | | {"query": "hello"} |
on_retriever_chunk | [retriever name] | {documents: [...]} | |
on_retriever_end | [retriever name] | | {"query": "hello"} | {documents: [...]}
on_prompt_start | [template_name] | | {"question": "hello"} |
on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...])

Here are declarations associated with the events shown above:

format_docs:

def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)

some_tool:

@tool
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-10
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

Example:

from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event
    async for event in chain.astream_events("hello", version="v1")
]

# will produce the following events (run_id has been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Parameters input (Any) – The input to the runnable. config (Optional[RunnableConfig]) – The config to use for the runnable. version (Literal['v1']) – The version of the schema to use. Currently only version 1 is available.
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-11
Currently only version 1 is available. No default will be assigned until the API is stabilized. include_names (Optional[Sequence[str]]) – Only include events from runnables with matching names. include_types (Optional[Sequence[str]]) – Only include events from runnables with matching types. include_tags (Optional[Sequence[str]]) – Only include events from runnables with matching tags. exclude_names (Optional[Sequence[str]]) – Exclude events from runnables with matching names. exclude_types (Optional[Sequence[str]]) – Exclude events from runnables with matching types. exclude_tags (Optional[Sequence[str]]) – Exclude events from runnables with matching tags. kwargs (Any) – Additional keyword arguments to pass to the runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log. Returns An async stream of StreamEvents. Return type AsyncIterator[StreamEvent] Notes async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
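The include/exclude parameters above can narrow the stream to just the events of interest. A hedged sketch that keeps only LLM events from a small chain (the model and chain shape are illustrative):

import asyncio

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_vertexai import VertexAI


async def main() -> None:
    prompt = ChatPromptTemplate.from_template("Tell me a fact about {topic}.")
    chain = prompt | VertexAI(model_name="text-bison") | StrOutputParser()

    # include_types=["llm"] drops prompt, parser, and chain events; only
    # on_llm_start / on_llm_stream / on_llm_end pass through.
    async for event in chain.astream_events(
        {"topic": "volcanoes"}, version="v1", include_types=["llm"]
    ):
        if event["event"] == "on_llm_stream":
            print(event["data"]["chunk"], end="", flush=True)


asyncio.run(main())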
4d3b72010fd2-12
jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. Parameters input (Any) – The input to the runnable. config (Optional[RunnableConfig]) – The config to use for the runnable. diff (bool) – Whether to yield diffs between each step, or the current state. with_streamed_output_list (bool) – Whether to yield the streamed_output list. include_names (Optional[Sequence[str]]) – Only include logs with these names. include_types (Optional[Sequence[str]]) – Only include logs with these types. include_tags (Optional[Sequence[str]]) – Only include logs with these tags. exclude_names (Optional[Sequence[str]]) – Exclude logs with these names. exclude_types (Optional[Sequence[str]]) – Exclude logs with these types. exclude_tags (Optional[Sequence[str]]) – Exclude logs with these tags. kwargs (Any) – Return type Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]] async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. Parameters input (AsyncIterator[Input]) – config (Optional[RunnableConfig]) – kwargs (Optional[Any]) – Return type AsyncIterator[Output]
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
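Because astream_log yields RunLogPatch objects by default (diff=True), the patches can be added together in order to rebuild the full run state, as the text above describes. A minimal sketch, assuming a configured model:

import asyncio

from langchain_google_vertexai import VertexAI


async def main() -> None:
    llm = VertexAI(model_name="text-bison")

    run_log = None
    # Each patch carries jsonpatch ops; summing them in order reconstructs
    # the cumulative RunLog state.
    async for patch in llm.astream_log("Name three planets."):
        run_log = patch if run_log is None else run_log + patch

    # The accumulated state includes streamed output and the final output.
    print(run_log.state["final_output"])


asyncio.run(main())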
4d3b72010fd2-13
kwargs (Optional[Any]) – Return type AsyncIterator[Output] batch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. Parameters inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Any) – Return type List[str] batch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → Iterator[Tuple[int, Union[Output, Exception]]]¶ Run invoke in parallel on a list of inputs, yielding results as they complete. Parameters inputs (List[Input]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Optional[Any]) – Return type Iterator[Tuple[int, Union[Output, Exception]]] bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable.
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
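batch preserves input order, while batch_as_completed yields (index, output) pairs as each call finishes, so slow prompts do not hold up fast ones. A short sketch, assuming credentials are configured:

from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="text-bison")
prompts = ["1 + 1 = ?", "Capital of France?", "Color of the sky?"]

# Ordered: one output per input, in input order.
print(llm.batch(prompts))

# Unordered: each output is tagged with the index of the input that produced
# it; with return_exceptions=True, failures are yielded instead of raised.
for index, output in llm.batch_as_completed(prompts, return_exceptions=True):
    print(index, "->", output)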
4d3b72010fd2-14
Bind arguments to a Runnable, returning a new Runnable. Useful when a runnable in a chain requires an argument that is not in the output of the previous runnable or included in the user input. Example:

from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model='llama2')

# Without bind.
chain = (
    llm
    | StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = (
    llm.bind(stop=["three"])
    | StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'

Parameters kwargs (Any) – Return type Runnable[Input, Output] config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts, specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include (Optional[Sequence[str]]) – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. Return type Type[BaseModel] configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ Configure alternatives for runnables that can be set at runtime.

from langchain_anthropic import ChatAnthropic
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-15
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-sonnet-20240229"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI()
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(
        configurable={"llm": "openai"}
    ).invoke("which organization created you?").content
)

Parameters which (ConfigurableField) – default_key (str) – prefix_keys (bool) – kwargs (Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) – Return type RunnableSerializable[Input, Output] configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ Configure particular runnable fields at runtime.

from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content
)

# max_tokens = 200
print("max_tokens_200: ", model.with_config(
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-16
    configurable={"output_token_number": 200}
).invoke("tell me something about chess").content
)

Parameters kwargs (Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) – Return type RunnableSerializable[Input, Output] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values. Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-17
self (Model) – Returns new model instance Return type Model dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to a model and return generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[str]) – List of string prompts. stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-18
functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. tags (Optional[Union[List[str], List[List[str]]]]) – metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) – run_name (Optional[Union[str, List[str]]]) – run_id (Optional[Union[UUID, List[Optional[UUID]]]]) – **kwargs – Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-19
first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult get_graph(config: Optional[RunnableConfig] = None) → Graph¶ Return a graph representation of this runnable. Parameters config (Optional[RunnableConfig]) – Return type Graph get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows getting an input schema for a specific configuration. Parameters config (Optional[RunnableConfig]) – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. Return type Type[BaseModel] classmethod get_lc_namespace() → List[str][source]¶ Get the namespace of the langchain object. Return type List[str] get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶ Get the name of the runnable. Parameters suffix (Optional[str]) – name (Optional[str]) – Return type str get_num_tokens(text: str) → int¶
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-20
Return type str get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text (str) – The string input to tokenize. Returns The integer number of tokens in the text. Return type int get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages (List[BaseMessage]) – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. Return type int get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows getting an output schema for a specific configuration. Parameters config (Optional[RunnableConfig]) – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. Return type Type[BaseModel] get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶ Parameters config (Optional[RunnableConfig]) – Return type List[BasePromptTemplate] get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text (str) – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. Return type List[int]
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
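A common use of the token helpers above is guarding against prompts that overflow the context window. A hedged sketch; the limit below is a placeholder, not a documented value for any particular model:

from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="text-bison")

CONTEXT_LIMIT = 8192  # placeholder: check your model's documented context size
prompt = "Summarize the following document: ..."

# get_num_tokens returns the token count; get_token_ids returns the underlying
# ids in the order they occur, which is handy for debugging tokenization.
n_tokens = llm.get_num_tokens(prompt)
if n_tokens > CONTEXT_LIMIT:
    raise ValueError(f"Prompt is {n_tokens} tokens; limit is {CONTEXT_LIMIT}.")
print(llm.get_token_ids(prompt)[:10])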
4d3b72010fd2-21
Return type List[int] invoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Transform a single input into an output. Override to implement. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – The input to the runnable. config (Optional[RunnableConfig]) – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. stop (Optional[List[str]]) – kwargs (Any) – Returns The output of the runnable. Return type str classmethod is_lc_serializable() → bool[source]¶ Is this class serializable? Return type bool json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
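Putting invoke's parameters together: a minimal sketch passing a config with the standard 'tags' and 'metadata' keys for tracing, plus a stop sequence:

from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="text-bison")

answer = llm.invoke(
    "List the primary colors, one per line.",
    config={
        "tags": ["demo"],                # attached to the run trace
        "metadata": {"caller": "docs"},  # arbitrary trace metadata
    },
    stop=["\n\n"],  # output is cut at the first occurrence of any stop string
)
print(answer)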
4d3b72010fd2-22
Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. Return type List[str] map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. Example:

from langchain_core.runnables import RunnableLambda

def _lambda(x: int) -> int:
    return x + 1

runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]

Return type Runnable[List[Input], List[Output]] classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-23
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶ Pick keys from the dict output of this runnable. Pick single key:

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]

Pick list of keys:

from typing import Any

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)

def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")

chain = RunnableMap(
    str=as_str,
    json=as_json,
    bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-24
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

Parameters keys (Union[str, List[str]]) – Return type RunnableSerializable[Any, Any] pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶ Compose this Runnable with Runnable-like objects to make a RunnableSequence. Equivalent to RunnableSequence(self, *others) or self | others[0] | … Example:

from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

def mul_two(x: int) -> int:
    return x * 2

runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)

sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]

Parameters
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-25
# -> [4, 6, 8] Parameters others (Union[Runnable[Any, Other], Callable[[Any], Other]]) – name (Optional[str]) – Return type RunnableSerializable[Input, Other] predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ [Deprecated] Notes Deprecated since version 0.1.7: Use invoke instead. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ [Deprecated] Notes Deprecated since version 0.1.7: Use invoke instead. Parameters messages (List[BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type BaseMessage save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path (Union[Path, str]) – Path to file to save the LLM to. Return type None Example: llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
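save writes the LLM's parameters to YAML or JSON depending on the file extension. A hedged round-trip sketch: load_llm is the loader langchain shipped alongside save at the time of these docs, but its import path may differ across versions and not every provider registers a loader, so treat this as illustrative:

from langchain.llms.loading import load_llm  # import path may vary by version
from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="text-bison", temperature=0.2)

# The extension selects the serialization format: .yaml here; .json also works.
llm.save("llm.yaml")

# Rebuild an equivalent LLM from the saved parameters.
restored = load_llm("llm.yaml")
print(restored)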
4d3b72010fd2-26
ref_template (unicode) – dumps_kwargs (Any) – Return type unicode stream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type Iterator[str] to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ Serialize the runnable to JSON. Return type Union[SerializedConstructor, SerializedNotImplemented] to_json_not_implemented() → SerializedNotImplemented¶ Return type SerializedNotImplemented transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. Parameters input (Iterator[Input]) – config (Optional[RunnableConfig]) – kwargs (Optional[Any]) – Return type Iterator[Output] classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
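When a subclass supports streaming (see the streaming param above), stream yields string chunks as they are produced; otherwise the default implementation falls back to a single chunk from invoke. A minimal sketch:

from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="text-bison")

# Print tokens as they arrive instead of waiting for the full completion.
for chunk in llm.stream("Write a haiku about the ocean."):
    print(chunk, end="", flush=True)
print()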
4d3b72010fd2-27
Parameters value (Any) – Return type Model with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. Parameters config (Optional[RunnableConfig]) – kwargs (Any) – Return type Runnable[Input, Output] with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,), exception_key: Optional[str] = None) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Example:

from typing import Iterator

from langchain_core.runnables import RunnableGenerator

def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""

def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"

runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print(''.join(runnable.stream({})))  # foo bar

Parameters fallbacks (Sequence[Runnable[Input, Output]]) – A sequence of runnables to try if the original runnable fails. exceptions_to_handle (Tuple[Type[BaseException], ...]) – A tuple of exception types to handle. exception_key (Optional[str]) – If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base runnable and its fallbacks must accept a dictionary as input. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. Return type
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
4d3b72010fd2-28
fallback in order, upon failures. Return type RunnableWithFallbacksT[Input, Output] with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. Example: Parameters on_start (Optional[Listener]) – on_end (Optional[Listener]) – on_error (Optional[Listener]) – Return type Runnable[Input, Output] with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Example: Parameters retry_if_exception_type (Tuple[Type[BaseException], ...]) – A tuple of exception types to retry on wait_exponential_jitter (bool) – Whether to add jitter to the wait time between retries stop_after_attempt (int) – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. Return type Runnable[Input, Output]
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
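The Example heading under with_retry is empty in the source; here is a hedged sketch of what such an example might look like, using a deliberately flaky function so the retry behavior is visible:

from langchain_core.runnables import RunnableLambda

attempts = 0


def flaky(x: int) -> int:
    # Fail twice, then succeed, so with_retry has something to do.
    global attempts
    attempts += 1
    if attempts < 3:
        raise ValueError("transient failure")
    return x + 1


runnable = RunnableLambda(flaky).with_retry(
    retry_if_exception_type=(ValueError,),  # only retry this exception type
    wait_exponential_jitter=False,          # deterministic waits for the demo
    stop_after_attempt=3,                   # give up after three attempts
)

print(runnable.invoke(1))  # -> 2, after two retried failures
print(attempts)            # -> 3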
4d3b72010fd2-29
Return type Runnable[Input, Output] with_structured_output(schema: Union[Dict, Type[BaseModel]], **kwargs: Any) → Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]¶ [Beta] Implement this if there is a way of steering the model to generate responses that match a given schema. Notes Parameters schema (Union[Dict, Type[BaseModel]]) – kwargs (Any) – Return type Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]] with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. Parameters input_type (Optional[Type[Input]]) – output_type (Optional[Type[Output]]) – Return type Runnable[Input, Output] property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the output type for this runnable. property config_specs: List[ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[BaseModel]¶ The type of input this runnable accepts, specified as a pydantic model. property is_codey_model: bool¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids.
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
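The input_schema and output_schema properties above are ordinary pydantic model classes, so the accepted and produced types can be inspected directly. A short sketch:

from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="text-bison")

# Pydantic models describing the runnable's I/O; .schema() gives JSON schema.
print(llm.input_schema.schema())   # accepts PromptValue / str / message list
print(llm.output_schema.schema())  # produces a string
print(llm.OutputType)              # -> <class 'str'>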
4d3b72010fd2-30
A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"} name: Optional[str] = None¶ The name of the runnable. Used for debugging and tracing. property output_schema: Type[BaseModel]¶ The type of output this runnable produces, specified as a pydantic model. task_executor: ClassVar[Optional[Executor]] = FieldInfo(exclude=True, extra={})¶ Examples using VertexAI¶ Google Vertex AI PaLM
https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html
40bb2738b0b9-0
langchain_community.llms.openai.acompletion_with_retry¶ async langchain_community.llms.openai.acompletion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], run_manager: Optional[AsyncCallbackManagerForLLMRun] = None, **kwargs: Any) → Any[source]¶ Use tenacity to retry the async completion call. Parameters llm (Union[BaseOpenAI, OpenAIChat]) – run_manager (Optional[AsyncCallbackManagerForLLMRun]) – kwargs (Any) – Return type Any
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.openai.acompletion_with_retry.html
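A hedged sketch of how this helper is used: it is mostly called internally by the OpenAI LLM's async paths, with keyword arguments forwarded to the provider's completion endpoint and the retry policy configured from llm.max_retries. The exact kwargs accepted depend on the installed openai version, so treat the call below as illustrative:

import asyncio

from langchain_community.llms import OpenAI
from langchain_community.llms.openai import acompletion_with_retry


async def main() -> None:
    llm = OpenAI(model_name="gpt-3.5-turbo-instruct", max_retries=6)

    # Transient API errors are retried with exponential backoff via tenacity;
    # the kwargs are passed through to the underlying completion call.
    response = await acompletion_with_retry(
        llm, prompt="Say hi.", model=llm.model_name, max_tokens=16
    )
    print(response)


asyncio.run(main())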
0b2716ab8e51-0
langchain_community.llms.gooseai.GooseAI¶ class langchain_community.llms.gooseai.GooseAI[source]¶ Bases: LLM GooseAI large language models. To use, you should have the openai python package installed, and the environment variable GOOSEAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example:

from langchain_community.llms import GooseAI

gooseai = GooseAI(model_name="gpt-neo-20b")

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Union[BaseCache, bool, None] = None¶ Whether to cache the response. If true, will use the global cache. If false, will not use a cache. If None, will use the global cache if it's set, otherwise no cache. If instance of BaseCache, will use the provided cache. Caching is not currently supported for streaming methods of models. param callback_manager: Optional[BaseCallbackManager] = None¶ [DEPRECATED] param callbacks: Callbacks = None¶ Callbacks to add to the run trace. param client: Any = None¶ param frequency_penalty: float = 0¶ Penalizes repeated tokens according to frequency. param gooseai_api_key: Optional[SecretStr] = None¶ Constraints: type = string, writeOnly = True, format = password param logit_bias: Optional[Dict[str, float]] [Optional]¶ Adjust the probability of specific tokens being generated. param max_tokens: int = 256¶ The maximum number of tokens to generate in the completion.
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
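A minimal sketch of the setup described above: install the openai package, export GOOSEAI_API_KEY, and pass any extra provider parameters through the constructor (parameters not declared on the class are collected into model_kwargs). The key below is a placeholder:

import os

from langchain_community.llms import GooseAI

# The class reads the key from the GOOSEAI_API_KEY environment variable
# if it is not passed explicitly.
os.environ.setdefault("GOOSEAI_API_KEY", "sk-...")  # placeholder key

llm = GooseAI(
    model_name="gpt-neo-20b",  # the default model
    temperature=0.7,
    max_tokens=256,
)

print(llm.invoke("Complete this sentence: The quick brown fox"))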
0b2716ab8e51-1
The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param min_tokens: int = 1¶ The minimum number of tokens to generate in the completion. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'gpt-neo-20b'¶ Model name to use. param n: int = 1¶ How many completions to generate for each prompt. param presence_penalty: float = 0¶ Penalizes repeated tokens. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.7¶ What sampling temperature to use. param top_p: float = 1¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ [Deprecated] Check Cache and run the LLM on the given prompt and input. Notes Deprecated since version 0.1.7: Use invoke instead. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – tags (Optional[List[str]]) – metadata (Optional[Dict[str, Any]]) –
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-2
metadata (Optional[Dict[str, Any]]) – kwargs (Any) – Return type str async abatch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. Parameters inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Any) – Return type List[str] async abatch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → AsyncIterator[Tuple[int, Union[Output, Exception]]]¶ Run ainvoke in parallel on a list of inputs, yielding results as they complete. Parameters inputs (List[Input]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Optional[Any]) – Return type AsyncIterator[Tuple[int, Union[Output, Exception]]]
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-3
Return type AsyncIterator[Tuple[int, Union[Output, Exception]]] async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts to a model and return generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[str]) – List of string prompts. stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. tags (Optional[Union[List[str], List[List[str]]]]) –
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
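A sketch of agenerate and the shape of the LLMResult it returns, again with the in-memory FakeListLLM standing in for a real provider:

    import asyncio

    from langchain_community.llms.fake import FakeListLLM

    llm = FakeListLLM(responses=["one", "two"])

    async def main() -> None:
        # result.generations holds one list of candidate Generations per prompt.
        result = await llm.agenerate(["prompt A", "prompt B"])
        for candidates in result.generations:
            print(candidates[0].text)

    asyncio.run(main())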
0b2716ab8e51-4
tags (Optional[Union[List[str], List[List[str]]]]) – metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) – run_name (Optional[Union[str, List[str]]]) – run_id (Optional[Union[UUID, List[Optional[UUID]]]]) – **kwargs – Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-5
functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult async ainvoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type str async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ [Deprecated] Notes Deprecated since version 0.1.7: Use ainvoke instead. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ [Deprecated] Notes Deprecated since version 0.1.7: Use ainvoke instead. Parameters messages (List[BaseMessage]) –
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
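Since apredict is deprecated, a sketch of the ainvoke replacement (FakeListLLM keeps it self-contained; stop is passed through to the model just as in the sync invoke):

    import asyncio

    from langchain_community.llms.fake import FakeListLLM

    llm = FakeListLLM(responses=["hello there"])

    async def main() -> None:
        # ainvoke is the async counterpart of invoke and supersedes apredict.
        print(await llm.ainvoke("greet me", stop=["\n"]))

    asyncio.run(main())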
0b2716ab8e51-6
Parameters messages (List[BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type BaseMessage assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶ Assigns new fields to the dict output of this runnable. Returns a new runnable. from langchain_community.llms.fake import FakeStreamingListLLM from langchain_core.output_parsers import StrOutputParser from langchain_core.prompts import SystemMessagePromptTemplate from langchain_core.runnables import Runnable from operator import itemgetter prompt = ( SystemMessagePromptTemplate.from_template("You are a nice assistant.") + "{question}" ) llm = FakeStreamingListLLM(responses=["foo-lish"]) chain: Runnable = prompt | llm | {"str": StrOutputParser()} chain_with_assign = chain.assign(hello=itemgetter("str") | llm) print(chain_with_assign.input_schema.schema()) # {'title': 'PromptInput', 'type': 'object', 'properties': {'question': {'title': 'Question', 'type': 'string'}}} print(chain_with_assign.output_schema.schema()) # {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties': {'str': {'title': 'Str', 'type': 'string'}, 'hello': {'title': 'Hello', 'type': 'string'}}} Parameters kwargs (Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) –
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-7
Return type RunnableSerializable[Any, Any] async astream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type AsyncIterator[str] astream_events(input: Any, config: Optional[RunnableConfig] = None, *, version: Literal['v1'], include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → AsyncIterator[StreamEvent]¶ [Beta] Generate a stream of events. Use to create an iterator over StreamEvents that provide real-time information about the progress of the runnable, including StreamEvents from intermediate results. A StreamEvent is a dictionary with the following schema: event: str - Event names are of the format: on_[runnable_type]_(start|stream|end). name: str - The name of the runnable that generated the event. run_id: str - randomly generated ID associated with the given execution of the runnable that emitted the event. A child runnable that gets invoked as part of the execution of a
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
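A sketch of consuming astream, using FakeStreamingListLLM (the same test utility the assign example above relies on), which yields its response character by character:

    import asyncio

    from langchain_community.llms.fake import FakeStreamingListLLM

    llm = FakeStreamingListLLM(responses=["streamed text"])

    async def main() -> None:
        # Non-streaming models fall back to a single chunk equal to the full output.
        async for chunk in llm.astream("tell me something"):
            print(chunk, end="", flush=True)
        print()

    asyncio.run(main())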
0b2716ab8e51-8
A child runnable that gets invoked as part of the execution of a parent runnable is assigned its own unique ID. tags: Optional[List[str]] - The tags of the runnable that generated the event. metadata: Optional[Dict[str, Any]] - The metadata of the runnable that generated the event. data: Dict[str, Any] Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table. event name chunk input output on_chat_model_start [model name] {“messages”: [[SystemMessage, HumanMessage]]} on_chat_model_stream [model name] AIMessageChunk(content=”hello”) on_chat_model_end [model name] {“messages”: [[SystemMessage, HumanMessage]]} {“generations”: […], “llm_output”: None, …} on_llm_start [model name] {‘input’: ‘hello’} on_llm_stream [model name] ‘Hello’ on_llm_end [model name] ‘Hello human!’ on_chain_start format_docs on_chain_stream format_docs “hello world!, goodbye world!” on_chain_end format_docs [Document(…)] “hello world!, goodbye world!” on_tool_start some_tool {“x”: 1, “y”: “2”} on_tool_stream some_tool {“x”: 1, “y”: “2”} on_tool_end some_tool {“x”: 1, “y”: “2”} on_retriever_start [retriever name] {“query”: “hello”} on_retriever_chunk [retriever name] {documents: […]}
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-9
on_retriever_chunk [retriever name] {documents: […]} on_retriever_end [retriever name] {“query”: “hello”} {documents: […]} on_prompt_start [template_name] {“question”: “hello”} on_prompt_end [template_name] {“question”: “hello”} ChatPromptValue(messages: [SystemMessage, …]) Here are declarations associated with the events shown above: format_docs: def format_docs(docs: List[Document]) -> str: '''Format the docs.''' return ", ".join([doc.page_content for doc in docs]) format_docs = RunnableLambda(format_docs) some_tool: @tool def some_tool(x: int, y: str) -> dict: '''Some_tool.''' return {"x": x, "y": y} prompt: template = ChatPromptTemplate.from_messages( [("system", "You are Cat Agent 007"), ("human", "{question}")] ).with_config({"run_name": "my_template", "tags": ["my_template"]}) Example: from langchain_core.runnables import RunnableLambda async def reverse(s: str) -> str: return s[::-1] chain = RunnableLambda(func=reverse) events = [ event async for event in chain.astream_events("hello", version="v1") ] # will produce the following events (run_id has been omitted for brevity): [ { "data": {"input": "hello"}, "event": "on_chain_start", "metadata": {}, "name": "reverse", "tags": [], }, { "data": {"chunk": "olleh"},
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-10
}, { "data": {"chunk": "olleh"}, "event": "on_chain_stream", "metadata": {}, "name": "reverse", "tags": [], }, { "data": {"output": "olleh"}, "event": "on_chain_end", "metadata": {}, "name": "reverse", "tags": [], }, ] Parameters input (Any) – The input to the runnable. config (Optional[RunnableConfig]) – The config to use for the runnable. version (Literal['v1']) – The version of the schema to use. Currently only version 1 is available. No default will be assigned until the API is stabilized. include_names (Optional[Sequence[str]]) – Only include events from runnables with matching names. include_types (Optional[Sequence[str]]) – Only include events from runnables with matching types. include_tags (Optional[Sequence[str]]) – Only include events from runnables with matching tags. exclude_names (Optional[Sequence[str]]) – Exclude events from runnables with matching names. exclude_types (Optional[Sequence[str]]) – Exclude events from runnables with matching types. exclude_tags (Optional[Sequence[str]]) – Exclude events from runnables with matching tags. kwargs (Any) – Additional keyword arguments to pass to the runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log. Returns An async stream of StreamEvents. Return type AsyncIterator[StreamEvent] Notes
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-11
An async stream of StreamEvents. Return type AsyncIterator[StreamEvent] Notes async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. Parameters input (Any) – The input to the runnable. config (Optional[RunnableConfig]) – The config to use for the runnable. diff (bool) – Whether to yield diffs between each step, or the current state. with_streamed_output_list (bool) – Whether to yield the streamed_output list. include_names (Optional[Sequence[str]]) – Only include logs with these names. include_types (Optional[Sequence[str]]) – Only include logs with these types. include_tags (Optional[Sequence[str]]) – Only include logs with these tags. exclude_names (Optional[Sequence[str]]) – Exclude logs with these names. exclude_types (Optional[Sequence[str]]) – Exclude logs with these types. exclude_tags (Optional[Sequence[str]]) – Exclude logs with these tags.
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
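A sketch of astream_log in its default diff mode, where each yielded item is a RunLogPatch of jsonpatch operations (FakeListLLM keeps the example offline):

    import asyncio

    from langchain_community.llms.fake import FakeListLLM

    llm = FakeListLLM(responses=["logged output"])

    async def main() -> None:
        # Applying the patches in order reconstructs the final RunLog state.
        async for patch in llm.astream_log("a prompt"):
            print(patch)

    asyncio.run(main())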
0b2716ab8e51-12
exclude_tags (Optional[Sequence[str]]) – Exclude logs with these tags. kwargs (Any) – Return type Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]] async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. Parameters input (AsyncIterator[Input]) – config (Optional[RunnableConfig]) – kwargs (Optional[Any]) – Return type AsyncIterator[Output] batch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. Parameters inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Any) – Return type List[str]
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
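A sketch of the synchronous batch call, showing the max_concurrency config key for capping the thread pool fan-out (the key name comes from RunnableConfig; the values here are illustrative):

    from langchain_community.llms.fake import FakeListLLM

    llm = FakeListLLM(responses=["a", "b", "c"])

    # return_exceptions=True reports per-input failures instead of raising the first one.
    results = llm.batch(
        ["p1", "p2", "p3"],
        config={"max_concurrency": 2},
        return_exceptions=True,
    )
    print(results)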
0b2716ab8e51-13
kwargs (Any) – Return type List[str] batch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → Iterator[Tuple[int, Union[Output, Exception]]]¶ Run invoke in parallel on a list of inputs, yielding results as they complete. Parameters inputs (List[Input]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Optional[Any]) – Return type Iterator[Tuple[int, Union[Output, Exception]]] bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. Useful when a runnable in a chain requires an argument that is not in the output of the previous runnable or included in the user input. Example: from langchain_community.chat_models import ChatOllama from langchain_core.output_parsers import StrOutputParser llm = ChatOllama(model='llama2') # Without bind. chain = ( llm | StrOutputParser() ) chain.invoke("Repeat quoted words exactly: 'One two three four five.'") # Output is 'One two three four five.' # With bind. chain = ( llm.bind(stop=["three"]) | StrOutputParser() ) chain.invoke("Repeat quoted words exactly: 'One two three four five.'") # Output is 'One two' Parameters kwargs (Any) – Return type Runnable[Input, Output] config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-14
The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include (Optional[Sequence[str]]) – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. Return type Type[BaseModel] configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ Configure alternatives for runnables that can be set at runtime. from langchain_anthropic import ChatAnthropic from langchain_core.runnables.utils import ConfigurableField from langchain_openai import ChatOpenAI model = ChatAnthropic( model_name="claude-3-sonnet-20240229" ).configurable_alternatives( ConfigurableField(id="llm"), default_key="anthropic", openai=ChatOpenAI() ) # uses the default model ChatAnthropic print(model.invoke("which organization created you?").content) # uses ChatOpenAI print( model.with_config( configurable={"llm": "openai"} ).invoke("which organization created you?").content ) Parameters which (ConfigurableField) – default_key (str) – prefix_keys (bool) – kwargs (Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) – Return type RunnableSerializable[Input, Output]
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-15
Return type RunnableSerializable[Input, Output] configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ Configure particular runnable fields at runtime. from langchain_core.runnables import ConfigurableField from langchain_openai import ChatOpenAI model = ChatOpenAI(max_tokens=20).configurable_fields( max_tokens=ConfigurableField( id="output_token_number", name="Max tokens in the output", description="The maximum number of tokens in the output", ) ) # max_tokens = 20 print( "max_tokens_20: ", model.invoke("tell me something about chess").content ) # max_tokens = 200 print("max_tokens_200: ", model.with_config( configurable={"output_token_number": 200} ).invoke("tell me something about chess").content ) Parameters kwargs (Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) – Return type RunnableSerializable[Input, Output] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-16
values (Any) – Return type Model copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict classmethod from_orm(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-17
Parameters obj (Any) – Return type Model generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to a model and return generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[str]) – List of string prompts. stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. tags (Optional[Union[List[str], List[List[str]]]]) –
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-18
tags (Optional[Union[List[str], List[List[str]]]]) – metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) – run_name (Optional[Union[str, List[str]]]) – run_id (Optional[Union[UUID, List[Optional[UUID]]]]) – **kwargs – Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-19
functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult get_graph(config: Optional[RunnableConfig] = None) → Graph¶ Return a graph representation of this runnable. Parameters config (Optional[RunnableConfig]) – Return type Graph get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config (Optional[RunnableConfig]) – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. Return type Type[BaseModel] classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] Return type List[str] get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶ Get the name of the runnable. Parameters suffix (Optional[str]) – name (Optional[str]) – Return type str get_num_tokens(text: str) → int¶ Get the number of tokens present in the text.
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-20
Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text (str) – The string input to tokenize. Returns The integer number of tokens in the text. Return type int get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages (List[BaseMessage]) – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. Return type int get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows you to get an output schema for a specific configuration. Parameters config (Optional[RunnableConfig]) – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. Return type Type[BaseModel] get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶ Parameters config (Optional[RunnableConfig]) – Return type List[BasePromptTemplate] get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text (str) – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. Return type List[int]
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
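A sketch of the token-counting helpers. The default implementation tokenizes with a GPT-2 tokenizer (so the transformers package must be installed), meaning the count is only an approximation for models with a different vocabulary:

    from langchain_community.llms.fake import FakeListLLM

    llm = FakeListLLM(responses=["unused"])

    text = "How many tokens is this sentence?"
    print(llm.get_num_tokens(text))   # integer count
    print(llm.get_token_ids(text))    # ids in the order they occur in the text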
0b2716ab8e51-21
Return type List[int] invoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Transform a single input into an output. Override to implement. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – The input to the runnable. config (Optional[RunnableConfig]) – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. stop (Optional[List[str]]) – kwargs (Any) – Returns The output of the runnable. Return type str classmethod is_lc_serializable() → bool¶ Is this class serializable? Return type bool json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-22
Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type unicode classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. Return type List[str] map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. Example from langchain_core.runnables import RunnableLambda def _lambda(x: int) -> int: return x + 1 runnable = RunnableLambda(_lambda) print(runnable.map().invoke([1, 2, 3])) # [2, 3, 4] Return type Runnable[List[Input], List[Output]] classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters path (Union[str, Path]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model classmethod parse_obj(obj: Any) → Model¶ Parameters obj (Any) – Return type Model
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-23
Parameters obj (Any) – Return type Model classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ Parameters b (Union[str, bytes]) – content_type (unicode) – encoding (unicode) – proto (Protocol) – allow_pickle (bool) – Return type Model pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶ Pick keys from the dict output of this runnable. Pick single key: import json from langchain_core.runnables import RunnableLambda, RunnableMap as_str = RunnableLambda(str) as_json = RunnableLambda(json.loads) chain = RunnableMap(str=as_str, json=as_json) chain.invoke("[1, 2, 3]") # -> {"str": "[1, 2, 3]", "json": [1, 2, 3]} json_only_chain = chain.pick("json") json_only_chain.invoke("[1, 2, 3]") # -> [1, 2, 3] Pick list of keys: from typing import Any import json from langchain_core.runnables import RunnableLambda, RunnableMap as_str = RunnableLambda(str) as_json = RunnableLambda(json.loads) def as_bytes(x: Any) -> bytes: return bytes(x, "utf-8") chain = RunnableMap( str=as_str, json=as_json, bytes=RunnableLambda(as_bytes) ) chain.invoke("[1, 2, 3]")
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-24
) chain.invoke("[1, 2, 3]") # -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"} json_and_bytes_chain = chain.pick(["json", "bytes"]) json_and_bytes_chain.invoke("[1, 2, 3]") # -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"} Parameters keys (Union[str, List[str]]) – Return type RunnableSerializable[Any, Any] pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶ Compose this Runnable with Runnable-like objects to make a RunnableSequence. Equivalent to RunnableSequence(self, *others) or self | others[0] | … Example from langchain_core.runnables import RunnableLambda def add_one(x: int) -> int: return x + 1 def mul_two(x: int) -> int: return x * 2 runnable_1 = RunnableLambda(add_one) runnable_2 = RunnableLambda(mul_two) sequence = runnable_1.pipe(runnable_2) # Or equivalently: # sequence = runnable_1 | runnable_2 # sequence = RunnableSequence(first=runnable_1, last=runnable_2) sequence.invoke(1) await sequence.ainvoke(1) # -> 4 sequence.batch([1, 2, 3]) await sequence.abatch([1, 2, 3]) # -> [4, 6, 8] Parameters
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-25
# -> [4, 6, 8] Parameters others (Union[Runnable[Any, Other], Callable[[Any], Other]]) – name (Optional[str]) – Return type RunnableSerializable[Input, Other] predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ [Deprecated] Notes Deprecated since version 0.1.7: Use invoke instead. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ [Deprecated] Notes Deprecated since version 0.1.7: Use invoke instead. Parameters messages (List[BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type BaseMessage save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path (Union[Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ Parameters by_alias (bool) – ref_template (unicode) – Return type DictStrAny classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ Parameters by_alias (bool) – ref_template (unicode) – dumps_kwargs (Any) – Return type
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-26
ref_template (unicode) – dumps_kwargs (Any) – Return type unicode stream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type Iterator[str] to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ Serialize the runnable to JSON. Return type Union[SerializedConstructor, SerializedNotImplemented] to_json_not_implemented() → SerializedNotImplemented¶ Return type SerializedNotImplemented transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. Parameters input (Iterator[Input]) – config (Optional[RunnableConfig]) – kwargs (Optional[Any]) – Return type Iterator[Output] classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None classmethod validate(value: Any) → Model¶ Parameters value (Any) – Return type
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
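A sketch of the synchronous stream loop, mirroring the astream example earlier (FakeStreamingListLLM emits the response character by character):

    from langchain_community.llms.fake import FakeStreamingListLLM

    llm = FakeStreamingListLLM(responses=["chunked output"])

    for chunk in llm.stream("a prompt"):
        print(chunk, end="", flush=True)
    print()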
0b2716ab8e51-27
Parameters value (Any) – Return type Model with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. Parameters config (Optional[RunnableConfig]) – kwargs (Any) – Return type Runnable[Input, Output] with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,), exception_key: Optional[str] = None) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Example from typing import Iterator from langchain_core.runnables import RunnableGenerator def _generate_immediate_error(input: Iterator) -> Iterator[str]: raise ValueError() yield "" def _generate(input: Iterator) -> Iterator[str]: yield from "foo bar" runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks( [RunnableGenerator(_generate)] ) print(''.join(runnable.stream({}))) # foo bar Parameters fallbacks (Sequence[Runnable[Input, Output]]) – A sequence of runnables to try if the original runnable fails. exceptions_to_handle (Tuple[Type[BaseException], ...]) – A tuple of exception types to handle. exception_key (Optional[str]) – If a string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base runnable and its fallbacks must accept a dictionary as input. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. Return type
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-28
fallback in order, upon failures. Return type RunnableWithFallbacksT[Input, Output] with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. Example: Parameters on_start (Optional[Listener]) – on_end (Optional[Listener]) – on_error (Optional[Listener]) – Return type Runnable[Input, Output] with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Example: Parameters retry_if_exception_type (Tuple[Type[BaseException], ...]) – A tuple of exception types to retry on wait_exponential_jitter (bool) – Whether to add jitter to the wait time between retries stop_after_attempt (int) – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. Return type Runnable[Input, Output]
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
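The with_retry docstring above has an empty example slot, so here is a minimal sketch under the stated signature (the exception type and attempt count are illustrative):

    from langchain_community.llms.fake import FakeListLLM

    llm = FakeListLLM(responses=["ok"])

    # Retries up to 3 attempts on ValueError, with jittered exponential backoff.
    resilient = llm.with_retry(
        retry_if_exception_type=(ValueError,),
        wait_exponential_jitter=True,
        stop_after_attempt=3,
    )
    print(resilient.invoke("a prompt"))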
0b2716ab8e51-29
Return type Runnable[Input, Output] with_structured_output(schema: Union[Dict, Type[BaseModel]], **kwargs: Any) → Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]¶ [Beta] Implement this if there is a way of steering the model to generate responses that match a given schema. Notes Parameters schema (Union[Dict, Type[BaseModel]]) – kwargs (Any) – Return type Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]] with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. Parameters input_type (Optional[Type[Input]]) – output_type (Optional[Type[Output]]) – Return type Runnable[Input, Output] property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the output type for this runnable. property config_specs: List[ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”}
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
0b2716ab8e51-30
For example, {“openai_api_key”: “OPENAI_API_KEY”} name: Optional[str] = None¶ The name of the runnable. Used for debugging and tracing. property output_schema: Type[BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using GooseAI¶ GooseAI
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gooseai.GooseAI.html
f7bd1500e25f-0
langchain_community.llms.azureml_endpoint.DollyContentFormatter¶ class langchain_community.llms.azureml_endpoint.DollyContentFormatter[source]¶ Content handler for the Dolly-v2-12b model. Attributes accepts The MIME type of the response data returned from the endpoint. content_type The MIME type of the input data passed to the endpoint. format_error_msg supported_api_types Supported APIs for the given formatter. Methods __init__() escape_special_characters(prompt) Escapes any special characters in prompt. format_request_payload(prompt, model_kwargs, ...) Formats the request body according to the input schema of the model. format_response_payload(output, api_type) Formats the response body according to the output schema of the model. __init__()¶ static escape_special_characters(prompt: str) → str¶ Escapes any special characters in prompt. Parameters prompt (str) – Return type str format_request_payload(prompt: str, model_kwargs: Dict, api_type: AzureMLEndpointApiType) → bytes[source]¶ Formats the request body according to the input schema of the model. Returns bytes or seekable file like object in the format specified in the content_type request header. Parameters prompt (str) – model_kwargs (Dict) – api_type (AzureMLEndpointApiType) – Return type bytes format_response_payload(output: bytes, api_type: AzureMLEndpointApiType) → Generation[source]¶ Formats the response body according to the output schema of the model. Returns the data type that is received from the response. Parameters output (bytes) – api_type (AzureMLEndpointApiType) – Return type Generation Examples using DollyContentFormatter¶ Azure ML
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.azureml_endpoint.DollyContentFormatter.html
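A sketch of plugging DollyContentFormatter into an AzureMLOnlineEndpoint, the usual way content formatters are consumed; the endpoint URL and key are placeholders for a deployed Dolly-v2-12b endpoint, and the model_kwargs values are illustrative:

    from langchain_community.llms.azureml_endpoint import (
        AzureMLOnlineEndpoint,
        DollyContentFormatter,
    )

    llm = AzureMLOnlineEndpoint(
        endpoint_url="https://<endpoint>.<region>.inference.ml.azure.com/score",  # placeholder
        endpoint_api_key="<api-key>",  # placeholder
        content_formatter=DollyContentFormatter(),
        model_kwargs={"temperature": 0.8, "max_tokens": 100},
    )
    print(llm.invoke("Explain what a content formatter does."))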
e90555ed6baf-0
langchain_anthropic.llms.AnthropicLLM¶ class langchain_anthropic.llms.AnthropicLLM[source]¶ Bases: LLM, _AnthropicCommon Anthropic large language model. To use, you should have the environment variable ANTHROPIC_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example from langchain_anthropic import AnthropicLLM model = AnthropicLLM() Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param AI_PROMPT: Optional[str] = None¶ param HUMAN_PROMPT: Optional[str] = None¶ param anthropic_api_key: Optional[SecretStr] = None¶ Constraints type = string writeOnly = True format = password param anthropic_api_url: Optional[str] = None¶ param cache: Union[BaseCache, bool, None] = None¶ Whether to cache the response. If true, will use the global cache. If false, will not use a cache If None, will use the global cache if it’s set, otherwise no cache. If instance of BaseCache, will use the provided cache. Caching is not currently supported for streaming methods of models. param callback_manager: Optional[BaseCallbackManager] = None¶ [DEPRECATED] param callbacks: Callbacks = None¶ Callbacks to add to the run trace. param count_tokens: Optional[Callable[[str], int]] = None¶ param default_request_timeout: Optional[float] = None¶ Timeout for requests to Anthropic Completion API. Default is 600 seconds. param max_retries: int = 2¶ Number of retries allowed for requests sent to the Anthropic Completion API.
https://api.python.langchain.com/en/latest/llms/langchain_anthropic.llms.AnthropicLLM.html
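Expanding the constructor example above into a fuller sketch (assumes ANTHROPIC_API_KEY is set; the parameter values are illustrative, and each field is documented below):

    from langchain_anthropic import AnthropicLLM

    model = AnthropicLLM(
        model="claude-2",          # the default; alias of model_name
        temperature=0.3,           # lower values reduce randomness
        max_tokens_to_sample=256,  # tokens to predict per generation
    )
    print(model.invoke("What can you help me with?"))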
e90555ed6baf-1
Number of retries allowed for requests sent to the Anthropic Completion API. param max_tokens_to_sample: int = 1024 (alias 'max_tokens')¶ Denotes the number of tokens to predict per generation. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model: str = 'claude-2' (alias 'model_name')¶ Model name to use. param model_kwargs: Dict[str, Any] [Optional]¶ param streaming: bool = False¶ Whether to stream the results. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: Optional[float] = None¶ A non-negative float that tunes the degree of randomness in generation. param top_k: Optional[int] = None¶ Number of most likely tokens to consider at each step. param top_p: Optional[float] = None¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ [Deprecated] Check Cache and run the LLM on the given prompt and input. Notes Deprecated since version 0.1.7: Use invoke instead. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) – tags (Optional[List[str]]) – metadata (Optional[Dict[str, Any]]) – kwargs (Any) – Return type str
https://api.python.langchain.com/en/latest/llms/langchain_anthropic.llms.AnthropicLLM.html
e90555ed6baf-2
kwargs (Any) – Return type str async abatch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. Parameters inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Any) – Return type List[str] async abatch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → AsyncIterator[Tuple[int, Union[Output, Exception]]]¶ Run ainvoke in parallel on a list of inputs, yielding results as they complete. Parameters inputs (List[Input]) – config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) – return_exceptions (bool) – kwargs (Optional[Any]) – Return type AsyncIterator[Tuple[int, Union[Output, Exception]]]
https://api.python.langchain.com/en/latest/llms/langchain_anthropic.llms.AnthropicLLM.html
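A sketch of abatch_as_completed, which, unlike abatch, yields (index, output) pairs as each input finishes rather than waiting for all of them (FakeListLLM stands in for AnthropicLLM, since every BaseLLM shares this Runnable interface):

    import asyncio

    from langchain_community.llms.fake import FakeListLLM

    llm = FakeListLLM(responses=["r1", "r2", "r3"])

    async def main() -> None:
        # The index identifies which input each output belongs to.
        async for idx, output in llm.abatch_as_completed(["p1", "p2", "p3"]):
            print(idx, output)

    asyncio.run(main())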
e90555ed6baf-3
Return type AsyncIterator[Tuple[int, Union[Output, Exception]]] async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts to a model and return generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[str]) – List of string prompts. stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. tags (Optional[Union[List[str], List[List[str]]]]) –
https://api.python.langchain.com/en/latest/llms/langchain_anthropic.llms.AnthropicLLM.html
e90555ed6baf-4
tags (Optional[Union[List[str], List[List[str]]]]) – metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) – run_name (Optional[Union[str, List[str]]]) – run_id (Optional[Union[UUID, List[Optional[UUID]]]]) – **kwargs – Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
https://api.python.langchain.com/en/latest/llms/langchain_anthropic.llms.AnthropicLLM.html
e90555ed6baf-5
functionality, such as logging or streaming, throughout generation. **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. Return type LLMResult async ainvoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type str async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ [Deprecated] Notes Deprecated since version 0.1.7: Use ainvoke instead. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ [Deprecated] Notes Deprecated since version 0.1.7: Use ainvoke instead. Parameters messages (List[BaseMessage]) –
https://api.python.langchain.com/en/latest/llms/langchain_anthropic.llms.AnthropicLLM.html
e90555ed6baf-6
Parameters messages (List[BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type BaseMessage assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶ Assigns new fields to the dict output of this runnable. Returns a new runnable. from langchain_community.llms.fake import FakeStreamingListLLM from langchain_core.output_parsers import StrOutputParser from langchain_core.prompts import SystemMessagePromptTemplate from langchain_core.runnables import Runnable from operator import itemgetter prompt = ( SystemMessagePromptTemplate.from_template("You are a nice assistant.") + "{question}" ) llm = FakeStreamingListLLM(responses=["foo-lish"]) chain: Runnable = prompt | llm | {"str": StrOutputParser()} chain_with_assign = chain.assign(hello=itemgetter("str") | llm) print(chain_with_assign.input_schema.schema()) # {'title': 'PromptInput', 'type': 'object', 'properties': {'question': {'title': 'Question', 'type': 'string'}}} print(chain_with_assign.output_schema.schema()) # {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties': {'str': {'title': 'Str', 'type': 'string'}, 'hello': {'title': 'Hello', 'type': 'string'}}} Parameters kwargs (Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) –
https://api.python.langchain.com/en/latest/llms/langchain_anthropic.llms.AnthropicLLM.html
e90555ed6baf-7
Return type RunnableSerializable[Any, Any] async astream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. Parameters input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – config (Optional[RunnableConfig]) – stop (Optional[List[str]]) – kwargs (Any) – Return type AsyncIterator[str] astream_events(input: Any, config: Optional[RunnableConfig] = None, *, version: Literal['v1'], include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → AsyncIterator[StreamEvent]¶ [Beta] Generate a stream of events. Use to create an iterator over StreamEvents that provide real-time information about the progress of the runnable, including StreamEvents from intermediate results. A StreamEvent is a dictionary with the following schema: event: str - Event names are of the format: on_[runnable_type]_(start|stream|end). name: str - The name of the runnable that generated the event. run_id: str - randomly generated ID associated with the given execution of the runnable that emitted the event. A child runnable that gets invoked as part of the execution of a
https://api.python.langchain.com/en/latest/llms/langchain_anthropic.llms.AnthropicLLM.html
e90555ed6baf-8
A child runnable that gets invoked as part of the execution of a parent runnable is assigned its own unique ID. tags: Optional[List[str]] - The tags of the runnable that generated the event. metadata: Optional[Dict[str, Any]] - The metadata of the runnable that generated the event. data: Dict[str, Any] Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table. event name chunk input output on_chat_model_start [model name] {“messages”: [[SystemMessage, HumanMessage]]} on_chat_model_stream [model name] AIMessageChunk(content=”hello”) on_chat_model_end [model name] {“messages”: [[SystemMessage, HumanMessage]]} {“generations”: […], “llm_output”: None, …} on_llm_start [model name] {‘input’: ‘hello’} on_llm_stream [model name] ‘Hello’ on_llm_end [model name] ‘Hello human!’ on_chain_start format_docs on_chain_stream format_docs “hello world!, goodbye world!” on_chain_end format_docs [Document(…)] “hello world!, goodbye world!” on_tool_start some_tool {“x”: 1, “y”: “2”} on_tool_stream some_tool {“x”: 1, “y”: “2”} on_tool_end some_tool {“x”: 1, “y”: “2”} on_retriever_start [retriever name] {“query”: “hello”} on_retriever_chunk [retriever name] {documents: […]}
on_retriever_end     | [retriever name] |                                 | {"query": "hello"}                            | {documents: [...]}
on_prompt_start      | [template_name]  |                                 | {"question": "hello"}                         |
on_prompt_end        | [template_name]  |                                 | {"question": "hello"}                         | ChatPromptValue(messages: [SystemMessage, ...])

Here are declarations associated with the events shown above:
format_docs:
def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)

some_tool:
@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:
template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

Example:
from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event
    async for event in chain.astream_events("hello", version="v1")
]

# will produce the following events (run_id has been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
}, { "data": {"chunk": "olleh"}, "event": "on_chain_stream", "metadata": {}, "name": "reverse", "tags": [], }, { "data": {"output": "olleh"}, "event": "on_chain_end", "metadata": {}, "name": "reverse", "tags": [], }, ] Parameters input (Any) – The input to the runnable. config (Optional[RunnableConfig]) – The config to use for the runnable. version (Literal['v1']) – The version of the schema to use. Currently only version 1 is available. No default will be assigned until the API is stabilized. include_names (Optional[Sequence[str]]) – Only include events from runnables with matching names. include_types (Optional[Sequence[str]]) – Only include events from runnables with matching types. include_tags (Optional[Sequence[str]]) – Only include events from runnables with matching tags. exclude_names (Optional[Sequence[str]]) – Exclude events from runnables with matching names. exclude_types (Optional[Sequence[str]]) – Exclude events from runnables with matching types. exclude_tags (Optional[Sequence[str]]) – Exclude events from runnables with matching tags. kwargs (Any) – Additional keyword arguments to pass to the runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log. Returns An async stream of StreamEvents. Return type AsyncIterator[StreamEvent] Notes
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.
Parameters
input (Any) – The input to the runnable.
config (Optional[RunnableConfig]) – The config to use for the runnable.
diff (bool) – Whether to yield diffs between each step, or the current state.
with_streamed_output_list (bool) – Whether to yield the streamed_output list.
include_names (Optional[Sequence[str]]) – Only include logs with these names.
include_types (Optional[Sequence[str]]) – Only include logs with these types.
include_tags (Optional[Sequence[str]]) – Only include logs with these tags.
exclude_names (Optional[Sequence[str]]) – Exclude logs with these names.
exclude_types (Optional[Sequence[str]]) – Exclude logs with these types.
exclude_tags (Optional[Sequence[str]]) – Exclude logs with these tags.
kwargs (Any) –
Return type
Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]
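To illustrate the patch stream, here is a hedged sketch using a trivial RunnableLambda rather than the LLM itself; the ops attribute of each RunLogPatch holds the jsonpatch operations described above.

import asyncio
from langchain_core.runnables import RunnableLambda

async def main() -> None:
    chain = RunnableLambda(lambda s: s[::-1])
    # With diff=True (the default), each item is a RunLogPatch of jsonpatch ops.
    async for patch in chain.astream_log("hello", diff=True):
        print(patch.ops)

asyncio.run(main())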
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated.
Parameters
input (AsyncIterator[Input]) –
config (Optional[RunnableConfig]) –
kwargs (Optional[Any]) –
Return type
AsyncIterator[Output]

batch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO-bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode.
Parameters
inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Any) –
Return type
List[str]
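A brief sketch of batch usage, with illustrative values not taken from the original page; max_concurrency is a standard RunnableConfig key that caps the thread pool parallelism.

from langchain_anthropic import AnthropicLLM

llm = AnthropicLLM(model="claude-2.1")  # illustrative; assumes the API key is configured
prompts = ["Translate 'hello' to French.", "Translate 'hello' to Spanish."]
# Results come back in the same order as the inputs.
results = llm.batch(prompts, config={"max_concurrency": 2})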
batch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → Iterator[Tuple[int, Union[Output, Exception]]]¶
Run invoke in parallel on a list of inputs, yielding results as they complete.
Parameters
inputs (List[Input]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Optional[Any]) –
Return type
Iterator[Tuple[int, Union[Output, Exception]]]

bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
Useful when a runnable in a chain requires an argument that is not in the output of the previous runnable or included in the user input.
Example:
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model='llama2')

# Without bind.
chain = (
    llm
    | StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = (
    llm.bind(stop=["three"])
    | StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'

Parameters
kwargs (Any) –
Return type
Runnable[Input, Output]

config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts, specified as a pydantic model.
To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.
Parameters
include (Optional[Sequence[str]]) – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
Return type
Type[BaseModel]

configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
Configure alternatives for runnables that can be set at runtime.
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-sonnet-20240229"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI()
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(
        configurable={"llm": "openai"}
    ).invoke("which organization created you?").content
)

Parameters
which (ConfigurableField) –
default_key (str) –
prefix_keys (bool) –
kwargs (Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) –
Return type
RunnableSerializable[Input, Output]
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
Configure particular runnable fields at runtime.
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content
)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(
        configurable={"output_token_number": 200}
    ).invoke("tell me something about chess").content
)

Parameters
kwargs (Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) –
Return type
RunnableSerializable[Input, Output]

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) –
values (Any) –
Return type
Model

convert_prompt(prompt: PromptValue) → str[source]¶
Parameters
prompt (PromptValue) –
Return type
str
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choosing which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in the new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from the new model; as with values, this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model) –
Returns
new model instance
Return type
Model

dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
Parameters
kwargs (Any) –
Return type
Dict

classmethod from_orm(obj: Any) → Model¶
Parameters
obj (Any) –
Return type
Model
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[str]) – List of string prompts.
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
tags (Optional[Union[List[str], List[List[str]]]]) –
metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) –
run_name (Optional[Union[str, List[str]]]) –
run_id (Optional[Union[UUID, List[Optional[UUID]]]]) –
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
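A hedged sketch of how the returned LLMResult is typically unpacked, reusing the illustrative llm instance from the sketches above; result.generations holds one list of candidate Generations per input prompt.

result = llm.generate(
    ["Write a haiku about the sea.", "Write a limerick about cats."],
    stop=["\n\n"],  # illustrative stop sequence
)
for candidates in result.generations:
    # candidates[0] is the top Generation for that prompt.
    print(candidates[0].text)
print(result.llm_output)  # provider-specific metadata; may be None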
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
Parameters
config (Optional[RunnableConfig]) –
Return type
Graph

get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config (Optional[RunnableConfig]) – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
Return type
Type[BaseModel]

classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"].
Return type
List[str]

get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
Parameters
suffix (Optional[str]) –
name (Optional[str]) –
Return type
str

get_num_tokens(text: str) → int[source]¶
Calculate number of tokens.
Parameters
text (str) –
Return type
int
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model's context window.
Parameters
messages (List[BaseMessage]) – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
Return type
int

get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config (Optional[RunnableConfig]) – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
Return type
Type[BaseModel]

get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
Parameters
config (Optional[RunnableConfig]) –
Return type
List[BasePromptTemplate]

get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text (str) – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
Return type
List[int]
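As a quick illustration (a sketch under the assumption that the default behavior applies, where the exact tokenizer depends on the model), the two token helpers typically agree with each other:

text = "How many tokens is this sentence?"
token_ids = llm.get_token_ids(text)
# get_num_tokens is typically the length of get_token_ids for the same text.
assert llm.get_num_tokens(text) == len(token_ids)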
invoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Transform a single input into an output. Override to implement.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – The input to the runnable.
config (Optional[RunnableConfig]) – A config to use when invoking the runnable. The config supports standard keys like 'tags' and 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details.
stop (Optional[List[str]]) –
kwargs (Any) –
Returns
The output of the runnable.
Return type
str
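A minimal invoke sketch, reusing the illustrative llm instance from above; the prompt, tags, and stop sequence are all illustrative values:

answer = llm.invoke(
    "List three colors, one per line:",
    config={"tags": ["demo"]},  # standard RunnableConfig keys
    stop=["\n4"],               # illustrative stop sequence
)
print(answer)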
classmethod is_lc_serializable() → bool¶
Is this class serializable?
Return type
bool

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
by_alias (bool) –
skip_defaults (Optional[bool]) –
exclude_unset (bool) –
exclude_defaults (bool) –
exclude_none (bool) –
encoder (Optional[Callable[[Any], Any]]) –
models_as_dict (bool) –
dumps_kwargs (Any) –
Return type
unicode

classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path to the object.
Return type
List[str]

map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input.
Example:
from langchain_core.runnables import RunnableLambda

def _lambda(x: int) -> int:
    return x + 1

runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]

Return type
Runnable[List[Input], List[Output]]

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
Parameters
path (Union[str, Path]) –
content_type (unicode) –
encoding (unicode) –
proto (Protocol) –
allow_pickle (bool) –
Return type
Model

classmethod parse_obj(obj: Any) → Model¶
Parameters
obj (Any) –
Return type
Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
Parameters
b (Union[str, bytes]) –
content_type (unicode) –
encoding (unicode) –
proto (Protocol) –
allow_pickle (bool) –
Return type
Model

pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Pick single key:
import json
from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]

Pick list of keys:
from typing import Any

import json
from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)

def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")

chain = RunnableMap(
    str=as_str,
    json=as_json,
    bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

Parameters
keys (Union[str, List[str]]) –
Return type
RunnableSerializable[Any, Any]
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this Runnable with Runnable-like objects to make a RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | ...
Example:
from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

def mul_two(x: int) -> int:
    return x * 2

runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)

sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]

Parameters
others (Union[Runnable[Any, Other], Callable[[Any], Other]]) –
name (Optional[str]) –
Return type
RunnableSerializable[Input, Other]
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str

predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
messages (List[BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
BaseMessage

save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path (Union[Path, str]) – Path to file to save the LLM to.
Return type
None
Example:
llm.save(file_path="path/llm.yaml")

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
Parameters
by_alias (bool) –
ref_template (unicode) –
Return type
DictStrAny

classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
Parameters
by_alias (bool) –
ref_template (unicode) –
dumps_kwargs (Any) –
Return type
unicode
stream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) –
config (Optional[RunnableConfig]) –
stop (Optional[List[str]]) –
kwargs (Any) –
Return type
Iterator[str]
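A short, hedged example of synchronous streaming, reusing the illustrative llm instance from above; for an LLM each chunk is a plain string:

for chunk in llm.stream("Write one sentence about rivers."):
    print(chunk, end="", flush=True)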
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
Serialize the runnable to JSON.
Return type
Union[SerializedConstructor, SerializedNotImplemented]

to_json_not_implemented() → SerializedNotImplemented¶
Return type
SerializedNotImplemented

transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.
Parameters
input (Iterator[Input]) –
config (Optional[RunnableConfig]) –
kwargs (Optional[Any]) –
Return type
Iterator[Output]

classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) –
Return type
None

classmethod validate(value: Any) → Model¶
Parameters
value (Any) –
Return type
Model

with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
Parameters
config (Optional[RunnableConfig]) –
kwargs (Any) –
Return type
Runnable[Input, Output]
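A hedged sketch of binding a config up front so every later call carries the same tags and metadata; the values shown are illustrative:

tagged_llm = llm.with_config({"tags": ["prod"], "metadata": {"user": "demo"}})
# The bound config is merged into each subsequent invocation.
tagged_llm.invoke("ping")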
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,), exception_key: Optional[str] = None) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Example:
from typing import Iterator

from langchain_core.runnables import RunnableGenerator

def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""

def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"

runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print(''.join(runnable.stream({})))  # foo bar

Parameters
fallbacks (Sequence[Runnable[Input, Output]]) – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle (Tuple[Type[BaseException], ...]) – A tuple of exception types to handle.
exception_key (Optional[str]) – If a string is specified, then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base runnable and its fallbacks must accept a dictionary as input.
Returns
A new Runnable that will try the original runnable, and then each fallback in order, upon failures.
Return type
RunnableWithFallbacksT[Input, Output]

with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
Parameters
on_start (Optional[Listener]) –
on_end (Optional[Listener]) –
on_error (Optional[Listener]) –
Return type
Runnable[Input, Output]

with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type (Tuple[Type[BaseException], ...]) – A tuple of exception types to retry on
wait_exponential_jitter (bool) – Whether to add jitter to the wait time between retries
stop_after_attempt (int) – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
Return type
Runnable[Input, Output]
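Since the reference elides an example for with_retry, here is a minimal sketch; the exception type and limits are illustrative rather than taken from the original page:

resilient_llm = llm.with_retry(
    retry_if_exception_type=(TimeoutError,),  # illustrative; defaults to (Exception,)
    wait_exponential_jitter=True,
    stop_after_attempt=3,
)
resilient_llm.invoke("ping")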
with_structured_output(schema: Union[Dict, Type[BaseModel]], **kwargs: Any) → Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]¶
[Beta] Implement this if there is a way of steering the model to generate responses that match a given schema.
Notes
Parameters
schema (Union[Dict, Type[BaseModel]]) –
kwargs (Any) –
Return type
Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]

with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
Parameters
input_type (Optional[Type[Input]]) –
output_type (Optional[Type[Output]]) –
Return type
Runnable[Input, Output]

property InputType: TypeAlias¶
Get the input type for this runnable.

property OutputType: Type[str]¶
Get the output type for this runnable.

property config_specs: List[ConfigurableFieldSpec]¶
List configurable fields for this runnable.

property input_schema: Type[BaseModel]¶
The type of input this runnable accepts, specified as a pydantic model.

property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.

property output_schema: Type[BaseModel]¶
The type of output this runnable produces, specified as a pydantic model.
langchain_community.llms.bigdl_llm.BigdlLLM¶
class langchain_community.llms.bigdl_llm.BigdlLLM[source]¶
Bases: IpexLLM
Wrapper around the BigdlLLM model.
Example:
from langchain_community.llms import BigdlLLM

llm = BigdlLLM.from_model_id(model_id="THUDM/chatglm-6b")

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Union[BaseCache, bool, None] = None¶
Whether to cache the response.
If true, will use the global cache.
If false, will not use a cache.
If None, will use the global cache if it's set, otherwise no cache.
If instance of BaseCache, will use the provided cache.
Caching is not currently supported for streaming methods of models.
param callback_manager: Optional[BaseCallbackManager] = None¶
[DEPRECATED]
param callbacks: Callbacks = None¶
Callbacks to add to the run trace.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model: Any = None¶
IpexLLM model.
param model_id: str = 'gpt2'¶
Model name or model path to use.
param model_kwargs: Optional[dict] = None¶
Keyword arguments passed to the model.
param streaming: bool = True¶
Whether to stream the results, token by token.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param tokenizer: Any = None¶
Huggingface tokenizer model.
param verbose: bool [Optional]¶
Whether to print out response text.
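Building on the class example above, a hedged end-to-end sketch; the model_kwargs shown are illustrative and depend on the chosen Hugging Face model:

from langchain_community.llms import BigdlLLM

llm = BigdlLLM.from_model_id(
    model_id="THUDM/chatglm-6b",
    model_kwargs={"temperature": 0.2, "trust_remote_code": True},  # illustrative kwargs
)
print(llm.invoke("What is BigDL?"))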
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
[Deprecated] Check Cache and run the LLM on the given prompt and input.
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
prompt (str) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
str

async abatch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO-bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode.
Parameters
inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Any) –
Return type
List[str]
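A hedged async sketch mirroring the synchronous batch usage, reusing the illustrative BigdlLLM instance from the sketch above; inputs and the concurrency cap are illustrative:

import asyncio

async def main() -> None:
    results = await llm.abatch(
        ["Say hi.", "Say bye."],
        config={"max_concurrency": 2},
    )
    print(results)

asyncio.run(main())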
async abatch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → AsyncIterator[Tuple[int, Union[Output, Exception]]]¶
Run ainvoke in parallel on a list of inputs, yielding results as they complete.
Parameters
inputs (List[Input]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Optional[Any]) –
Return type
AsyncIterator[Tuple[int, Union[Output, Exception]]]

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[str]) – List of string prompts.