langchain_community.llms.mlx_pipeline.MLXPipeline¶
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include (Optional[Sequence[str]]) – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
Return type
Type[BaseModel]
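For example, a minimal sketch (assuming llm is any constructed runnable, such as an MLXPipeline instance):
# Inspect which config keys this runnable accepts (hypothetical usage).
schema = llm.config_schema(include=["tags", "metadata"])
print(schema.schema())  # JSON-schema dict for validating a config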
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
Configure alternatives for runnables that can be set at runtime.
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-sonnet-20240229"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI()
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(
        configurable={"llm": "openai"}
    ).invoke("which organization created you?").content
)
Parameters
which (ConfigurableField) –
default_key (str) –
prefix_keys (bool) –
kwargs (Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) –
Return type
RunnableSerializable[Input, Output]
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
Configure particular runnable fields at runtime.
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content
)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(
        configurable={"output_token_number": 200}
    ).invoke("tell me something about chess").content
)
Parameters
kwargs (Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) –
Return type
RunnableSerializable[Input, Output]
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) –
values (Any) –
Return type
Model
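A minimal sketch of what this skips (illustrative only; the data must already be valid):
# Dump an existing, validated instance and rebuild it without re-validating.
data = llm.dict()  # assumes llm is an existing MLXPipeline instance
clone = MLXPipeline.construct(**data)  # no validation or coercion runs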
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model) –
Returns
new model instance
Return type
Model
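For instance, a hedged sketch (pipeline_kwargs is an MLXPipeline field; the override value is illustrative):
# Duplicate the model, overriding one field. Values in update are not
# validated, so they must be trusted.
tweaked = llm.copy(update={"pipeline_kwargs": {"max_tokens": 50}}, deep=True)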
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
Parameters
kwargs (Any) –
Return type
Dict
classmethod from_model_id(model_id: str, tokenizer_config: Optional[dict] = None, adapter_file: Optional[str] = None, lazy: bool = False, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) → MLXPipeline[source]¶
Construct the pipeline object from model_id and task.
Parameters
model_id (str) –
tokenizer_config (Optional[dict]) –
adapter_file (Optional[str]) –
lazy (bool) –
pipeline_kwargs (Optional[dict]) –
kwargs (Any) –
Return type
MLXPipeline
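A hedged sketch (the model id and pipeline_kwargs keys are illustrative; any MLX-converted model from the Hugging Face Hub should work, and the mlx-lm package must be installed):
from langchain_community.llms import MLXPipeline

llm = MLXPipeline.from_model_id(
    model_id="mlx-community/quantized-gemma-2b",  # illustrative model id
    pipeline_kwargs={"max_tokens": 10, "temp": 0.1},  # assumed generation kwargs
)
print(llm.invoke("Once upon a time"))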
classmethod from_orm(obj: Any) → Model¶
Parameters
obj (Any) –
Return type
Model
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[str]) – List of string prompts.
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
tags (Optional[Union[List[str], List[List[str]]]]) –
metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) –
run_name (Optional[Union[str, List[str]]]) –
run_id (Optional[Union[UUID, List[Optional[UUID]]]]) –
**kwargs –
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
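For instance, a minimal sketch batching two prompts (assuming llm is any LLM instance):
result = llm.generate(["Tell me a joke", "Tell me a poem"], stop=["\n\n"])
for candidates in result.generations:  # one list of Generations per prompt
    print(candidates[0].text)
print(result.llm_output)  # provider-specific metadata; may be None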
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
Parameters
config (Optional[RunnableConfig]) –
Return type
Graph
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config (Optional[RunnableConfig]) – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
Return type
Type[BaseModel]
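For example (sketch):
# The returned schema is an ordinary pydantic model; .schema() yields the
# corresponding JSON schema.
print(llm.get_input_schema().schema())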
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
Return type
List[str]
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
Parameters
suffix (Optional[str]) –
name (Optional[str]) –
Return type
str
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text (str) – The string input to tokenize.
Returns
The integer number of tokens in the text.
Return type
int
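A small sketch (the 2048-token budget below is an assumed limit, not one read from the model):
prompt = "Summarize the following article: ..."
if llm.get_num_tokens(prompt) > 2048:  # assumed context budget
    print("Prompt may not fit in the context window")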
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages (List[BaseMessage]) – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
Return type
int
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config (Optional[RunnableConfig]) – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
Return type
Type[BaseModel]
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
Parameters
config (Optional[RunnableConfig]) –
Return type
List[BasePromptTemplate]
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text (str) – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
Return type
List[int]
invoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Transform a single input into an output. Override to implement.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – The input to the runnable.
config (Optional[RunnableConfig]) – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
stop (Optional[List[str]]) –
kwargs (Any) –
Returns
The output of the runnable.
Return type
str
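For instance, a hedged sketch (the tags and metadata values are illustrative):
answer = llm.invoke(
    "Q: What is the capital of France? A:",
    config={"tags": ["demo"], "metadata": {"user_id": "123"}},  # tracing info
    stop=["\n"],  # cut generation at the first newline
)
print(answer)  # LLMs return a plain string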
classmethod is_lc_serializable() → bool¶
Is this class serializable?
Return type
bool
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
by_alias (bool) –
skip_defaults (Optional[bool]) –
exclude_unset (bool) –
exclude_defaults (bool) –
exclude_none (bool) –
encoder (Optional[Callable[[Any], Any]]) –
models_as_dict (bool) –
dumps_kwargs (Any) –
Return type
unicode
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
Return type
List[str]
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
Example
from langchain_core.runnables import RunnableLambda

def _lambda(x: int) -> int:
    return x + 1

runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]
Return type
Runnable[List[Input], List[Output]]
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
Parameters
path (Union[str, Path]) –
content_type (unicode) –
encoding (unicode) –
proto (Protocol) –
allow_pickle (bool) –
Return type
Model
classmethod parse_obj(obj: Any) → Model¶
Parameters
obj (Any) –
Return type
Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
Parameters
b (Union[str, bytes]) –
content_type (unicode) –
encoding (unicode) –
proto (Protocol) –
allow_pickle (bool) –
Return type
Model
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Pick single key:
import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
Pick list of keys:
from typing import Any

import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)

def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")

chain = RunnableMap(
    str=as_str,
    json=as_json,
    bytes=RunnableLambda(as_bytes)
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
Parameters
keys (Union[str, List[str]]) –
Return type
RunnableSerializable[Any, Any]
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this Runnable with Runnable-like objects to make a RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | …
Example
from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

def mul_two(x: int) -> int:
    return x * 2

runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)

sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
Parameters
others (Union[Runnable[Any, Other], Callable[[Any], Other]]) –
name (Optional[str]) –
Return type
RunnableSerializable[Input, Other]
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
messages (List[BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
BaseMessage
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path (Union[Path, str]) – Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
Parameters
by_alias (bool) –
ref_template (unicode) –
Return type
DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
Parameters
by_alias (bool) –
ref_template (unicode) –
dumps_kwargs (Any) –
Return type
unicode
stream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) –
config (Optional[RunnableConfig]) –
stop (Optional[List[str]]) –
kwargs (Any) –
Return type
Iterator[str]
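For example (sketch):
# Chunks are plain strings; a model without native streaming yields its
# entire invoke() result as a single chunk.
for chunk in llm.stream("Write a haiku about autumn"):
    print(chunk, end="", flush=True)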
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
Serialize the runnable to JSON.
Return type
Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented¶
Return type
SerializedNotImplemented
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
Parameters
input (Iterator[Input]) –
config (Optional[RunnableConfig]) –
kwargs (Optional[Any]) –
Return type
Iterator[Output]
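A minimal sketch (for string inputs, the default implementation concatenates the chunks before calling stream):
def chunks():
    yield "Tell me "
    yield "a joke"

for piece in llm.transform(chunks()):
    print(piece, end="")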
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) –
Return type
None
classmethod validate(value: Any) → Model¶
Parameters
value (Any) –
Return type
Model
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
Parameters
config (Optional[RunnableConfig]) –
kwargs (Any) –
Return type
Runnable[Input, Output]
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,), exception_key: Optional[str] = None) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Example
from typing import Iterator

from langchain_core.runnables import RunnableGenerator

def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""

def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"

runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print(''.join(runnable.stream({})))  # foo bar
Parameters
fallbacks (Sequence[Runnable[Input, Output]]) – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle (Tuple[Type[BaseException], ...]) – A tuple of exception types to handle.
exception_key (Optional[str]) – If string is specified then handled exceptions will be passed
to fallbacks as part of the input under the specified key. If None,
exceptions will not be passed to fallbacks. If used, the base runnable
and its fallbacks must accept a dictionary as input.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
Return type
RunnableWithFallbacksT[Input, Output]
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
Example: see the sketch after the parameter list below.
Parameters
on_start (Optional[Listener]) –
on_end (Optional[Listener]) –
on_error (Optional[Listener]) –
Return type
Runnable[Input, Output]
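A hedged sketch of the listener hooks (the Run fields used are those listed above):
from langchain_core.runnables import RunnableLambda

def on_start(run):
    print("started:", run.start_time)

def on_end(run):
    print("ended:", run.end_time)

chain = RunnableLambda(lambda x: x + 1).with_listeners(
    on_start=on_start,
    on_end=on_end,
)
chain.invoke(1)  # listeners fire around the run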
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Example: see the sketch after the parameter list below.
Parameters
retry_if_exception_type (Tuple[Type[BaseException], ...]) – A tuple of exception types to retry on
wait_exponential_jitter (bool) – Whether to add jitter to the wait time
between retries
stop_after_attempt (int) – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
Return type
Runnable[Input, Output]
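A sketch of the retry behavior described above:
from langchain_core.runnables import RunnableLambda

count = 0

def flaky(x: int) -> int:
    global count
    count += 1
    if count < 3:
        raise ValueError("transient failure")
    return x

runnable = RunnableLambda(flaky).with_retry(
    retry_if_exception_type=(ValueError,),
    wait_exponential_jitter=False,  # deterministic waits for this sketch
    stop_after_attempt=3,
)
print(runnable.invoke(7))  # fails twice, then succeeds -> 7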
with_structured_output(schema: Union[Dict, Type[BaseModel]], **kwargs: Any) → Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]¶
[Beta] Implement this if there is a way of steering the model to generate responses that match a given schema.
Parameters
schema (Union[Dict, Type[BaseModel]]) –
kwargs (Any) –
Return type
Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
Parameters
input_type (Optional[Type[Input]]) –
output_type (Optional[Type[Output]]) –
Return type
Runnable[Input, Output]
property InputType: TypeAlias¶
Get the input type for this runnable.
property OutputType: Type[str]¶
Get the output type for this runnable.
property config_specs: List[ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain_community.llms.moonshot.MoonshotCommon¶
class langchain_community.llms.moonshot.MoonshotCommon[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param base_url: str = 'https://api.moonshot.cn/v1'¶
param max_tokens: int = 1024¶
Maximum number of tokens to generate.
param model_name: str = 'moonshot-v1-8k' (alias 'model')¶
Model name. Available models listed here: https://platform.moonshot.cn/pricing
param moonshot_api_key: Optional[SecretStr] = None (alias 'api_key')¶
Moonshot API key. Get it here: https://platform.moonshot.cn/console/api-keys
Constraints
type = string
writeOnly = True
format = password
param temperature: float = 0.3¶
Temperature parameter (higher values make the model more creative).
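A hedged usage sketch (assuming the Moonshot LLM wrapper, which shares these parameters, and a valid key exported as MOONSHOT_API_KEY):
import os

from langchain_community.llms.moonshot import Moonshot

os.environ["MOONSHOT_API_KEY"] = "sk-..."  # placeholder key
llm = Moonshot(model="moonshot-v1-8k", temperature=0.3, max_tokens=1024)
print(llm.invoke("Hello"))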
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) –
values (Any) –
Return type
Model
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model) –
Returns
new model instance
Return type
Model
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
by_alias (bool) –
skip_defaults (Optional[bool]) –
exclude_unset (bool) –
exclude_defaults (bool) –
exclude_none (bool) –
Return type
DictStrAny
classmethod from_orm(obj: Any) → Model¶
Parameters
obj (Any) –
Return type
Model
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
by_alias (bool) –
skip_defaults (Optional[bool]) –
exclude_unset (bool) –
exclude_defaults (bool) –
exclude_none (bool) –
encoder (Optional[Callable[[Any], Any]]) –
models_as_dict (bool) –
dumps_kwargs (Any) –
Return type
unicode
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
Parameters
path (Union[str, Path]) –
content_type (unicode) –
encoding (unicode) –
proto (Protocol) –
allow_pickle (bool) –
Return type
Model
classmethod parse_obj(obj: Any) → Model¶
Parameters
obj (Any) –
Return type
Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
Parameters
b (Union[str, bytes]) –
content_type (unicode) –
encoding (unicode) –
proto (Protocol) –
allow_pickle (bool) –
Return type
Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
Parameters
by_alias (bool) –
ref_template (unicode) –
Return type
DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
Parameters
by_alias (bool) –
ref_template (unicode) –
dumps_kwargs (Any) –
Return type
unicode
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) –
Return type
None
classmethod validate(value: Any) → Model¶
Parameters
value (Any) –
Return type
Model
property lc_secrets: dict¶
A map of constructor argument names to secret ids.
For example, {"moonshot_api_key": "MOONSHOT_API_KEY"}
langchain_community.llms.xinference.Xinference¶
class langchain_community.llms.xinference.Xinference[source]¶
Bases: LLM
Xinference large-scale model inference service.
To use, you should have the xinference library installed:
pip install "xinference[all]"
Check out: https://github.com/xorbitsai/inference
To run, you need to start a Xinference supervisor on one server and Xinference workers on the other servers.
Example
To start a local instance of Xinference, run
$ xinference
You can also deploy Xinference in a distributed cluster. Here are the steps:
Starting the supervisor:
$ xinference-supervisor
Starting the worker:
$ xinference-worker
Then, launch a model using command line interface (CLI).
Example:
$ xinference launch -n orca -s 3 -q q4_0
It will return a model UID. Then, you can use Xinference with LangChain.
Example:
from langchain_community.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997",
    model_uid={model_uid}  # replace model_uid with the model UID returned from launching the model
)

llm(
    prompt="Q: where can we visit in the capital of France? A:",
    generate_config={"max_tokens": 1024, "stream": True},
)
To view all the supported builtin models, run:
$ xinference list --all
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Union[BaseCache, bool, None] = None¶
Whether to cache the response.
If true, will use the global cache.
If false, will not use a cache.
If None, will use the global cache if it's set, otherwise no cache.
If instance of BaseCache, will use the provided cache.
Caching is not currently supported for streaming methods of models.
param callback_manager: Optional[BaseCallbackManager] = None¶
[DEPRECATED]
param callbacks: Callbacks = None¶
Callbacks to add to the run trace.
param client: Any = None¶
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Required]¶
Keyword arguments to be passed to xinference.LLM
param model_uid: Optional[str] = None¶
UID of the launched model
param server_url: Optional[str] = None¶
URL of the xinference server
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
[Deprecated] Check Cache and run the LLM on the given prompt and input.
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
prompt (str) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
str
async abatch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
Parameters
inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Any) –
Return type
List[str]
async abatch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → AsyncIterator[Tuple[int, Union[Output, Exception]]]¶
Run ainvoke in parallel on a list of inputs,
yielding results as they complete.
Parameters
inputs (List[Input]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Optional[Any]) –
Return type
AsyncIterator[Tuple[int, Union[Output, Exception]]]
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[str]) – List of string prompts.
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
tags (Optional[Union[List[str], List[List[str]]]]) –
metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) –
run_name (Optional[Union[str, List[str]]]) –
run_id (Optional[Union[UUID, List[Optional[UUID]]]]) –
**kwargs –
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
async ainvoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) –
config (Optional[RunnableConfig]) –
stop (Optional[List[str]]) –
kwargs (Any) –
Return type
str
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use ainvoke instead.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use ainvoke instead.
Parameters
messages (List[BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
BaseMessage
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter
prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | llm | {"str": StrOutputParser()}
chain_with_assign = chain.assign(hello=itemgetter("str") | llm)

print(chain_with_assign.input_schema.schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}
Parameters
kwargs (Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) –
Return type
RunnableSerializable[Any, Any]
async astream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) –
config (Optional[RunnableConfig]) –
stop (Optional[List[str]]) –
kwargs (Any) –
Return type
AsyncIterator[str]
astream_events(input: Any, config: Optional[RunnableConfig] = None, *, version: Literal['v1'], include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → AsyncIterator[StreamEvent]¶
[Beta] Generate a stream of events.
Use to create an iterator over StreamEvents that provide real-time information
about the progress of the runnable, including StreamEvents from intermediate
results.
A StreamEvent is a dictionary with the following schema:
event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
name: str - The name of the runnable that generated the event.
run_id: str - randomly generated ID associated with the given execution of the runnable that emitted the event.
A child runnable that gets invoked as part of the execution of a
parent runnable is assigned its own unique ID.
tags: Optional[List[str]] - The tags of the runnable that generated the event.
metadata: Optional[Dict[str, Any]] - The metadata of the runnable that generated the event.
data: Dict[str, Any]
Below is a table that illustrates some events that might be emitted by various
chains. Metadata fields have been omitted from the table for brevity.
Chain definitions have been included after the table.
event | name | chunk | input | output
---|---|---|---|---
on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} |
on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | |
on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...}
on_llm_start | [model name] | | {'input': 'hello'} |
on_llm_stream | [model name] | 'Hello' | |
on_llm_end | [model name] | | 'Hello human!' |
on_chain_start | format_docs | | |
on_chain_stream | format_docs | "hello world!, goodbye world!" | |
on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!"
on_tool_start | some_tool | | {"x": 1, "y": "2"} |
on_tool_stream | some_tool | {"x": 1, "y": "2"} | |
on_tool_end | some_tool | | | {"x": 1, "y": "2"}
on_retriever_start | [retriever name] | | {"query": "hello"} |
on_retriever_chunk | [retriever name] | {documents: [...]} | |
on_retriever_end | [retriever name] | | {"query": "hello"} | {documents: [...]}
on_prompt_start | [template_name] | | {"question": "hello"} |
on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...])
Here are declarations associated with the events shown above:
format_docs:
def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
some_tool:
@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}
prompt:
template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example:
from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v1")
]

# will produce the following events (run_id has been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]
Parameters
input (Any) – The input to the runnable.
config (Optional[RunnableConfig]) – The config to use for the runnable.
version (Literal['v1']) – The version of the schema to use.
Currently only version 1 is available.
No default will be assigned until the API is stabilized.
include_names (Optional[Sequence[str]]) – Only include events from runnables with matching names.
include_types (Optional[Sequence[str]]) – Only include events from runnables with matching types.
include_tags (Optional[Sequence[str]]) – Only include events from runnables with matching tags.
exclude_names (Optional[Sequence[str]]) – Exclude events from runnables with matching names.
exclude_types (Optional[Sequence[str]]) – Exclude events from runnables with matching types.
exclude_tags (Optional[Sequence[str]]) – Exclude events from runnables with matching tags.
kwargs (Any) – Additional keyword arguments to pass to the runnable.
These will be passed to astream_log as this implementation
of astream_events is built on top of astream_log.
Returns
An async stream of StreamEvents.
Return type
AsyncIterator[StreamEvent]
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input (Any) – The input to the runnable.
config (Optional[RunnableConfig]) – The config to use for the runnable.
diff (bool) – Whether to yield diffs between each step, or the current state.
with_streamed_output_list (bool) – Whether to yield the streamed_output list.
include_names (Optional[Sequence[str]]) – Only include logs with these names.
include_types (Optional[Sequence[str]]) – Only include logs with these types.
include_tags (Optional[Sequence[str]]) – Only include logs with these tags.
exclude_names (Optional[Sequence[str]]) – Exclude logs with these names.
exclude_types (Optional[Sequence[str]]) – Exclude logs with these types.
exclude_tags (Optional[Sequence[str]]) – Exclude logs with these tags.
kwargs (Any) –
Return type
Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]
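For example, a sketch (assuming an async context and any runnable llm):
import asyncio

async def main() -> None:
    # diff=True yields RunLogPatch objects; applying their jsonpatch ops in
    # order reconstructs the run state.
    async for patch in llm.astream_log("Tell me a joke", diff=True):
        print(patch.ops)

asyncio.run(main())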
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
Parameters
input (AsyncIterator[Input]) –
config (Optional[RunnableConfig]) –
kwargs (Optional[Any]) –
Return type
AsyncIterator[Output]
batch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
Parameters
inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Any) –
Return type
List[str]
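For instance (sketch):
outputs = llm.batch(
    ["prompt one", "prompt two", "prompt three"],
    config={"max_concurrency": 2},  # cap the number of parallel threads
    return_exceptions=True,  # exceptions are returned in place of outputs
)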
batch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → Iterator[Tuple[int, Union[Output, Exception]]]¶
Run invoke in parallel on a list of inputs,
yielding results as they complete.
Parameters
inputs (List[Input]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Optional[Any]) –
Return type
Iterator[Tuple[int, Union[Output, Exception]]]
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
Useful when a runnable in a chain requires an argument that is not
in the output of the previous runnable or included in the user input.
Example:
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model='llama2')

# Without bind.
chain = (
    llm
    | StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = (
    llm.bind(stop=["three"])
    | StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
Parameters
kwargs (Any) –
Return type
Runnable[Input, Output]
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include (Optional[Sequence[str]]) – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
Return type
Type[BaseModel]
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
Configure alternatives for runnables that can be set at runtime.
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-sonnet-20240229"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI()
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(
        configurable={"llm": "openai"}
    ).invoke("which organization created you?").content
)
Parameters
which (ConfigurableField) –
default_key (str) –
prefix_keys (bool) –
kwargs (Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) –
Return type
RunnableSerializable[Input, Output]
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
Configure particular runnable fields at runtime.
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content
)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(
        configurable={"output_token_number": 200}
    ).invoke("tell me something about chess").content
)
Parameters
kwargs (Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) –
Return type
RunnableSerializable[Input, Output]
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) –
values (Any) –
Return type
Model
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model) –
Returns
new model instance
Return type
Model
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
Parameters
kwargs (Any) –
Return type
Dict
classmethod from_orm(obj: Any) → Model¶
Parameters
obj (Any) –
Return type
Model
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[str]) – List of string prompts.
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
tags (Optional[Union[List[str], List[List[str]]]]) –
metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) –
run_name (Optional[Union[str, List[str]]]) –
run_id (Optional[Union[UUID, List[Optional[UUID]]]]) –
**kwargs –
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
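For illustration, a minimal sketch of a batched call (the server_url and model_uid values are placeholder assumptions for a locally running Xinference server):
from langchain_community.llms import Xinference

llm = Xinference(
    server_url="http://localhost:9997",  # assumed local server
    model_uid="model_uid",               # placeholder uid
)
result = llm.generate(
    ["Tell me a joke.", "Name three prime numbers."],
    stop=["\n\n"],
)
# LLMResult.generations holds one list of candidate Generations per prompt.
for candidates in result.generations:
    print(candidates[0].text)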
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
Parameters
config (Optional[RunnableConfig]) –
Return type
Graph
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config (Optional[RunnableConfig]) – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
Return type
Type[BaseModel]
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
Return type
List[str]
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
Parameters
suffix (Optional[str]) –
name (Optional[str]) –
Return type
str
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text (str) – The string input to tokenize.
Returns
The integer number of tokens in the text.
Return type
int
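A small sketch of a pre-flight check, reusing the llm instance from the generate example above (the 4096-token window is an assumed figure, not a documented property):
prompt = "Summarize the plot of Hamlet in one paragraph."
n_tokens = llm.get_num_tokens(prompt)
if n_tokens > 4096:  # assumed context window size
    raise ValueError(f"Prompt is {n_tokens} tokens and may not fit.")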
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages (List[BaseMessage]) – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
Return type
int
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config (Optional[RunnableConfig]) – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
Return type
Type[BaseModel]
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
Parameters
config (Optional[RunnableConfig]) –
Return type
List[BasePromptTemplate]
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text (str) – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
Return type
List[int]
invoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Transform a single input into an output. Override to implement.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – The input to the runnable.
config (Optional[RunnableConfig]) – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
stop (Optional[List[str]]) –
kwargs (Any) –
Returns
The output of the runnable.
Return type
str
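For example, a sketch of a single call with a stop sequence and tracing config (the tag and metadata values are arbitrary):
text = llm.invoke(
    "List the first five prime numbers, comma separated:",
    config={"tags": ["demo"], "metadata": {"source": "api-reference"}},
    stop=["\n"],
)
print(text)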
classmethod is_lc_serializable() → bool¶
Is this class serializable?
Return type
bool
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
by_alias (bool) –
skip_defaults (Optional[bool]) –
exclude_unset (bool) –
exclude_defaults (bool) –
exclude_none (bool) –
encoder (Optional[Callable[[Any], Any]]) –
models_as_dict (bool) –
dumps_kwargs (Any) –
Return type
unicode
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
Return type
List[str]
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
Example
from langchain_core.runnables import RunnableLambda
def _lambda(x: int) -> int:
return x + 1
runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3])) # [2, 3, 4]
Return type
Runnable[List[Input], List[Output]]
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
Parameters
path (Union[str, Path]) –
content_type (unicode) –
encoding (unicode) –
proto (Protocol) –
allow_pickle (bool) –
Return type
Model
classmethod parse_obj(obj: Any) → Model¶
Parameters
obj (Any) –
Return type
Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
Parameters
b (Union[str, bytes]) –
content_type (unicode) –
encoding (unicode) –
proto (Protocol) –
allow_pickle (bool) –
Return type
Model
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Pick a single key:
import json
from langchain_core.runnables import RunnableLambda, RunnableMap
as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)
chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}
json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
Pick a list of keys:
from typing import Any
import json
from langchain_core.runnables import RunnableLambda, RunnableMap
as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
def as_bytes(x: Any) -> bytes:
return bytes(x, "utf-8")
chain = RunnableMap(
str=as_str,
json=as_json,
bytes=RunnableLambda(as_bytes)
)
chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
Parameters
keys (Union[str, List[str]]) –
Return type
RunnableSerializable[Any, Any]
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this Runnable with Runnable-like objects to make a RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | …
Example
from langchain_core.runnables import RunnableLambda
def add_one(x: int) -> int:
return x + 1
def mul_two(x: int) -> int:
return x * 2
runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4
sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
Parameters
Parameters
others (Union[Runnable[Any, Other], Callable[[Any], Other]]) –
name (Optional[str]) –
Return type
RunnableSerializable[Input, Other]
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
messages (List[BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
BaseMessage
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path (Union[Path, str]) – Path to file to save the LLM to.
Return type
None
Example:
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
Parameters
by_alias (bool) –
ref_template (unicode) –
Return type
DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
Parameters
by_alias (bool) –
ref_template (unicode) –
dumps_kwargs (Any) –
Return type
unicode
stream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) –
config (Optional[RunnableConfig]) –
stop (Optional[List[str]]) –
kwargs (Any) –
Return type
Iterator[str]
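A minimal streaming sketch; note that, per the default implementation above, a model without native streaming support will yield its full output as a single chunk:
for chunk in llm.stream("Tell me a short story about a lighthouse."):
    print(chunk, end="", flush=True)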
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
Serialize the runnable to JSON.
Return type
Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented¶
Return type
SerializedNotImplemented
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
Parameters
input (Iterator[Input]) –
config (Optional[RunnableConfig]) –
kwargs (Optional[Any]) –
Return type
Iterator[Output]
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) –
Return type
None
classmethod validate(value: Any) → Model¶
Parameters
value (Any) –
Return type
Model
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
Parameters
config (Optional[RunnableConfig]) –
kwargs (Any) –
Return type
Runnable[Input, Output]
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,), exception_key: Optional[str] = None) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Example
from typing import Iterator
from langchain_core.runnables import RunnableGenerator
def _generate_immediate_error(input: Iterator) -> Iterator[str]:
raise ValueError()
yield ""
def _generate(input: Iterator) -> Iterator[str]:
yield from "foo bar"
runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
[RunnableGenerator(_generate)]
)
print(''.join(runnable.stream({})))  # foo bar
Parameters
fallbacks (Sequence[Runnable[Input, Output]]) – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle (Tuple[Type[BaseException], ...]) – A tuple of exception types to handle.
exception_key (Optional[str]) – If string is specified then handled exceptions will be passed
to fallbacks as part of the input under the specified key. If None,
exceptions will not be passed to fallbacks. If used, the base runnable
and its fallbacks must accept a dictionary as input.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
Return type
RunnableWithFallbacksT[Input, Output]
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
Example:
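A minimal sketch: attach start/end listeners to a RunnableLambda (Run is the tracing schema object described above):
import time

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

def task(seconds: int) -> int:
    time.sleep(seconds)
    return seconds

def log_start(run: Run) -> None:
    print("started at:", run.start_time)

def log_end(run: Run) -> None:
    print("ended at:", run.end_time)

chain = RunnableLambda(task).with_listeners(on_start=log_start, on_end=log_end)
chain.invoke(2)  # prints the start and end timestamps around a 2s task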
Parameters
on_start (Optional[Listener]) –
on_end (Optional[Listener]) –
on_error (Optional[Listener]) –
Return type
Runnable[Input, Output]
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Example:
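A minimal sketch: retry a transiently failing function on ValueError:
from langchain_core.runnables import RunnableLambda

attempts = 0

def flaky(x: int) -> int:
    global attempts
    attempts += 1
    if attempts < 3:
        raise ValueError("transient failure")
    return x

runnable = RunnableLambda(flaky).with_retry(
    retry_if_exception_type=(ValueError,),
    stop_after_attempt=3,
)
print(runnable.invoke(1))  # succeeds on the third attempt
print(attempts)            # 3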
Parameters
retry_if_exception_type (Tuple[Type[BaseException], ...]) – A tuple of exception types to retry on
wait_exponential_jitter (bool) – Whether to add jitter to the wait time
between retries
stop_after_attempt (int) – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
Return type
Runnable[Input, Output]
with_structured_output(schema: Union[Dict, Type[BaseModel]], **kwargs: Any) → Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]¶
[Beta] Implement this if there is a way of steering the model to generate responses that match a given schema.
Notes
Parameters
schema (Union[Dict, Type[BaseModel]]) –
kwargs (Any) –
Return type
Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
Parameters
input_type (Optional[Type[Input]]) –
output_type (Optional[Type[Output]]) –
Return type
Runnable[Input, Output]
property InputType: TypeAlias¶
Get the input type for this runnable.
property OutputType: Type[str]¶
Get the output type for this runnable.
property config_specs: List[ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using Xinference¶
Xorbits Inference (Xinference)
langchain_community.llms.volcengine_maas.VolcEngineMaasLLM¶
class langchain_community.llms.volcengine_maas.VolcEngineMaasLLM[source]¶
Bases: LLM, VolcEngineMaasBase
Volc Engine MaaS hosts a plethora of models that you can use through this class.
To use it, install the volcengine Python package and set the access key and secret key, either via environment variables or by passing them directly to this class.
Access key and secret key are required parameters; see https://www.volcengine.com/docs/6291/65568 for help obtaining them.
Example
from langchain_community.llms import VolcEngineMaasLLM
model = VolcEngineMaasLLM(model="skylark-lite-public",
volc_engine_maas_ak="your_ak",
volc_engine_maas_sk="your_sk")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Union[BaseCache, bool, None] = None¶
Whether to cache the response.
If true, will use the global cache.
If false, will not use a cache
If None, will use the global cache if it’s set, otherwise no cache.
If instance of BaseCache, will use the provided cache.
Caching is not currently supported for streaming methods of models.
param callback_manager: Optional[BaseCallbackManager] = None¶
[DEPRECATED]
param callbacks: Callbacks = None¶
Callbacks to add to the run trace.
param client: Any = None¶
param connect_timeout: Optional[int] = 60¶
Timeout for connecting to the volc engine maas endpoint. Default is 60 seconds.
param endpoint: Optional[str] = 'maas-api.ml-platform-cn-beijing.volces.com'¶
Endpoint of the VolcEngineMaas LLM.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model: str = 'skylark-lite-public'¶
Model name. You can check model details at
https://www.volcengine.com/docs/82379/1133187
and choose other models by changing this field.
param model_kwargs: Dict[str, Any] [Optional]¶
Model-specific arguments; see the model page for details.
param model_version: Optional[str] = None¶
Model version. Only used by the Moonshot large language model;
see https://www.volcengine.com/docs/82379/1158281 for details.
param read_timeout: Optional[int] = 60¶
Timeout for reading a response from the volc engine maas endpoint.
Default is 60 seconds.
param region: Optional[str] = 'Region'¶
Region of the VolcEngineMaas LLM.
param streaming: bool = False¶
Whether to stream the results.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: Optional[float] = 0.95¶
A non-negative float that tunes the degree of randomness in generation.
param top_p: Optional[float] = 0.8¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
param volc_engine_maas_ak: Optional[SecretStr] = None¶
Access key for volc engine.
Constraints
type = string
writeOnly = True
format = password
param volc_engine_maas_sk: Optional[SecretStr] = None¶
Secret key for volc engine.
Constraints
type = string
writeOnly = True
format = password
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
[Deprecated] Check Cache and run the LLM on the given prompt and input.
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
prompt (str) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
str
async abatch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
Parameters
inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Any) –
Return type
List[str]
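A hedged sketch of running the async batch under asyncio (credentials below are placeholders, mirroring the class example above):
import asyncio

from langchain_community.llms import VolcEngineMaasLLM

llm = VolcEngineMaasLLM(
    model="skylark-lite-public",
    volc_engine_maas_ak="your_ak",  # placeholder access key
    volc_engine_maas_sk="your_sk",  # placeholder secret key
)

async def main() -> None:
    outputs = await llm.abatch(["Hello!", "What is 2 + 2?"])
    print(outputs)  # one output string per input

asyncio.run(main())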
async abatch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → AsyncIterator[Tuple[int, Union[Output, Exception]]]¶
Run ainvoke in parallel on a list of inputs,
yielding results as they complete.
Parameters
inputs (List[Input]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Optional[Any]) –
Return type
AsyncIterator[Tuple[int, Union[Output, Exception]]]
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[str]) – List of string prompts.
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
tags (Optional[Union[List[str], List[List[str]]]]) –
metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) –
run_name (Optional[Union[str, List[str]]]) –
run_id (Optional[Union[UUID, List[Optional[UUID]]]]) –
**kwargs –
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
async ainvoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) –
config (Optional[RunnableConfig]) –
stop (Optional[List[str]]) –
kwargs (Any) –
Return type
str
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use ainvoke instead.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use ainvoke instead.
Parameters
messages (List[BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
BaseMessage
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | llm | {"str": StrOutputParser()}
chain_with_assign = chain.assign(hello=itemgetter("str") | llm)
print(chain_with_assign.input_schema.schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}
Parameters
kwargs (Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) –
Return type
RunnableSerializable[Any, Any]
async astream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) –
config (Optional[RunnableConfig]) –
stop (Optional[List[str]]) –
kwargs (Any) –
Return type
AsyncIterator[str]
astream_events(input: Any, config: Optional[RunnableConfig] = None, *, version: Literal['v1'], include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → AsyncIterator[StreamEvent]¶
[Beta] Generate a stream of events.
Use to create an iterator over StreamEvents that provide real-time information
about the progress of the runnable, including StreamEvents from intermediate
results.
A StreamEvent is a dictionary with the following schema:
event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
name: str - The name of the runnable that generated the event.
run_id: str - randomly generated ID associated with the given execution of the runnable that emitted the event.
A child runnable that gets invoked as part of the execution of a
parent runnable is assigned its own unique ID.
tags: Optional[List[str]] - The tags of the runnable that generated the event.
metadata: Optional[Dict[str, Any]] - The metadata of the runnable that generated the event.
data: Dict[str, Any]
Below is a table that illustrates some events that might be emitted by various
chains. Metadata fields have been omitted from the table for brevity.
Chain definitions have been included after the table.
event | name | chunk | input | output
on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} |
on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | |
on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...}
on_llm_start | [model name] | | {'input': 'hello'} |
on_llm_stream | [model name] | 'Hello' | |
on_llm_end | [model name] | | 'Hello human!' |
on_chain_start | format_docs | | |
on_chain_stream | format_docs | "hello world!, goodbye world!" | |
on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!"
on_tool_start | some_tool | | {"x": 1, "y": "2"} |
on_tool_stream | some_tool | {"x": 1, "y": "2"} | |
on_tool_end | some_tool | | | {"x": 1, "y": "2"}
on_retriever_start | [retriever name] | | {"query": "hello"} |
on_retriever_chunk | [retriever name] | {documents: [...]} | |
on_retriever_end | [retriever name] | | {"query": "hello"} | {documents: [...]}
on_prompt_start | [template_name] | | {"question": "hello"} |
on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...])
Here are declarations associated with the events shown above:
format_docs:
def format_docs(docs: List[Document]) -> str:
'''Format the docs.'''
return ", ".join([doc.page_content for doc in docs])
format_docs = RunnableLambda(format_docs)
some_tool:
@tool
def some_tool(x: int, y: str) -> dict:
'''Some_tool.'''
return {"x": x, "y": y}
prompt:
template = ChatPromptTemplate.from_messages(
[("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example:
from langchain_core.runnables import RunnableLambda
async def reverse(s: str) -> str:
return s[::-1]
chain = RunnableLambda(func=reverse)
events = [
event async for event in chain.astream_events("hello", version="v1")
]
# will produce the following events (run_id has been omitted for brevity):
[
{
"data": {"input": "hello"},
"event": "on_chain_start",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"chunk": "olleh"},
"event": "on_chain_stream",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"output": "olleh"},
"event": "on_chain_end",
"metadata": {},
"name": "reverse",
"tags": [],
},
]
Parameters
input (Any) – The input to the runnable.
config (Optional[RunnableConfig]) – The config to use for the runnable.
version (Literal['v1']) – The version of the schema to use.
Currently only version 1 is available.
No default will be assigned until the API is stabilized.
include_names (Optional[Sequence[str]]) – Only include events from runnables with matching names.
include_types (Optional[Sequence[str]]) – Only include events from runnables with matching types.
include_tags (Optional[Sequence[str]]) – Only include events from runnables with matching tags.
exclude_names (Optional[Sequence[str]]) – Exclude events from runnables with matching names.
exclude_types (Optional[Sequence[str]]) – Exclude events from runnables with matching types.
exclude_tags (Optional[Sequence[str]]) – Exclude events from runnables with matching tags.
kwargs (Any) – Additional keyword arguments to pass to the runnable.
These will be passed to astream_log as this implementation
of astream_events is built on top of astream_log.
Returns
An async stream of StreamEvents.
Return type
AsyncIterator[StreamEvent]
Notes
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input (Any) – The input to the runnable.
config (Optional[RunnableConfig]) – The config to use for the runnable.
diff (bool) – Whether to yield diffs between each step, or the current state.
with_streamed_output_list (bool) – Whether to yield the streamed_output list.
include_names (Optional[Sequence[str]]) – Only include logs with these names.
include_types (Optional[Sequence[str]]) – Only include logs with these types.
include_tags (Optional[Sequence[str]]) – Only include logs with these tags.
exclude_names (Optional[Sequence[str]]) – Exclude logs with these names.
exclude_types (Optional[Sequence[str]]) – Exclude logs with these types.
exclude_tags (Optional[Sequence[str]]) – Exclude logs with these tags.
kwargs (Any) –
Return type
Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]
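A usage sketch, reusing the llm instance and asyncio import from the abatch example above; with diff=True each yielded item is a RunLogPatch of jsonpatch ops:
async def tail_log() -> None:
    async for patch in llm.astream_log("Tell me a joke.", diff=True):
        print(patch)  # jsonpatch ops describing how the run state changed

asyncio.run(tail_log())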
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
Parameters
input (AsyncIterator[Input]) –
config (Optional[RunnableConfig]) –
kwargs (Optional[Any]) –
Return type
AsyncIterator[Output]
batch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
Parameters
inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Any) –
Return type
List[str]
batch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → Iterator[Tuple[int, Union[Output, Exception]]]¶
Run invoke in parallel on a list of inputs,
yielding results as they complete.
Parameters
inputs (List[Input]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Optional[Any]) –
Return type
Iterator[Tuple[int, Union[Output, Exception]]]
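For illustration, a sketch of consuming results in completion order rather than input order (the yielded index maps each result back to its input):
questions = ["Define entropy.", "Define enthalpy.", "Define free energy."]
for i, output in llm.batch_as_completed(questions, return_exceptions=True):
    if isinstance(output, Exception):
        print(f"input {i} failed: {output!r}")
    else:
        print(f"input {i}: {output}")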
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
Useful when a runnable in a chain requires an argument that is not
in the output of the previous runnable or included in the user input.
Example:
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
llm = ChatOllama(model='llama2')
# Without bind.
chain = (
llm
| StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'
# With bind.
chain = (
llm.bind(stop=["three"])
| StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
Parameters
kwargs (Any) –
Return type
Runnable[Input, Output]
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include (Optional[Sequence[str]]) – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
Return type
Type[BaseModel]
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
Configure alternatives for runnables that can be set at runtime.
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI
model = ChatAnthropic(
model_name="claude-3-sonnet-20240229"
).configurable_alternatives(
ConfigurableField(id="llm"),
default_key="anthropic",
openai=ChatOpenAI()
)
# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)
# uses ChatOpenAI
print(
model.with_config(
configurable={"llm": "openai"}
).invoke("which organization created you?").content
)
Parameters
which (ConfigurableField) –
default_key (str) –
prefix_keys (bool) –
kwargs (Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) –
Return type
RunnableSerializable[Input, Output]
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
Configure particular runnable fields at runtime.
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
model = ChatOpenAI(max_tokens=20).configurable_fields(
max_tokens=ConfigurableField(
id="output_token_number",
name="Max tokens in the output",
description="The maximum number of tokens in the output",
)
)
# max_tokens = 20
print(
"max_tokens_20: ",
model.invoke("tell me something about chess").content
)
# max_tokens = 200
print("max_tokens_200: ", model.with_config(
configurable={"output_token_number": 200}
).invoke("tell me something about chess").content
)
Parameters
kwargs (Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) –
Return type
RunnableSerializable[Input, Output]
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) –
values (Any) –
Return type
Model
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model) –
Returns
new model instance
Return type
Model
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
Parameters
kwargs (Any) –
Return type
Dict
classmethod from_orm(obj: Any) → Model¶
Parameters
obj (Any) –
Return type
Model
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[str]) – List of string prompts.
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
tags (Optional[Union[List[str], List[List[str]]]]) –
metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) –
run_name (Optional[Union[str, List[str]]]) –
run_id (Optional[Union[UUID, List[Optional[UUID]]]]) –
**kwargs –
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
Parameters
config (Optional[RunnableConfig]) –
Return type
Graph
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config (Optional[RunnableConfig]) – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
Return type
Type[BaseModel]
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
Return type
List[str]
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
Parameters
suffix (Optional[str]) –
name (Optional[str]) –
Return type
str
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text (str) – The string input to tokenize.
Returns
The integer number of tokens in the text.
Return type
int
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages (List[BaseMessage]) – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
Return type
int
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config (Optional[RunnableConfig]) – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
Return type
Type[BaseModel]
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
Parameters
config (Optional[RunnableConfig]) –
Return type
List[BasePromptTemplate]
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text (str) – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
Return type
List[int]
invoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Transform a single input into an output. Override to implement.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) – The input to the runnable.
config (Optional[RunnableConfig]) – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
stop (Optional[List[str]]) –
kwargs (Any) –
Returns
The output of the runnable.
Return type
str
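A minimal sketch, assuming llm is an instantiated LLM; the tags and stop words are hypothetical:
text = llm.invoke(
    "Write a haiku about the sea",
    config={"tags": ["demo"]},  # standard RunnableConfig keys
    stop=["\n\n"],
)
print(text)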
classmethod is_lc_serializable() → bool¶
Is this class serializable?
Return type
bool
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) –
by_alias (bool) –
skip_defaults (Optional[bool]) –
exclude_unset (bool) –
exclude_defaults (bool) –
exclude_none (bool) –
encoder (Optional[Callable[[Any], Any]]) –
models_as_dict (bool) –
dumps_kwargs (Any) –
Return type
unicode
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
Return type
List[str]
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
Example
from langchain_core.runnables import RunnableLambda
def _lambda(x: int) -> int:
return x + 1
runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3])) # [2, 3, 4]
Return type
Runnable[List[Input], List[Output]]
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
Parameters
path (Union[str, Path]) –
content_type (unicode) –
encoding (unicode) –
proto (Protocol) –
allow_pickle (bool) –
Return type
Model
classmethod parse_obj(obj: Any) → Model¶
Parameters
obj (Any) –
Return type
Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
Parameters
b (Union[str, bytes]) –
content_type (unicode) –
encoding (unicode) –
proto (Protocol) –
allow_pickle (bool) –
Return type
Model
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Pick single key:
import json
from langchain_core.runnables import RunnableLambda, RunnableMap
as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)
chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}
json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
Pick list of keys:
from typing import Any
import json
from langchain_core.runnables import RunnableLambda, RunnableMap
as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
def as_bytes(x: Any) -> bytes:
return bytes(x, "utf-8")
chain = RunnableMap(
str=as_str,
json=as_json,
bytes=RunnableLambda(as_bytes)
)
chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
Parameters
keys (Union[str, List[str]]) –
Return type
RunnableSerializable[Any, Any]
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this Runnable with Runnable-like objects to make a RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | …
Example
from langchain_core.runnables import RunnableLambda
def add_one(x: int) -> int:
return x + 1
def mul_two(x: int) -> int:
return x * 2
runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4
sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
Parameters
others (Union[Runnable[Any, Other], Callable[[Any], Other]]) –
name (Optional[str]) –
Return type
RunnableSerializable[Input, Other]
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
messages (List[BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
BaseMessage
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path (Union[Path, str]) – Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
Parameters
by_alias (bool) –
ref_template (unicode) –
Return type
DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
Parameters
by_alias (bool) –
ref_template (unicode) –
dumps_kwargs (Any) –
Return type
unicode
stream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) –
config (Optional[RunnableConfig]) –
stop (Optional[List[str]]) –
kwargs (Any) –
Return type
Iterator[str]
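A minimal sketch, assuming llm is an instantiated LLM; if the model does not implement native streaming, a single chunk containing the full output is yielded:
for chunk in llm.stream("Tell me a story"):
    print(chunk, end="", flush=True)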
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
Serialize the runnable to JSON.
Return type
Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented¶
Return type
SerializedNotImplemented
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
Parameters
input (Iterator[Input]) –
config (Optional[RunnableConfig]) –
kwargs (Optional[Any]) –
Return type
Iterator[Output]
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) –
Return type
None
classmethod validate(value: Any) → Model¶
Parameters
value (Any) –
Return type
Model
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
Parameters
config (Optional[RunnableConfig]) –
kwargs (Any) –
Return type
Runnable[Input, Output]
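A minimal sketch, assuming llm is an instantiated LLM: bind standard config keys once and reuse the configured runnable.
configured_llm = llm.with_config({"tags": ["demo"], "max_concurrency": 5})
configured_llm.invoke("hello")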
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,), exception_key: Optional[str] = None) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Example
from typing import Iterator
from langchain_core.runnables import RunnableGenerator
def _generate_immediate_error(input: Iterator) -> Iterator[str]:
raise ValueError()
yield ""
def _generate(input: Iterator) -> Iterator[str]:
yield from "foo bar"
runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
[RunnableGenerator(_generate)]
)
print(''.join(runnable.stream({})))  # foo bar
Parameters
fallbacks (Sequence[Runnable[Input, Output]]) – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle (Tuple[Type[BaseException], ...]) – A tuple of exception types to handle.
exception_key (Optional[str]) – If a string is specified, handled exceptions will be passed
to fallbacks as part of the input under the specified key. If None,
exceptions will not be passed to fallbacks. If used, the base runnable
and its fallbacks must accept a dictionary as input.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
Return type
RunnableWithFallbacksT[Input, Output]
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
Example:
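A minimal sketch of attaching listeners (the listener bodies are hypothetical; each listener receives the Run object described above):
from langchain_core.runnables import RunnableLambda

def log_start(run):
    print("started run", run.id)

def log_end(run):
    print("finished run", run.id)

chain = RunnableLambda(lambda x: x + 1).with_listeners(
    on_start=log_start, on_end=log_end
)
chain.invoke(1)  # prints the start and end log lines around the call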
Parameters
on_start (Optional[Listener]) –
on_end (Optional[Listener]) –
on_error (Optional[Listener]) –
Return type
Runnable[Input, Output]
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Example:
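A minimal sketch of retrying a flaky function (the failure counter is purely illustrative):
from langchain_core.runnables import RunnableLambda

attempts = 0

def flaky(x: int) -> int:
    global attempts
    attempts += 1
    if attempts < 3:
        raise ValueError("transient failure")
    return x

runnable = RunnableLambda(flaky).with_retry(
    retry_if_exception_type=(ValueError,),
    stop_after_attempt=3,
)
print(runnable.invoke(1))  # succeeds on the third attempt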
Parameters
retry_if_exception_type (Tuple[Type[BaseException], ...]) – A tuple of exception types to retry on
wait_exponential_jitter (bool) – Whether to add jitter to the wait time
between retries
stop_after_attempt (int) – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
Return type
Runnable[Input, Output]
with_structured_output(schema: Union[Dict, Type[BaseModel]], **kwargs: Any) → Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]¶
[Beta] Implement this if there is a way of steering the model to generate responses that match a given schema.
Notes
Parameters
schema (Union[Dict, Type[BaseModel]]) –
kwargs (Any) –
Return type
Runnable[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], Union[Dict, BaseModel]]
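A usage sketch only: the base implementation raises NotImplementedError, so this works only for model classes that override it; Joke is a hypothetical schema.
from langchain_core.pydantic_v1 import BaseModel

class Joke(BaseModel):
    setup: str
    punchline: str

structured_llm = llm.with_structured_output(Joke)
joke = structured_llm.invoke("Tell me a joke about cats")  # -> Joke instance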
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
Parameters
input_type (Optional[Type[Input]]) –
output_type (Optional[Type[Output]]) –
Return type
Runnable[Input, Output]
property InputType: TypeAlias¶
Get the input type for this runnable.
property OutputType: Type[str]¶
Get the output type for this runnable.
property config_specs: List[ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain_nvidia_trt.llms.StreamingResponseGenerator¶
class langchain_nvidia_trt.llms.StreamingResponseGenerator(llm: TritonTensorRTLLM, request_id: str, force_batch: bool, stop_words: Sequence[str])[source]¶
A Generator that provides the inference results from an LLM.
Instantiate the generator class.
Methods
__init__(llm, request_id, force_batch, ...)
Instantiate the generator class.
empty()
Return True if the queue is empty, False otherwise (not reliable!).
full()
Return True if the queue is full, False otherwise (not reliable!).
get([block, timeout])
Remove and return an item from the queue.
get_nowait()
Remove and return an item from the queue without blocking.
join()
Blocks until all items in the Queue have been gotten and processed.
put(item[, block, timeout])
Put an item into the queue.
put_nowait(item)
Put an item into the queue without blocking.
qsize()
Return the approximate size of the queue (not reliable!).
task_done()
Indicate that a formerly enqueued task is complete.
Parameters
llm (TritonTensorRTLLM) –
request_id (str) –
force_batch (bool) –
stop_words (Sequence[str]) –
__init__(llm: TritonTensorRTLLM, request_id: str, force_batch: bool, stop_words: Sequence[str]) → None[source]¶
Instantiate the generator class.
Parameters
llm (TritonTensorRTLLM) –
request_id (str) –
force_batch (bool) –
stop_words (Sequence[str]) –
Return type
None
empty()¶
Return True if the queue is empty, False otherwise (not reliable!).
This method is likely to be removed at some point. Use qsize() == 0
as a direct substitute, but be aware that either approach risks a race
condition where a queue can grow before the result of empty() or
qsize() can be used.
To create code that needs to wait for all queued tasks to be
completed, the preferred technique is to use the join() method.
full()¶
Return True if the queue is full, False otherwise (not reliable!).
This method is likely to be removed at some point. Use qsize() >= n
as a direct substitute, but be aware that either approach risks a race
condition where a queue can shrink before the result of full() or
qsize() can be used.
get(block=True, timeout=None)¶
Remove and return an item from the queue.
If optional args ‘block’ is true and ‘timeout’ is None (the default),
block if necessary until an item is available. If ‘timeout’ is
a non-negative number, it blocks at most ‘timeout’ seconds and raises
the Empty exception if no item was available within that time.
Otherwise (‘block’ is false), return an item if one is immediately
available, else raise the Empty exception (‘timeout’ is ignored
in that case).
get_nowait()¶
Remove and return an item from the queue without blocking.
Only get an item if one is immediately available. Otherwise
raise the Empty exception.
join()¶
Blocks until all items in the Queue have been gotten and processed.
The count of unfinished tasks goes up whenever an item is added to the
queue. The count goes down whenever a consumer thread calls task_done()
to indicate the item was retrieved and all work on it is complete.
When the count of unfinished tasks drops to zero, join() unblocks.
put(item, block=True, timeout=None)¶
Put an item into the queue.
If optional args ‘block’ is true and ‘timeout’ is None (the default),
block if necessary until a free slot is available. If ‘timeout’ is
a non-negative number, it blocks at most ‘timeout’ seconds and raises
the Full exception if no free slot was available within that time.
Otherwise (‘block’ is false), put an item on the queue if a free slot
is immediately available, else raise the Full exception (‘timeout’
is ignored in that case).
put_nowait(item)¶
Put an item into the queue without blocking.
Only enqueue the item if a free slot is immediately available.
Otherwise raise the Full exception.
qsize()¶
Return the approximate size of the queue (not reliable!).
task_done()¶
Indicate that a formerly enqueued task is complete.
Used by Queue consumer threads. For each get() used to fetch a task,
a subsequent call to task_done() tells the queue that the processing
on the task is complete.
If a join() is currently blocking, it will resume when all items
have been processed (meaning that a task_done() call was received
for every item that had been put() into the queue).
Raises a ValueError if called more times than there were items
placed in the queue.
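To make the get()/task_done()/join() contract concrete, a minimal sketch using a plain queue.Queue (the same idiom applies to this generator, which is a Queue subclass; the None sentinel is a convention of the sketch, not of this class):
import queue
import threading

q = queue.Queue()

def consumer():
    while True:
        item = q.get()        # blocks until an item is available
        q.task_done()         # mark the fetched item as processed
        if item is None:      # sentinel: the producer is done
            break
        print(item, end="")

threading.Thread(target=consumer).start()
for token in ["hello", " ", "world", None]:
    q.put(token)
q.join()  # unblocks once every put() item has been marked task_done()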
langchain_experimental.llms.anthropic_functions.TagParser¶
class langchain_experimental.llms.anthropic_functions.TagParser[source]¶
Parser for the tool tags.
A heavy-handed solution, but it’s fast for prototyping.
Might be re-implemented later to restrict scope to the limited grammar and
to improve efficiency.
Uses an HTML parser to parse a limited grammar that allows
for syntax of the form:
INPUT -> JUNK? VALUE*
JUNK -> JUNK_CHARACTER+
JUNK_CHARACTER -> whitespace | ,
VALUE -> <IDENTIFIER>DATA</IDENTIFIER> | OBJECT
OBJECT -> <IDENTIFIER>VALUE+</IDENTIFIER>
IDENTIFIER -> [a-Z][a-Z0-9_]*
DATA -> .*
Interprets the data to allow repetition of tags and recursion
to support representation of complex types.
^ Just another approximately wrong grammar specification.
Attributes
CDATA_CONTENT_ELEMENTS
Methods
__init__()
A heavy-handed solution, but it's fast for prototyping.
check_for_whole_start_tag(i)
clear_cdata_mode()
close()
Handle any buffered data.
feed(data)
Feed data to the parser.
get_starttag_text()
Return full source of start tag: '<...>'.
getpos()
Return current line number and offset.
goahead(end)
handle_charref(name)
handle_comment(data)
handle_data(data)
Hook when handling data.
handle_decl(decl)
handle_endtag(tag)
Hook when a tag is closed.
handle_entityref(name)
handle_pi(data)
handle_startendtag(tag, attrs)
handle_starttag(tag, attrs)
Hook when a new tag is encountered.
parse_bogus_comment(i[, report])
parse_comment(i[, report])
parse_declaration(i)
parse_endtag(i)
parse_html_declaration(i)
parse_marked_section(i[, report])
parse_pi(i)
parse_starttag(i)
reset()
Reset this instance.
set_cdata_mode(elem)
unknown_decl(data)
updatepos(i, j)
__init__() → None[source]¶
A heavy-handed solution, but it’s fast for prototyping.
Might be re-implemented later to restrict scope to the limited grammar and
to improve efficiency.
Uses an HTML parser to parse a limited grammar that allows
for syntax of the form:
INPUT -> JUNK? VALUE*
JUNK -> JUNK_CHARACTER+
JUNK_CHARACTER -> whitespace | ,
VALUE -> <IDENTIFIER>DATA</IDENTIFIER> | OBJECT
OBJECT -> <IDENTIFIER>VALUE+</IDENTIFIER>
IDENTIFIER -> [a-Z][a-Z0-9_]*
DATA -> .*
Interprets the data to allow repetition of tags and recursion
to support representation of complex types.
^ Just another approximately wrong grammar specification.
Return type
None
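A minimal usage sketch of the HTMLParser-based interface (the tag names are hypothetical; parsed values accumulate on the parser instance via the handle_* hooks documented below):
from langchain_experimental.llms.anthropic_functions import TagParser

parser = TagParser()
parser.feed("<tool_name>search</tool_name><query>cats</query>")
parser.close()  # flush any buffered data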
check_for_whole_start_tag(i)¶
clear_cdata_mode()¶
close()¶
Handle any buffered data.
feed(data)¶
Feed data to the parser.
Call this as often as you want, with as little or as much text
as you want (may include '\n').
get_starttag_text()¶
Return full source of start tag: ‘<…>’.
getpos()¶
Return current line number and offset.
goahead(end)¶
handle_charref(name)¶
handle_comment(data)¶
handle_data(data: str) → None[source]¶
Hook when handling data.
Parameters
data (str) –
Return type
None
handle_decl(decl)¶
handle_endtag(tag: str) → None[source]¶
Hook when a tag is closed.
Parameters
tag (str) –
Return type
None
handle_entityref(name)¶
handle_pi(data)¶
handle_startendtag(tag, attrs)¶
handle_starttag(tag: str, attrs: Any) → None[source]¶
Hook when a new tag is encountered.
Parameters
tag (str) –
attrs (Any) –
Return type
None
parse_bogus_comment(i, report=1)¶
parse_comment(i, report=1)¶
parse_declaration(i)¶
parse_endtag(i)¶
parse_html_declaration(i)¶
parse_marked_section(i, report=1)¶
parse_pi(i)¶
parse_starttag(i)¶
reset()¶
Reset this instance. Loses all unprocessed data.
set_cdata_mode(elem)¶
unknown_decl(data)¶
updatepos(i, j)¶
langchain_experimental.llms.rellm_decoder.RELLM¶
class langchain_experimental.llms.rellm_decoder.RELLM[source]¶
Bases: HuggingFacePipeline
RELLM wrapped LLM using HuggingFace Pipeline API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param batch_size: int = 4¶
Batch size to use when passing multiple documents to generate.
param cache: Union[BaseCache, bool, None] = None¶
Whether to cache the response.
If true, will use the global cache.
If false, will not use a cache.
If None, will use the global cache if it’s set, otherwise no cache.
If instance of BaseCache, will use the provided cache.
Caching is not currently supported for streaming methods of models.
param callback_manager: Optional[BaseCallbackManager] = None¶
[DEPRECATED]
param callbacks: Callbacks = None¶
Callbacks to add to the run trace.
param max_new_tokens: int = 200¶
Maximum number of new tokens to generate.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_id: str = 'gpt2'¶
Model name to use.
param model_kwargs: Optional[dict] = None¶
Keyword arguments passed to the model.
param pipeline_kwargs: Optional[dict] = None¶
Keyword arguments passed to the pipeline.
param regex: RegexPattern [Required]¶
The structured format to complete.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
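A hedged construction sketch: RELLM wraps a Hugging Face text-generation pipeline and constrains decoding to a regex; passing a prebuilt pipeline via the inherited pipeline field is an assumption that may vary across versions.
import regex  # the third-party `regex` package used for patterns
from transformers import pipeline
from langchain_experimental.llms.rellm_decoder import RELLM

# Assumption: a prebuilt HF pipeline can be supplied directly, as with
# other HuggingFacePipeline subclasses.
hf_pipeline = pipeline("text-generation", model="gpt2", max_new_tokens=10)
llm = RELLM(pipeline=hf_pipeline, regex=regex.compile(r"\d{1,3}"), max_new_tokens=10)
print(llm.invoke("How many planets orbit the sun? Answer: "))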
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
[Deprecated] Check Cache and run the LLM on the given prompt and input.
Notes
Deprecated since version 0.1.7: Use invoke instead.
Parameters
prompt (str) –
stop (Optional[List[str]]) –
callbacks (Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
str
async abatch(inputs: List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
Parameters
inputs (List[Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Any) –
Return type
List[str]
async abatch_as_completed(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → AsyncIterator[Tuple[int, Union[Output, Exception]]]¶
Run ainvoke in parallel on a list of inputs,
yielding results as they complete.
Parameters
inputs (List[Input]) –
config (Optional[Union[RunnableConfig, List[RunnableConfig]]]) –
return_exceptions (bool) –
kwargs (Optional[Any]) –
Return type
AsyncIterator[Tuple[int, Union[Output, Exception]]]
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, run_id: Optional[Union[UUID, List[Optional[UUID]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[str]) – List of string prompts.
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
tags (Optional[Union[List[str], List[List[str]]]]) –
metadata (Optional[Union[Dict[str, Any], List[Dict[str, Any]]]]) –
run_name (Optional[Union[str, List[str]]]) –
run_id (Optional[Union[UUID, List[Optional[UUID]]]]) –
**kwargs –
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop (Optional[List[str]]) – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks (Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]]) – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
Return type
LLMResult
async ainvoke(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) –
config (Optional[RunnableConfig]) –
stop (Optional[List[str]]) –
kwargs (Any) –
Return type
str
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use ainvoke instead.
Parameters
text (str) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
str
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
[Deprecated]
Notes
Deprecated since version 0.1.7: Use ainvoke instead.
Parameters
messages (List[BaseMessage]) –
stop (Optional[Sequence[str]]) –
kwargs (Any) –
Return type
BaseMessage
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
from operator import itemgetter
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | llm | {"str": StrOutputParser()}
chain_with_assign = chain.assign(hello=itemgetter("str") | llm)
print(chain_with_assign.input_schema.schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}
Parameters
kwargs (Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) –
Return type
RunnableSerializable[Any, Any]
async astream(input: Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
Parameters
input (Union[PromptValue, str, Sequence[Union[BaseMessage, Tuple[str, str], str, Dict[str, Any]]]]) –
config (Optional[RunnableConfig]) –
stop (Optional[List[str]]) –
kwargs (Any) –
Return type
AsyncIterator[str]
astream_events(input: Any, config: Optional[RunnableConfig] = None, *, version: Literal['v1'], include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → AsyncIterator[StreamEvent]¶
[Beta] Generate a stream of events.
Use to create an iterator over StreamEvents that provide real-time information
about the progress of the runnable, including StreamEvents from intermediate
results.
A StreamEvent is a dictionary with the following schema:
event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
name: str - The name of the runnable that generated the event.
run_id: str - Randomly generated ID associated with the given execution of the runnable that emitted the event. A child runnable that gets invoked as part of the execution of a parent runnable is assigned its own unique ID.
tags: Optional[List[str]] - The tags of the runnable that generated the event.
metadata: Optional[Dict[str, Any]] - The metadata of the runnable that generated the event.
data: Dict[str, Any]
Below is a table that illustrates some events that might be emitted by various
chains. Metadata fields have been omitted from the table for brevity.
Chain definitions have been included after the table.
event                | name             | chunk                           | input                                         | output
---------------------|------------------|---------------------------------|-----------------------------------------------|------------------------------------------------
on_chat_model_start  | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} |
on_chat_model_stream | [model name]     | AIMessageChunk(content="hello") |                                               |
on_chat_model_end    | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...}
on_llm_start         | [model name]     |                                 | {'input': 'hello'}                            |
on_llm_stream        | [model name]     | 'Hello'                         |                                               |
on_llm_end           | [model name]     |                                 |                                               | 'Hello human!'
on_chain_start       | format_docs      |                                 |                                               |
on_chain_stream      | format_docs      | "hello world!, goodbye world!"  |                                               |
on_chain_end         | format_docs      |                                 | [Document(...)]                               | "hello world!, goodbye world!"
on_tool_start        | some_tool        |                                 | {"x": 1, "y": "2"}                            |
on_tool_stream       | some_tool        | {"x": 1, "y": "2"}              |                                               |
on_tool_end          | some_tool        |                                 |                                               | {"x": 1, "y": "2"}
on_retriever_start   | [retriever name] |                                 | {"query": "hello"}                            |
on_retriever_chunk   | [retriever name] | {documents: [...]}              |                                               |
on_retriever_end     | [retriever name] |                                 | {"query": "hello"}                            | {documents: [...]}
on_prompt_start      | [template_name]  |                                 | {"question": "hello"}                         |
on_prompt_end        | [template_name]  |                                 | {"question": "hello"}                         | ChatPromptValue(messages: [SystemMessage, ...])
Here are declarations associated with the events shown above:
format_docs:
def format_docs(docs: List[Document]) -> str:
'''Format the docs.'''
return ", ".join([doc.page_content for doc in docs])
format_docs = RunnableLambda(format_docs)
some_tool:
@tool
def some_tool(x: int, y: str) -> dict:
'''Some_tool.'''
return {"x": x, "y": y}
prompt:
template = ChatPromptTemplate.from_messages(
[("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example:
from langchain_core.runnables import RunnableLambda
async def reverse(s: str) -> str:
return s[::-1]
chain = RunnableLambda(func=reverse)
events = [
event async for event in chain.astream_events("hello", version="v1")
]
# will produce the following events (run_id has been omitted for brevity):
[
{
"data": {"input": "hello"},
"event": "on_chain_start",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"chunk": "olleh"},
"event": "on_chain_stream",
"metadata": {},
"name": "reverse",
"tags": [],
},
{
"data": {"output": "olleh"},
"event": "on_chain_end",
"metadata": {},
"name": "reverse",
"tags": [],
},
]
Parameters
input (Any) – The input to the runnable.
config (Optional[RunnableConfig]) – The config to use for the runnable.
version (Literal['v1']) – The version of the schema to use.
Currently only version 1 is available.
No default will be assigned until the API is stabilized.
include_names (Optional[Sequence[str]]) – Only include events from runnables with matching names.
include_types (Optional[Sequence[str]]) – Only include events from runnables with matching types.
include_tags (Optional[Sequence[str]]) – Only include events from runnables with matching tags.
exclude_names (Optional[Sequence[str]]) – Exclude events from runnables with matching names.
exclude_types (Optional[Sequence[str]]) – Exclude events from runnables with matching types.
exclude_tags (Optional[Sequence[str]]) – Exclude events from runnables with matching tags.
kwargs (Any) – Additional keyword arguments to pass to the runnable.
These will be passed to astream_log as this implementation
of astream_events is built on top of astream_log.
Returns
An async stream of StreamEvents.
Return type
AsyncIterator[StreamEvent]
Notes
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Any) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.