Dataset schema (one row per issue/code-chunk pair; ⌀ marks nullable):

| column | type | range / cardinality |
|---|---|---|
| status | string | 1 distinct value |
| repo_name | string | 31 distinct values |
| repo_url | string | 31 distinct values |
| issue_id | int64 | 1 – 104k |
| title | string | 4 – 233 chars |
| body | string | 0 – 186k chars (nullable, ⌀) |
| issue_url | string | 38 – 56 chars |
| pull_url | string | 37 – 54 chars |
| before_fix_sha | string | 40 chars |
| after_fix_sha | string | 40 chars |
| report_datetime | unknown | |
| language | string | 5 distinct values |
| commit_datetime | unknown | |
| updated_file | string | 7 – 188 chars |
| chunk_content | string | 1 – 1.03M chars |
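If rows with this schema are consumed through the Hugging Face `datasets` library, reading them could look like the sketch below; the dataset identifier is a placeholder, not the real one.

```python
# Sketch: reading one row of the schema above with the `datasets` library.
from datasets import load_dataset

ds = load_dataset("some-org/bug-fix-chunks", split="train")  # placeholder id
row = ds[0]
print(row["repo_name"], row["issue_id"], row["updated_file"])
print(row["chunk_content"][:200])  # first 200 characters of the code chunk
```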
**Record 1** · status: closed · repo: [langchain-ai/langchain](https://github.com/langchain-ai/langchain) · issue 5,243: "Add possibility to set a proxy for openai API access"

### Feature request
For a deployment behind a corporate proxy, it's useful to be able to access the API by specifying an explicit proxy.

### Motivation
Currently this is possible by setting the `http_proxy`/`https_proxy` environment variables, which proxies the whole Python interpreter. However, that then breaks access to other internal servers: traffic to other network resources (e.g. a vector database on a different server, corporate S3 storage, etc.) should not go through the proxy. So it's important to be able to proxy only the requests to externally hosted APIs. We are working with the OpenAI API, and currently we cannot access both it and our Qdrant database on another server.

### Your contribution
Since the openai python package supports the proxy parameter, this is relatively easy to implement for the OpenAI API. I'll submit a PR.

Issue: https://github.com/langchain-ai/langchain/issues/5243 · PR: https://github.com/langchain-ai/langchain/pull/5246 · before-fix SHA: `9c0cb90997db9eb2e2a736df458d39fd7bec8ffb` · after-fix SHA: `88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217` · reported: 2023-05-25T13:00:09Z · language: python · commit: 2023-05-25T16:50:25Z · updated file: `langchain/llms/openai.py`
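As background for the request above, the pre-1.0 `openai` package exposes a module-level `proxy` setting, so per-API proxying can look roughly like this sketch (assumes `OPENAI_API_KEY` is set; the proxy URL is a made-up placeholder):

```python
import openai

# Route only OpenAI traffic through the corporate proxy; other clients
# (e.g. a Qdrant connection) keep using the direct network path.
openai.proxy = {
    "http": "http://corp-proxy.example.com:8080",   # placeholder URL
    "https": "http://corp-proxy.example.com:8080",  # placeholder URL
}

completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Hello",
)
print(completion["choices"][0]["text"])
```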
Pre-fix contents of `langchain/llms/openai.py` (the excerpt begins mid-file; the imports below are reconstructed from the names used, and helpers referenced but defined earlier in the file are not included):

```python
from __future__ import annotations

import logging
import sys
import warnings
from typing import (AbstractSet, Any, Collection, Dict, Generator, List,
                    Literal, Mapping, Optional, Tuple, Union)

from pydantic import Extra, Field, root_validator

from langchain.callbacks.manager import (AsyncCallbackManagerForLLMRun,
                                         CallbackManagerForLLMRun)
from langchain.llms.base import BaseLLM
from langchain.schema import Generation, LLMResult
from langchain.utils import get_from_dict_or_env

logger = logging.getLogger(__name__)

# NOTE: module-level helpers referenced below (_create_retry_decorator,
# update_token_usage, _streaming_response_template, _update_response) are
# defined earlier in the original file and are not part of this excerpt.


def completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) -> Any:
    """Use tenacity to retry the completion call."""
    retry_decorator = _create_retry_decorator(llm)

    @retry_decorator
    def _completion_with_retry(**kwargs: Any) -> Any:
        return llm.client.create(**kwargs)

    return _completion_with_retry(**kwargs)


async def acompletion_with_retry(
    llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any
) -> Any:
    """Use tenacity to retry the async completion call."""
    retry_decorator = _create_retry_decorator(llm)

    @retry_decorator
    async def _completion_with_retry(**kwargs: Any) -> Any:
        return await llm.client.acreate(**kwargs)

    return await _completion_with_retry(**kwargs)


class BaseOpenAI(BaseLLM):
    """Wrapper around OpenAI large language models."""

    client: Any
    model_name: str = Field("text-davinci-003", alias="model")
    """Model name to use."""
    temperature: float = 0.7
    """What sampling temperature to use."""
    max_tokens: int = 256
    """The maximum number of tokens to generate in the completion.
    -1 returns as many tokens as possible given the prompt and
    the model's maximal context size."""
    top_p: float = 1
    """Total probability mass of tokens to consider at each step."""
    frequency_penalty: float = 0
    """Penalizes repeated tokens according to frequency."""
    presence_penalty: float = 0
    """Penalizes repeated tokens."""
    n: int = 1
    """How many completions to generate for each prompt."""
    best_of: int = 1
    """Generates best_of completions server-side and returns the "best"."""
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Holds any model parameters valid for `create` call not explicitly specified."""
    openai_api_key: Optional[str] = None
    openai_api_base: Optional[str] = None
    openai_organization: Optional[str] = None
    batch_size: int = 20
    """Batch size to use when passing multiple documents to generate."""
    request_timeout: Optional[Union[float, Tuple[float, float]]] = None
    """Timeout for requests to OpenAI completion API. Default is 600 seconds."""
    logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)
    """Adjust the probability of specific tokens being generated."""
    max_retries: int = 6
    """Maximum number of retries to make when generating."""
    streaming: bool = False
    """Whether to stream the results or not."""
    allowed_special: Union[Literal["all"], AbstractSet[str]] = set()
    """Set of special tokens that are allowed."""
    disallowed_special: Union[Literal["all"], Collection[str]] = "all"
    """Set of special tokens that are not allowed."""

    def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]:  # type: ignore
        """Initialize the OpenAI object."""
        model_name = data.get("model_name", "")
        if model_name.startswith("gpt-3.5-turbo") or model_name.startswith("gpt-4"):
            warnings.warn(
                "You are trying to use a chat model. This way of initializing it is "
                "no longer supported. Instead, please use: "
                "`from langchain.chat_models import ChatOpenAI`"
            )
            return OpenAIChat(**data)
        return super().__new__(cls)

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.ignore
        allow_population_by_field_name = True

    @root_validator(pre=True)
    def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        """Build extra kwargs from additional params that were passed in."""
        all_required_field_names = cls.all_required_field_names()
        extra = values.get("model_kwargs", {})
        for field_name in list(values):
            if field_name in extra:
                raise ValueError(f"Found {field_name} supplied twice.")
            if field_name not in all_required_field_names:
                logger.warning(
                    f"""WARNING! {field_name} is not default parameter.
                    {field_name} was transferred to model_kwargs.
                    Please confirm that {field_name} is what you intended."""
                )
                extra[field_name] = values.pop(field_name)

        invalid_model_kwargs = all_required_field_names.intersection(extra.keys())
        if invalid_model_kwargs:
            raise ValueError(
                f"Parameters {invalid_model_kwargs} should be specified explicitly. "
                f"Instead they were passed in as part of `model_kwargs` parameter."
            )

        values["model_kwargs"] = extra
        return values

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        openai_api_key = get_from_dict_or_env(
            values, "openai_api_key", "OPENAI_API_KEY"
        )
        openai_api_base = get_from_dict_or_env(
            values,
            "openai_api_base",
            "OPENAI_API_BASE",
            default="",
        )
        openai_organization = get_from_dict_or_env(
            values,
            "openai_organization",
            "OPENAI_ORGANIZATION",
            default="",
        )
        try:
            import openai

            openai.api_key = openai_api_key
            if openai_api_base:
                openai.api_base = openai_api_base
            if openai_organization:
                openai.organization = openai_organization
            values["client"] = openai.Completion
        except ImportError:
            raise ImportError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        if values["streaming"] and values["n"] > 1:
            raise ValueError("Cannot stream results when n > 1.")
        if values["streaming"] and values["best_of"] > 1:
            raise ValueError("Cannot stream results when best_of > 1.")
        return values

    @property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling OpenAI API."""
        normal_params = {
            "temperature": self.temperature,
            "max_tokens": self.max_tokens,
            "top_p": self.top_p,
            "frequency_penalty": self.frequency_penalty,
            "presence_penalty": self.presence_penalty,
            "n": self.n,
            "request_timeout": self.request_timeout,
            "logit_bias": self.logit_bias,
        }
        # Azure gpt-35-turbo doesn't support best_of;
        # don't specify best_of if it is 1
        if self.best_of > 1:
            normal_params["best_of"] = self.best_of
        return {**normal_params, **self.model_kwargs}

    def _generate(
        self,
        prompts: List[str],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> LLMResult:
        """Call out to OpenAI's endpoint with k unique prompts.

        Args:
            prompts: The prompts to pass into the model.
            stop: Optional list of stop words to use when generating.

        Returns:
            The full LLM output.

        Example:
            .. code-block:: python

                response = openai.generate(["Tell me a joke."])
        """
        # TODO: write a unit test for this
        params = self._invocation_params
        sub_prompts = self.get_sub_prompts(params, prompts, stop)
        choices = []
        token_usage: Dict[str, int] = {}
        # Get the token usage from the response.
        # Includes prompt, completion, and total tokens used.
        _keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
        for _prompts in sub_prompts:
            if self.streaming:
                if len(_prompts) > 1:
                    raise ValueError("Cannot stream results with multiple prompts.")
                params["stream"] = True
                response = _streaming_response_template()
                for stream_resp in completion_with_retry(
                    self, prompt=_prompts, **params
                ):
                    if run_manager:
                        run_manager.on_llm_new_token(
                            stream_resp["choices"][0]["text"],
                            verbose=self.verbose,
                            logprobs=stream_resp["choices"][0]["logprobs"],
                        )
                    _update_response(response, stream_resp)
                choices.extend(response["choices"])
            else:
                response = completion_with_retry(self, prompt=_prompts, **params)
                choices.extend(response["choices"])
            if not self.streaming:
                # Can't update token usage if streaming
                update_token_usage(_keys, response, token_usage)
        return self.create_llm_result(choices, prompts, token_usage)

    async def _agenerate(
        self,
        prompts: List[str],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    ) -> LLMResult:
        """Call out to OpenAI's endpoint async with k unique prompts."""
        params = self._invocation_params
        sub_prompts = self.get_sub_prompts(params, prompts, stop)
        choices = []
        token_usage: Dict[str, int] = {}
        # Get the token usage from the response.
        # Includes prompt, completion, and total tokens used.
        _keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
        for _prompts in sub_prompts:
            if self.streaming:
                if len(_prompts) > 1:
                    raise ValueError("Cannot stream results with multiple prompts.")
                params["stream"] = True
                response = _streaming_response_template()
                async for stream_resp in await acompletion_with_retry(
                    self, prompt=_prompts, **params
                ):
                    if run_manager:
                        await run_manager.on_llm_new_token(
                            stream_resp["choices"][0]["text"],
                            verbose=self.verbose,
                            logprobs=stream_resp["choices"][0]["logprobs"],
                        )
                    _update_response(response, stream_resp)
                choices.extend(response["choices"])
            else:
                response = await acompletion_with_retry(self, prompt=_prompts, **params)
                choices.extend(response["choices"])
            if not self.streaming:
                # Can't update token usage if streaming
                update_token_usage(_keys, response, token_usage)
        return self.create_llm_result(choices, prompts, token_usage)

    def get_sub_prompts(
        self,
        params: Dict[str, Any],
        prompts: List[str],
        stop: Optional[List[str]] = None,
    ) -> List[List[str]]:
        """Get the sub prompts for llm call."""
        if stop is not None:
            if "stop" in params:
                raise ValueError("`stop` found in both the input and default params.")
            params["stop"] = stop
        if params["max_tokens"] == -1:
            if len(prompts) != 1:
                raise ValueError(
                    "max_tokens set to -1 not supported for multiple inputs."
                )
            params["max_tokens"] = self.max_tokens_for_prompt(prompts[0])
        sub_prompts = [
            prompts[i : i + self.batch_size]
            for i in range(0, len(prompts), self.batch_size)
        ]
        return sub_prompts

    def create_llm_result(
        self, choices: Any, prompts: List[str], token_usage: Dict[str, int]
    ) -> LLMResult:
        """Create the LLMResult from the choices and prompts."""
        generations = []
        for i, _ in enumerate(prompts):
            sub_choices = choices[i * self.n : (i + 1) * self.n]
            generations.append(
                [
                    Generation(
                        text=choice["text"],
                        generation_info=dict(
                            finish_reason=choice.get("finish_reason"),
                            logprobs=choice.get("logprobs"),
                        ),
                    )
                    for choice in sub_choices
                ]
            )
        llm_output = {"token_usage": token_usage, "model_name": self.model_name}
        return LLMResult(generations=generations, llm_output=llm_output)

    def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:
        """Call OpenAI with streaming flag and return the resulting generator.

        BETA: this is a beta feature while we figure out the right abstraction.
        Once that happens, this interface could change.

        Args:
            prompt: The prompts to pass into the model.
            stop: Optional list of stop words to use when generating.

        Returns:
            A generator representing the stream of tokens from OpenAI.

        Example:
            .. code-block:: python

                generator = openai.stream("Tell me a joke.")
                for token in generator:
                    yield token
        """
        params = self.prep_streaming_params(stop)
        generator = self.client.create(prompt=prompt, **params)
        return generator

    def prep_streaming_params(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:
        """Prepare the params for streaming."""
        params = self._invocation_params
        if "best_of" in params and params["best_of"] != 1:
            raise ValueError("OpenAI only supports best_of == 1 for streaming")
        if stop is not None:
            if "stop" in params:
                raise ValueError("`stop` found in both the input and default params.")
            params["stop"] = stop
        params["stream"] = True
        return params

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        """Get the parameters used to invoke the model."""
        return self._default_params

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {**{"model_name": self.model_name}, **self._default_params}

    @property
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "openai"

    def get_token_ids(self, text: str) -> List[int]:
        """Get the token IDs using the tiktoken package."""
        # tiktoken is NOT supported for Python < 3.8
        if sys.version_info[1] < 8:
            return super().get_num_tokens(text)
        try:
            import tiktoken
        except ImportError:
            raise ImportError(
                "Could not import tiktoken python package. "
                "This is needed in order to calculate get_num_tokens. "
                "Please install it with `pip install tiktoken`."
            )
        enc = tiktoken.encoding_for_model(self.model_name)
        return enc.encode(
            text,
            allowed_special=self.allowed_special,
            disallowed_special=self.disallowed_special,
        )

    def modelname_to_contextsize(self, modelname: str) -> int:
        """Calculate the maximum number of tokens possible to generate for a model.

        Args:
            modelname: The modelname we want to know the context size for.

        Returns:
            The maximum context size

        Example:
            .. code-block:: python

                max_tokens = openai.modelname_to_contextsize("text-davinci-003")
        """
        model_token_mapping = {
            "gpt-4": 8192,
            "gpt-4-0314": 8192,
            "gpt-4-32k": 32768,
            "gpt-4-32k-0314": 32768,
            "gpt-3.5-turbo": 4096,
            "gpt-3.5-turbo-0301": 4096,
            "text-ada-001": 2049,
            "ada": 2049,
            "text-babbage-001": 2040,
            "babbage": 2049,
            "text-curie-001": 2049,
            "curie": 2049,
            "davinci": 2049,
            "text-davinci-003": 4097,
            "text-davinci-002": 4097,
            "code-davinci-002": 8001,
            "code-davinci-001": 8001,
            "code-cushman-002": 2048,
            "code-cushman-001": 2048,
        }

        # handling finetuned models
        if "ft-" in modelname:
            modelname = modelname.split(":")[0]

        context_size = model_token_mapping.get(modelname, None)

        if context_size is None:
            raise ValueError(
                f"Unknown model: {modelname}. Please provide a valid OpenAI model name."
                "Known models are: " + ", ".join(model_token_mapping.keys())
            )

        return context_size

    def max_tokens_for_prompt(self, prompt: str) -> int:
        """Calculate the maximum number of tokens possible to generate for a prompt.

        Args:
            prompt: The prompt to pass into the model.

        Returns:
            The maximum number of tokens to generate for a prompt.

        Example:
            .. code-block:: python

                max_tokens = openai.max_token_for_prompt("Tell me a joke.")
        """
        num_tokens = self.get_num_tokens(prompt)

        # get max context size for model by name
        max_size = self.modelname_to_contextsize(self.model_name)
        return max_size - num_tokens


class OpenAI(BaseOpenAI):
    """Wrapper around OpenAI large language models.

    To use, you should have the ``openai`` python package installed, and the
    environment variable ``OPENAI_API_KEY`` set with your API key.

    Any parameters that are valid to be passed to the openai.create call can be passed
    in, even if not explicitly saved on this class.

    Example:
        .. code-block:: python

            from langchain.llms import OpenAI
            openai = OpenAI(model_name="text-davinci-003")
    """

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        return {**{"model": self.model_name}, **super()._invocation_params}


class AzureOpenAI(BaseOpenAI):
    """Wrapper around Azure-specific OpenAI large language models.

    To use, you should have the ``openai`` python package installed, and the
    environment variable ``OPENAI_API_KEY`` set with your API key.

    Any parameters that are valid to be passed to the openai.create call can be passed
    in, even if not explicitly saved on this class.

    Example:
        .. code-block:: python

            from langchain.llms import AzureOpenAI
            openai = AzureOpenAI(model_name="text-davinci-003")
    """

    deployment_name: str = ""
    """Deployment name to use."""

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {
            **{"deployment_name": self.deployment_name},
            **super()._identifying_params,
        }

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        return {**{"engine": self.deployment_name}, **super()._invocation_params}

    @property
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "azure"


class OpenAIChat(BaseLLM):
    """Wrapper around OpenAI Chat large language models.

    To use, you should have the ``openai`` python package installed, and the
    environment variable ``OPENAI_API_KEY`` set with your API key.

    Any parameters that are valid to be passed to the openai.create call can be passed
    in, even if not explicitly saved on this class.

    Example:
        .. code-block:: python

            from langchain.llms import OpenAIChat
            openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
    """

    client: Any
    model_name: str = "gpt-3.5-turbo"
    """Model name to use."""
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Holds any model parameters valid for `create` call not explicitly specified."""
    openai_api_key: Optional[str] = None
    openai_api_base: Optional[str] = None
    max_retries: int = 6
    """Maximum number of retries to make when generating."""
    prefix_messages: List = Field(default_factory=list)
    """Series of messages for Chat input."""
    streaming: bool = False
    """Whether to stream the results or not."""
    allowed_special: Union[Literal["all"], AbstractSet[str]] = set()
    """Set of special tokens that are allowed."""
    disallowed_special: Union[Literal["all"], Collection[str]] = "all"
    """Set of special tokens that are not allowed."""

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.ignore

    @root_validator(pre=True)
    def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        """Build extra kwargs from additional params that were passed in."""
        all_required_field_names = {field.alias for field in cls.__fields__.values()}

        extra = values.get("model_kwargs", {})
        for field_name in list(values):
            if field_name not in all_required_field_names:
                if field_name in extra:
                    raise ValueError(f"Found {field_name} supplied twice.")
                extra[field_name] = values.pop(field_name)
        values["model_kwargs"] = extra
        return values

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        openai_api_key = get_from_dict_or_env(
            values, "openai_api_key", "OPENAI_API_KEY"
        )
        openai_api_base = get_from_dict_or_env(
            values,
            "openai_api_base",
            "OPENAI_API_BASE",
            default="",
        )
        openai_organization = get_from_dict_or_env(
            values, "openai_organization", "OPENAI_ORGANIZATION", default=""
        )
        try:
            import openai

            openai.api_key = openai_api_key
            if openai_api_base:
                openai.api_base = openai_api_base
            if openai_organization:
                openai.organization = openai_organization
        except ImportError:
            raise ImportError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        try:
            values["client"] = openai.ChatCompletion
        except AttributeError:
            raise ValueError(
                "`openai` has no `ChatCompletion` attribute, this is likely "
                "due to an old version of the openai package. Try upgrading it "
                "with `pip install --upgrade openai`."
            )
        warnings.warn(
            "You are trying to use a chat model. This way of initializing it is "
            "no longer supported. Instead, please use: "
            "`from langchain.chat_models import ChatOpenAI`"
        )
        return values

    @property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling OpenAI API."""
        return self.model_kwargs

    def _get_chat_params(
        self, prompts: List[str], stop: Optional[List[str]] = None
    ) -> Tuple:
        if len(prompts) > 1:
            raise ValueError(
                f"OpenAIChat currently only supports single prompt, got {prompts}"
            )
        messages = self.prefix_messages + [{"role": "user", "content": prompts[0]}]
        params: Dict[str, Any] = {**{"model": self.model_name}, **self._default_params}
        if stop is not None:
            if "stop" in params:
                raise ValueError("`stop` found in both the input and default params.")
            params["stop"] = stop
        if params.get("max_tokens") == -1:
            # for the ChatGPT API, omitting max_tokens is equivalent to having no limit
            del params["max_tokens"]
        return messages, params

    def _generate(
        self,
        prompts: List[str],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> LLMResult:
        messages, params = self._get_chat_params(prompts, stop)
        if self.streaming:
            response = ""
            params["stream"] = True
            for stream_resp in completion_with_retry(self, messages=messages, **params):
                token = stream_resp["choices"][0]["delta"].get("content", "")
                response += token
                if run_manager:
                    run_manager.on_llm_new_token(
                        token,
                    )
            return LLMResult(
                generations=[[Generation(text=response)]],
            )
        else:
            full_response = completion_with_retry(self, messages=messages, **params)
            llm_output = {
                "token_usage": full_response["usage"],
                "model_name": self.model_name,
            }
            return LLMResult(
                generations=[
                    [Generation(text=full_response["choices"][0]["message"]["content"])]
                ],
                llm_output=llm_output,
            )

    async def _agenerate(
        self,
        prompts: List[str],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    ) -> LLMResult:
        messages, params = self._get_chat_params(prompts, stop)
        if self.streaming:
            response = ""
            params["stream"] = True
            async for stream_resp in await acompletion_with_retry(
                self, messages=messages, **params
            ):
                token = stream_resp["choices"][0]["delta"].get("content", "")
                response += token
                if run_manager:
                    await run_manager.on_llm_new_token(
                        token,
                    )
            return LLMResult(
                generations=[[Generation(text=response)]],
            )
        else:
            full_response = await acompletion_with_retry(
                self, messages=messages, **params
            )
            llm_output = {
                "token_usage": full_response["usage"],
                "model_name": self.model_name,
            }
            return LLMResult(
                generations=[
                    [Generation(text=full_response["choices"][0]["message"]["content"])]
                ],
                llm_output=llm_output,
            )

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {**{"model_name": self.model_name}, **self._default_params}

    @property
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "openai-chat"

    def get_token_ids(self, text: str) -> List[int]:
        """Get the token IDs using the tiktoken package."""
        # tiktoken is NOT supported for Python < 3.8
        if sys.version_info[1] < 8:
            return super().get_token_ids(text)
        try:
            import tiktoken
        except ImportError:
            raise ImportError(
                "Could not import tiktoken python package. "
                "This is needed in order to calculate get_num_tokens. "
                "Please install it with `pip install tiktoken`."
            )
        enc = tiktoken.encoding_for_model(self.model_name)
        return enc.encode(
            text,
            allowed_special=self.allowed_special,
            disallowed_special=self.disallowed_special,
        )
```
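As a usage note on the `build_extra` validator above, a minimal sketch (assumes `OPENAI_API_KEY` is set; `user` is just an illustrative extra parameter accepted by OpenAI's completion API):

```python
from langchain.llms import OpenAI

# `user` is not a declared field on BaseOpenAI, so the pre-validator
# build_extra moves it into model_kwargs (and logs a warning); the
# _default_params property later forwards it to openai.Completion.create.
llm = OpenAI(model_name="text-davinci-003", temperature=0, user="my-app")
assert llm.model_kwargs == {"user": "my-app"}

print(llm("Say hello"))
```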
**Record 2** · status: closed · repo: [langchain-ai/langchain](https://github.com/langchain-ai/langchain) · issue 5,212: "OpenSearch VectorStore cannot return more than 4 retrieved result."

Using the following script, I can only return a maximum of 4 documents. With k = 1, 2, 3, 4, 5, 6, ..., `similarity_search_with_score` returns 1, 2, 3, 4, 4, 4, ... docs.

```python
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(
    docs,
    embedding=HuggingFaceEmbeddings(),
    opensearch_url=opensearch_url,
    index_name="my_index_name",
)
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
```

This only returns 4 documents even though I have len(docs) = 90+. Tried various indexes and various queries; confirmed the issue is persistent. There is a [related issue](https://github.com/hwchase17/langchain/issues/1946) (also maxing out at 4 regardless of k) for Chroma.

Issue: https://github.com/langchain-ai/langchain/issues/5212 · PR: https://github.com/langchain-ai/langchain/pull/5216 · before-fix SHA: `88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217` · after-fix SHA: `3be9ba14f319bc5b92c1e516b352f9cafdf51936` · reported: 2023-05-24T20:49:47Z · language: python · commit: 2023-05-25T16:51:23Z · updated file: `langchain/vectorstores/opensearch_vector_search.py`
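For context on the hard cap described above: OpenSearch returns at most `size` hits per query, independent of the k-NN `k`, and the pre-fix query builders later in this file appear to leave `size` at a small fixed default while only forwarding `k`; the linked PR threads the requested `k` through to `size`. A sketch of the corrected query shape against the raw `opensearch-py` client (`client`, `query_vector`, and the index/field names are placeholders):

```python
# Sketch: an approximate k-NN search body. OpenSearch caps the hit list
# at "size", so "size" must track the desired k; with a smaller fixed
# default, every query maxes out there no matter what k is requested.
k = 10
search_query = {
    "size": k,  # without this, the engine-side default wins
    "query": {"knn": {"vector_field": {"vector": query_vector, "k": k}}},
}
response = client.search(index="my_index_name", body=search_query)
docs_with_scores = [
    (hit["_source"]["text"], hit["_score"]) for hit in response["hits"]["hits"]
]
```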
Pre-fix contents of `langchain/vectorstores/opensearch_vector_search.py` (the excerpt ends mid-file):

```python
"""Wrapper around OpenSearch vector database."""
from __future__ import annotations

import uuid
from typing import Any, Dict, Iterable, List, Optional, Tuple

from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore

IMPORT_OPENSEARCH_PY_ERROR = (
    "Could not import OpenSearch. Please install it with `pip install opensearch-py`."
)
SCRIPT_SCORING_SEARCH = "script_scoring"
PAINLESS_SCRIPTING_SEARCH = "painless_scripting"
MATCH_ALL_QUERY = {"match_all": {}}


def _import_opensearch() -> Any:
    """Import OpenSearch if available, otherwise raise error."""
    try:
        from opensearchpy import OpenSearch
    except ImportError:
        raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
    return OpenSearch


def _import_bulk() -> Any:
    """Import bulk if available, otherwise raise error."""
    try:
        from opensearchpy.helpers import bulk
    except ImportError:
        raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
    return bulk


def _import_not_found_error() -> Any:
    """Import not found error if available, otherwise raise error."""
    try:
        from opensearchpy.exceptions import NotFoundError
    except ImportError:
        raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)
    return NotFoundError


def _get_opensearch_client(opensearch_url: str, **kwargs: Any) -> Any:
    """Get OpenSearch client from the opensearch_url, otherwise raise error."""
    try:
        opensearch = _import_opensearch()
        client = opensearch(opensearch_url, **kwargs)
    except ValueError as e:
        raise ValueError(
            f"OpenSearch client string provided is not in proper format. "
            f"Got error: {e} "
        )
    return client


def _validate_embeddings_and_bulk_size(embeddings_length: int, bulk_size: int) -> None:
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | """Validate Embeddings Length and Bulk Size."""
if embeddings_length == 0:
raise RuntimeError("Embeddings size is zero")
if bulk_size < embeddings_length:
raise RuntimeError(
f"The embeddings count, {embeddings_length} is more than the "
f"[bulk_size], {bulk_size}. Increase the value of [bulk_size]."
)
def _bulk_ingest_embeddings(
client: Any,
index_name: str,
embeddings: List[List[float]],
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
vector_field: str = "vector_field",
text_field: str = "text",
mapping: Dict = {},
) -> List[str]:
"""Bulk Ingest Embeddings into given index."""
bulk = _import_bulk()
not_found_error = _import_not_found_error()
requests = []
ids = []
try:
client.indices.get(index=index_name)
except not_found_error: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | client.indices.create(index=index_name, body=mapping)
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
_id = str(uuid.uuid4())
request = {
"_op_type": "index",
"_index": index_name,
vector_field: embeddings[i],
text_field: text,
"metadata": metadata,
"_id": _id,
}
requests.append(request)
ids.append(_id)
bulk(client, requests)
client.indices.refresh(index=index_name)
return ids
def _default_scripting_text_mapping(
dim: int,
vector_field: str = "vector_field",
) -> Dict:
"""For Painless Scripting or Script Scoring,the default mapping to create index."""
return {
"mappings": {
"properties": {
vector_field: {"type": "knn_vector", "dimension": dim},
}
}
}
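# Illustrative check (hypothetical call, not part of the module): for a
# 3-dimensional embedding this helper yields the minimal knn_vector mapping
# _default_scripting_text_mapping(3) ==
#     {"mappings": {"properties": {"vector_field": {"type": "knn_vector", "dimension": 3}}}}
# Note it carries no engine/method settings, so it only suits script-based
# (exact) scoring rather than approximate k-NN.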
def _default_text_mapping( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | dim: int,
engine: str = "nmslib",
space_type: str = "l2",
ef_search: int = 512,
ef_construction: int = 512,
m: int = 16,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, this is the default mapping to create index."""
return {
"settings": {"index": {"knn": True, "knn.algo_param.ef_search": ef_search}},
"mappings": {
"properties": {
vector_field: {
"type": "knn_vector",
"dimension": dim,
"method": {
"name": "hnsw",
"space_type": space_type,
"engine": engine,
"parameters": {"ef_construction": ef_construction, "m": m},
},
}
}
},
}
def _default_approximate_search_query( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | query_vector: List[float],
size: int = 4,
k: int = 4,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, this is the default query."""
return {
"size": size,
"query": {"knn": {vector_field: {"vector": query_vector, "k": k}}},
}
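# Worked example (values are illustrative): note that "size", not "k", is what
# caps the number of hits OpenSearch actually returns, so raising k while size
# stays at its default of 4 still yields at most 4 documents (issue 5212).
# _default_approximate_search_query([0.1, 0.2], size=10, k=10) ==
#     {"size": 10, "query": {"knn": {"vector_field": {"vector": [0.1, 0.2], "k": 10}}}}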
def _approximate_search_query_with_boolean_filter( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | query_vector: List[float],
boolean_filter: Dict,
size: int = 4,
k: int = 4,
vector_field: str = "vector_field",
subquery_clause: str = "must",
) -> Dict:
"""For Approximate k-NN Search, with Boolean Filter."""
return {
"size": size,
"query": {
"bool": {
"filter": boolean_filter,
subquery_clause: [
{"knn": {vector_field: {"vector": query_vector, "k": k}}}
],
}
},
}
def _approximate_search_query_with_lucene_filter( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | query_vector: List[float],
lucene_filter: Dict,
size: int = 4,
k: int = 4,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, with Lucene Filter."""
search_query = _default_approximate_search_query(
query_vector, size, k, vector_field
)
search_query["query"]["knn"][vector_field]["filter"] = lucene_filter
return search_query
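# Sketch (assumed filter; requires the lucene engine): the filter is spliced
# inside the knn clause so filtering happens during the vector search itself:
# q = _approximate_search_query_with_lucene_filter(
#     [0.1, 0.2], {"range": {"metadata.year": {"gte": 2020}}}
# )
# q["query"]["knn"]["vector_field"]["filter"] == {"range": {"metadata.year": {"gte": 2020}}}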
def _default_script_query( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | query_vector: List[float],
space_type: str = "l2",
pre_filter: Dict = MATCH_ALL_QUERY,
vector_field: str = "vector_field",
) -> Dict:
"""For Script Scoring Search, this is the default query."""
return {
"query": {
"script_score": {
"query": pre_filter,
"script": {
"source": "knn_score",
"lang": "knn",
"params": {
"field": vector_field,
"query_value": query_vector,
"space_type": space_type,
},
},
}
}
}
def __get_painless_scripting_source( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | space_type: str, query_vector: List[float], vector_field: str = "vector_field"
) -> str:
"""For Painless Scripting, it returns the script source based on space type."""
source_value = (
"(1.0 + "
+ space_type
+ "("
+ str(query_vector)
+ ", doc['"
+ vector_field
+ "']))"
)
if space_type == "cosineSimilarity":
return source_value
else:
return "1/" + source_value
def _default_painless_scripting_query( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | query_vector: List[float],
space_type: str = "l2Squared",
pre_filter: Dict = MATCH_ALL_QUERY,
vector_field: str = "vector_field",
) -> Dict:
"""For Painless Scripting Search, this is the default query."""
source = __get_painless_scripting_source(space_type, query_vector, vector_field)
return {
"query": {
"script_score": {
"query": pre_filter,
"script": {
"source": source,
"params": {
"field": vector_field,
"query_value": query_vector,
},
},
}
}
}
def _get_kwargs_value(kwargs: Any, key: str, default_value: Any) -> Any:
"""Get the value of the key if present. Else get the default_value."""
if key in kwargs:
return kwargs.get(key)
return default_value
class OpenSearchVectorSearch(VectorStore): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | """Wrapper around OpenSearch as a vector database.
Example:
.. code-block:: python
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
"""
def __init__( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | self,
opensearch_url: str,
index_name: str,
embedding_function: Embeddings,
**kwargs: Any,
):
"""Initialize with necessary components."""
self.embedding_function = embedding_function
self.index_name = index_name
self.client = _get_opensearch_client(opensearch_url, **kwargs)
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
bulk_size: int = 500,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
bulk_size: Bulk API request count; Default: 500
Returns:
List of ids from adding the texts into the vectorstore. |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
"""
embeddings = self.embedding_function.embed_documents(list(texts))
_validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
text_field = _get_kwargs_value(kwargs, "text_field", "text")
dim = len(embeddings[0])
engine = _get_kwargs_value(kwargs, "engine", "nmslib")
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
ef_search = _get_kwargs_value(kwargs, "ef_search", 512)
ef_construction = _get_kwargs_value(kwargs, "ef_construction", 512)
m = _get_kwargs_value(kwargs, "m", 16)
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
mapping = _default_text_mapping(
dim, engine, space_type, ef_search, ef_construction, m, vector_field
)
return _bulk_ingest_embeddings(
self.client,
self.index_name,
embeddings,
texts,
metadatas,
vector_field,
text_field,
mapping,
)
def similarity_search( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
metadata_field: Document field that metadata is stored in. Defaults to
"metadata".
Can be set to a special value "*" to include the entire document.
Optional Args for Approximate Search:
search_type: "approximate_search"; default: "approximate_search"
size: number of results the query actually returns; defaults to k
boolean_filter: A Boolean filter consists of a Boolean query that
contains a k-NN query and a filter.
subquery_clause: Query clause on the knn vector field; default: "must"
lucene_filter: the Lucene algorithm decides whether to perform an exact |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | k-NN search with pre-filtering or an approximate search with modified
post-filtering.
Optional Args for Script Scoring Search:
search_type: "script_scoring"; default: "approximate_search"
space_type: "l2", "l1", "linf", "cosinesimil", "innerproduct",
"hammingbit"; default: "l2"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
Optional Args for Painless Scripting Search:
search_type: "painless_scripting"; default: "approximate_search"
space_type: "l2Squared", "l1Norm", "cosineSimilarity"; default: "l2Squared"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
"""
docs_with_scores = self.similarity_search_with_score(query, k, **kwargs)
return [doc[0] for doc in docs_with_scores]
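# Minimal usage sketch (hypothetical store, query and filter; not part of the
# class):
# docs = opensearch_vector_search.similarity_search(
#     "what did the president say?",
#     k=10,
#     size=10,  # size bounds the hits returned by approximate search
#     boolean_filter={"term": {"metadata.source": "speech"}},
# )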
def similarity_search_with_score(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Tuple[Document, float]]:
"""Return docs and it's scores most similar to query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents along with its scores most similar to the query.
Optional Args:
same as `similarity_search`
""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | embedding = self.embedding_function.embed_query(query)
search_type = _get_kwargs_value(kwargs, "search_type", "approximate_search")
text_field = _get_kwargs_value(kwargs, "text_field", "text")
metadata_field = _get_kwargs_value(kwargs, "metadata_field", "metadata")
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
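# size, not k, is what OpenSearch uses to cap the hits it returns; defaulting
# it to k (rather than a fixed 4) lets similarity_search(..., k=10) actually
# surface up to 10 documents (see issue 5212), while an explicit size kwarg
# still overrides it.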
if search_type == "approximate_search":
size = _get_kwargs_value(kwargs, "size", k)
boolean_filter = _get_kwargs_value(kwargs, "boolean_filter", {})
subquery_clause = _get_kwargs_value(kwargs, "subquery_clause", "must")
lucene_filter = _get_kwargs_value(kwargs, "lucene_filter", {})
if boolean_filter != {} and lucene_filter != {}:
raise ValueError(
"Both `boolean_filter` and `lucene_filter` are provided which "
"is invalid"
)
if boolean_filter != {}:
search_query = _approximate_search_query_with_boolean_filter(
embedding, boolean_filter, size, k, vector_field, subquery_clause
)
elif lucene_filter != {}:
search_query = _approximate_search_query_with_lucene_filter(
embedding, lucene_filter, size, k, vector_field
)
else:
search_query = _default_approximate_search_query(
embedding, size, k, vector_field
)
elif search_type == SCRIPT_SCORING_SEARCH:
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
pre_filter = _get_kwargs_value(kwargs, "pre_filter", MATCH_ALL_QUERY) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | search_query = _default_script_query(
embedding, space_type, pre_filter, vector_field
)
elif search_type == PAINLESS_SCRIPTING_SEARCH:
space_type = _get_kwargs_value(kwargs, "space_type", "l2Squared")
pre_filter = _get_kwargs_value(kwargs, "pre_filter", MATCH_ALL_QUERY)
search_query = _default_painless_scripting_query(
embedding, space_type, pre_filter, vector_field
)
else:
raise ValueError("Invalid `search_type` provided as an argument")
response = self.client.search(index=self.index_name, body=search_query)
hits = response["hits"]["hits"][:k]
documents_with_scores = [
(
Document(
page_content=hit["_source"][text_field],
metadata=hit["_source"]
if metadata_field == "*" or metadata_field not in hit["_source"]
else hit["_source"][metadata_field],
),
hit["_score"],
)
for hit in hits
]
return documents_with_scores
@classmethod
def from_texts(
cls,
texts: List[str], |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
bulk_size: int = 500,
**kwargs: Any,
) -> OpenSearchVectorSearch:
"""Construct OpenSearchVectorSearch wrapper from raw documents.
Example:
.. code-block:: python
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by the nmslib, faiss
and lucene engines, which are recommended for large datasets. It also supports
brute force search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
Optional Keyword Args for Approximate Search:
engine: "nmslib", "faiss", "lucene"; default: "nmslib"
space_type: "l2", "l1", "cosinesimil", "linf", "innerproduct"; default: "l2"
ef_search: Size of the dynamic list used during k-NN searches. Higher values
lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation. |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | Higher values lead to more accurate graph but slower indexing speed;
default: 512
m: Number of bidirectional links created for each new element. Large impact
on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
"""
opensearch_url = get_from_dict_or_env(
kwargs, "opensearch_url", "OPENSEARCH_URL"
)
keys_list = [
"opensearch_url",
"index_name",
"is_appx_search",
"vector_field",
"text_field",
"engine",
"space_type",
"ef_search",
"ef_construction",
"m",
]
embeddings = embedding.embed_documents(texts)
_validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
dim = len(embeddings[0])
index_name = get_from_dict_or_env( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only retrieve a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ... similarity_search_with_score returns 1, 2, 3, 4, 4, 4... docs.
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(docs,
embedding = HuggingFaceEmbeddings(),
opensearch_url=opensearch_url,
index_name="my_index_name")
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
This only returns 4 documents even though I have len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (which also maxes out at 4 regardless of k) for Chroma.
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | "2023-05-24T20:49:47Z" | python | "2023-05-25T16:51:23Z" | langchain/vectorstores/opensearch_vector_search.py | kwargs, "index_name", "OPENSEARCH_INDEX_NAME", default=uuid.uuid4().hex
)
is_appx_search = _get_kwargs_value(kwargs, "is_appx_search", True)
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
text_field = _get_kwargs_value(kwargs, "text_field", "text")
if is_appx_search:
engine = _get_kwargs_value(kwargs, "engine", "nmslib")
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
ef_search = _get_kwargs_value(kwargs, "ef_search", 512)
ef_construction = _get_kwargs_value(kwargs, "ef_construction", 512)
m = _get_kwargs_value(kwargs, "m", 16)
mapping = _default_text_mapping(
dim, engine, space_type, ef_search, ef_construction, m, vector_field
)
else:
mapping = _default_scripting_text_mapping(dim)
for key in keys_list: kwargs.pop(key, None)
client = _get_opensearch_client(opensearch_url, **kwargs)
_bulk_ingest_embeddings(
client,
index_name,
embeddings,
texts,
metadatas,
vector_field,
text_field,
mapping,
)
return cls(opensearch_url, index_name, embedding, **kwargs) |
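# End-to-end sketch (hedged: URL, index name and texts are placeholders):
# from langchain.embeddings import OpenAIEmbeddings
# store = OpenSearchVectorSearch.from_texts(
#     ["foo", "bar", "baz"],
#     OpenAIEmbeddings(),
#     opensearch_url="http://localhost:9200",
#     index_name="demo-index",
# )
# store.similarity_search("foo", k=2, size=2)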
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,423 | Do Q/A with csv agent and multiple txt files at the same time. | ### Issue you'd like to raise.
I want to do Q/A with the csv agent and multiple txt files at the same time. But I do not want to use the csv loader and txt loader because they did not perform well in cross-file scenarios. For example, the model needs to find answers from both the csv and the txt file and then return the result.
How should I do it? I think I may need to create a custom agent.
### Suggestion:
_No response_ | https://github.com/langchain-ai/langchain/issues/4423 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-05-09T22:33:44Z" | python | "2023-05-25T21:23:11Z" | langchain/agents/agent_toolkits/csv/base.py | """Agent for working with csvs."""
from typing import Any, Optional
from langchain.agents.agent import AgentExecutor
from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent
from langchain.base_language import BaseLanguageModel
def create_csv_agent(
llm: BaseLanguageModel,
path: str,
pandas_kwargs: Optional[dict] = None,
**kwargs: Any
) -> AgentExecutor:
"""Create csv agent by loading to a dataframe and using pandas agent."""
import pandas as pd
_kwargs = pandas_kwargs or {}
df = pd.read_csv(path, **_kwargs)
return create_pandas_dataframe_agent(llm, df, **kwargs) |
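# One possible pattern for issue 4423 (a sketch, not the library API): expose
# the csv agent and a QA chain over the txt files as two Tools and let a
# zero-shot agent decide which to consult, so one question can draw on both.
# txt_qa_chain below is a hypothetical retrieval chain built over the txt files.
# from langchain.agents import AgentType, Tool, initialize_agent
# tools = [
#     Tool(name="csv_qa", func=create_csv_agent(llm, "data.csv").run,
#          description="answers questions about the csv data"),
#     Tool(name="txt_qa", func=txt_qa_chain.run,
#          description="answers questions about the txt documents"),
# ]
# agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)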
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,423 | Do Q/A with csv agent and multiple txt files at the same time. | ### Issue you'd like to raise.
I want to do Q/A with the csv agent and multiple txt files at the same time. But I do not want to use the csv loader and txt loader because they did not perform well in cross-file scenarios. For example, the model needs to find answers from both the csv and the txt file and then return the result.
How should I do it? I think I may need to create a custom agent.
### Suggestion:
_No response_ | https://github.com/langchain-ai/langchain/issues/4423 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-05-09T22:33:44Z" | python | "2023-05-25T21:23:11Z" | langchain/agents/agent_toolkits/pandas/base.py | """Agent for working with pandas objects."""
from typing import Any, Dict, List, Optional
from langchain.agents.agent import AgentExecutor
from langchain.agents.agent_toolkits.pandas.prompt import (
PREFIX,
SUFFIX_NO_DF,
SUFFIX_WITH_DF,
)
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.base_language import BaseLanguageModel
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.tools.python.tool import PythonAstREPLTool
def create_pandas_dataframe_agent(
llm: BaseLanguageModel,
df: Any,
callback_manager: Optional[BaseCallbackManager] = None,
prefix: str = PREFIX,
suffix: Optional[str] = None,
input_variables: Optional[List[str]] = None,
verbose: bool = False,
return_intermediate_steps: bool = False,
max_iterations: Optional[int] = 15, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,423 | Do Q/A with csv agent and multiple txt files at the same time. | ### Issue you'd like to raise.
I want to do Q/A with the csv agent and multiple txt files at the same time. But I do not want to use the csv loader and txt loader because they did not perform well in cross-file scenarios. For example, the model needs to find answers from both the csv and the txt file and then return the result.
How should I do it? I think I may need to create a custom agent.
### Suggestion:
_No response_ | https://github.com/langchain-ai/langchain/issues/4423 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-05-09T22:33:44Z" | python | "2023-05-25T21:23:11Z" | langchain/agents/agent_toolkits/pandas/base.py | max_execution_time: Optional[float] = None,
early_stopping_method: str = "force",
agent_executor_kwargs: Optional[Dict[str, Any]] = None,
include_df_in_prompt: Optional[bool] = True,
**kwargs: Dict[str, Any],
) -> AgentExecutor:
"""Construct a pandas agent from an LLM and dataframe."""
try:
import pandas as pd
except ImportError:
raise ImportError(
"pandas package not found, please install with `pip install pandas`"
)
if not isinstance(df, pd.DataFrame):
raise ValueError(f"Expected pandas object, got {type(df)}")
if include_df_in_prompt is not None and suffix is not None:
raise ValueError("If suffix is specified, include_df_in_prompt should not be.")
if suffix is not None:
suffix_to_use = suffix
if input_variables is None:
input_variables = ["df", "input", "agent_scratchpad"]
else:
if include_df_in_prompt:
suffix_to_use = SUFFIX_WITH_DF
input_variables = ["df", "input", "agent_scratchpad"]
else:
suffix_to_use = SUFFIX_NO_DF
input_variables = ["input", "agent_scratchpad"]
tools = [PythonAstREPLTool(locals={"df": df})]
prompt = ZeroShotAgent.create_prompt( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,423 | Do Q/A with csv agent and multiple txt files at the same time. | ### Issue you'd like to raise.
I want to do Q/A with the csv agent and multiple txt files at the same time. But I do not want to use the csv loader and txt loader because they did not perform well in cross-file scenarios. For example, the model needs to find answers from both the csv and the txt file and then return the result.
How should I do it? I think I may need to create a custom agent.
### Suggestion:
_No response_ | https://github.com/langchain-ai/langchain/issues/4423 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-05-09T22:33:44Z" | python | "2023-05-25T21:23:11Z" | langchain/agents/agent_toolkits/pandas/base.py | tools, prefix=prefix, suffix=suffix_to_use, input_variables=input_variables
)
if "df" in input_variables:
partial_prompt = prompt.partial(df=str(df.head().to_markdown()))
else:
partial_prompt = prompt
llm_chain = LLMChain(
llm=llm,
prompt=partial_prompt,
callback_manager=callback_manager,
)
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(
llm_chain=llm_chain,
allowed_tools=tool_names,
callback_manager=callback_manager,
**kwargs,
)
return AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
callback_manager=callback_manager,
verbose=verbose,
return_intermediate_steps=return_intermediate_steps,
max_iterations=max_iterations,
max_execution_time=max_execution_time,
early_stopping_method=early_stopping_method,
**(agent_executor_kwargs or {}),
) |
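# Minimal usage sketch (df is any pandas DataFrame; values are illustrative):
# from langchain.llms import OpenAI
# agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
# agent.run("how many rows are in df?")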
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,423 | Do Q/A with csv agent and multiple txt files at the same time. | ### Issue you'd like to raise.
I want to do Q/A with the csv agent and multiple txt files at the same time. But I do not want to use the csv loader and txt loader because they did not perform well in cross-file scenarios. For example, the model needs to find answers from both the csv and the txt file and then return the result.
How should I do it? I think I may need to create a custom agent.
### Suggestion:
_No response_ | https://github.com/langchain-ai/langchain/issues/4423 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-05-09T22:33:44Z" | python | "2023-05-25T21:23:11Z" | langchain/agents/agent_toolkits/pandas/prompt.py | PREFIX = """
You are working with a pandas dataframe in Python. The name of the dataframe is `df`.
You should use the tools below to answer the question posed to you:"""
SUFFIX_NO_DF = """
Begin!
Question: {input}
{agent_scratchpad}"""
SUFFIX_WITH_DF = """
This is the result of `print(df.head())`:
{df}
Begin!
Question: {input}
{agent_scratchpad}""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,423 | Do Q/A with csv agent and multiple txt files at the same time. | ### Issue you'd like to raise.
I want to do Q/A with the csv agent and multiple txt files at the same time. But I do not want to use the csv loader and txt loader because they did not perform well in cross-file scenarios. For example, the model needs to find answers from both the csv and the txt file and then return the result.
How should I do it? I think I may need to create a custom agent.
### Suggestion:
_No response_ | https://github.com/langchain-ai/langchain/issues/4423 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-05-09T22:33:44Z" | python | "2023-05-25T21:23:11Z" | tests/integration_tests/agent/test_pandas_agent.py | import re
import numpy as np
import pytest
from pandas import DataFrame
from langchain.agents import create_pandas_dataframe_agent
from langchain.agents.agent import AgentExecutor
from langchain.llms import OpenAI
@pytest.fixture(scope="module")
def df() -> DataFrame:
random_data = np.random.rand(4, 4)
df = DataFrame(random_data, columns=["name", "age", "food", "sport"])
return df
def test_pandas_agent_creation(df: DataFrame) -> None:
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df)
assert isinstance(agent, AgentExecutor)
def test_data_reading(df: DataFrame) -> None:
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df)
assert isinstance(agent, AgentExecutor)
response = agent.run("how many rows in df? Give me a number.")
result = re.search(rf".*({df.shape[0]}).*", response)
assert result is not None
assert result.group(1) is not None |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,958 | How to work with multiple csv files in the same agent session? Is there any option to call the agent with multiple csv files, so that the model can interact with multiple files and answer us? | null | https://github.com/langchain-ai/langchain/issues/1958 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-03-24T07:46:39Z" | python | "2023-05-25T21:23:11Z" | langchain/agents/agent_toolkits/csv/base.py | """Agent for working with csvs."""
from typing import Any, Optional
from langchain.agents.agent import AgentExecutor
from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent
from langchain.base_language import BaseLanguageModel
def create_csv_agent(
llm: BaseLanguageModel,
path: str,
pandas_kwargs: Optional[dict] = None,
**kwargs: Any
) -> AgentExecutor:
"""Create csv agent by loading to a dataframe and using pandas agent."""
import pandas as pd
_kwargs = pandas_kwargs or {}
df = pd.read_csv(path, **_kwargs)
return create_pandas_dataframe_agent(llm, df, **kwargs) |
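# For issue 1958 (several csv files in one session) a hedged workaround with the
# code as written is to merge the files into a single dataframe before building
# the agent; this assumes the files share a compatible column layout:
# import pandas as pd
# df = pd.concat([pd.read_csv(p) for p in ["a.csv", "b.csv"]], ignore_index=True)
# agent = create_pandas_dataframe_agent(llm, df)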
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,958 | How to work with multiple csv files in the same agent session? Is there any option to call the agent with multiple csv files, so that the model can interact with multiple files and answer us? | null | https://github.com/langchain-ai/langchain/issues/1958 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-03-24T07:46:39Z" | python | "2023-05-25T21:23:11Z" | langchain/agents/agent_toolkits/pandas/base.py | """Agent for working with pandas objects."""
from typing import Any, Dict, List, Optional
from langchain.agents.agent import AgentExecutor
from langchain.agents.agent_toolkits.pandas.prompt import (
PREFIX,
SUFFIX_NO_DF,
SUFFIX_WITH_DF,
)
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.base_language import BaseLanguageModel
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.tools.python.tool import PythonAstREPLTool
def create_pandas_dataframe_agent(
llm: BaseLanguageModel,
df: Any,
callback_manager: Optional[BaseCallbackManager] = None,
prefix: str = PREFIX,
suffix: Optional[str] = None,
input_variables: Optional[List[str]] = None,
verbose: bool = False,
return_intermediate_steps: bool = False,
max_iterations: Optional[int] = 15, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,958 | How to work with multiple csv files in the same agent session? Is there any option to call the agent with multiple csv files, so that the model can interact with multiple files and answer us? | null | https://github.com/langchain-ai/langchain/issues/1958 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-03-24T07:46:39Z" | python | "2023-05-25T21:23:11Z" | langchain/agents/agent_toolkits/pandas/base.py | max_execution_time: Optional[float] = None,
early_stopping_method: str = "force",
agent_executor_kwargs: Optional[Dict[str, Any]] = None,
include_df_in_prompt: Optional[bool] = True,
**kwargs: Dict[str, Any],
) -> AgentExecutor:
"""Construct a pandas agent from an LLM and dataframe."""
try:
import pandas as pd
except ImportError:
raise ImportError(
"pandas package not found, please install with `pip install pandas`"
)
if not isinstance(df, pd.DataFrame):
raise ValueError(f"Expected pandas object, got {type(df)}")
if include_df_in_prompt is not None and suffix is not None:
raise ValueError("If suffix is specified, include_df_in_prompt should not be.")
if suffix is not None:
suffix_to_use = suffix
if input_variables is None:
input_variables = ["df", "input", "agent_scratchpad"]
else:
if include_df_in_prompt:
suffix_to_use = SUFFIX_WITH_DF
input_variables = ["df", "input", "agent_scratchpad"]
else:
suffix_to_use = SUFFIX_NO_DF
input_variables = ["input", "agent_scratchpad"]
tools = [PythonAstREPLTool(locals={"df": df})]
prompt = ZeroShotAgent.create_prompt( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,958 | How to work with multiple csv files in the same agent session? Is there any option to call the agent with multiple csv files, so that the model can interact with multiple files and answer us? | null | https://github.com/langchain-ai/langchain/issues/1958 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-03-24T07:46:39Z" | python | "2023-05-25T21:23:11Z" | langchain/agents/agent_toolkits/pandas/base.py | tools, prefix=prefix, suffix=suffix_to_use, input_variables=input_variables
)
if "df" in input_variables:
partial_prompt = prompt.partial(df=str(df.head().to_markdown()))
else:
partial_prompt = prompt
llm_chain = LLMChain(
llm=llm,
prompt=partial_prompt,
callback_manager=callback_manager,
)
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(
llm_chain=llm_chain,
allowed_tools=tool_names,
callback_manager=callback_manager,
**kwargs,
)
return AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
callback_manager=callback_manager,
verbose=verbose,
return_intermediate_steps=return_intermediate_steps,
max_iterations=max_iterations,
max_execution_time=max_execution_time,
early_stopping_method=early_stopping_method,
**(agent_executor_kwargs or {}),
) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,958 | How to work with multiple csv files in the same agent session? Is there any option to call the agent with multiple csv files, so that the model can interact with multiple files and answer us? | null | https://github.com/langchain-ai/langchain/issues/1958 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-03-24T07:46:39Z" | python | "2023-05-25T21:23:11Z" | langchain/agents/agent_toolkits/pandas/prompt.py | PREFIX = """
You are working with a pandas dataframe in Python. The name of the dataframe is `df`.
You should use the tools below to answer the question posed to you:"""
SUFFIX_NO_DF = """
Begin!
Question: {input}
{agent_scratchpad}"""
SUFFIX_WITH_DF = """
This is the result of `print(df.head())`:
{df}
Begin!
Question: {input}
{agent_scratchpad}""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,958 | How to work with multiple csv files in the same agent session? Is there any option to call the agent with multiple csv files, so that the model can interact with multiple files and answer us? | null | https://github.com/langchain-ai/langchain/issues/1958 | https://github.com/langchain-ai/langchain/pull/5009 | 3223a97dc61366f7cbda815242c9354bff25ae9d | 7652d2abb01208fd51115e34e18b066824e7d921 | "2023-03-24T07:46:39Z" | python | "2023-05-25T21:23:11Z" | tests/integration_tests/agent/test_pandas_agent.py | import re
import numpy as np
import pytest
from pandas import DataFrame
from langchain.agents import create_pandas_dataframe_agent
from langchain.agents.agent import AgentExecutor
from langchain.llms import OpenAI
@pytest.fixture(scope="module")
def df() -> DataFrame:
random_data = np.random.rand(4, 4)
df = DataFrame(random_data, columns=["name", "age", "food", "sport"])
return df
def test_pandas_agent_creation(df: DataFrame) -> None:
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df)
assert isinstance(agent, AgentExecutor)
def test_data_reading(df: DataFrame) -> None:
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df)
assert isinstance(agent, AgentExecutor)
response = agent.run("how many rows in df? Give me a number.")
result = re.search(rf".*({df.shape[0]}).*", response)
assert result is not None
assert result.group(1) is not None |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,279 | Issue Passing in Credential to VertexAI model | ### System Info
langchain==0.0.180
google-cloud-aiplatform==1.25.0
Have the Google Cloud CLI installed and logged in using `gcloud auth login`
Running locally and online in Google Colab
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/19QGMptiCn49fu4i5ZQ0ygfR74ktQFQlb?usp=sharing
The unexpected behavior `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().` seems to appear only if you pass any credential, valid or invalid, to the vertexai wrapper from langchain.
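A workaround that appears to avoid the ForwardRef error (hedged: it relies on pydantic's `update_forward_refs`, and the key path, project, and location below are placeholders) is to bind the `Credentials` type before instantiating the wrapper:
```
from google.auth.credentials import Credentials
from google.oauth2 import service_account
from langchain.llms import VertexAI

VertexAI.update_forward_refs(Credentials=Credentials)
credentials = service_account.Credentials.from_service_account_file("key.json")
llm = VertexAI(project="my-project", location="us-central1", credentials=credentials)
```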
### The error
This code should not throw `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().`. It should either throw no errors, if the credentials, project_id, and location are correct, or, if there is an issue with one of the params, throw a specific error from the `vertexai.init` call below; but execution doesn't seem to reach that call if a credential is passed in.
```
vertexai.init(project=project_id,location=location,credentials=credentials,)
``` | https://github.com/langchain-ai/langchain/issues/5279 | https://github.com/langchain-ai/langchain/pull/5297 | a669abf16b3ac3dcf10629936d3c58411469bb3c | aa3c7b32715ee22b29aebae763f6183c4609be22 | "2023-05-26T04:34:54Z" | python | "2023-05-26T15:31:02Z" | langchain/llms/vertexai.py | """Wrapper around Google VertexAI models."""
from typing import TYPE_CHECKING, Any, Dict, List, Optional
from pydantic import BaseModel, root_validator
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens
from langchain.utilities.vertexai import (
init_vertexai,
raise_vertex_import_error,
)
if TYPE_CHECKING:
from google.auth.credentials import Credentials
from vertexai.language_models._language_models import _LanguageModel
class _VertexAICommon(BaseModel): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,279 | Issue Passing in Credential to VertexAI model | ### System Info
langchain==0.0.180
google-cloud-aiplatform==1.25.0
Have the Google Cloud CLI installed and logged in using `gcloud auth login`
Running locally and online in Google Colab
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/19QGMptiCn49fu4i5ZQ0ygfR74ktQFQlb?usp=sharing
The unexpected behavior `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().` seems to appear only if you pass any credential, valid or invalid, to the vertexai wrapper from langchain.
### The error
This code should not throw `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().`. It should either throw no errors, if the credentials, project_id, and location are correct, or, if there is an issue with one of the params, throw a specific error from the `vertexai.init` call below; but execution doesn't seem to reach that call if a credential is passed in.
```
vertexai.init(project=project_id,location=location,credentials=credentials,)
``` | https://github.com/langchain-ai/langchain/issues/5279 | https://github.com/langchain-ai/langchain/pull/5297 | a669abf16b3ac3dcf10629936d3c58411469bb3c | aa3c7b32715ee22b29aebae763f6183c4609be22 | "2023-05-26T04:34:54Z" | python | "2023-05-26T15:31:02Z" | langchain/llms/vertexai.py | client: "_LanguageModel" = None
model_name: str
"Model name to use."
temperature: float = 0.0
"Sampling temperature, it controls the degree of randomness in token selection."
max_output_tokens: int = 128
"Token limit determines the maximum amount of text output from one prompt."
top_p: float = 0.95
"Tokens are selected from most probable to least until the sum of their "
"probabilities equals the top-p value."
top_k: int = 40
"How the model selects tokens for output, the next token is selected from "
"among the top-k most probable tokens."
project: Optional[str] = None
"The default GCP project to use when making Vertex API calls."
location: str = "us-central1"
"The default location to use when making API calls."
credentials: Optional["Credentials"] = None
"The default custom credentials to use when making API calls. If not provided "
"credentials will be ascertained from the environment." ""
@property
def _default_params(self) -> Dict[str, Any]: |
        base_params = {
"temperature": self.temperature,
"max_output_tokens": self.max_output_tokens,
"top_k": self.top_p,
"top_p": self.top_k,
}
return {**base_params}
def _predict(self, prompt: str, stop: Optional[List[str]]) -> str:
res = self.client.predict(prompt, **self._default_params)
return self._enforce_stop_words(res.text, stop)
def _enforce_stop_words(self, text: str, stop: Optional[List[str]]) -> str:
if stop:
return enforce_stop_tokens(text, stop)
return text
@property
def _llm_type(self) -> str:
return "vertexai"
@classmethod
def _try_init_vertexai(cls, values: Dict) -> None:
allowed_params = ["project", "location", "credentials"]
        params = {k: v for k, v in values.items() if k in allowed_params}  # filter on the key, not the value
init_vertexai(**params)
return None
class VertexAI(_VertexAICommon, LLM):
"""Wrapper around Google Vertex AI large language models."""
model_name: str = "text-bison"
tuned_model_name: Optional[str] = None
"The name of a tuned model, if it's provided, model_name is ignored."
@root_validator()
def validate_environment(cls, values: Dict) -> Dict: |
        """Validate that the python package exists in environment."""
cls._try_init_vertexai(values)
try:
from vertexai.preview.language_models import TextGenerationModel
except ImportError:
raise_vertex_import_error()
tuned_model_name = values.get("tuned_model_name")
if tuned_model_name:
values["client"] = TextGenerationModel.get_tuned_model(tuned_model_name)
else:
values["client"] = TextGenerationModel.from_pretrained(values["model_name"])
return values
def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
"""Call Vertex model to get predictions based on the prompt.
Args:
prompt: The prompt to pass into the model.
stop: A list of stop words (optional).
run_manager: A Callbackmanager for LLM run, optional.
Returns:
The string generated by the model.
"""
return self._predict(prompt, stop) |
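
A minimal sketch of the usual workaround for the issue above, under the assumption that a service-account JSON file is available; the file path and project name below are placeholders, not values from the report. Calling pydantic's `update_forward_refs()` resolves the `Credentials` forward reference before the model is constructed with explicit credentials:

```python
from google.oauth2 import service_account
from langchain.llms import VertexAI

# Placeholder path and project -- substitute your own.
credentials = service_account.Credentials.from_service_account_file("sa-key.json")

# In the pre-fix snapshot, `Credentials` is still an unresolved ForwardRef on
# the pydantic model, so passing a credentials object raises the reported
# "field 'credentials' not yet prepared..." error. Resolving the refs first
# sidesteps it:
VertexAI.update_forward_refs()

llm = VertexAI(project="my-project", location="us-central1", credentials=credentials)
print(llm("Say hello"))
```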
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5304 | CohereAPIError thrown when base retriever returns empty documents in ContextualCompressionRetriever using Cohere Rank

### System Info
- 5.19.0-42-generic # 43~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Apr 21 16:51:08 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
- langchain==0.0.180
- Python 3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Set up a retriever using any type of retriever (for example, I used Pinecone).
2. Pass it into the ContextualCompressionRetriever.
3. Let the base retriever return no documents.
4. The compression retriever throws: **cohere.error.CohereAPIError: invalid request: list of documents must not be empty**
> File "/workspaces/example/.venv/lib/python3.10/site-packages/langchain/retrievers/contextual_compression.py", line 37, in get_relevant_documents
> compressed_docs = self.base_compressor.compress_documents(docs, query)
> File "/workspaces/example/.venv/lib/python3.10/site-packages/langchain/retrievers/document_compressors/cohere_rerank.py", line 57, in compress_documents
> results = self.client.rerank(
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 633, in rerank
> reranking = Reranking(self._request(cohere.RERANK_URL, json=json_body))
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 692, in _request
> self._check_response(json_response, response.headers, response.status_code)
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 642, in _check_response
> raise CohereAPIError(
> **cohere.error.CohereAPIError: invalid request: list of documents must not be empty**
The code looks like this:
```python
retriever = vectorstore.as_retriever()
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever
)
return compression_retriever
```
### Expected behavior
**No error is thrown** and an empty list is returned.

issue_url: https://github.com/langchain-ai/langchain/issues/5304 | pull_url: https://github.com/langchain-ai/langchain/pull/5306 | before_fix_sha: 1366d070fc656813c0c33cb5733290ade0fddf7c | after_fix_sha: 99a1e3f3a309852da989af080ba47288dcb9a348 | report_datetime: "2023-05-26T16:10:47Z" | language: python | commit_datetime: "2023-05-28T20:19:34Z"

updated_file: langchain/retrievers/document_compressors/cohere_rerank.py (pre-fix snapshot)

from __future__ import annotations
from typing import TYPE_CHECKING, Dict, Sequence
from pydantic import Extra, root_validator
from langchain.retrievers.document_compressors.base import BaseDocumentCompressor
from langchain.schema import Document
from langchain.utils import get_from_dict_or_env
if TYPE_CHECKING:
from cohere import Client
else:
try:
from cohere import Client
except ImportError:
pass
class CohereRerank(BaseDocumentCompressor): |
    client: Client
top_n: int = 3
model: str = "rerank-english-v2.0"
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
cohere_api_key = get_from_dict_or_env(
values, "cohere_api_key", "COHERE_API_KEY"
)
try:
import cohere
values["client"] = cohere.Client(cohere_api_key)
except ImportError:
raise ImportError(
"Could not import cohere python package. "
"Please install it with `pip install cohere`."
)
return values
def compress_documents( |
        self, documents: Sequence[Document], query: str
) -> Sequence[Document]:
doc_list = list(documents)
_docs = [d.page_content for d in doc_list]
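        # NOTE: there is no guard here for an empty `documents` sequence; an
        # empty `_docs` list reaches client.rerank() below, which Cohere rejects
        # with "invalid request: list of documents must not be empty".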
results = self.client.rerank(
model=self.model, query=query, documents=_docs, top_n=self.top_n
)
final_results = []
for r in results:
doc = doc_list[r.index]
doc.metadata["relevance_score"] = r.relevance_score
final_results.append(doc)
return final_results
async def acompress_documents(
self, documents: Sequence[Document], query: str
) -> Sequence[Document]:
raise NotImplementedError |
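
A minimal sketch of the guard the reporter expects, wrapping the pre-fix compressor from the outside rather than patching the class; the function name here is illustrative, not part of the library:

```python
from typing import Sequence

from langchain.retrievers.document_compressors.cohere_rerank import CohereRerank
from langchain.schema import Document


def rerank_with_guard(
    compressor: CohereRerank, documents: Sequence[Document], query: str
) -> Sequence[Document]:
    # Skip the API call entirely when the base retriever found nothing,
    # instead of letting Cohere reject an empty document list.
    if len(documents) == 0:
        return []
    return compressor.compress_documents(documents, query)
```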
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5361 | Validation Error importing OpenAPI planner when OpenAI credentials not in environment

### System Info
Name: langchain, Version: 0.0.180
Name: openai, Version: 0.27.7
macOS Mojave 10.14.6
### Who can help?
@vowelparrot
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps:
1. Do _not_ load the OpenAI key into the environment; the intent is to pass it as a parameter when instantiating the LLM:
```
from dotenv import dotenv_values
openai_api_key = dotenv_values('.env')['OPENAI_API_KEY']
```
2. Load the planner:
```
from langchain.llms.openai import OpenAI
from langchain.agents.agent_toolkits.openapi import planner
```
### Expected behavior
A validation error should not be raised while importing the module.
We should be able to pass the OpenAI API key as an argument.
That is, the following should work:
```
from langchain.llms.openai import OpenAI
from langchain.agents.agent_toolkits.openapi import planner
llm = OpenAI(model_name="gpt-4", temperature=0.0, openai_api_key=openai_api_key)
```
issue_url: https://github.com/langchain-ai/langchain/issues/5361 | pull_url: https://github.com/langchain-ai/langchain/pull/5380 | before_fix_sha: 6df90ad9fd1ee6d64e112d8d58f9524ca11b0757 | after_fix_sha: 14099f1b93401a68f531fc1a55c50c5872e720fa | report_datetime: "2023-05-28T08:18:12Z" | language: python | commit_datetime: "2023-05-29T13:22:35Z"

updated_file: langchain/agents/agent_toolkits/openapi/planner.py (pre-fix snapshot)

"""Agent that interacts with OpenAPI APIs via a hierarchical planning approach."""
import json
import re
from functools import partial
from typing import Any, Callable, Dict, List, Optional
import yaml
from pydantic import Field
from langchain.agents.agent import AgentExecutor
from langchain.agents.agent_toolkits.openapi.planner_prompt import (
API_CONTROLLER_PROMPT,
API_CONTROLLER_TOOL_DESCRIPTION,
API_CONTROLLER_TOOL_NAME,
API_ORCHESTRATOR_PROMPT,
API_PLANNER_PROMPT,
API_PLANNER_TOOL_DESCRIPTION,
API_PLANNER_TOOL_NAME,
PARSING_DELETE_PROMPT,
PARSING_GET_PROMPT,
PARSING_PATCH_PROMPT,
PARSING_POST_PROMPT,
REQUESTS_DELETE_TOOL_DESCRIPTION,
REQUESTS_GET_TOOL_DESCRIPTION, |
    REQUESTS_PATCH_TOOL_DESCRIPTION,
REQUESTS_POST_TOOL_DESCRIPTION,
)
from langchain.agents.agent_toolkits.openapi.spec import ReducedOpenAPISpec
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.tools import Tool
from langchain.base_language import BaseLanguageModel
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.llms.openai import OpenAI
from langchain.memory import ReadOnlySharedMemory
from langchain.prompts import PromptTemplate
from langchain.prompts.base import BasePromptTemplate
from langchain.requests import RequestsWrapper
from langchain.tools.base import BaseTool
from langchain.tools.requests.tool import BaseRequestsTool
#
#
MAX_RESPONSE_LENGTH = 5000
def _get_default_llm_chain(prompt: BasePromptTemplate) -> LLMChain:
return LLMChain(
llm=OpenAI(),
prompt=prompt,
)
def _get_default_llm_chain_factory(
prompt: BasePromptTemplate,
) -> Callable[[], LLMChain]:
"""Returns a default LLMChain factory."""
return partial(_get_default_llm_chain, prompt)
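# NOTE: OpenAI() in the helper above is constructed only when the factory is
# *called* (i.e. when a tool instance is created), not at import time.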
class RequestsGetToolWithParsing(BaseRequestsTool, BaseTool): |
    name = "requests_get"
description = REQUESTS_GET_TOOL_DESCRIPTION
response_length: Optional[int] = MAX_RESPONSE_LENGTH
llm_chain: LLMChain = Field(
default_factory=_get_default_llm_chain_factory(PARSING_GET_PROMPT)
)
def _run(self, text: str) -> str:
try:
data = json.loads(text)
except json.JSONDecodeError as e:
raise e
data_params = data.get("params")
response = self.requests_wrapper.get(data["url"], params=data_params)
response = response[: self.response_length]
return self.llm_chain.predict(
response=response, instructions=data["output_instructions"]
).strip()
async def _arun(self, text: str) -> str:
raise NotImplementedError()
class RequestsPostToolWithParsing(BaseRequestsTool, BaseTool): |
    name = "requests_post"
description = REQUESTS_POST_TOOL_DESCRIPTION
response_length: Optional[int] = MAX_RESPONSE_LENGTH
llm_chain: LLMChain = Field(
default_factory=_get_default_llm_chain_factory(PARSING_POST_PROMPT)
)
def _run(self, text: str) -> str:
try:
data = json.loads(text)
except json.JSONDecodeError as e:
raise e
response = self.requests_wrapper.post(data["url"], data["data"])
response = response[: self.response_length]
return self.llm_chain.predict(
response=response, instructions=data["output_instructions"]
).strip()
async def _arun(self, text: str) -> str:
raise NotImplementedError()
class RequestsPatchToolWithParsing(BaseRequestsTool, BaseTool):
name = "requests_patch"
description = REQUESTS_PATCH_TOOL_DESCRIPTION
response_length: Optional[int] = MAX_RESPONSE_LENGTH
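    # NOTE: unlike the GET/POST tools above, the class-level default below
    # constructs OpenAI() eagerly at import time; this is the trigger for the
    # ValidationError reported in the issue when no OpenAI credentials are in
    # the environment.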
llm_chain = LLMChain(
llm=OpenAI(),
prompt=PARSING_PATCH_PROMPT,
)
def _run(self, text: str) -> str: |
        try:
data = json.loads(text)
except json.JSONDecodeError as e:
raise e
response = self.requests_wrapper.patch(data["url"], data["data"])
response = response[: self.response_length]
return self.llm_chain.predict(
response=response, instructions=data["output_instructions"]
).strip()
async def _arun(self, text: str) -> str:
raise NotImplementedError()
class RequestsDeleteToolWithParsing(BaseRequestsTool, BaseTool):
name = "requests_delete"
description = REQUESTS_DELETE_TOOL_DESCRIPTION
response_length: Optional[int] = MAX_RESPONSE_LENGTH
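    # NOTE: same eager import-time OpenAI() construction as the PATCH tool above.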
llm_chain = LLMChain(
llm=OpenAI(),
prompt=PARSING_DELETE_PROMPT,
)
def _run(self, text: str) -> str:
try:
data = json.loads(text)
except json.JSONDecodeError as e:
raise e
response = self.requests_wrapper.delete(data["url"])
response = response[: self.response_length]
return self.llm_chain.predict(
response=response, instructions=data["output_instructions"]
).strip()
async def _arun(self, text: str) -> str: |
        raise NotImplementedError()
#
#
def _create_api_planner_tool(
api_spec: ReducedOpenAPISpec, llm: BaseLanguageModel
) -> Tool:
endpoint_descriptions = [
f"{name} {description}" for name, description, _ in api_spec.endpoints
]
prompt = PromptTemplate(
template=API_PLANNER_PROMPT,
input_variables=["query"],
partial_variables={"endpoints": "- " + "- ".join(endpoint_descriptions)},
)
chain = LLMChain(llm=llm, prompt=prompt)
tool = Tool(
name=API_PLANNER_TOOL_NAME,
description=API_PLANNER_TOOL_DESCRIPTION,
func=chain.run,
)
return tool
def _create_api_controller_agent( |
    api_url: str,
api_docs: str,
requests_wrapper: RequestsWrapper,
llm: BaseLanguageModel,
) -> AgentExecutor:
get_llm_chain = LLMChain(llm=llm, prompt=PARSING_GET_PROMPT)
post_llm_chain = LLMChain(llm=llm, prompt=PARSING_POST_PROMPT)
tools: List[BaseTool] = [
RequestsGetToolWithParsing(
requests_wrapper=requests_wrapper, llm_chain=get_llm_chain
),
RequestsPostToolWithParsing(
requests_wrapper=requests_wrapper, llm_chain=post_llm_chain
), |
    ]
prompt = PromptTemplate(
template=API_CONTROLLER_PROMPT,
input_variables=["input", "agent_scratchpad"],
partial_variables={
"api_url": api_url,
"api_docs": api_docs,
"tool_names": ", ".join([tool.name for tool in tools]),
"tool_descriptions": "\n".join(
[f"{tool.name}: {tool.description}" for tool in tools]
),
},
)
agent = ZeroShotAgent(
llm_chain=LLMChain(llm=llm, prompt=prompt),
allowed_tools=[tool.name for tool in tools],
)
return AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
def _create_api_controller_tool(
api_spec: ReducedOpenAPISpec,
requests_wrapper: RequestsWrapper,
llm: BaseLanguageModel,
) -> Tool:
"""Expose controller as a tool.
The tool is invoked with a plan from the planner, and dynamically
creates a controller agent with relevant documentation only to
constrain the context.
"""
base_url = api_spec.servers[0]["url"]
def _create_and_run_api_controller_agent(plan_str: str) -> str: |
        pattern = r"\b(GET|POST|PATCH|DELETE)\s+(/\S+)*"
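        # The pattern above extracts "<METHOD> /route" fragments from the plan,
        # so only the docs for endpoints the plan actually references are
        # loaded into the controller agent's context below.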
matches = re.findall(pattern, plan_str)
endpoint_names = [
"{method} {route}".format(method=method, route=route.split("?")[0])
for method, route in matches
]
endpoint_docs_by_name = {name: docs for name, _, docs in api_spec.endpoints}
docs_str = ""
for endpoint_name in endpoint_names:
docs = endpoint_docs_by_name.get(endpoint_name)
if not docs:
raise ValueError(f"{endpoint_name} endpoint does not exist.")
docs_str += f"== Docs for {endpoint_name} == \n{yaml.dump(docs)}\n"
agent = _create_api_controller_agent(base_url, docs_str, requests_wrapper, llm)
return agent.run(plan_str)
return Tool(
name=API_CONTROLLER_TOOL_NAME,
func=_create_and_run_api_controller_agent,
description=API_CONTROLLER_TOOL_DESCRIPTION,
)
def create_openapi_agent( |
    api_spec: ReducedOpenAPISpec,
requests_wrapper: RequestsWrapper,
llm: BaseLanguageModel,
shared_memory: Optional[ReadOnlySharedMemory] = None,
callback_manager: Optional[BaseCallbackManager] = None,
verbose: bool = True,
agent_executor_kwargs: Optional[Dict[str, Any]] = None,
**kwargs: Dict[str, Any],
) -> AgentExecutor:
"""Instantiate API planner and controller for a given spec.
Inject credentials via requests_wrapper.
We use a top-level "orchestrator" agent to invoke the planner and controller, |
    rather than a top-level planner
that invokes a controller with its plan. This is to keep the planner simple.
"""
tools = [
_create_api_planner_tool(api_spec, llm),
_create_api_controller_tool(api_spec, requests_wrapper, llm),
]
prompt = PromptTemplate(
template=API_ORCHESTRATOR_PROMPT,
input_variables=["input", "agent_scratchpad"],
partial_variables={
"tool_names": ", ".join([tool.name for tool in tools]),
"tool_descriptions": "\n".join(
[f"{tool.name}: {tool.description}" for tool in tools]
),
},
)
agent = ZeroShotAgent(
llm_chain=LLMChain(llm=llm, prompt=prompt, memory=shared_memory),
allowed_tools=[tool.name for tool in tools],
**kwargs,
)
return AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
callback_manager=callback_manager,
verbose=verbose,
**(agent_executor_kwargs or {}),
) |
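
For the import-time failure reported above, here is a minimal sketch of the deferred-construction pattern that the GET and POST tools in this file already use. Pydantic invokes the factory only when an instance is created, so merely importing the module no longer requires OpenAI credentials. `DemoTool` and `_make_default_chain` are illustrative stand-ins, not the actual upstream fix:

```python
from pydantic import BaseModel, Field

from langchain.agents.agent_toolkits.openapi.planner_prompt import PARSING_PATCH_PROMPT
from langchain.chains.llm import LLMChain
from langchain.llms.openai import OpenAI


def _make_default_chain() -> LLMChain:
    # Runs only when a DemoTool instance is created, so importing this module
    # does not need OpenAI credentials in the environment.
    return LLMChain(llm=OpenAI(), prompt=PARSING_PATCH_PROMPT)


class DemoTool(BaseModel):
    """Illustrative stand-in for the PATCH/DELETE tools above."""

    class Config:
        arbitrary_types_allowed = True

    llm_chain: LLMChain = Field(default_factory=_make_default_chain)
```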
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5224 | PALChain loading fails

### System Info
langchain==0.0.176
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import PALChain
from langchain.chains.loading import load_chain  # needed for load_chain below; missing in the original snippet
from langchain import OpenAI
llm = OpenAI(temperature=0, max_tokens=512)
pal_chain = PALChain.from_math_prompt(llm, verbose=True)
question = "Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"
pal_chain.save("/Users/liang.zhang/pal_chain.yaml")
loaded_chain = load_chain("/Users/liang.zhang/pal_chain.yaml")
```
Error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [17], in <cell line: 1>()
----> 1 loaded_chain = load_chain("/Users/liang.zhang/pal_chain.yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:449, in load_chain(path, **kwargs)
447 return hub_result
448 else:
--> 449 return _load_chain_from_file(path, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:476, in _load_chain_from_file(file, **kwargs)
473 config["memory"] = kwargs.pop("memory")
475 # Load the chain from the config now.
--> 476 return load_chain_from_config(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:439, in load_chain_from_config(config, **kwargs)
436 raise ValueError(f"Loading {config_type} chain not supported")
438 chain_loader = type_to_loader_dict[config_type]
--> 439 return chain_loader(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:234, in _load_pal_chain(config, **kwargs)
232 if "llm" in config:
233 llm_config = config.pop("llm")
--> 234 llm = load_llm_from_config(llm_config)
235 elif "llm_path" in config:
236 llm = load_llm(config.pop("llm_path"))
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/llms/loading.py:14, in load_llm_from_config(config)
12 def load_llm_from_config(config: dict) -> BaseLLM:
13 """Load LLM from Config Dict."""
---> 14 if "_type" not in config:
15 raise ValueError("Must specify an LLM Type in config")
16 config_type = config.pop("_type")
TypeError: argument of type 'NoneType' is not iterable
```
### Expected behavior
No errors should occur.

issue_url: https://github.com/langchain-ai/langchain/issues/5224 | pull_url: https://github.com/langchain-ai/langchain/pull/5343 | before_fix_sha: f6615cac41453a9bb3a061a3ffb29327f5e04fb2 | after_fix_sha: 642ae83d86b28b37605c9a20ca25c667ed461595 | report_datetime: "2023-05-25T00:58:09Z" | language: python | commit_datetime: "2023-05-29T13:44:47Z"

updated_file: langchain/chains/loading.py (pre-fix snapshot)

"""Functionality for loading chains."""
import json
from pathlib import Path
from typing import Any, Union
import yaml
from langchain.chains.api.base import APIChain
from langchain.chains.base import Chain
from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
from langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain
from langchain.chains.combine_documents.refine import RefineDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.hyde.base import HypotheticalDocumentEmbedder
from langchain.chains.llm import LLMChain
from langchain.chains.llm_bash.base import LLMBashChain
from langchain.chains.llm_checker.base import LLMCheckerChain
from langchain.chains.llm_math.base import LLMMathChain
from langchain.chains.llm_requests import LLMRequestsChain
from langchain.chains.pal.base import PALChain
from langchain.chains.qa_with_sources.base import QAWithSourcesChain
from langchain.chains.qa_with_sources.vector_db import VectorDBQAWithSourcesChain
from langchain.chains.retrieval_qa.base import VectorDBQA
from langchain.chains.sql_database.base import SQLDatabaseChain
from langchain.llms.loading import load_llm, load_llm_from_config
from langchain.prompts.loading import load_prompt, load_prompt_from_config
from langchain.utilities.loading import try_load_from_hub
URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/chains/"
def _load_llm_chain(config: dict, **kwargs: Any) -> LLMChain: |
    """Load LLM chain from config dict."""
if "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm` or `llm_path` must be present.")
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
elif "prompt_path" in config:
prompt = load_prompt(config.pop("prompt_path"))
else:
raise ValueError("One of `prompt` or `prompt_path` must be present.")
return LLMChain(llm=llm, prompt=prompt, **config)
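# NOTE (editor's illustration, not part of the snapshot): per issue 5224 above,
# the saved PALChain config carries an `llm` key whose value is None, so
# `_load_pal_chain` hands None to load_llm_from_config and the
# `"_type" not in config` check fails with "argument of type 'NoneType' is not
# iterable". A defensive loader would validate the config first, roughly:
#
#     if "llm" in config and config["llm"] is not None:
#         llm = load_llm_from_config(config.pop("llm"))
#     elif "llm_path" in config:
#         llm = load_llm(config.pop("llm_path"))
#     else:
#         raise ValueError("One of `llm` or `llm_path` must be present.")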
def _load_hyde_chain(config: dict, **kwargs: Any) -> HypotheticalDocumentEmbedder: |
    """Load hypothetical document embedder chain from config dict."""
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
if "embeddings" in kwargs:
embeddings = kwargs.pop("embeddings")
else:
raise ValueError("`embeddings` must be present.")
return HypotheticalDocumentEmbedder(
llm_chain=llm_chain, base_embeddings=embeddings, **config
)
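`_load_hyde_chain` above pulls `embeddings` out of `kwargs`, so a live embeddings object must be supplied when the chain is loaded; the loader raises if it is missing. A hedged usage sketch, where the file name and embedding class are assumptions:

```python
from langchain.chains import load_chain
from langchain.embeddings import OpenAIEmbeddings

# Runtime-only dependencies are passed as keyword arguments to load_chain.
hyde = load_chain("hyde_chain.yaml", embeddings=OpenAIEmbeddings())
```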
def _load_stuff_documents_chain(config: dict, **kwargs: Any) -> StuffDocumentsChain:
    if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_config` must be present.")
if not isinstance(llm_chain, LLMChain):
raise ValueError(f"Expected LLMChain, got {llm_chain}")
if "document_prompt" in config:
prompt_config = config.pop("document_prompt")
document_prompt = load_prompt_from_config(prompt_config)
elif "document_prompt_path" in config:
document_prompt = load_prompt(config.pop("document_prompt_path"))
else:
raise ValueError(
"One of `document_prompt` or `document_prompt_path` must be present."
)
return StuffDocumentsChain(
llm_chain=llm_chain, document_prompt=document_prompt, **config
)
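As the loaders above show, every nested component can appear either inline (a key holding a config dict) or by reference (a `*_path` key holding a file path). A hedged sketch of the two shapes as Python dicts; the values are placeholders:

```python
# Inline: the sub-chain config is embedded and handled by load_chain_from_config.
inline_form = {
    "_type": "stuff_documents_chain",
    "llm_chain": {"_type": "llm_chain"},  # placeholder inner config
}

# By reference: only a path is stored; load_chain reads the file at load time.
path_form = {
    "_type": "stuff_documents_chain",
    "llm_chain_path": "llm_chain.yaml",  # hypothetical file name
}
```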
def _load_map_reduce_documents_chain(
    config: dict, **kwargs: Any
) -> MapReduceDocumentsChain:
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_config` must be present.")
if not isinstance(llm_chain, LLMChain):
raise ValueError(f"Expected LLMChain, got {llm_chain}")
if "combine_document_chain" in config:
combine_document_chain_config = config.pop("combine_document_chain")
combine_document_chain = load_chain_from_config(combine_document_chain_config)
elif "combine_document_chain_path" in config:
combine_document_chain = load_chain(config.pop("combine_document_chain_path"))
else:
raise ValueError(
"One of `combine_document_chain` or "
"`combine_document_chain_path` must be present."
)
    if "collapse_document_chain" in config:
        collapse_document_chain_config = config.pop("collapse_document_chain")
if collapse_document_chain_config is None:
collapse_document_chain = None
else:
collapse_document_chain = load_chain_from_config(
collapse_document_chain_config
)
elif "collapse_document_chain_path" in config:
collapse_document_chain = load_chain(config.pop("collapse_document_chain_path"))
return MapReduceDocumentsChain(
llm_chain=llm_chain,
combine_document_chain=combine_document_chain,
collapse_document_chain=collapse_document_chain,
**config,
)
def _load_llm_bash_chain(config: dict, **kwargs: Any) -> LLMBashChain:
if "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm` or `llm_path` must be present.")
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
elif "prompt_path" in config:
prompt = load_prompt(config.pop("prompt_path"))
return LLMBashChain(llm=llm, prompt=prompt, **config)
def _load_llm_checker_chain(config: dict, **kwargs: Any) -> LLMCheckerChain:
    if "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm` or `llm_path` must be present.")
if "create_draft_answer_prompt" in config:
create_draft_answer_prompt_config = config.pop("create_draft_answer_prompt")
create_draft_answer_prompt = load_prompt_from_config(
create_draft_answer_prompt_config
)
elif "create_draft_answer_prompt_path" in config:
create_draft_answer_prompt = load_prompt(
config.pop("create_draft_answer_prompt_path")
)
if "list_assertions_prompt" in config:
list_assertions_prompt_config = config.pop("list_assertions_prompt")
list_assertions_prompt = load_prompt_from_config(list_assertions_prompt_config)
elif "list_assertions_prompt_path" in config:
list_assertions_prompt = load_prompt(config.pop("list_assertions_prompt_path"))
if "check_assertions_prompt" in config:
check_assertions_prompt_config = config.pop("check_assertions_prompt")
check_assertions_prompt = load_prompt_from_config(
check_assertions_prompt_config
)
elif "check_assertions_prompt_path" in config:
        check_assertions_prompt = load_prompt(
            config.pop("check_assertions_prompt_path")
)
if "revised_answer_prompt" in config:
revised_answer_prompt_config = config.pop("revised_answer_prompt")
revised_answer_prompt = load_prompt_from_config(revised_answer_prompt_config)
elif "revised_answer_prompt_path" in config:
revised_answer_prompt = load_prompt(config.pop("revised_answer_prompt_path"))
return LLMCheckerChain(
llm=llm,
create_draft_answer_prompt=create_draft_answer_prompt,
list_assertions_prompt=list_assertions_prompt,
check_assertions_prompt=check_assertions_prompt,
revised_answer_prompt=revised_answer_prompt,
**config,
)
def _load_llm_math_chain(config: dict, **kwargs: Any) -> LLMMathChain:
if "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm` or `llm_path` must be present.")
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
elif "prompt_path" in config:
prompt = load_prompt(config.pop("prompt_path"))
return LLMMathChain(llm=llm, prompt=prompt, **config)
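One quirk of this pre-fix snapshot: `_load_llm_bash_chain` and `_load_llm_math_chain` only bind `prompt` inside the `if`/`elif` branches, so a config containing neither `prompt` nor `prompt_path` would hit an unbound local. A minimal standalone demonstration of that failure mode (illustration, not langchain code):

```python
def demo_unbound_prompt(config: dict) -> str:
    if "prompt" in config:
        prompt = config["prompt"]
    elif "prompt_path" in config:
        prompt = config["prompt_path"]
    return prompt  # UnboundLocalError when both keys are absent


try:
    demo_unbound_prompt({})
except NameError as err:  # UnboundLocalError subclasses NameError
    print(err)
```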
def _load_map_rerank_documents_chain(
    config: dict, **kwargs: Any
) -> MapRerankDocumentsChain:
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_config` must be present.")
return MapRerankDocumentsChain(llm_chain=llm_chain, **config)
def _load_pal_chain(config: dict, **kwargs: Any) -> PALChain:
if "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm` or `llm_path` must be present.")
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
elif "prompt_path" in config:
prompt = load_prompt(config.pop("prompt_path"))
else:
raise ValueError("One of `prompt` or `prompt_path` must be present.")
return PALChain(llm=llm, prompt=prompt, **config)
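`_load_pal_chain` above is the loader the issue's traceback lands in: when the saved YAML holds `llm: null`, `config.pop("llm")` returns `None`, and the membership test inside `load_llm_from_config` raises. A self-contained reproduction of just that step:

```python
config = {"llm": None}          # what a YAML `llm: null` entry deserializes to
llm_config = config.pop("llm")  # llm_config is None, not a dict

try:
    "_type" not in llm_config   # mirrors load_llm_from_config's first check
except TypeError as err:
    print(err)                  # argument of type 'NoneType' is not iterable
```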
def _load_refine_documents_chain(config: dict, **kwargs: Any) -> RefineDocumentsChain:
    if "initial_llm_chain" in config:
initial_llm_chain_config = config.pop("initial_llm_chain")
initial_llm_chain = load_chain_from_config(initial_llm_chain_config)
elif "initial_llm_chain_path" in config:
initial_llm_chain = load_chain(config.pop("initial_llm_chain_path"))
else:
raise ValueError(
"One of `initial_llm_chain` or `initial_llm_chain_config` must be present."
)
if "refine_llm_chain" in config:
refine_llm_chain_config = config.pop("refine_llm_chain")
refine_llm_chain = load_chain_from_config(refine_llm_chain_config)
elif "refine_llm_chain_path" in config:
refine_llm_chain = load_chain(config.pop("refine_llm_chain_path"))
else:
raise ValueError(
"One of `refine_llm_chain` or `refine_llm_chain_config` must be present."
)
if "document_prompt" in config:
prompt_config = config.pop("document_prompt")
document_prompt = load_prompt_from_config(prompt_config)
elif "document_prompt_path" in config:
document_prompt = load_prompt(config.pop("document_prompt_path"))
return RefineDocumentsChain(
initial_llm_chain=initial_llm_chain,
refine_llm_chain=refine_llm_chain,
document_prompt=document_prompt,
**config,
)
def _load_qa_with_sources_chain(config: dict, **kwargs: Any) -> QAWithSourcesChain:
    if "combine_documents_chain" in config:
combine_documents_chain_config = config.pop("combine_documents_chain")
combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
elif "combine_documents_chain_path" in config:
combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
else:
raise ValueError(
"One of `combine_documents_chain` or "
"`combine_documents_chain_path` must be present."
)
return QAWithSourcesChain(combine_documents_chain=combine_documents_chain, **config)
def _load_sql_database_chain(config: dict, **kwargs: Any) -> SQLDatabaseChain:
if "database" in kwargs:
database = kwargs.pop("database")
else:
raise ValueError("`database` must be present.")
if "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm` or `llm_path` must be present.")
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
else:
prompt = None
return SQLDatabaseChain.from_llm(llm, database, prompt=prompt, **config)
def _load_vector_db_qa_with_sources_chain(
    config: dict, **kwargs: Any
) -> VectorDBQAWithSourcesChain:
if "vectorstore" in kwargs:
vectorstore = kwargs.pop("vectorstore")
else:
raise ValueError("`vectorstore` must be present.")
if "combine_documents_chain" in config:
combine_documents_chain_config = config.pop("combine_documents_chain")
combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
elif "combine_documents_chain_path" in config:
combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
else:
raise ValueError(
"One of `combine_documents_chain` or "
"`combine_documents_chain_path` must be present."
)
return VectorDBQAWithSourcesChain(
combine_documents_chain=combine_documents_chain,
vectorstore=vectorstore,
**config,
)
def _load_vector_db_qa(config: dict, **kwargs: Any) -> VectorDBQA:
    if "vectorstore" in kwargs:
vectorstore = kwargs.pop("vectorstore")
else:
raise ValueError("`vectorstore` must be present.")
if "combine_documents_chain" in config:
combine_documents_chain_config = config.pop("combine_documents_chain")
combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
elif "combine_documents_chain_path" in config:
combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
else:
raise ValueError(
"One of `combine_documents_chain` or "
"`combine_documents_chain_path` must be present."
)
return VectorDBQA(
combine_documents_chain=combine_documents_chain,
vectorstore=vectorstore,
**config,
)
def _load_api_chain(config: dict, **kwargs: Any) -> APIChain:
    if "api_request_chain" in config:
api_request_chain_config = config.pop("api_request_chain")
api_request_chain = load_chain_from_config(api_request_chain_config)
elif "api_request_chain_path" in config:
api_request_chain = load_chain(config.pop("api_request_chain_path"))
else:
raise ValueError(
"One of `api_request_chain` or `api_request_chain_path` must be present."
)
if "api_answer_chain" in config:
api_answer_chain_config = config.pop("api_answer_chain")
api_answer_chain = load_chain_from_config(api_answer_chain_config)
elif "api_answer_chain_path" in config:
api_answer_chain = load_chain(config.pop("api_answer_chain_path"))
else:
raise ValueError(
"One of `api_answer_chain` or `api_answer_chain_path` must be present."
)
if "requests_wrapper" in kwargs:
requests_wrapper = kwargs.pop("requests_wrapper")
else:
raise ValueError("`requests_wrapper` must be present.")
return APIChain(
api_request_chain=api_request_chain,
api_answer_chain=api_answer_chain,
requests_wrapper=requests_wrapper,
**config,
)
def _load_llm_requests_chain(config: dict, **kwargs: Any) -> LLMRequestsChain:
    if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
if "requests_wrapper" in kwargs:
requests_wrapper = kwargs.pop("requests_wrapper")
return LLMRequestsChain(
llm_chain=llm_chain, requests_wrapper=requests_wrapper, **config
)
else:
return LLMRequestsChain(llm_chain=llm_chain, **config)
type_to_loader_dict = {
    "api_chain": _load_api_chain,
    "hyde_chain": _load_hyde_chain,
    "llm_chain": _load_llm_chain,
    "llm_bash_chain": _load_llm_bash_chain,
    "llm_checker_chain": _load_llm_checker_chain,
    "llm_math_chain": _load_llm_math_chain,
"llm_requests_chain": _load_llm_requests_chain,
"pal_chain": _load_pal_chain,
"qa_with_sources_chain": _load_qa_with_sources_chain,
"stuff_documents_chain": _load_stuff_documents_chain,
"map_reduce_documents_chain": _load_map_reduce_documents_chain,
"map_rerank_documents_chain": _load_map_rerank_documents_chain,
"refine_documents_chain": _load_refine_documents_chain,
"sql_database_chain": _load_sql_database_chain,
"vector_db_qa_with_sources_chain": _load_vector_db_qa_with_sources_chain,
"vector_db_qa": _load_vector_db_qa,
}
def load_chain_from_config(config: dict, **kwargs: Any) -> Chain:
"""Load chain from Config Dict."""
if "_type" not in config:
raise ValueError("Must specify a chain Type in config")
config_type = config.pop("_type")
if config_type not in type_to_loader_dict:
raise ValueError(f"Loading {config_type} chain not supported")
chain_loader = type_to_loader_dict[config_type]
return chain_loader(config, **kwargs)
def load_chain(path: Union[str, Path], **kwargs: Any) -> Chain:
"""Unified method for loading a chain from LangChainHub or local fs."""
if hub_result := try_load_from_hub(
path, _load_chain_from_file, "chains", {"json", "yaml"}, **kwargs
):
return hub_result
else:
return _load_chain_from_file(path, **kwargs)
def _load_chain_from_file(file: Union[str, Path], **kwargs: Any) -> Chain:
    """Load chain from file."""
if isinstance(file, str):
file_path = Path(file)
else:
file_path = file
if file_path.suffix == ".json":
with open(file_path) as f:
config = json.load(f)
elif file_path.suffix == ".yaml":
with open(file_path, "r") as f:
config = yaml.safe_load(f)
else:
raise ValueError("File type must be json or yaml")
if "verbose" in kwargs:
config["verbose"] = kwargs.pop("verbose")
if "memory" in kwargs:
config["memory"] = kwargs.pop("memory")
return load_chain_from_config(config, **kwargs) |
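The `NoneType` traceback in the issue above arises one level deeper, when `load_llm_from_config` receives `None` because the saved PALChain carried no serializable LLM config. A possible defensive guard — a sketch, not necessarily how the linked PR fixed it — would fail fast with a clearer message:

```python
from langchain.llms.base import BaseLLM

def load_llm_from_config_guarded(config: dict) -> BaseLLM:
    """Sketch of load_llm_from_config with an explicit empty-config check."""
    # A chain saved without its LLM yields config == None here, which
    # otherwise surfaces as "argument of type 'NoneType' is not iterable".
    if not config:
        raise ValueError(
            "Cannot load an LLM from an empty config; "
            "was the chain saved without a serializable LLM?"
        )
    if "_type" not in config:
        raise ValueError("Must specify an LLM Type in config")
    ...  # dispatch to the concrete LLM loader as in llms/loading.py
```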
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,316 | VertexAIEmbeddings error when passing a list with of length greater than 5. | ### System Info
google-cloud-aiplatform==1.25.0
langchain==0.0.181
python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Any list with len > 5 will cause an error.
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import VertexAIEmbeddings
text = ['text_1', 'text_2', 'text_3', 'text_4', 'text_5', 'text_6']
embeddings = VertexAIEmbeddings()
vectorstore = FAISS.from_texts(text, embeddings)
```
```python
InvalidArgument Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py in error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
75
76 return error_remapped_callable
InvalidArgument: 400 5 instance(s) is allowed per prediction. Actual: 6
```
### Expected behavior
Expected to be able to successfully vectorize a larger list of items. Maybe implement a step to batch the requests. | https://github.com/langchain-ai/langchain/issues/5316 | https://github.com/langchain-ai/langchain/pull/5325 | 3e164684234d3a51032b737dce2c25ba6cd3ec2d | c09f8e4ddc3be791bd0e8c8385ed1871bdd5d681 | "2023-05-26T20:31:56Z" | python | "2023-05-29T13:57:41Z" | langchain/embeddings/vertexai.py | """Wrapper around Google VertexAI embedding models."""
from typing import Dict, List
from pydantic import root_validator
from langchain.embeddings.base import Embeddings
from langchain.llms.vertexai import _VertexAICommon
from langchain.utilities.vertexai import raise_vertex_import_error
class VertexAIEmbeddings(_VertexAICommon, Embeddings): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,316 | VertexAIEmbeddings error when passing a list with of length greater than 5. | ### System Info
google-cloud-aiplatform==1.25.0
langchain==0.0.181
python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Any list with len > 5 will cause an error.
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import VertexAIEmbeddings
text = ['text_1', 'text_2', 'text_3', 'text_4', 'text_5', 'text_6']
embeddings = VertexAIEmbeddings()
vectorstore = FAISS.from_texts(text, embeddings)
```
```python
InvalidArgument Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py in error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
75
76 return error_remapped_callable
InvalidArgument: 400 5 instance(s) is allowed per prediction. Actual: 6
```
### Expected behavior
Expected to be able to successfully vectorize a larger list of items. Maybe implement a step to batch the requests. | https://github.com/langchain-ai/langchain/issues/5316 | https://github.com/langchain-ai/langchain/pull/5325 | 3e164684234d3a51032b737dce2c25ba6cd3ec2d | c09f8e4ddc3be791bd0e8c8385ed1871bdd5d681 | "2023-05-26T20:31:56Z" | python | "2023-05-29T13:57:41Z" | langchain/embeddings/vertexai.py | model_name: str = "textembedding-gecko"
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validates that the python package exists in environment."""
cls._try_init_vertexai(values)
try:
from vertexai.preview.language_models import TextEmbeddingModel
except ImportError:
raise_vertex_import_error()
values["client"] = TextEmbeddingModel.from_pretrained(values["model_name"])
return values
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Embed a list of strings.
Args:
texts: List[str] The list of strings to embed.
Returns:
List of embeddings, one for each text.
"""
embeddings = self.client.get_embeddings(texts)
return [el.values for el in embeddings]
def embed_query(self, text: str) -> List[float]:
"""Embed a text.
Args:
text: The text to embed.
Returns:
Embedding for the text.
"""
embeddings = self.client.get_embeddings([text])
return embeddings[0].values |
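The 400 error in the issue comes from Vertex AI's limit of five instances per prediction request, so `embed_documents` as written fails on any list longer than five. A client-side workaround is to chunk the input — a minimal sketch assuming the limit stays at five:

```python
from typing import List

from langchain.embeddings import VertexAIEmbeddings

_MAX_BATCH_SIZE = 5  # Vertex AI allows at most 5 instances per prediction

def embed_documents_batched(
    model: VertexAIEmbeddings, texts: List[str]
) -> List[List[float]]:
    """Embed texts in chunks of at most five to respect the API limit (sketch)."""
    embeddings: List[List[float]] = []
    for i in range(0, len(texts), _MAX_BATCH_SIZE):
        # Each call stays within the 5-instance quota.
        batch = model.client.get_embeddings(texts[i : i + _MAX_BATCH_SIZE])
        embeddings.extend(el.values for el in batch)
    return embeddings
```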
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,316 | VertexAIEmbeddings error when passing a list with of length greater than 5. | ### System Info
google-cloud-aiplatform==1.25.0
langchain==0.0.181
python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Any list with len > 5 will cause an error.
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import VertexAIEmbeddings
text = ['text_1', 'text_2', 'text_3', 'text_4', 'text_5', 'text_6']
embeddings = VertexAIEmbeddings()
vectorstore = FAISS.from_texts(text, embeddings)
```
```python
InvalidArgument Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py](https://localhost:8080/#) in error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
75
76 return error_remapped_callable
InvalidArgument: 400 5 instance(s) is allowed per prediction. Actual: 6
```
### Expected behavior
Expected to be able to successfully vectorize a larger list of items. Maybe implement a step to batch the requests. | https://github.com/langchain-ai/langchain/issues/5316 | https://github.com/langchain-ai/langchain/pull/5325 | 3e164684234d3a51032b737dce2c25ba6cd3ec2d | c09f8e4ddc3be791bd0e8c8385ed1871bdd5d681 | "2023-05-26T20:31:56Z" | python | "2023-05-29T13:57:41Z" | tests/integration_tests/embeddings/test_vertexai.py | """Test Vertex AI API wrapper.
In order to run this test, you need to install VertexAI SDK
pip install google-cloud-aiplatform>=1.25.0
Your end-user credentials would be used to make the calls (make sure you've run
`gcloud auth login` first).
"""
from langchain.embeddings import VertexAIEmbeddings
def test_embedding_documents() -> None:
documents = ["foo bar"]
model = VertexAIEmbeddings()
output = model.embed_documents(documents)
assert len(output) == 1
assert len(output[0]) == 768
assert model._llm_type == "vertexai"
assert model.model_name == model.client._model_id
def test_embedding_query() -> None:
document = "foo bar"
model = VertexAIEmbeddings()
output = model.embed_query(document)
assert len(output) == 768 |
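A regression test for the limit would pass more than five documents at once; the sketch below is hypothetical and assumes batching is in place and the embedding width remains 768:

```python
from langchain.embeddings import VertexAIEmbeddings

def test_embedding_documents_batched() -> None:
    # Six documents exceed the per-request limit of five, so this only
    # passes once embed_documents batches its calls (hypothetical test).
    documents = ["foo bar"] * 6
    model = VertexAIEmbeddings()
    output = model.embed_documents(documents)
    assert len(output) == 6
    assert all(len(embedding) == 768 for embedding in output)
```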
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,257 | Github integration | ### Feature request
It would be amazing to scan and retrieve all the content from the GitHub API, such as PRs, Issues, and Discussions.
### Motivation
This would allow asking questions about the history of the project, issues that other users might have found, and much more!
### Your contribution
Not really a Python developer here; it would take me a while to figure out all the changes required. | https://github.com/langchain-ai/langchain/issues/5257 | https://github.com/langchain-ai/langchain/pull/5408 | 0b3e0dd1d2fb81eeca76b960bb2376bd666608cd | 8259f9b7facae95236dd5156e2a14d87a0e1f90c | "2023-05-25T16:27:21Z" | python | "2023-05-30T03:11:21Z" | langchain/document_loaders/__init__.py | """All different types of document loaders."""
from langchain.document_loaders.airbyte_json import AirbyteJSONLoader
from langchain.document_loaders.apify_dataset import ApifyDatasetLoader
from langchain.document_loaders.arxiv import ArxivLoader
from langchain.document_loaders.azlyrics import AZLyricsLoader
from langchain.document_loaders.azure_blob_storage_container import (
AzureBlobStorageContainerLoader,
)
from langchain.document_loaders.azure_blob_storage_file import (
AzureBlobStorageFileLoader,
)
from langchain.document_loaders.bibtex import BibtexLoader
from langchain.document_loaders.bigquery import BigQueryLoader
from langchain.document_loaders.bilibili import BiliBiliLoader
from langchain.document_loaders.blackboard import BlackboardLoader |
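The integration requested in this issue could be prototyped as a small loader over the GitHub REST API. The sketch below is illustrative only — the class name, fields, and auth handling are assumptions, not the loader the linked PR actually added:

```python
from typing import List

import requests  # assumed available; the real loader may use another client

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


class GitHubIssuesSketchLoader(BaseLoader):
    """Hypothetical loader that turns a repository's issues into Documents."""

    def __init__(self, repo: str, access_token: str):
        self.repo = repo  # e.g. "langchain-ai/langchain"
        self.access_token = access_token

    def load(self) -> List[Document]:
        # List issues via the public REST API (first page only, for brevity).
        url = f"https://api.github.com/repos/{self.repo}/issues"
        headers = {"Authorization": f"token {self.access_token}"}
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        return [
            Document(
                page_content=issue.get("body") or "",
                metadata={"title": issue["title"], "url": issue["html_url"]},
            )
            for issue in response.json()
        ]
```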
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,257 | Github integration | ### Feature request
It would be amazing to scan and retrieve all the content from the GitHub API, such as PRs, Issues, and Discussions.
### Motivation
This would allow asking questions about the history of the project, issues that other users might have found, and much more!
### Your contribution
Not really a Python developer here; it would take me a while to figure out all the changes required. | https://github.com/langchain-ai/langchain/issues/5257 | https://github.com/langchain-ai/langchain/pull/5408 | 0b3e0dd1d2fb81eeca76b960bb2376bd666608cd | 8259f9b7facae95236dd5156e2a14d87a0e1f90c | "2023-05-25T16:27:21Z" | python | "2023-05-30T03:11:21Z" | langchain/document_loaders/__init__.py | from langchain.document_loaders.blockchain import BlockchainDocumentLoader
from langchain.document_loaders.chatgpt import ChatGPTLoader
from langchain.document_loaders.college_confidential import CollegeConfidentialLoader
from langchain.document_loaders.confluence import ConfluenceLoader
from langchain.document_loaders.conllu import CoNLLULoader
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.document_loaders.dataframe import DataFrameLoader
from langchain.document_loaders.diffbot import DiffbotLoader
from langchain.document_loaders.directory import DirectoryLoader
from langchain.document_loaders.discord import DiscordChatLoader
from langchain.document_loaders.docugami import DocugamiLoader
from langchain.document_loaders.duckdb_loader import DuckDBLoader
from langchain.document_loaders.email import (
OutlookMessageLoader,
UnstructuredEmailLoader,
)
from langchain.document_loaders.epub import UnstructuredEPubLoader
from langchain.document_loaders.evernote import EverNoteLoader
from langchain.document_loaders.facebook_chat import FacebookChatLoader
from langchain.document_loaders.gcs_directory import GCSDirectoryLoader
from langchain.document_loaders.gcs_file import GCSFileLoader
from langchain.document_loaders.git import GitLoader
from langchain.document_loaders.gitbook import GitbookLoader
from langchain.document_loaders.googledrive import GoogleDriveLoader
from langchain.document_loaders.gutenberg import GutenbergLoader
from langchain.document_loaders.hn import HNLoader
from langchain.document_loaders.html import UnstructuredHTMLLoader
from langchain.document_loaders.html_bs import BSHTMLLoader
from langchain.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
from langchain.document_loaders.ifixit import IFixitLoader |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,257 | Github integration | ### Feature request
It would be amazing to scan and retrieve all the content from the GitHub API, such as PRs, Issues, and Discussions.
### Motivation
This would allow asking questions about the history of the project, issues that other users might have found, and much more!
### Your contribution
Not really a Python developer here; it would take me a while to figure out all the changes required. | https://github.com/langchain-ai/langchain/issues/5257 | https://github.com/langchain-ai/langchain/pull/5408 | 0b3e0dd1d2fb81eeca76b960bb2376bd666608cd | 8259f9b7facae95236dd5156e2a14d87a0e1f90c | "2023-05-25T16:27:21Z" | python | "2023-05-30T03:11:21Z" | langchain/document_loaders/__init__.py | from langchain.document_loaders.image import UnstructuredImageLoader
from langchain.document_loaders.image_captions import ImageCaptionLoader
from langchain.document_loaders.imsdb import IMSDbLoader
from langchain.document_loaders.joplin import JoplinLoader
from langchain.document_loaders.json_loader import JSONLoader
from langchain.document_loaders.markdown import UnstructuredMarkdownLoader
from langchain.document_loaders.mastodon import MastodonTootsLoader
from langchain.document_loaders.mediawikidump import MWDumpLoader
from langchain.document_loaders.modern_treasury import ModernTreasuryLoader
from langchain.document_loaders.notebook import NotebookLoader
from langchain.document_loaders.notion import NotionDirectoryLoader
from langchain.document_loaders.notiondb import NotionDBLoader
from langchain.document_loaders.obsidian import ObsidianLoader
from langchain.document_loaders.odt import UnstructuredODTLoader
from langchain.document_loaders.onedrive import OneDriveLoader
from langchain.document_loaders.pdf import (
MathpixPDFLoader,
OnlinePDFLoader,
PDFMinerLoader,
PDFMinerPDFasHTMLLoader,
PDFPlumberLoader,
PyMuPDFLoader,
PyPDFDirectoryLoader,
PyPDFium2Loader,
PyPDFLoader,
UnstructuredPDFLoader,
)
from langchain.document_loaders.powerpoint import UnstructuredPowerPointLoader
from langchain.document_loaders.psychic import PsychicLoader
from langchain.document_loaders.python import PythonLoader |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,257 | Github integration | ### Feature request
It would be amazing to scan and retrieve all the content from the GitHub API, such as PRs, Issues, and Discussions.
### Motivation
This would allow asking questions about the history of the project, issues that other users might have found, and much more!
### Your contribution
Not really a Python developer here; it would take me a while to figure out all the changes required. | https://github.com/langchain-ai/langchain/issues/5257 | https://github.com/langchain-ai/langchain/pull/5408 | 0b3e0dd1d2fb81eeca76b960bb2376bd666608cd | 8259f9b7facae95236dd5156e2a14d87a0e1f90c | "2023-05-25T16:27:21Z" | python | "2023-05-30T03:11:21Z" | langchain/document_loaders/__init__.py | from langchain.document_loaders.readthedocs import ReadTheDocsLoader
from langchain.document_loaders.reddit import RedditPostsLoader
from langchain.document_loaders.roam import RoamLoader
from langchain.document_loaders.rtf import UnstructuredRTFLoader
from langchain.document_loaders.s3_directory import S3DirectoryLoader
from langchain.document_loaders.s3_file import S3FileLoader
from langchain.document_loaders.sitemap import SitemapLoader
from langchain.document_loaders.slack_directory import SlackDirectoryLoader
from langchain.document_loaders.spreedly import SpreedlyLoader
from langchain.document_loaders.srt import SRTLoader
from langchain.document_loaders.stripe import StripeLoader
from langchain.document_loaders.telegram import (
TelegramChatApiLoader,
TelegramChatFileLoader,
)
from langchain.document_loaders.text import TextLoader
from langchain.document_loaders.tomarkdown import ToMarkdownLoader
from langchain.document_loaders.toml import TomlLoader
from langchain.document_loaders.trello import TrelloLoader
from langchain.document_loaders.twitter import TwitterTweetLoader
from langchain.document_loaders.unstructured import (
UnstructuredAPIFileIOLoader,
UnstructuredAPIFileLoader,
UnstructuredFileIOLoader,
UnstructuredFileLoader,
)
from langchain.document_loaders.url import UnstructuredURLLoader
from langchain.document_loaders.url_playwright import PlaywrightURLLoader
from langchain.document_loaders.url_selenium import SeleniumURLLoader
from langchain.document_loaders.weather import WeatherDataLoader |
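Wiring any new loader into the package then follows the pattern visible in this file: import it in `__init__.py` and export it via `__all__`. A sketch, reusing the hypothetical class from the earlier example:

```python
# Sketch of the two additions a new loader needs in this __init__.py;
# the module path and class name below are hypothetical.
from langchain.document_loaders.github import GitHubIssuesSketchLoader

__all__ = [
    # ... existing exports ...
    "GitHubIssuesSketchLoader",
]
```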
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,257 | Github integration | ### Feature request
It would be amazing to scan and retrieve all the content from the GitHub API, such as PRs, Issues, and Discussions.
### Motivation
This would allow asking questions about the history of the project, issues that other users might have found, and much more!
### Your contribution
Not really a Python developer here; it would take me a while to figure out all the changes required. | https://github.com/langchain-ai/langchain/issues/5257 | https://github.com/langchain-ai/langchain/pull/5408 | 0b3e0dd1d2fb81eeca76b960bb2376bd666608cd | 8259f9b7facae95236dd5156e2a14d87a0e1f90c | "2023-05-25T16:27:21Z" | python | "2023-05-30T03:11:21Z" | langchain/document_loaders/__init__.py | from langchain.document_loaders.web_base import WebBaseLoader
from langchain.document_loaders.whatsapp_chat import WhatsAppChatLoader
from langchain.document_loaders.wikipedia import WikipediaLoader
from langchain.document_loaders.word_document import (
Docx2txtLoader,
UnstructuredWordDocumentLoader,
)
from langchain.document_loaders.youtube import (
GoogleApiClient,
GoogleApiYoutubeLoader,
YoutubeLoader,
)
PagedPDFSplitter = PyPDFLoader
TelegramChatLoader = TelegramChatFileLoader
__all__ = [
"AZLyricsLoader",
"AirbyteJSONLoader",
"ApifyDatasetLoader",
"ArxivLoader",
"AzureBlobStorageContainerLoader",
"AzureBlobStorageFileLoader",
"BSHTMLLoader",
"BibtexLoader",
"BigQueryLoader",
"BiliBiliLoader",
"BlackboardLoader",
"BlockchainDocumentLoader",
"CSVLoader",
"ChatGPTLoader",
"CoNLLULoader", |