# Vespa Retriever

This guide shows how to use Vespa.ai as a LangChain retriever.
Vespa.ai is a platform for highly efficient structured text and vector search.
Please refer to [Vespa.ai](https://vespa.ai) for more information.

The following sets up a retriever that fetches results from Vespa's documentation search:

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/vespa.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>

Here, up to 5 results are retrieved from the `content` field in the `paragraph` document type,
using `documentation` as the ranking method. The `userQuery()` is replaced with the actual query
passed from LangChain.

Please refer to the [pyvespa documentation](https://pyvespa.readthedocs.io/en/latest/getting-started-pyvespa.html#Query)
for more information.

The URL is the endpoint of the Vespa application.
You can connect to any Vespa endpoint, either a remote service or a local instance using Docker.
However, most Vespa Cloud instances are protected with mTLS.
If that applies to your instance, one option is to set up a [Cloudflare Worker](https://cloud.vespa.ai/en/security/cloudflare-workers)
that contains the necessary credentials to connect to the instance.

You can now use the returned results in the rest of your LangChain application.
