You can use the `createChatAdapter` function to create a Hugging Face Inference API adapter.

```tsx
const hfAdapter = createChatAdapter()
    .withEndpoint('<YOUR ENDPOINT URL>');
```

If your Hugging Face endpoint is protected and requires a token, you can pass it as follows:

```tsx
const hfAdapter = createChatAdapter()
    .withEndpoint('<YOUR ENDPOINT URL>')
    .withAuthToken('<YOUR TOKEN>');
```

Because the user input and the AI output require transformation, we use input and output
pre-processors specific to the model that we are using:

```tsx
import {createChatAdapter, llama2InputPreProcessor, llama2OutputPreProcessor} from '@nlux/hf'

const hfAdapter = createChatAdapter()
    .withEndpoint('<YOUR ENDPOINT URL>')
    .withAuthToken('<YOUR TOKEN>')
    .withInputPreProcessor(llama2InputPreProcessor)
    .withOutputPreProcessor(llama2OutputPreProcessor);
```
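To illustrate why pre-processing is needed: Llama 2 chat models expect the prompt to be wrapped in special instruction tags before being sent to the model. The sketch below uses a hypothetical `sketchLlama2Input` helper (not the actual `llama2InputPreProcessor` implementation from `@nlux/hf`) to show the kind of transformation an input pre-processor performs:

```tsx
// Hypothetical sketch only — the real pre-processor is llama2InputPreProcessor
// from @nlux/hf. Llama 2 chat models expect prompts wrapped in [INST] tags.
const sketchLlama2Input = (userMessage: string): string =>
    `<s>[INST] ${userMessage.trim()} [/INST]`;

console.log(sketchLlama2Input('What is the capital of France?'));
```

The output pre-processor performs the reverse kind of work, cleaning up the raw model response before it is displayed in the chat UI.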

The `createChatAdapter` function returns an adapter builder that can be configured by chaining methods.
Please refer to the [reference documentation](/reference/adapters/hugging-face) for more information on the available methods.
