---
title: Setup
---

<Card
  title="Local models in Python"
  icon="arrow-up-right"
  href="/language-model-setup/local-models/python"
>
  Learn how to connect our Python package to a local language model.
</Card>

Open Interpreter's terminal interface uses [LM Studio](https://lmstudio.ai/) to connect to local language models.

Simply run `interpreter` in local mode from the command line:

```shell
interpreter --local
```

**You will need to run LM Studio in the background.**

1. Download LM Studio from [https://lmstudio.ai/](https://lmstudio.ai/), then start it.
2. Select a model then click **↓ Download**.
3. Click the **↔️** button on the left (below 💬).
4. Select your model at the top, then click **Start Server**.

Once the server is running, you can begin your conversation with Open Interpreter.
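
Under the hood, LM Studio serves an OpenAI-compatible API. As a quick sanity check that the server is reachable before starting a conversation, you can send it a request directly; this sketch assumes LM Studio's default address of `http://localhost:1234/v1` (the actual address is shown in LM Studio's server tab):

```shell
# Assumes LM Studio's server is running at its default address.
# Sends a minimal chat completion request to the local endpoint.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

If the server is up, you should receive a JSON response from the loaded model rather than a connection error.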

(When you run the command `interpreter --local`, the steps above will be displayed.)

<Info>Local mode sets your `context_window` to 3000 and your `max_tokens` to 1000. If your model has different requirements, [set these parameters manually](/language-model-setup/local-models/settings).</Info>
