---
title: "Cerebras"
description: "Configure Cerebras Inference with Continue for fast model inference using specialized silicon, including setup instructions for Llama 3.1 70B model"
---

Cerebras Inference uses specialized silicon to provide fast inference.

1. Create an account in the Cerebras Cloud portal [here](https://cloud.cerebras.ai/).
2. Create and copy the API key for use in Continue.
3. Update your Continue config file:

<Tabs>
  <Tab title="YAML">
  ```yaml title="config.yaml"
  name: My Config
  version: 0.0.1
  schema: v1

  models:
    - name: Cerebras Llama 3.1 70B
      provider: cerebras
      model: llama3.1-70b
      apiKey: <YOUR_CEREBRAS_API_KEY>
  ```
  </Tab>
  <Tab title="JSON">
  ```json title="config.json"
  {
    "models": [
      {
        "title": "Cerebras Llama 3.1 70B",
        "provider": "cerebras",
        "model": "llama3.1-70b",
        "apiKey": "<YOUR_CEREBRAS_API_KEY>"
      }
    ]
  }
  ```
  </Tab>
</Tabs>
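If you want to confirm your API key and model name before (or after) editing your Continue config, you can call the Cerebras API directly. The sketch below assumes the OpenAI-compatible chat completions endpoint at `api.cerebras.ai`; replace `<YOUR_CEREBRAS_API_KEY>` with the key you copied in step 2.

```shell
# Send a minimal chat completion request to Cerebras Inference.
# A successful response confirms the key is valid and the model name is recognized.
curl https://api.cerebras.ai/v1/chat/completions \
  -H "Authorization: Bearer <YOUR_CEREBRAS_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1-70b",
    "messages": [{"role": "user", "content": "Say hello in one word."}]
  }'
```

If the request fails with a 401, re-check the API key; a 404 or model error usually means the `model` value does not match a model available on your account.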
