---
title: Mistral-7B Playground
description: Launch a text-generation Streamlit app using Mistral-7B
version: EN
---

## Try out this model on [VESSL Hub](https://vessl.ai/hub).

This example runs an inference app for Mistral-7B, an open-source LLM developed by [Mistral AI](https://mistral.ai/). The model uses grouped-query attention (GQA) and sliding window attention (SWA), which enable faster inference and handling of longer sequences at a lower cost than comparable models, achieving both efficiency and high performance. Mistral-7B outperforms Llama 2 13B on all benchmarks, and Llama 1 34B on reasoning, mathematics, and code generation benchmarks.

<img
  className="rounded-md"
  src="/images/mistral-streamlit.png"
/>
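
Under the hood, the app loads the model and generates text with the Hugging Face `transformers` library. As a rough sketch (assuming the standard `transformers` API; the actual app code lives in the [hub-model](https://github.com/vessl-ai/hub-model) repository), inference looks like this:

```python
# Minimal sketch of Mistral-7B inference with the transformers API.
# The model ID mirrors the hf:// import in the YAML below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VESSL/Mistral-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits on a single L4 GPU
    device_map="auto",          # place weights on the available GPU
)

prompt = "Explain sliding window attention in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```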

## Running the model

You can run the model with a single command:
```sh
vessl run create -f mistral_7b.yaml
```
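
Once the Run is up, you can open the app from the endpoint VESSL exposes for the `streamlit` port defined at the bottom of the YAML below.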

Here's the full `mistral_7b.yaml` file, with a field-by-field rundown below.
```yaml
name: mistral-7b-streamlit
description: A template Run for inference of Mistral-7B with streamlit app
resources:
  cluster: vessl-gcp-oregon
  preset: v1.l4-1.mem-42
image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
import:
  /model/: hf://huggingface.co/VESSL/Mistral-7B
  /code/:
    git:
      url: https://github.com/vessl-ai/hub-model
      ref: main
run:
  - command: |-
      pip install -r requirements_streamlit.txt
      streamlit run streamlit_demo.py --server.port 80
    workdir: /code/mistral-7B
interactive:
  max_runtime: 24h
  jupyter:
    idle_timeout: 120m
ports:
  - name: streamlit
    type: http
    port: 80
```
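
A few notes on the fields:
- `resources` schedules the workload on the `vessl-gcp-oregon` cluster with the `v1.l4-1.mem-42` preset (a single NVIDIA L4 GPU).
- `image` is a VESSL-provided container image with PyTorch 2.1.0 and CUDA 12.2.
- `import` mounts the model weights from Hugging Face at `/model/` and clones the `hub-model` repository at `/code/`.
- `run` installs the Python dependencies and starts the Streamlit server on port 80 from the `mistral-7B` directory.
- `interactive` caps the Run at 24 hours and shuts it down after 120 minutes of idle time.
- `ports` exposes port 80 as an HTTP endpoint so you can reach the app from your browser.

For reference, the core of a text-generation Streamlit app like `streamlit_demo.py` boils down to something like the following hypothetical sketch (the real file in the hub-model repository may differ):

```python
# Hypothetical sketch of a minimal Streamlit text-generation UI;
# the actual streamlit_demo.py in the hub-model repository may differ.
import streamlit as st
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

@st.cache_resource  # load the model once per server process, not per rerun
def load_model(path: str = "/model"):  # /model is the import path in the YAML
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(
        path, torch_dtype=torch.float16, device_map="auto"
    )
    return tokenizer, model

st.title("Mistral-7B Playground")
prompt = st.text_area("Prompt", "Tell me about Mistral AI.")

if st.button("Generate"):
    tokenizer, model = load_model()
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
```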