---
license: apache-2.0
tags:
- finetuned
pipeline_tag: text-generation
inference: true
widget:
- messages:
  - role: user
    content: What is your favorite condiment?

extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---


## Use the code below to download and run the model

```py
# pip install -U transformers accelerate torch

import torch
from transformers import pipeline

model_path = "vicky4s4s/mistral-7b-v2-instruct"

# Load the model as a text-generation pipeline on GPU in bfloat16.
pipe = pipeline(
    "text-generation",
    model=model_path,
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)

messages = [{"role": "user", "content": "What is the meaning of life?"}]
outputs = pipe(
    messages,
    max_new_tokens=1000,
    do_sample=True,
    temperature=0.71,
    top_k=50,
    top_p=0.92,
)
print(outputs[0]["generated_text"][-1]["content"])
```
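The pipeline applies the model's chat template to the `messages` list automatically. If you tokenize prompts yourself, Mistral-Instruct models expect user turns wrapped in `[INST] ... [/INST]` markers. A minimal sketch of that formatting, assuming the standard Mistral instruct template (the `build_mistral_prompt` helper is illustrative, not part of the library):

```python
def build_mistral_prompt(messages):
    """Format a list of chat messages into a Mistral-instruct prompt string."""
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            # User turns are wrapped in [INST] ... [/INST].
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            # Assistant turns are appended verbatim and closed with </s>.
            prompt += f" {msg['content']}</s>"
    return prompt

print(build_mistral_prompt([{"role": "user", "content": "What is your favorite condiment?"}]))
```

In practice, prefer `tokenizer.apply_chat_template(messages)` so the prompt always matches the template shipped with the checkpoint.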


## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can easily be fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We look forward to engaging with the community on ways to
make the model respect guardrails more precisely, allowing deployment in environments that require moderated outputs.


## Developed By

Vignesh, vickys9715@gmail.com