|
--- |
|
license: apache-2.0 |
|
tags: |
|
- finetuned |
|
pipeline_tag: text-generation |
|
inference: true |
|
widget: |
|
- messages: |
|
- role: user |
|
content: What is your favorite condiment? |
|
|
|
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. |
|
--- |
|
|
|
|
|
## Use the code below to download and run the model
|
|
|
```py |
|
|
|
# pip install -U transformers accelerate torch
|
|
|
import torch |
|
from transformers import pipeline
|
|
|
model_path = "vicky4s4s/mistral-7b-v2-instruct" |
|
|
|
pipe = pipeline("text-generation", model=model_path, torch_dtype=torch.bfloat16, device_map="cuda") |
|
messages = [{"role": "user", "content": "what is meaning of life?"}] |
|
outputs = pipe(messages, max_new_tokens=1000, do_sample=True, temperature=0.71, top_k=50, top_p=0.92, repetition_penalty=1.0)
|
print(outputs[0]["generated_text"][-1]["content"]) |
|
|
|
``` |
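Under the hood, the pipeline converts the `messages` list into Mistral's instruction format before generation. As a rough illustration, here is a minimal sketch of that `[INST]` prompt layout (assumed Mistral-Instruct-style template; exact whitespace and special-token handling can differ between model revisions, so prefer the tokenizer's own `apply_chat_template` in real code):

```python
# Sketch of the [INST] prompt format used by Mistral instruct models.
# This is illustrative only; the tokenizer's chat template is authoritative.

def build_prompt(messages):
    """Flatten a list of chat messages into a Mistral-style prompt string."""
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            # User turns are wrapped in [INST] ... [/INST]
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            # Assistant turns are appended verbatim and closed with </s>
            prompt += f"{msg['content']}</s>"
    return prompt

print(build_prompt([{"role": "user", "content": "What is your favorite condiment?"}]))
# → <s>[INST] What is your favorite condiment? [/INST]
```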
|
|
|
|
|
## Limitations |
|
|
|
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. |
|
It does not have any moderation mechanisms. We look forward to engaging with the community on ways to

make the model respect guardrails, allowing for deployment in environments that require moderated outputs.
|
|
|
|
|
## Developed by
|
|
|
Vignesh, vickys9715@gmail.com |