# Mistral 7B V0.1 - Alpaca
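
A chat model based on Mistral 7B v0.1, fine-tuned in the Alpaca instruction format. The snippets below show how to load it with Hugging Face Transformers and generate a response.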
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

# Load the tokenizer and model; bfloat16 weights with device_map="auto"
# place the model on available GPUs (falling back to CPU if none).
tokenizer = AutoTokenizer.from_pretrained("satyajitghana/mistral-7b-v0.1-alpaca-chat")
model = AutoModelForCausalLM.from_pretrained(
    "satyajitghana/mistral-7b-v0.1-alpaca-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Wrap the model and tokenizer in a text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)
```
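
Prompts follow the Alpaca instruction template (`### Instruction:`, optional `### Input:`, `### Response:`):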
```python
# Alpaca-format prompt: instruction, optional input, then an open response slot.
INPUT = """
### Instruction:
List 3 historical events related to the following country
### Input:
India
### Response:
"""

out = pipe(
    INPUT,
    max_new_tokens=200,
)
print(out[0]["generated_text"])
```

Note that the pipeline's `generated_text` includes the prompt itself; pass `return_full_text=False` to the pipeline call to get only the completion.
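
To avoid hand-writing the template for every query, a small formatting helper can build the prompt from an instruction and an optional input. The sketch below is illustrative; `build_alpaca_prompt` is a hypothetical name, not part of the model repo or the Transformers API.

```python
def build_alpaca_prompt(instruction: str, context: str = "") -> str:
    """Format an instruction (and optional input) into the Alpaca prompt template.

    Hypothetical helper for illustration; not part of the model repo.
    """
    prompt = f"\n### Instruction:\n{instruction}\n"
    if context:
        prompt += f"### Input:\n{context}\n"
    prompt += "### Response:\n"
    return prompt

# Same query as above, built through the helper.
out = pipe(
    build_alpaca_prompt(
        "List 3 historical events related to the following country",
        "India",
    ),
    max_new_tokens=200,
)
print(out[0]["generated_text"])
```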