
Not my model (obviously); I downloaded the Mistral release weights from https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar and uploaded them here for my own sanity (and fine-tuning), since they still haven't been uploaded to the official Mistral repo.
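
For reference, a minimal sketch of how the re-upload could be reproduced with `huggingface_hub`. The local paths and the target repo id are placeholders, and the raw release weights may still need conversion to the Transformers format before the loading code below works:

```python
# Sketch only: fetch the official release tarball and push the extracted files
# to a Hub repo. Paths and repo id are assumptions, not the exact steps used here.
import tarfile
import urllib.request

from huggingface_hub import HfApi

TAR_URL = "https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar"
LOCAL_TAR = "mistral-7B-v0.2.tar"
EXTRACT_DIR = "mistral-7B-v0.2"

# Download and unpack the release archive.
urllib.request.urlretrieve(TAR_URL, LOCAL_TAR)
with tarfile.open(LOCAL_TAR) as tar:
    tar.extractall(EXTRACT_DIR)

# Upload the (converted) weights to the Hub.
api = HfApi()
api.create_repo("redscroll/Mistral-7B-v0.2", exist_ok=True)
api.upload_folder(repo_id="redscroll/Mistral-7B-v0.2", folder_path=EXTRACT_DIR)
```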

The standard Transformers loading code works:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model in bfloat16 and let Accelerate place it on available devices.
model = AutoModelForCausalLM.from_pretrained(
    "redscroll/Mistral-7B-v0.2", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("redscroll/Mistral-7B-v0.2")

input_text = "In my younger and more vulnerable years"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(
    **inputs,
    max_new_tokens=500,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0]))
```
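
If GPU memory is tight, a 4-bit quantized load should also work. This is an untested sketch that assumes the `bitsandbytes` package is installed:

```python
# Sketch: 4-bit NF4 quantized loading (requires bitsandbytes).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "redscroll/Mistral-7B-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("redscroll/Mistral-7B-v0.2")
```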
Model size: 7.24B parameters (BF16, safetensors).