
FP32 version (TeeZee/Mistral-7B-v0.1-fp32) of the original BF16 mistralai/Mistral-7B-v0.1 safetensors. The conversion loads the base model with FP32 weights and re-saves it:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
# torch_dtype=torch.float loads the BF16 checkpoint upcast to FP32
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float
)

model.save_pretrained("./Mistral-7B-v0.1-fp32")
tokenizer.save_pretrained("./Mistral-7B-v0.1-fp32")
```
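The upcast above loses no information: bfloat16 shares float32's 8-bit exponent and simply truncates the mantissa, so every BF16 value is exactly representable in FP32. A minimal sketch (plain PyTorch, no model download) illustrating this:

```python
import torch

# bfloat16 -> float32 upcasting is lossless: bf16 is fp32 with the
# mantissa truncated to 7 bits, so every bf16 value has an exact
# fp32 representation.
bf16 = torch.tensor([0.15625, -2.5, 3.0], dtype=torch.bfloat16)
fp32 = bf16.to(torch.float32)

assert fp32.dtype == torch.float32
# Round-tripping back to bf16 recovers the original values exactly.
assert torch.equal(fp32.to(torch.bfloat16), bf16)
```

The reverse direction (FP32 → BF16) is lossy for values whose mantissa needs more than 7 bits, which is why this repository stores the full-precision copy.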
Model size: 7.24B params · Tensor type: F32 · Format: Safetensors
