
Model Card for Vistral-7B-Chat

Model Details

  • Model Name: Vistral-7B-Chat
  • Version: 1.0
  • Model Type: Causal Language Model
  • Architecture: Transformer-based model with 7 billion parameters
  • Quantization: 8-bit quantized for efficiency

Usage

How to use

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nhotin/vistral7B-chat-gguf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode the prompt and generate a response.
# Without max_new_tokens, generate() falls back to a short default length.
input_text = "Your text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
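Chat-tuned models are sensitive to their prompt template. As a minimal sketch, this assumes Vistral-7B-Chat keeps the Mistral-style `[INST] ... [/INST]` wrapping inherited from its base model; `build_prompt` is a hypothetical helper, and the authoritative template is the one shipped with the tokenizer, applied via `tokenizer.apply_chat_template`:

```python
# Sketch only: assumes a Mistral-style [INST] template. Verify against the
# repo's actual chat template (tokenizer.apply_chat_template uses it directly).
def build_prompt(system: str, user: str) -> str:
    """Wrap a system instruction and a user message in [INST] tags."""
    return f"<s>[INST] {system}\n{user} [/INST]"

prompt = build_prompt(
    "Bạn là một trợ lý AI hữu ích.",  # "You are a helpful AI assistant."
    "Xin chào!",                      # "Hello!"
)
print(prompt)
```

If the template differs, pass a list of `{"role": ..., "content": ...}` messages to `tokenizer.apply_chat_template` instead of formatting by hand.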
Model Files

  • Format: Safetensors
  • Model size: 7.3B params
  • Tensor types: F32, FP16, I8