# Model Card for Vistral-7B-Chat

## Model Details
- Model Name: Vistral-7B-Chat
- Version: 1.0
- Model Type: Causal Language Model
- Architecture: Transformer-based model with 7 billion parameters
- Quantization: 8-bit quantized for efficiency
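The 8-bit quantization roughly halves the weight memory compared to a 16-bit checkpoint. A back-of-the-envelope sketch (weights only; activations and the KV cache add more on top):

```python
# Approximate weight memory for a 7B-parameter model.
params = 7_000_000_000
fp16_gb = params * 2 / 1e9   # 2 bytes per parameter at fp16
int8_gb = params * 1 / 1e9   # 1 byte per parameter after 8-bit quantization
print(f"fp16: ~{fp16_gb:.0f} GB, int8: ~{int8_gb:.0f} GB")
```

So an 8-bit copy of the weights needs on the order of 7 GB, versus roughly 14 GB at fp16.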
## Usage

### How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nhotin/vistral7B-chat-gguf"

# Load the tokenizer and model. Note: for GGUF repositories, recent
# transformers versions expect a gguf_file argument naming the checkpoint
# file inside the repo; check the repo's file listing if loading fails.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and generate a completion.
input_text = "Your text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
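For chat use, the prompt should follow the model's chat template. The safest route is `tokenizer.apply_chat_template`; the sketch below instead shows the shape of a Mistral-style `[INST]` prompt, which is an assumption about Vistral's template (it is a Mistral derivative), not a confirmed specification:

```python
# Illustrative only: assumes Vistral inherits Mistral's [INST] chat markers.
# Verify against tokenizer.apply_chat_template before relying on this format.
def build_prompt(system: str, user: str) -> str:
    """Wrap a system message and one user turn in [INST] ... [/INST] markers."""
    return f"<s>[INST] {system}\n{user} [/INST]"

prompt = build_prompt("Bạn là một trợ lý hữu ích.", "Xin chào!")
print(prompt)
```

The resulting string can then be passed to the tokenizer in place of `input_text` above.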