Mistral-Ita-7B GGUF
How to Use
The example below shows how to run this model for Italian text generation with the `ctransformers` library:
```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to the GPU.
# Set it to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained(
    "DeepMount00/Mistral-Ita-7b-GGUF",
    model_file="mistral_ita-7b-Q4_K_M.gguf",
    model_type="mistral",
    context_length=4096,
    max_new_tokens=1000,
    gpu_layers=20,
)

# Prompt (Italian): "Find the solution to this problem: if Mario has 12 apples
# and sells 4 of them at 8 euros and the rest at 3 euros, how much does Mario earn?"
prompt = "Trova la soluzione a questo problema: se Mario ha 12 mele e ne vende 4 a 8 euro e le restanti a 3 euro, quanto guadagna Mario?"
print(llm(prompt))
```
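As a sanity check on the example prompt, the expected answer can be computed directly (a minimal sketch, assuming the prices in the Italian wording are per apple):

```python
# Mario has 12 apples: he sells 4 at 8 euros each
# and the remaining 8 at 3 euros each.
sold_at_8 = 4 * 8         # 32 euros
sold_at_3 = (12 - 4) * 3  # 24 euros
total = sold_at_8 + sold_at_3
print(total)  # 56
```

A correct generation should therefore arrive at 56 euros.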