ecastera-eva-westlake-7b-spanish

A Mistral-7B-based model fine-tuned in Spanish for high-quality Spanish text generation.

  • Exported in GGUF format, INT4 quantization
  • Refined version of my previous models, with new training data and methodology. This should produce more natural responses in Spanish.
  • Base model: Mistral-7B
  • Based on the excellent work of senseable/WestLake-7B-v2 and Eric Hartford's cognitivecomputations/WestLake-7B-v2-laser
  • Fine-tuned in Spanish on a collection of poetry, books, Wikipedia articles, philosophy texts, and alpaca-es datasets.
  • Trained using LoRA and PEFT with INT8 quantization on 2 GPUs for several days.

Usage:

Use with llama.cpp or any other framework that supports the GGUF format.
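As a minimal sketch of running the model with llama.cpp's CLI (the GGUF filename below is an assumption based on the repository name, and depending on your llama.cpp version the binary may be named `main` instead of `llama-cli`):

```shell
# Run the INT4 GGUF model with llama.cpp for Spanish text generation.
# -m: path to the downloaded GGUF file (filename is an assumption)
# -p: Spanish prompt
# -n: maximum number of tokens to generate
./llama-cli \
  -m ecastera-eva-westlake-7b-spanish-int4.gguf \
  -p "Escribe un poema breve sobre el mar." \
  -n 256 \
  --temp 0.7
```

Any framework that can load GGUF files (e.g. llama-cpp-python or text-generation-webui) should work the same way, pointed at the same file.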

Model details:

  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama
