jonasaise/mixtral-8x7b-lora-instruct-swe-v2
---
license: apache-2.0
datasets:
- jeremyc/Alpaca-Lora-GPT4-Swedish
language:
- sv
---

Fine-tuned LoRA adapters merged into Mixtral-8x7B-Instruct-v0.1, trained on Swedish instruction data.

You will likely need the tokenizer and tokenizer config from the original model to load this properly.
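A minimal loading sketch, assuming the `transformers`, `bitsandbytes`, and `accelerate` packages are installed and that the tokenizer is taken from the base Mixtral repo as noted above. The `[INST] … [/INST]` prompt format is assumed to be unchanged from the base instruct model:

```python
BASE = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # tokenizer source (base model)
REPO = "jonasaise/mixtral-8x7b-lora-instruct-swe-v2"  # this merged model


def build_prompt(instruction: str) -> str:
    # Mixtral-Instruct chat format; assumed unchanged by this fine-tune.
    return f"[INST] {instruction} [/INST]"


def load_model():
    # Heavy optional deps are imported lazily so the helper above stays cheap.
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        BitsAndBytesConfig,
    )

    # Tokenizer comes from the base repo, since this repo may lack tokenizer files.
    tokenizer = AutoTokenizer.from_pretrained(BASE)
    # Load the merged weights in 4-bit via bitsandbytes to fit on smaller GPUs.
    model = AutoModelForCausalLM.from_pretrained(
        REPO,
        quantization_config=BitsAndBytesConfig(load_in_4bit=True),
        device_map="auto",
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer(
        build_prompt("Skriv en kort dikt om hösten."), return_tensors="pt"
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Note that 4-bit loading still requires substantial GPU memory for a 8x7B mixture-of-experts model; adjust `device_map` and quantization settings to your hardware.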
Safetensors model size: 24.2B params. Tensor types: F32 · FP16 · U8.