solidrust/Mini-Mixtral-v0.2-2x7B-AWQ
Tags: Text Generation · Transformers · Safetensors · mixtral · Mixture of Experts · frankenmoe · Merge · mergekit · lazymergekit · unsloth/mistral-7b-v0.2 · mistralai/Mistral-7B-Instruct-v0.2 · quantized · 4-bit precision · AWQ · Inference Endpoints · chatml · text-generation-inference · awq
License: apache-2.0
NeuralNovel/Mini-Mixtral-v0.2 AWQ

Model creator: NeuralNovel
Original model: Mini-Mixtral-v0.2

Model Summary

Mini-Mixtral-v0.2 is a Mixture of Experts (MoE) made with the following models using LazyMergekit:

unsloth/mistral-7b-v0.2
mistralai/Mistral-7B-Instruct-v0.2
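
This repository holds a 4-bit AWQ quantization of the original model. Below is a minimal loading sketch, assuming a recent transformers release with the autoawq package installed and a CUDA GPU; the repository id is the one on this page, while the ChatML prompt (suggested by the chatml tag) and the generation settings are illustrative assumptions, not recommendations from the model author.

```python
# Minimal sketch: load the 4-bit AWQ checkpoint through transformers.
# Assumes `pip install transformers autoawq` and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "solidrust/Mini-Mixtral-v0.2-2x7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# transformers reads the AWQ quantization config from the checkpoint and
# loads the 4-bit weights directly onto the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# The repo is tagged "chatml", so a ChatML-style prompt is assumed here;
# the system/user text is purely illustrative.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is a Mixture of Experts model?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The text-generation-inference tag suggests the checkpoint can also be served by runtimes with AWQ support, though no serving configuration is provided on this page.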