solidrust/MixtureofMerges-MoE-2x7b-SLERPv0.9b-AWQ
Tags: Text Generation · Transformers · Safetensors · mixtral · 4-bit precision · AWQ · Inference Endpoints · Merge · mergekit · lazymergekit · zhengr/MixTAO-7Bx2-MoE-v8.1 · jsfs11/MixtureofMerges-MoE-2x7b-v6 · text-generation-inference · awq
jsfs11/MixtureofMerges-MoE-2x7b-SLERPv0.9b AWQ
Model creator: jsfs11
Original model: MixtureofMerges-MoE-2x7b-SLERPv0.9b
Model Summary
MixtureofMerges-MoE-2x7b-SLERPv0.9b is a merge of the following models using LazyMergekit:
- zhengr/MixTAO-7Bx2-MoE-v8.1
- jsfs11/MixtureofMerges-MoE-2x7b-v6
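
As a quick sanity check on this merge, the architecture can be inspected from the Hub config alone, without downloading the weights. This is a minimal sketch, assuming network access to the Hub; the repo id is the one on this card, and the expected values in the comments follow from the card's tags (a Mixtral-style 2x7B MoE):

```python
from transformers import AutoConfig

# Fetch only the model config from the Hub, not the weights.
config = AutoConfig.from_pretrained(
    "solidrust/MixtureofMerges-MoE-2x7b-SLERPv0.9b-AWQ"
)

print(config.model_type)         # expected: "mixtral" (LazyMergekit MoE merges are Mixtral-style)
print(config.num_local_experts)  # expected: 2, one expert per source model
```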
Downloads last month: 0
Safetensors
Model size: 1.95B params
Tensor types: I32 · FP16
Inference Examples
Text Generation
Inference API (serverless) has been turned off for this model.
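
With the hosted API off, the model can still be run locally. A minimal sketch, assuming transformers with the autoawq package installed and a GPU available; the prompt and generation settings are illustrative, not taken from the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/MixtureofMerges-MoE-2x7b-SLERPv0.9b-AWQ"

# AWQ checkpoints load through the regular transformers API when the
# `autoawq` package is installed; the weights stay in 4-bit form.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt and generation settings.
inputs = tokenizer(
    "Explain mixture-of-experts models in one paragraph.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```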
Collection including solidrust/MixtureofMerges-MoE-2x7b-SLERPv0.9b-AWQ:
2x7B AWQ: Mixture of experts 2 x 7B. 20 items · Updated Apr 30