mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-HQQ
Mobius Labs GmbH

Tags: Text Generation · Transformers · mixtral · Mixture of Experts · conversational
License: apache-2.0
Commit History

Librarian Bot: Add moe tag to model
63d39ed · librarian-bot · committed on Jan 8

Update README.md
4bf2205 · mobicham · committed on Dec 18, 2023

Update README.md
80be5fa · mobicham · committed on Dec 18, 2023

Create README.md
49d95b0 · mobicham · committed on Dec 15, 2023

upload model
2bcddd5 · mobicham · committed on Dec 15, 2023

initial commit
aae3cbc · mobicham · committed on Dec 15, 2023