LoneStriker/miqu-1-103b-2.4bpw-h6-exl2
Tags: Text Generation · Transformers · Safetensors · 5 languages · llama · mergekit · Merge · conversational · text-generation-inference · Inference Endpoints
Community
Any way to speed up generation on a Windows 11 PC, using a single 24GB card (4090), with Text-Generation-WebUI
#2 opened 10 months ago by clevnumb