TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF

Tags: Transformers · GGUF · English · Mixtral · instruct · finetune · chatml · DPO · RLHF · gpt4 · synthetic data · distillation · conversational
Community · 4 discussions

Is this filtered or unfiltered, and can it run on my 3090?
#4 · opened about 1 year ago by Avos0001

32k ctx doesn't work on this model for GGUF
#3 · opened about 1 year ago by danieloneill

Using the new GGUF quant method may result in worse overall performance than the old GGUF quants.
#2 · opened over 1 year ago by TheYuriLover

Requesting IQ2_XXS and IQ2_XS quants. (3)
#1 · opened over 1 year ago by benxh