MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF
Tags: Text Generation · Transformers · GGUF · mistral · quantized · 2-bit · 3-bit · 4-bit precision · 5-bit · 6-bit · 8-bit precision · llama · llama-3 · conversational
Community (4)
Open discussions (2 closed):
Warning: llm_load_vocab: missing pre-tokenizer type, using: 'default' · 1 · #4 opened 7 months ago by supercharge19
The f16 with 32k ctx fits nicely in 24GB VRAM · 5 · #3 opened 7 months ago by ubergarm