oleg-go/mistral-7b-GGUF-Q16K
Tags: Text Generation · Transformers · PyTorch · GGUF · mistral · conversational · Inference Endpoints · text-generation-inference
Branch: main · 1 contributor · History: 6 commits
Latest commit by oleg-go: "Rename ggml-model-f16.gguf to mistral-7b-GGUF-Q16K.gguf" (e12d701, 7 months ago)
| File | Size | LFS | Last commit message | Age |
| --- | --- | --- | --- | --- |
| .editorconfig | 12 Bytes | | Upload folder using huggingface_hub | 7 months ago |
| .gitattributes | 2.05 kB | | Rename ggml-model-f16.gguf to mistral-7b-GGUF-Q16K.gguf | 7 months ago |
| added_tokens.json | 51 Bytes | | Upload folder using huggingface_hub | 7 months ago |
| config.json | 618 Bytes | | Upload folder using huggingface_hub | 7 months ago |
| generation_config.json | 120 Bytes | | Upload folder using huggingface_hub | 7 months ago |
| ggml-vocab-aquila.gguf | 4.83 MB | LFS | Upload folder using huggingface_hub | 7 months ago |
| ggml-vocab-baichuan.gguf | 1.34 MB | LFS | Upload folder using huggingface_hub | 7 months ago |
| ggml-vocab-falcon.gguf | 2.55 MB | LFS | Upload folder using huggingface_hub | 7 months ago |
| ggml-vocab-gpt-neox.gguf | 1.77 MB | LFS | Upload folder using huggingface_hub | 7 months ago |
| ggml-vocab-llama.gguf | 595 kB | | Upload folder using huggingface_hub | 7 months ago |
| ggml-vocab-mpt.gguf | 1.77 MB | LFS | Upload folder using huggingface_hub | 7 months ago |
| ggml-vocab-refact.gguf | 1.72 MB | LFS | Upload folder using huggingface_hub | 7 months ago |
| ggml-vocab-starcoder.gguf | 1.72 MB | LFS | Upload folder using huggingface_hub | 7 months ago |
| mistral-7b-GGUF-Q16K.gguf | 14.5 GB | LFS | Rename ggml-model-f16.gguf to mistral-7b-GGUF-Q16K.gguf | 7 months ago |
| pytorch_model.bin.index.json | 24 kB | | Upload folder using huggingface_hub | 7 months ago |
| special_tokens_map.json | 416 Bytes | | Upload folder using huggingface_hub | 7 months ago |
| tokenizer.json | 1.8 MB | | Upload folder using huggingface_hub | 7 months ago |
| tokenizer.model | 493 kB | LFS | Upload folder using huggingface_hub | 7 months ago |
| tokenizer_config.json | 1.64 kB | | Upload folder using huggingface_hub | 7 months ago |
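The main weight file, mistral-7b-GGUF-Q16K.gguf (14.5 GB, stored via Git LFS), can be fetched through the Hub's standard `resolve` URL scheme. A minimal sketch of building that direct-download URL for this repo (the `/resolve/<revision>/<filename>` pattern is the Hub's public convention; in practice `huggingface_hub.hf_hub_download` would handle downloading, resumption, and caching for you):

```python
# Build the direct-download URL for a file in a Hugging Face repo.
# The https://huggingface.co/<repo_id>/resolve/<revision>/<filename>
# pattern is the Hub's standard raw-file scheme (LFS-backed or not).

def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hf_resolve_url("oleg-go/mistral-7b-GGUF-Q16K", "mistral-7b-GGUF-Q16K.gguf")
print(url)
# https://huggingface.co/oleg-go/mistral-7b-GGUF-Q16K/resolve/main/mistral-7b-GGUF-Q16K.gguf
```

Note that the commit history shows this file was renamed from ggml-model-f16.gguf, and its 14.5 GB size is consistent with an unquantized FP16 7B model rather than a Q16K-style quantization, despite the repo name.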