ZeroWw/Test
Tags: Text Generation · Transformers · Safetensors · GGUF · mistral · conversational · text-generation-inference · Inference Endpoints · 8-bit precision · bitsandbytes · License: mit
Test (1 contributor, 18 commits)
Latest commit: f16a678 (verified, 6 months ago) by ZeroWw: "Upload gemma-1.1-7b-it.f16.q5.gguf with huggingface_hub"
| File | Size | LFS | Last commit message | Age |
|---|---|---|---|---|
| .gitattributes | 2.58 kB | | Upload gemma-1.1-7b-it.f16.q5.gguf with huggingface_hub | 6 months ago |
| Mistral-7B-Instruct-v0.3.f16.gguf | 14.5 GB | LFS | Upload Mistral-7B-Instruct-v0.3.f16.gguf with huggingface_hub | 7 months ago |
| Mistral-7B-Instruct-v0.3.f16.q4.gguf | 4.27 GB | LFS | Upload Mistral-7B-Instruct-v0.3.f16.q4.gguf with huggingface_hub | 7 months ago |
| Mistral-7B-Instruct-v0.3.f16.q4.q3.gguf | 4.53 GB | LFS | Upload Mistral-7B-Instruct-v0.3.f16.q4.q3.gguf with huggingface_hub | 7 months ago |
| Mistral-7B-Instruct-v0.3.f16.q5.gguf | 5.47 GB | LFS | Upload Mistral-7B-Instruct-v0.3.f16.q5.gguf with huggingface_hub | 7 months ago |
| Mistral-7B-Instruct-v0.3.f16.q6.gguf | 6.26 GB | LFS | Upload Mistral-7B-Instruct-v0.3.f16.q6.gguf with huggingface_hub | 7 months ago |
| Mistral-7B-Instruct-v0.3.q8.q5.gguf | 5.14 GB | LFS | Upload Mistral-7B-Instruct-v0.3.q8.q5.gguf with huggingface_hub | 7 months ago |
| Mistral-7B-Instruct-v0.3.q8.q6.gguf | 5.95 GB | LFS | Upload Mistral-7B-Instruct-v0.3.q8.q6.gguf with huggingface_hub | 7 months ago |
| README.md | 24 Bytes | | initial commit | 7 months ago |
| TextBase-7B-v0.1.f16.gguf | 14.5 GB | LFS | Upload TextBase-7B-v0.1.f16.gguf with huggingface_hub | 6 months ago |
| TextBase-7B-v0.1.f16.q5.gguf | 5.46 GB | LFS | Upload TextBase-7B-v0.1.f16.q5.gguf with huggingface_hub | 6 months ago |
| TextBase-7B-v0.1.f16.q6.gguf | 6.25 GB | LFS | Upload TextBase-7B-v0.1.f16.q6.gguf with huggingface_hub | 6 months ago |
| TextBase-7B-v0.1.q8.gguf | 7.7 GB | LFS | Upload TextBase-7B-v0.1.q8.gguf with huggingface_hub | 6 months ago |
| config.json | 1.13 kB | | Upload folder using huggingface_hub | 7 months ago |
| gemma-1.1-7b-it.f16.q5.gguf | 7.07 GB | LFS | Upload gemma-1.1-7b-it.f16.q5.gguf with huggingface_hub | 6 months ago |
| gemma-7b-it.f16.gguf | 17.1 GB | LFS | Upload gemma-7b-it.f16.gguf with huggingface_hub | 6 months ago |
| gemma-7b-it.f16.q5.gguf | 7.07 GB | LFS | Upload gemma-7b-it.f16.q5.gguf with huggingface_hub | 6 months ago |
| gemma-7b-it.f16.q6.gguf | 7.94 GB | LFS | Upload gemma-7b-it.f16.q6.gguf with huggingface_hub | 6 months ago |
| gemma-7b-it.q8.gguf | 9.08 GB | LFS | Upload gemma-7b-it.q8.gguf with huggingface_hub | 6 months ago |
| generation_config.json | 111 Bytes | | Upload folder using huggingface_hub | 7 months ago |
| model-00001-of-00002.safetensors | 4.95 GB | LFS | Upload folder using huggingface_hub | 7 months ago |
| model-00002-of-00002.safetensors | 2.57 GB | LFS | Upload folder using huggingface_hub | 7 months ago |
| model.safetensors.index.json | 61.2 kB | | Upload folder using huggingface_hub | 7 months ago |
| special_tokens_map.json | 414 Bytes | | Upload folder using huggingface_hub | 7 months ago |
| tokenizer.json | 1.96 MB | | Upload folder using huggingface_hub | 7 months ago |
| tokenizer.model | 587 kB | LFS | Upload folder using huggingface_hub | 7 months ago |
| tokenizer_config.json | 137 kB | | Upload folder using huggingface_hub | 7 months ago |