TheBloke / Tulu-13B-SuperHOT-8K-GPTQ
Text Generation · Transformers · Safetensors · llama · custom_code · text-generation-inference · 4-bit precision · gptq
arxiv: 2306.04751 · arxiv: 2302.13971 · arxiv: 2304.07327
License: other
Tulu-13B-SuperHOT-8K-GPTQ · 1 contributor · History: 28 commits
Latest commit: 4ee67f4 "Update for Transformers GPTQ support" by TheBloke, 10 months ago
File                               Size       Last commit                            Age
.gitattributes                     1.52 kB    initial commit                         12 months ago
README.md                          9.94 kB    Update for Transformers GPTQ support   10 months ago
added_tokens.json                  21 Bytes   Initial GPTQ model commit              12 months ago
config.json                        1.02 kB    Update for Transformers GPTQ support   10 months ago
generation_config.json             137 Bytes  Initial GPTQ model commit              12 months ago
llama_rope_scaled_monkey_patch.py  2.59 kB    Initial GPTQ model commit              12 months ago
model.safetensors (LFS)            7.45 GB    Update for Transformers GPTQ support   10 months ago
modelling_llama.py                 39.5 kB    Initial GPTQ model commit              12 months ago
quantize_config.json               158 Bytes  Update for Transformers GPTQ support   10 months ago
special_tokens_map.json            96 Bytes   Initial GPTQ model commit              12 months ago
tokenizer.json                     1.84 MB    Initial GPTQ model commit              12 months ago
tokenizer.model (LFS)              500 kB     Initial GPTQ model commit              12 months ago
tokenizer_config.json              715 Bytes  Initial GPTQ model commit              12 months ago