edumunozsala/llama-2-7b-int4-GPTQ-python-code-20k
Tags: Text Generation · Transformers · PyTorch · code · llama · llama-2 · gptq · quantization · text-generation-inference · Inference Endpoints · 4-bit precision
Dataset: iamtarun/python_code_instructions_18k_alpaca
License: gpl-3.0
llama-2-7b-int4-GPTQ-python-code-20k (1 contributor, 5 commits)
Latest commit a406798 by SFconvertbot: Adding `safetensors` variant of this model (11 months ago)
| File | Size | Last commit | Age |
|---|---|---|---|
| .gitattributes | 1.52 kB | initial commit | 11 months ago |
| README.md | 3.19 kB | Upload README.md | 11 months ago |
| config.json | 1.13 kB | Upload LlamaForCausalLM | 11 months ago |
| generation_config.json | 132 Bytes | Upload LlamaForCausalLM | 11 months ago |
| model.safetensors (LFS) | 3.9 GB | Adding `safetensors` variant of this model | 11 months ago |
| pytorch_model.bin (LFS, pickle) | 3.9 GB | Upload LlamaForCausalLM | 11 months ago |
| special_tokens_map.json | 434 Bytes | Upload tokenizer | 11 months ago |
| tokenizer.json | 1.84 MB | Upload tokenizer | 11 months ago |
| tokenizer_config.json | 732 Bytes | Upload tokenizer | 11 months ago |

Detected pickle imports (4) in pytorch_model.bin: `torch.IntStorage`, `torch.HalfStorage`, `torch._utils._rebuild_tensor_v2`, `collections.OrderedDict`.