SushiTokyo/ELYZA-japanese-Llama-2-13b-fast-instruct-4bit-quantized
Tags: Text Generation · Transformers · Safetensors · llama · text-generation-inference · 4-bit precision · gptq
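The tags indicate a 4-bit GPTQ quantization of ELYZA-japanese-Llama-2-13b-fast-instruct served through Transformers. The repository's own sample.py is not reproduced in this listing; the snippet below is only a minimal usage sketch, assuming the checkpoint loads through Transformers' built-in GPTQ support (auto-gptq and optimum installed) and that the model follows ELYZA's Llama-2 style [INST] chat prompt format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SushiTokyo/ELYZA-japanese-Llama-2-13b-fast-instruct-4bit-quantized"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ 4-bit weights are loaded as-is; auto-gptq (or gptqmodel) plus optimum
# must be installed for Transformers to dequantize them at runtime.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumed prompt format: ELYZA instruct models follow the Llama-2 chat layout.
prompt = (
    "[INST] <<SYS>>\nあなたは誠実で優秀な日本語アシスタントです。\n<</SYS>>\n\n"
    "日本で一番高い山は何ですか? [/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```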
2 contributors · History: 5 commits
Latest commit: 81f0fb3 "add sample and test" by YKKY, about 1 year ago
Files:

| File | Size | LFS | Commit message | Last updated |
|---|---|---|---|---|
| .gitattributes | 1.52 kB |  | initial commit | about 1 year ago |
| README.md | 619 Bytes |  | add sample and test | about 1 year ago |
| config.json | 1.35 kB |  | init | about 1 year ago |
| generation_config.json | 132 Bytes |  | init | about 1 year ago |
| model-00001-of-00002.safetensors | 4.97 GB | LFS | init | about 1 year ago |
| model-00002-of-00002.safetensors | 2.8 GB | LFS | init | about 1 year ago |
| model.safetensors.index.json | 120 kB |  | init | about 1 year ago |
| quantize.py | 464 Bytes |  | init | about 1 year ago |
| sample.py | 1.11 kB |  | add sample and test | about 1 year ago |
| special_tokens_map.json | 624 Bytes |  | init | about 1 year ago |
| task100.csv | 191 kB |  | add sample and test | about 1 year ago |
| task100.py | 1.71 kB |  | add sample and test | about 1 year ago |
| tokenizer.json | 2.4 MB |  | init | about 1 year ago |
| tokenizer.model | 705 kB | LFS | init | about 1 year ago |
| tokenizer_config.json | 1.03 kB |  | init | about 1 year ago |
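quantize.py (464 Bytes) presumably produced the 4-bit GPTQ shards listed above, but its contents are not shown here. As a rough illustration only, assuming the Transformers GPTQConfig API, a hypothetical source checkpoint, and a calibration dataset choice, a quantization step of this kind could look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# Hypothetical source checkpoint; the actual quantize.py in this repo may differ.
base_model_id = "elyza/ELYZA-japanese-Llama-2-13b-fast-instruct"
output_dir = "ELYZA-japanese-Llama-2-13b-fast-instruct-4bit-quantized"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# 4-bit GPTQ; the "c4" calibration dataset here is an assumption.
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# Quantization runs during loading and needs a GPU plus the auto-gptq backend.
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    quantization_config=gptq_config,
)

# Saving writes sharded safetensors files and configs like those in this repository.
model.save_pretrained(output_dir, safe_serialization=True)
tokenizer.save_pretrained(output_dir)
```

The sharded safetensors sizes above (roughly 7.8 GB in total) are consistent with 4-bit weights for a 13B-parameter model once quantization scales and non-quantized layers are included.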