flozi00/Llama-2-13B-german-assistant-v3

Pipeline: Text Generation
Library: Transformers (PyTorch)
Dataset: flozi00/conversations
Languages: English, German
Tags: llama, text-generation-inference, Inference Endpoints
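The tags above mark this as a Llama causal LM served through the Transformers text-generation pipeline. A minimal loading sketch follows; it is an illustration only (the function name `build_generator` and the example prompt are not from the repository, and the fp16 shards total roughly 26 GB, so actually running it requires a GPU or enough RAM to hold the model):

```python
# Hypothetical usage sketch for this checkpoint; not from the model card.
REPO_ID = "flozi00/Llama-2-13B-german-assistant-v3"

def build_generator():
    """Build a text-generation pipeline for the assistant model.

    Imported lazily so merely defining this sketch does not require
    `transformers` (or a ~26 GB download) to be present.
    """
    from transformers import pipeline

    return pipeline(
        "text-generation",
        model=REPO_ID,
        torch_dtype="auto",   # shards are stored as torch.HalfStorage (fp16)
        device_map="auto",    # assumes `accelerate` is installed
    )

if __name__ == "__main__":
    generator = build_generator()
    # German prompt, matching the model's German/English language tags.
    print(generator("Hallo, wie geht es dir?", max_new_tokens=50)[0]["generated_text"])
```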
History: 4 commits · 1 contributor (flozi00)
Latest commit: 387ad8e — "Create README.md" — 12 months ago
| File | Size | Last commit | Age |
|---|---|---|---|
| .gitattributes | 1.52 kB | initial commit | 12 months ago |
| README.md | 822 Bytes | Create README.md | 12 months ago |
| config.json | 640 Bytes | Upload LlamaForCausalLM | 12 months ago |
| generation_config.json | 132 Bytes | Upload LlamaForCausalLM | 12 months ago |
| pytorch_model-00001-of-00003.bin (LFS, pickle) | 9.95 GB | Upload LlamaForCausalLM | 12 months ago |
| pytorch_model-00002-of-00003.bin (LFS, pickle) | 9.9 GB | Upload LlamaForCausalLM | 12 months ago |
| pytorch_model-00003-of-00003.bin (LFS, pickle) | 6.18 GB | Upload LlamaForCausalLM | 12 months ago |
| pytorch_model.bin.index.json | 33.4 kB | Upload LlamaForCausalLM | 12 months ago |
| special_tokens_map.json | 434 Bytes | Upload tokenizer | 12 months ago |
| tokenizer.model (LFS) | 500 kB | Upload tokenizer | 12 months ago |
| tokenizer_config.json | 745 Bytes | Upload tokenizer | 12 months ago |

Each pytorch_model-*.bin shard is a pickled PyTorch checkpoint. Detected pickle imports (3, identical for every shard): torch.HalfStorage, torch._utils._rebuild_tensor_v2, collections.OrderedDict.