
Quantization made by Richard Erkhov.

  • Github
  • Discord
  • Request more models

20231206094523-pretrain-Llama-2-13b-hf-76000 - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q2_K.gguf | Q2_K | 4.65GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.IQ3_XS.gguf | IQ3_XS | 5.13GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.IQ3_S.gguf | IQ3_S | 5.41GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q3_K_S.gguf | Q3_K_S | 5.41GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.IQ3_M.gguf | IQ3_M | 5.71GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q3_K.gguf | Q3_K | 6.04GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q3_K_M.gguf | Q3_K_M | 6.04GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q3_K_L.gguf | Q3_K_L | 6.59GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.IQ4_XS.gguf | IQ4_XS | 6.69GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q4_0.gguf | Q4_0 | 7.01GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.IQ4_NL.gguf | IQ4_NL | 7.06GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q4_K_S.gguf | Q4_K_S | 7.07GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q4_K.gguf | Q4_K | 7.48GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q4_K_M.gguf | Q4_K_M | 7.48GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q4_1.gguf | Q4_1 | 7.77GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q5_0.gguf | Q5_0 | 8.52GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q5_K_S.gguf | Q5_K_S | 8.52GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q5_K.gguf | Q5_K | 8.76GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q5_K_M.gguf | Q5_K_M | 8.76GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q5_1.gguf | Q5_1 | 9.28GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q6_K.gguf | Q6_K | 10.13GB |
| 20231206094523-pretrain-Llama-2-13b-hf-76000.Q8_0.gguf | Q8_0 | 13.12GB |
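
As a rough illustration of how one of the files listed above could be used locally, the minimal sketch below downloads a single GGUF file and runs a plain completion. It assumes the `huggingface_hub` and `llama-cpp-python` packages; the repo id in the code is an assumption inferred from the model name and should be adjusted to the actual repository hosting these files.

```python
# A minimal sketch, not an official recipe: fetch one quantized file and run a prompt.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    # Assumed repo id; replace with the repository that actually hosts the GGUF files above.
    repo_id="RichardErkhov/zyh3826_-_20231206094523-pretrain-Llama-2-13b-hf-76000-gguf",
    filename="20231206094523-pretrain-Llama-2-13b-hf-76000.Q4_K_M.gguf",
)

# Load the quantized model; this is a pretrained (base) model, so use plain completion,
# not a chat template.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("北京是中国的", max_tokens=64)
print(out["choices"][0]["text"])
```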

Original model description:

license: llama2
datasets:
  - YeungNLP/firefly-pretrain-dataset
language:
  - zh
  - en

Model Details

  • Developed by: zyh3826
  • Backbone Model: llama-2-13B
  • Library: HuggingFace Transformers
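
Since the original card lists HuggingFace Transformers as the library, a minimal loading sketch for the unquantized checkpoint might look like the following. The repo id is an assumption built from the uploader name and model name; substitute the actual repository.

```python
# A minimal sketch assuming the original (non-GGUF) checkpoint is available on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "zyh3826/20231206094523-pretrain-Llama-2-13b-hf-76000"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# Base-model completion in Chinese, matching the training data languages (zh/en).
inputs = tokenizer("北京是中国的", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```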

Limitations & Biases:

Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any application of a Llama 2 variant, developers should perform safety testing and tuning tailored to their specific application of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

License Disclaimer:

This model is bound by the license and usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

Full-parameter continued pretraining of Llama-2-13B on Chinese data; the checkpoint at step 76,000 was taken for testing. Other details are to be added.
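
For orientation only, a hypothetical sketch of what full-parameter continued pretraining on the listed dataset might look like with the Hugging Face Trainer is shown below. The base repo id, dataset split, text column name, and all hyperparameters are assumptions, not the uploader's actual recipe.

```python
# Hypothetical sketch of continued pretraining; not the author's actual training code.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-13b-hf"            # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token      # Llama tokenizers have no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

ds = load_dataset("YeungNLP/firefly-pretrain-dataset", split="train")  # split name assumed

def tokenize(batch):
    # "text" is an assumed column name; adjust to the dataset's actual schema.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

ds = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ckpts", save_steps=2000, bf16=True,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # checkpoints land in ckpts/checkpoint-<step>, e.g. step 76,000
```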
