Llama-2-7b-chat-hf-GGUF

Based on Llama-2-7b-chat-hf by Meta. This version has been converted to:

  • GGML_VERSION = "gguf"
  • Conversion precision = float16
  • Quantization method = q4_k_s (uses Q4_K for all tensors; the name follows the convention "q" + number of bits + variant)
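The quantization naming convention above can be illustrated with a minimal sketch that splits a llama.cpp-style quantization name into its bit count and variant. The helper name is hypothetical, not part of any library:

```python
# Minimal sketch: decode a quantization-type name such as "q4_k_s"
# following the "q" + number of bits + variant convention described above.
# parse_quant_name is a hypothetical helper for illustration only.
def parse_quant_name(name: str):
    """Split e.g. 'q4_k_s' into (bits, variant)."""
    if not name.startswith("q"):
        raise ValueError(f"not a quantization name: {name!r}")
    head, _, variant = name[1:].partition("_")
    return int(head), variant or None

print(parse_quant_name("q4_k_s"))  # (4, 'k_s') -> 4-bit, K-quant "small" variant
print(parse_quant_name("q8_0"))    # (8, '0')   -> 8-bit, variant 0
```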

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.

  • This is the repository for the 7B fine-tuned model, optimized for dialogue use cases, converted from the Hugging Face Transformers format to GGUF.
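A GGUF file begins with a small fixed header (magic bytes, format version, tensor count, metadata key/value count). As a minimal sketch of what the conversion target looks like on disk, the following parses that header; the in-memory header bytes and counts are fabricated for illustration:

```python
import struct

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file

def read_gguf_header(buf: bytes):
    """Unpack the fixed GGUF header: magic, version, tensor count, metadata KV count."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", buf)
    if magic != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    return version, n_tensors, n_kv

# Fabricated header standing in for a real .gguf file:
# version 3, 291 tensors, 24 metadata entries.
header = struct.pack("<4sIQQ", GGUF_MAGIC, 3, 291, 24)
print(read_gguf_header(header))  # (3, 291, 24)
```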

Model Details

  • Model Developers: Meta
  • Input: Models accept text input only.
  • Output: Models generate text only.
  • Model Dates: Llama 2 was trained between January 2023 and July 2023.
  • Status: This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
  • Model Architecture: Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align to human preferences for helpfulness and safety.