---
library_name: llama.cpp
license: gemma
widget:
  - text: |
      <start_of_turn>user
      How does the brain work?<end_of_turn>
      <start_of_turn>model
inference:
  parameters:
    max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# Llama.cpp imatrix quantizations of google/gemma-2-2b-it


Using llama.cpp commit 268c566 for quantization.

Original model: https://huggingface.co/google/gemma-2-2b-it

All quants were made using the imatrix option and Bartowski's calibration file.
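For reference, the imatrix workflow can be sketched with llama.cpp's own tools. The file names below (the f16 source GGUF, the calibration file, and the Q4_K_M output) are placeholders for illustration, not the exact commands used to produce this repo's quants:

```shell
# Compute an importance matrix from a calibration file
# (paths and file names here are placeholders)
./llama-imatrix -m gemma-2-2b-it-f16.gguf -f calibration_data.txt -o imatrix.dat

# Quantize using that importance matrix; Q4_K_M is just one example type
./llama-quantize --imatrix imatrix.dat \
  gemma-2-2b-it-f16.gguf gemma-2-2b-it-Q4_K_M.gguf Q4_K_M
```

The importance matrix weights the quantization error by how much each tensor entry actually mattered on the calibration data, which generally improves quality at low bit widths.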



# Gemma Model Card

Model Page: Gemma

This model card corresponds to the 2B instruct version of the Gemma 2 model in GGUF format. The weights here are float32.

When running this model in llama.cpp or related tools such as Ollama and LM Studio, make sure the sampling flags are set correctly, especially repeat-penalty. Georgi Gerganov (llama.cpp's author) shared his experience in https://huggingface.co/google/gemma-2b-it/discussions/38#65d2b14adb51f7c160769fa1.
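As a concrete illustration, a llama-cli invocation might look like the following. The quant file name is a placeholder, and `--repeat-penalty 1.0` (i.e. the penalty disabled) is an assumption based on the linked discussion, so check it for the current recommendation:

```shell
# Example run with llama.cpp's CLI; --repeat-penalty 1.0 disables the repeat penalty
# The prompt uses Gemma's chat turn markers; $'…' lets bash interpret the \n escapes
./llama-cli -m gemma-2-2b-it-Q4_K_M.gguf \
  --repeat-penalty 1.0 \
  -p $'<start_of_turn>user\nHow does the brain work?<end_of_turn>\n<start_of_turn>model\n'
```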

You can also visit the model card of the 2B pretrained v2 GGUF model.

Resources and Technical Documentation:

- Terms of Use: Terms
- Authors: Google

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources, such as a laptop, a desktop, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.