---
library_name: transformers
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: >-
  To access CodeGemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
pipeline_tag: text-generation
widget:
  - text: >
      <start_of_turn>user Write a Python function to calculate the nth fibonacci
      number.<end_of_turn> <start_of_turn>model
inference:
  parameters:
    max_new_tokens: 200
license: gemma
license_link: https://ai.google.dev/gemma/terms
quantized_by: bartowski
---

ExLlamaV2 Quantizations of codegemma-1.1-7b-it

Using turboderp's ExLlamaV2 v0.0.20 for quantization.

The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.

Original model: https://huggingface.co/google/codegemma-1.1-7b-it

Prompt format

<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
<end_of_turn>
<start_of_turn>model

Note that this model does not support a system prompt.
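
If you are templating prompts yourself rather than using the tokenizer's chat template, here is a minimal Python sketch of the format above (build_prompt is a hypothetical helper, not part of any library):

def build_prompt(user_message: str) -> str:
    # One user turn followed by the opening of the model turn,
    # exactly as in the template above; there is no system turn.
    return (
        "<bos><start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_prompt("Write a Python function to calculate the nth fibonacci number.")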

Available sizes

No GQA, so VRAM requirements will be higher than for comparable models that use grouped-query attention (see the rough per-token estimate below the table).

| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ----------- |
| 8_0 | 8.0 | 8.0 | 14.0 GB | 19.4 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| 6_5 | 6.5 | 8.0 | 12.5 GB | 17.9 GB | Near unquantized performance at vastly reduced size, recommended. |
| 5_0 | 5.0 | 6.0 | 10.9 GB | 16.3 GB | Slightly lower quality vs 6.5, but usable on 8 GB cards with 4k context. |
| 4_25 | 4.25 | 6.0 | 10.2 GB | 15.7 GB | GPTQ-equivalent bits per weight. |
| 3_5 | 3.5 | 6.0 | 9.5 GB | 14.9 GB | Lower quality, not recommended. |
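
Because the cache is not grouped, context is expensive: the gap between the 4k and 16k columns is roughly the KV cache alone. A back-of-envelope Python estimate, assuming the cache grows linearly with context (an assumption, not a measured figure):

# Per-token KV-cache cost implied by the 6_5 row of the table above
extra_gb = 17.9 - 12.5        # VRAM added when context grows from 4k to 16k
extra_tokens = 16384 - 4096   # tokens added to the cache
print(f"~{extra_gb / extra_tokens * 1024:.2f} GB per 1k tokens of context")  # ~0.45 GB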

Download instructions

With git:

git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/codegemma-1.1-7b-it-exl2 codegemma-1.1-7b-it-exl2-6_5

With the huggingface-hub CLI (credit to TheBloke for the instructions):

pip3 install huggingface-hub

To download a specific branch, use the --revision parameter. For example, to download the 6.5 bpw branch:

Linux:

huggingface-cli download bartowski/codegemma-1.1-7b-it-exl2 --revision 6_5 --local-dir codegemma-1.1-7b-it-exl2-6_5 --local-dir-use-symlinks False

Windows (which sometimes has trouble with _ in folder names):

huggingface-cli download bartowski/codegemma-1.1-7b-it-exl2 --revision 6_5 --local-dir codegemma-1.1-7b-it-exl2-6.5 --local-dir-use-symlinks False
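
The same download can also be scripted from Python with huggingface_hub's snapshot_download; a minimal sketch using the repo and branch from the commands above:

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/codegemma-1.1-7b-it-exl2",
    revision="6_5",  # branch name from the table above
    local_dir="codegemma-1.1-7b-it-exl2-6_5",
)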

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski