Overview

Gemma 2 is a family of state-of-the-art open models from Google, trained on a dataset that includes both synthetic data and filtered, publicly available web data, with a focus on high-quality, reasoning-dense content. This repository provides the Gemma 2 family in 2B, 9B, and 27B parameter sizes, each supporting a context length of 8K tokens.

Variants

| No. | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | Gemma2-2b | `cortex run gemma2:2b` |
| 2 | Gemma2-9b | `cortex run gemma2:9b` |
| 3 | Gemma2-27b | `cortex run gemma2:27b` |
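
For example, to start chatting with one of the variants above from a terminal (a minimal sketch; it assumes Cortex is already installed as described in the sections below, and that `cortex run` downloads the model on first use):

    # Start an interactive chat with the 9B variant; the model files are
    # assumed to be downloaded automatically on first run
    cortex run gemma2:9b

    # Optionally list the models available locally; the `models list`
    # subcommand is assumed here, check `cortex --help` if it differs
    cortex models list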

Use it with Jan (UI)

  1. Install Jan using the Quickstart guide
  2. Search for and use the model in the Jan Model Hub:
    cortexso/gemma2
    

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart guide
  2. Run the model with the following command (a sample API request follows these steps):
    cortex run gemma2
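
Once the model is running, Cortex also exposes an OpenAI-compatible HTTP API that other applications can call. The request below is a minimal sketch: the address localhost:39281 is the assumed default for a local Cortex API server, and the prompt is only an illustration; adjust the host, port, and model name to match your setup.

    # Send a chat completion request to the locally running gemma2 model.
    # localhost:39281 is the assumed default Cortex API address; change it
    # if your installation listens elsewhere.
    curl http://localhost:39281/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gemma2",
        "messages": [
          {"role": "user", "content": "Summarize what the Gemma 2 model family is in one sentence."}
        ]
      }'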
    

Credits
