
# tallgemma

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method. Passthrough copies the selected layer slices from the source models into the output unchanged, so rather than averaging weights it stacks layers from several parents — here, 54 transformer layers drawn from three 18-layer Gemma variants.

### Models Merged

The following models were included in the merge:

* [google/gemma-2b](https://huggingface.co/google/gemma-2b)
* [google/codegemma-2b](https://huggingface.co/google/codegemma-2b)
* [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: google/gemma-2b
      layer_range: [0, 1]
  - sources:
    - model: google/codegemma-2b
      layer_range: [0, 1]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [0, 1]
  - sources:
    - model: google/gemma-2b
      layer_range: [1, 2]
  - sources:
    - model: google/codegemma-2b
      layer_range: [1, 2]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [1, 2]
  - sources:
    - model: google/gemma-2b
      layer_range: [2, 3]
  - sources:
    - model: google/codegemma-2b
      layer_range: [2, 3]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [2, 3]
  - sources:
    - model: google/gemma-2b
      layer_range: [3, 4]
  - sources:
    - model: google/codegemma-2b
      layer_range: [3, 4]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [3, 4]
  - sources:
    - model: google/gemma-2b
      layer_range: [4, 5]
  - sources:
    - model: google/codegemma-2b
      layer_range: [4, 5]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [4, 5]
  - sources:
    - model: google/gemma-2b
      layer_range: [5, 6]
  - sources:
    - model: google/codegemma-2b
      layer_range: [5, 6]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [5, 6]
  - sources:
    - model: google/gemma-2b
      layer_range: [6, 7]
  - sources:
    - model: google/codegemma-2b
      layer_range: [6, 7]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [6, 7]
  - sources:
    - model: google/gemma-2b
      layer_range: [7, 8]
  - sources:
    - model: google/codegemma-2b
      layer_range: [7, 8]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [7, 8]
  - sources:
    - model: google/gemma-2b
      layer_range: [8, 9]
  - sources:
    - model: google/codegemma-2b
      layer_range: [8, 9]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [8, 9]
  - sources:
    - model: google/gemma-2b
      layer_range: [9, 10]
  - sources:
    - model: google/codegemma-2b
      layer_range: [9, 10]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [9, 10]
  - sources:
    - model: google/gemma-2b
      layer_range: [10, 11]
  - sources:
    - model: google/codegemma-2b
      layer_range: [10, 11]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [10, 11]
  - sources:
    - model: google/gemma-2b
      layer_range: [11, 12]
  - sources:
    - model: google/codegemma-2b
      layer_range: [11, 12]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [11, 12]
  - sources:
    - model: google/gemma-2b
      layer_range: [12, 13]
  - sources:
    - model: google/codegemma-2b
      layer_range: [12, 13]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [12, 13]
  - sources:
    - model: google/gemma-2b
      layer_range: [13, 14]
  - sources:
    - model: google/codegemma-2b
      layer_range: [13, 14]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [13, 14]
  - sources:
    - model: google/gemma-2b
      layer_range: [14, 15]
  - sources:
    - model: google/codegemma-2b
      layer_range: [14, 15]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [14, 15]
  - sources:
    - model: google/gemma-2b
      layer_range: [15, 16]
  - sources:
    - model: google/codegemma-2b
      layer_range: [15, 16]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [15, 16]
  - sources:
    - model: google/gemma-2b
      layer_range: [16, 17]
  - sources:
    - model: google/codegemma-2b
      layer_range: [16, 17]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [16, 17]
  - sources:
    - model: google/gemma-2b
      layer_range: [17, 18]
  - sources:
    - model: google/codegemma-2b
      layer_range: [17, 18]
  - sources:
    - model: google/gemma-1.1-2b-it
      layer_range: [17, 18]
merge_method: passthrough
dtype: float16
```
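The 54 slice entries above follow a strictly regular pattern: for each of the 18 layers, a one-layer slice is taken from each of the three source models in turn. A configuration this repetitive can be generated programmatically; the sketch below (plain Python, no mergekit dependency, not part of the original card) rebuilds the same structure:

```python
# Sketch: regenerating the `slices` section of the configuration above.
# For each layer index, one single-layer slice is taken from each source
# model in order, matching the hand-written YAML.
MODELS = ["google/gemma-2b", "google/codegemma-2b", "google/gemma-1.1-2b-it"]

def build_slices(num_layers: int = 18) -> list:
    """Interleave one-layer slices from each model across all layers."""
    slices = []
    for layer in range(num_layers):
        for model in MODELS:
            slices.append(
                {"sources": [{"model": model,
                              "layer_range": [layer, layer + 1]}]}
            )
    return slices

config = {
    "slices": build_slices(),
    "merge_method": "passthrough",
    "dtype": "float16",
}
```

Serializing `config` with a YAML library (e.g. `yaml.safe_dump`) reproduces the configuration above, which can help avoid copy-paste errors when changing the layer count or model list.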
Model size: 6.47B parameters (FP16, Safetensors).
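To reproduce a merge from a configuration like this one, mergekit provides the `mergekit-yaml` entry point. A minimal invocation sketch follows; the file name and output directory are placeholders, not values from the original card:

```shell
# Install mergekit, then run the merge. "tallgemma.yaml" is the
# configuration above saved to disk; "./tallgemma" is a placeholder
# output directory for the merged model.
pip install mergekit
mergekit-yaml tallgemma.yaml ./tallgemma --cuda
```

The `--cuda` flag performs the merge on GPU; it can be dropped to merge on CPU at the cost of speed.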