
Malaysian-llama-3-8b-instruct-16k-bonito-v1

Malaysian-llama-3-8b-instruct-16k-bonito-v1 is a merge of the following models using mergekit:

- BatsResearch/bonito-v1
- mesolitica/malaysian-llama-3-8b-instruct-16k-post

🧩 Configuration

```yaml
slices:
  - sources:
      - model: BatsResearch/bonito-v1
        layer_range: [0, 32]
      - model: mesolitica/malaysian-llama-3-8b-instruct-16k-post
        layer_range: [0, 32]
merge_method: slerp
base_model: BatsResearch/bonito-v1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
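
The configuration above can be run with mergekit's Python API. The sketch below is illustrative rather than the exact command used to build this model: the config filename, output directory, and runtime options are assumptions.

```python
# Minimal sketch: run the SLERP merge above with mergekit's Python API.
# Assumes mergekit is installed and the YAML config above is saved locally.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "slerp-config.yaml"  # assumed filename for the configuration shown above
OUTPUT_PATH = "./malaysian-llama-3-8b-instruct-16k-bonito-v1"  # illustrative output directory

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```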
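💻 Usage

The merged model can be loaded like any Llama-3-style instruct model with Hugging Face Transformers. A minimal usage sketch, assuming the published repository id matches the model name above and that bfloat16 weights fit on your hardware:

```python
# Minimal usage sketch with Hugging Face Transformers.
# The repository id below is a placeholder; replace it with the actual
# namespace this model is published under.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Malaysian-llama-3-8b-instruct-16k-bonito-v1"  # assumed/placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Terangkan apa itu model bahasa besar."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```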