
# llama3-Fasal-Mitra

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the task arithmetic merge method, with unsloth/llama-3-8b-Instruct as the base model.
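In task arithmetic, each fine-tuned model contributes a weighted "task vector" (its parameter delta from the base), and the deltas are summed onto the base. A toy sketch of that update rule, using scalar stand-ins instead of real checkpoint tensors (all names and values here are illustrative, not taken from the actual models):

```python
def task_arithmetic_merge(base, weighted_models):
    """Merge per-parameter values: merged = base + sum_i w_i * (model_i - base).

    base: dict mapping parameter names to values.
    weighted_models: list of (weight, state_dict) pairs.
    """
    merged = {}
    for name, base_val in base.items():
        delta = sum(w * (sd[name] - base_val) for w, sd in weighted_models)
        merged[name] = base_val + delta
    return merged

# Scalar stand-ins for the real 8B-parameter tensors.
base = {"layer.w": 1.0}
gaja = {"layer.w": 2.0}    # stand-in for the Hindi fine-tune
dhenu = {"layer.w": 3.0}   # stand-in for the agriculture fine-tune

# Note: listing the base model itself with a weight (0.20 in the config
# below) is a no-op under task arithmetic, since its delta from the base
# is zero.
merged = task_arithmetic_merge(base, [(0.40, gaja), (0.40, dhenu)])
# merged["layer.w"] == 1.0 + 0.40*(2.0-1.0) + 0.40*(3.0-1.0) == 2.2
```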

### Models Merged

The following models were included in the merge:

* Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1
* KissanAI/llama3-8b-dhenu-0.1-sft-16bit

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: unsloth/llama-3-8b-Instruct
    parameters:
      weight: 0.20
  - model: Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1
    parameters:
      weight: 0.40
  - model: KissanAI/llama3-8b-dhenu-0.1-sft-16bit
    parameters:
      weight: 0.40
base_model: unsloth/llama-3-8b-Instruct
merge_method: task_arithmetic
dtype: bfloat16
```
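The merge can be reproduced by saving this configuration to a file and passing it to mergekit's `mergekit-yaml` command-line entry point. A minimal sketch (the file and output-directory names are illustrative):

```python
# Write the merge configuration from this card to disk. Running
#   mergekit-yaml config.yml ./llama3-Fasal-Mitra
# (mergekit's CLI) against this file would then produce the merged
# checkpoint, given enough disk space and memory for three 8B models.
CONFIG = """\
models:
  - model: unsloth/llama-3-8b-Instruct
    parameters:
      weight: 0.20
  - model: Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1
    parameters:
      weight: 0.40
  - model: KissanAI/llama3-8b-dhenu-0.1-sft-16bit
    parameters:
      weight: 0.40
base_model: unsloth/llama-3-8b-Instruct
merge_method: task_arithmetic
dtype: bfloat16
"""

with open("config.yml", "w") as f:
    f.write(CONFIG)
```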