
Merge Details

Merge Method

This model was merged using the SLERP merge method.

SLERP

SLERP (Spherical Linear Interpolation) is a method for smoothly interpolating between two vectors while maintaining a constant rate of change and preserving the geometric properties of the spherical space in which the vectors lie.

It is favored over traditional linear interpolation, especially in high-dimensional spaces, because linear interpolation can shrink the magnitude of the interpolated vector. In such spaces, the change in the weights' direction often carries more meaningful information (such as feature learning and representation) than the magnitude of the change itself.

The SLERP procedure first normalizes the input vectors to unit length, so that they represent directions rather than magnitudes, and computes the angle between them from their dot product. When the vectors are nearly collinear, it falls back to linear interpolation for efficiency. Otherwise, it computes scale factors from the interpolation factor (t=0 gives 100% of the first vector, t=1 gives 100% of the second) and the angle between the vectors.

These scale factors are then applied to the original vectors, and the weighted vectors are summed to produce the interpolated result.
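
Below is a minimal NumPy sketch of the procedure described above (normalize, measure the angle, fall back to linear interpolation when the vectors are nearly collinear, otherwise combine with sine-based scale factors). The function name and the dot_threshold value are illustrative choices, not mergekit's exact implementation:

# Illustrative SLERP between two flattened weight vectors (assumes NumPy).
import numpy as np

def slerp(t, v0, v1, eps=1e-8, dot_threshold=0.9995):
    # Normalize copies so the inputs represent directions, not magnitudes.
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)

    # Angle between the two directions via their dot product.
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)

    # Nearly collinear vectors: fall back to plain linear interpolation.
    if abs(dot) > dot_threshold:
        return (1.0 - t) * v0 + t * v1

    theta = np.arccos(dot)
    sin_theta = np.sin(theta)

    # Scale factors derived from t and the angle, applied to the original vectors.
    s0 = np.sin((1.0 - t) * theta) / sin_theta
    s1 = np.sin(t * theta) / sin_theta
    return s0 * v0 + s1 * v1

# t=0 returns the first vector, t=1 the second, t=0.5 a spherical blend.
a = np.random.randn(4096)
b = np.random.randn(4096)
merged = slerp(0.5, a, b)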

Models Merged

The following models were included in the merge:

meta-math/MetaMath-Mistral-7B
mlabonne/NeuralHermes-2.5-Mistral-7B

Configuration

The following YAML configuration was used to produce this model:


slices:
  - sources:
      - model: meta-math/MetaMath-Mistral-7B
        layer_range: [0, 32]
      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
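
Each t entry defines a gradient: the listed anchor values are spread across the 32 layers so that every layer gets its own interpolation factor (here for the self_attn and mlp tensors), while the final value: 0.5 applies to all remaining tensors. The snippet below is a rough illustration of how such a gradient expands to per-layer factors; it mimics the idea rather than reproducing mergekit's exact code:

# Rough illustration of expanding a t gradient into per-layer values (NumPy).
import numpy as np

anchors = [0.0, 0.5, 0.3, 0.7, 1.0]   # the self_attn gradient from the YAML above
num_layers = 32

# Place the anchors and the layers on a common 0..1 depth axis,
# then interpolate piecewise-linearly to get one t value per layer.
anchor_pos = np.linspace(0.0, 1.0, len(anchors))
layer_pos = np.linspace(0.0, 1.0, num_layers)
t_per_layer = np.interp(layer_pos, anchor_pos, anchors)

print(t_per_layer.round(2))   # 0.0 at the first layer, 1.0 at the last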

Usage

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ayoubkirouane/Mistral-SLERP-Merged7B")
model = AutoModelForCausalLM.from_pretrained("ayoubkirouane/Mistral-SLERP-Merged7B")

# Load in 4-bit with bitsandbytes:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# NF4 4-bit quantization with double quantization and bfloat16 compute.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
    "ayoubkirouane/Mistral-SLERP-Merged7B",
    device_map='auto',
    quantization_config=nf4_config,
    use_cache=False
)
tokenizer = AutoTokenizer.from_pretrained("ayoubkirouane/Mistral-SLERP-Merged7B")

# Mistral has no dedicated pad token, so reuse the EOS token for padding.
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
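
Once the model and tokenizer are loaded (full precision or 4-bit), generation works the standard way; the prompt below is only an example:

import torch

prompt = "What is the derivative of x^2?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))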