---
base_model:
  - NousResearch/Hermes-2-Pro-Llama-3-8B
  - cognitivecomputations/dolphin-2.9-llama3-8b
  - Danielbrdz/Barcenas-Llama3-8b-ORPO
  - NousResearch/Meta-Llama-3-8B
  - maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
  - asiansoul/Llama-3-Open-Ko-Linear-8B
  - MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3
library_name: transformers
tags:
  - mergekit
  - merge
---

# AIA-Llama-3-MAAL-Ko-8B

llm-v1

I'm not going to claim that my merged model is the best ever made.

I'm not going to tell you that you'll enjoy chatting with it.

All I want to say is: thank you for taking time out of your day to visit.

Without users like you, my merge model would be meaningless.

Let's go on a fun trip together, one we've never taken before, and help each other along the way.

Isn't it boring to just do LLMs?

Since I am an application engineer, I will soon release a very cool Streamlit application built on this merged model. Please wait until then.

I haven't tested this merge model in depth yet; I'm posting it here and will test it as I go. ^^

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with NousResearch/Meta-Llama-3-8B as the base.
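As a rough intuition (this is a toy sketch, not mergekit's exact implementation): DARE randomly drops a `1 - density` fraction of each fine-tuned model's delta from the base and rescales the survivors by `1/density`, while TIES resolves per-parameter sign conflicts before summing the weighted deltas back onto the base. A minimal NumPy illustration:

```python
import numpy as np

def dare_prune(delta, density, rng):
    # DARE: randomly keep a `density` fraction of the delta's entries,
    # rescaling survivors by 1/density to preserve the expected value.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties_merge(base, finetuned, densities, weights, rng):
    # Task vectors: each fine-tune's difference from the shared base.
    deltas = [dare_prune(ft - base, d, rng) for ft, d in zip(finetuned, densities)]
    weighted = [w * d for w, d in zip(weights, deltas)]
    # TIES-style sign election: per parameter, pick the dominant direction...
    elected = np.sign(np.sum(weighted, axis=0))
    # ...and drop contributions that disagree with the elected sign.
    agree = [np.where(np.sign(d) == elected, d, 0.0) for d in weighted]
    return base + np.sum(agree, axis=0)
```

With a single model, `density: 1.0`, and `weight: 1.0`, this degenerates to recovering the fine-tuned weights exactly; the interesting behavior comes from how pruning and sign election interact across several donors.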

### Models Merged

The following models were included in the merge:

- maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
- asiansoul/Llama-3-Open-Ko-Linear-8B
- MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3
- cognitivecomputations/dolphin-2.9-llama3-8b
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- NousResearch/Hermes-2-Pro-Llama-3-8B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation without specific parameters

  - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
    parameters:
      density: 0.60
      weight: 0.4

  - model: asiansoul/Llama-3-Open-Ko-Linear-8B
    parameters:
      density: 0.55
      weight: 0.25

  - model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3
    parameters:
      density: 0.55
      weight: 0.15

  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55
      weight: 0.05

  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.55
      weight: 0.125

  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.55
      weight: 0.125

merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
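One sanity check worth doing before (or after) a merge is parsing the config and summing the donor weights; here they total 1.1 rather than 1.0, which dare_ties tolerates but is worth confirming is intentional. A small sketch using PyYAML, with the config above condensed into flow style for brevity:

```python
import yaml

# Condensed copy of the merge config above (same models, densities, and weights).
CONFIG = """
models:
  - {model: NousResearch/Meta-Llama-3-8B}
  - {model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1, parameters: {density: 0.60, weight: 0.4}}
  - {model: asiansoul/Llama-3-Open-Ko-Linear-8B, parameters: {density: 0.55, weight: 0.25}}
  - {model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3, parameters: {density: 0.55, weight: 0.15}}
  - {model: cognitivecomputations/dolphin-2.9-llama3-8b, parameters: {density: 0.55, weight: 0.05}}
  - {model: Danielbrdz/Barcenas-Llama3-8b-ORPO, parameters: {density: 0.55, weight: 0.125}}
  - {model: NousResearch/Hermes-2-Pro-Llama-3-8B, parameters: {density: 0.55, weight: 0.125}}
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters: {int8_mask: true}
dtype: bfloat16
"""

cfg = yaml.safe_load(CONFIG)
# The base model carries no merge parameters, so only donor entries contribute weights.
weights = [m["parameters"]["weight"] for m in cfg["models"] if "parameters" in m]
print(cfg["merge_method"], sum(weights))  # weights sum to 1.1 here
```

If a strictly convex combination is wanted, the weights can simply be rescaled by their sum before running the merge.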