---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# B3E3_SLM_7b_v3_folder

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using `D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\bophades-mistral-truthy-DPO-7B` as a base.

### Models Merged

The following models were included in the merge:
* `D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Calme-7B-Instruct-v0.9`
* `D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\YamshadowExperiment28-7B`
* `D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\multi_verse_model`

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\YamshadowExperiment28-7B
    parameters:
      weight: 0.5
  - model: D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Calme-7B-Instruct-v0.9
    parameters:
      weight: 0.25
  - model: D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\multi_verse_model
    parameters:
      weight: 0.25
base_model: D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\bophades-mistral-truthy-DPO-7B
merge_method: task_arithmetic
dtype: bfloat16
```
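
The idea behind the task arithmetic merge above can be sketched in a few lines: each fine-tuned model contributes a "task vector" (its parameters minus the base model's), and the merged model is the base plus a weighted sum of those vectors. The following toy example uses hypothetical 4-parameter "models" and the weights from the YAML config; it is an illustration of the arithmetic only, not mergekit's actual implementation, which operates tensor-by-tensor over full checkpoints.

```python
import numpy as np

# Hypothetical tiny "base model" (real models have billions of parameters).
base = np.array([1.0, 2.0, 3.0, 4.0])

# Hypothetical fine-tuned parameter sets, paired with the merge weights
# from the configuration above (0.5, 0.25, 0.25).
finetuned = [
    (np.array([1.2, 2.1, 2.9, 4.3]), 0.5),   # YamshadowExperiment28-7B
    (np.array([0.9, 2.4, 3.1, 3.8]), 0.25),  # Calme-7B-Instruct-v0.9
    (np.array([1.1, 1.9, 3.2, 4.1]), 0.25),  # multi_verse_model
]

# Task arithmetic: merged = base + sum_i weight_i * (model_i - base)
merged = base.copy()
for params, weight in finetuned:
    merged += weight * (params - base)

print(merged)  # each parameter is the base nudged by the weighted deltas
```

Because the weights sum to 1.0 here, the merge stays close to the convex hull of the fine-tuned models; weights summing to more or less than 1 would amplify or dampen the combined task vectors relative to the base.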