---
base_model:
- amazingvince/Not-WizardLM-2-7B
- CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2
library_name: transformers
tags:
- mergekit
- merge
license: mit
language:
- ko
- en
---
# OpenCarrot-Mix-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:
* [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B)
* [CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2](https://huggingface.co/CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2)

### Score

llm_kr_eval average (higher is better):

```
openai/gpt-4                         : 0.6158
gemini-pro                           : 0.5150
OpenCarrot-Mix-7B (this model)       : 0.4425
mistralai/Mixtral-8x7B-Instruct-v0.1 : 0.4304
openai/gpt-3.5-turbo                 : 0.4217
```

| Metric | Score |
|---------------------------|--------|
| AVG_llm_kr_eval | 0.4425 |
| EL | 0.0522 |
| FA | 0.0865 |
| NLI | 0.6700 |
| QA | 0.5100 |
| RC | 0.8937 |
| klue_ner_set_f1 | 0.0944 |
| klue_re_exact_match | 0.0100 |
| kmmlu_preview_exact_match | 0.4000 |
| kobest_copa_exact_match | 0.8200 |
| kobest_hs_exact_match | 0.5500 |
| kobest_sn_exact_match | 0.9800 |
| kobest_wic_exact_match | 0.6200 |
| korea_cg_bleu | 0.0865 |
| kornli_exact_match | 0.6400 |
| korsts_pearson | 0.8547 |
| korsts_spearman | 0.8464 |

LogicKor:

| Category | Single-turn avg. | Multi-turn avg. |
|---------------|------------------|-----------------|
| Coding | 7.71 | 7.71 |
| Math | 5.57 | 3.86 |
| Understanding | 6.86 | 8.14 |
| Reasoning | 8.14 | 6.43 |
| Writing | 8.71 | 6.86 |
| Grammar | 5.29 | 2.29 |

| Category | Single-turn avg. | Multi-turn avg. |
|----------|------------------|-----------------|
| Overall | 7.05 | 5.88 |

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: amazingvince/Not-WizardLM-2-7B
    parameters:
      weight: 1.0
  - model: CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
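Conceptually, the linear merge method above computes a weighted average of each matching parameter tensor across the two models. The sketch below illustrates that arithmetic on toy NumPy arrays; `linear_merge` is a hypothetical helper for illustration only, and it assumes weights are normalized by their sum (mergekit's actual implementation also handles sharded checkpoints, dtype casting, and per-layer options).

```python
import numpy as np

def linear_merge(tensors, weights, normalize=True):
    """Weighted average of matching parameter tensors.

    Illustrative sketch of the linear merge method, not mergekit's
    real implementation.
    """
    merged = sum(w * t for w, t in zip(weights, tensors))
    total = sum(weights)
    if normalize and total != 0:
        # Divide by the weight sum so the result stays at the
        # original parameter scale.
        merged = merged / total
    return merged

# Toy stand-ins for one weight matrix from each source model.
a = np.array([[1.0, 2.0], [3.0, 4.0]])  # Not-WizardLM-2-7B, weight 1.0
b = np.array([[5.0, 6.0], [7.0, 8.0]])  # OpenCarrot-Mistral-7B-Instruct-v0.2, weight 0.5

merged = linear_merge([a, b], [1.0, 0.5])
```

With the config's weights of 1.0 and 0.5, each merged tensor is `(1.0*A + 0.5*B) / 1.5`, so the first model contributes twice as strongly as the second.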