---
language:
- en
- id
- jv
- su
license: gemma
tags:
- merge
- mergekit
- autoquant
- gguf
base_model:
- GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct
- aisingapore/gemma2-9b-cpt-sea-lionv3-instruct
model-index:
- name: gemma2-9b-sahabatai-v1-instruct-BaseTIES
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 73.78
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gmonsoon/gemma2-9b-sahabatai-v1-instruct-BaseTIES
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 43.4
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gmonsoon/gemma2-9b-sahabatai-v1-instruct-BaseTIES
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 19.34
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gmonsoon/gemma2-9b-sahabatai-v1-instruct-BaseTIES
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.4
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gmonsoon/gemma2-9b-sahabatai-v1-instruct-BaseTIES
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 19.13
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gmonsoon/gemma2-9b-sahabatai-v1-instruct-BaseTIES
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=gmonsoon/gemma2-9b-sahabatai-v1-instruct-BaseTIES
      name: Open LLM Leaderboard
---

# SahabatAI-Lion-9B-TIES-v1

formerly gemma2-9b-cpt-sahabatai-v1-instruct-BaseTIES (model name too long :D )

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642b04e4ecec03b44649e318/rJ0ogty-DbLUEH48Ms5lE.png)

Based on some research, when a fine-tuned model is merged back with its base model using the TIES method, there is a chance the merged model will produce better output than the fine-tune alone. (The merge can be reproduced with mergekit; see the sketch below and the full recipe in the 🧩 Configuration section.)
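A minimal sketch of how a merge like this can be run with mergekit's `mergekit-yaml` CLI, assuming the YAML from the 🧩 Configuration section below is saved as `config.yaml` (the output folder name is just an example, not the author's exact invocation):

```python
# Sketch: reproduce a TIES merge with mergekit.
# Assumes config.yaml holds the recipe from the Configuration section below.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./SahabatAI-Lion-9B-TIES-v1 --cuda
```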
**UPDATE!!! As of 20 November 2024, this model is the third-best model under 10B parameters (and number one among Gemma2-9B-based models) on HF's Open LLM Leaderboard, with the "Hide Merge/MoErges models" filter unchecked.**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642b04e4ecec03b44649e318/8Hv3YtWtzzFlJ0_kUpsT7.png)

gmonsoon/SahabatAI-Lion-9B-TIES-v1 is a merge of the following models:
* [GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct](https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct)
* [aisingapore/gemma2-9b-cpt-sea-lionv3-instruct](https://huggingface.co/aisingapore/gemma2-9b-cpt-sea-lionv3-instruct)

DEMO Spaces: [HERE](https://huggingface.co/spaces/gmonsoon/SahabatAI-Lion-9B-TIES-v1)

## 🧩 Configuration

```yaml
models:
  - model: GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct
    parameters:
      weight: 1
      density: 1
  - model: GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: aisingapore/gemma2-9b-cpt-sea-lionv3-instruct
parameters:
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "gmonsoon/SahabatAI-Lion-9B-TIES-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a Gemma-2 chat prompt from the message list.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model across available devices and generate.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_gmonsoon__gemma2-9b-sahabatai-v1-instruct-BaseTIES)

| Metric              | Value |
|---------------------|------:|
| Avg.                | 33.70 |
| IFEval (0-Shot)     | 73.78 |
| BBH (3-Shot)        | 43.40 |
| MATH Lvl 5 (4-Shot) | 19.34 |
| GPQA (0-shot)       |  9.40 |
| MuSR (0-shot)       | 19.13 |
| MMLU-PRO (5-shot)   | 37.19 |
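The `autoquant`/`gguf` tags indicate quantized GGUF builds of this merge. If a GGUF file is published, it can be run with `llama-cpp-python`; a minimal sketch, where the repo id and filename pattern are placeholders (assumptions, not confirmed artifacts):

```python
# Sketch with llama-cpp-python; repo_id and filename below are placeholders,
# not confirmed published artifacts.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gmonsoon/SahabatAI-Lion-9B-TIES-v1-GGUF",  # placeholder repo id
    filename="*Q4_K_M.gguf",                            # placeholder quant pattern
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Apa itu large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```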