---
language:
- en
license: cc
library_name: transformers
tags:
- mergekit
- merge
datasets:
- Anthropic/hh-rlhf
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- Guilherme34/Samantha-v2
- jan-hq/stealth-v1.3
- mitultiwari/mistral-7B-instruct-dpo
- senseable/WestLake-7B-v2
model-index:
- name: sethuiyer/Aika-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.36
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.49
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 53.91
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.22
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.74
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.78
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
---

# Aika-7B
Aika is a language model built with the DARE TIES merge method, with mitultiwari/mistral-7B-instruct-dpo as the base. Aika is designed to interact with users in a natural, human-like way, to answer questions and solve problems with a high degree of accuracy and truthfulness, and to handle creative and logical tasks with proficiency.
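## Usage

Aika can be loaded with the 🤗 Transformers library. The snippet below is a minimal sketch: the prompt, precision, and sampling settings are illustrative assumptions, not values prescribed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sethuiyer/Aika-7B"

# bfloat16 and device_map="auto" are illustrative choices for a single
# modern GPU; adjust for your hardware.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The prompt and sampling parameters below are placeholders; tune them
# for your use case.
prompt = "What is a good way to stay motivated while learning a new skill?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```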
## Models Merged
The following models were included in the merge:

- SanjiWatsuki/Silicon-Maid-7B
- Guilherme34/Samantha-v2
- jan-hq/stealth-v1.3
- senseable/WestLake-7B-v2

The base model, mitultiwari/mistral-7B-instruct-dpo, is Mistral-7B-v0.1 fine-tuned on Anthropic/hh-rlhf.
### Why?
- Base model tuned on the Anthropic hh-rlhf dataset: Provides a safety-aligned foundation to balance the uncensored model below.
- Silicon-Maid-7B: Boasts excellent multi-turn conversational skills and logical coherence, ensuring smooth interactions.
- Samantha-v2: Offers empathy and human-like responses, with programmed "self-awareness" for a more personalized experience.
- Stealth-v1.3: Known to improve benchmark performance when included as a merge component.
- WestLake-7B-v2: Sets a high benchmark for emotional intelligence (EQ) and excels in creative writing, enhancing Aika's ability to understand and respond to your needs.
The result is Aika: a considerate, personal digital assistant.
## Configuration

Please check mergekit_config.yml for the full merge configuration.
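For illustration, a DARE TIES merge of these components might look like the sketch below; the density and weight values here are placeholders, and the mergekit_config.yml checked into this repository is authoritative.

```yaml
merge_method: dare_ties
base_model: mitultiwari/mistral-7B-instruct-dpo
models:
  - model: SanjiWatsuki/Silicon-Maid-7B
    parameters:
      density: 0.5   # placeholder: fraction of delta weights kept by DARE
      weight: 0.3    # placeholder: contribution of this model to the merge
  - model: Guilherme34/Samantha-v2
    parameters:
      density: 0.5   # placeholder
      weight: 0.2    # placeholder
  - model: jan-hq/stealth-v1.3
    parameters:
      density: 0.5   # placeholder
      weight: 0.2    # placeholder
  - model: senseable/WestLake-7B-v2
    parameters:
      density: 0.5   # placeholder
      weight: 0.3    # placeholder
dtype: bfloat16
```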
## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B).
| Metric | Value |
|---|---|
| Avg. | 59.25 |
| AI2 Reasoning Challenge (25-Shot) | 65.36 |
| HellaSwag (10-Shot) | 81.49 |
| MMLU (5-Shot) | 53.91 |
| TruthfulQA (0-shot) | 51.22 |
| Winogrande (5-shot) | 77.74 |
| GSM8k (5-shot) | 25.78 |
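To spot-check these numbers locally, one option is EleutherAI's lm-evaluation-harness. The sketch below assumes a recent lm-eval 0.4.x install; note that the leaderboard pins its own harness version and prompts, so local scores may differ slightly.

```python
import lm_eval

# Evaluate a single leaderboard task as a sanity check. "hf" selects the
# Hugging Face transformers backend; 25-shot matches the ARC-Challenge
# setting used by the leaderboard.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=sethuiyer/Aika-7B,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```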