---
language:
- en
license: cc-by-sa-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- liminerity/Multiverse-Experiment-slerp-7b
- jeiku/Alpaca_NSFW_Shuffled_Mistral
- ResplendentAI/Datura_7B
- ChaoticNeutrals/Eris_Remix_7B
datasets:
- ResplendentAI/Alpaca_NSFW_Shuffled
- unalignment/toxic-dpo-v0.2
model-index:
- name: Paradigm_7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 73.63
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.66
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.02
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 75.19
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 84.53
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.79
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
      name: Open LLM Leaderboard
---

# Paradigm

This is an 8bpw exl2 quant of the Paradigm 7B model. ChatML and Alpaca instruct sequences both work; a sketch of both formats is shown below.

----

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/euhpckYXwNiNiq-Lh4Fi-.jpeg)

An incredibly effective and intelligent RP model designed to be the best bot you've ever used. I hope you like it!

GGUF available here: https://huggingface.co/Lewdiculous/Paradigm_7B-GGUF-IQ-Imatrix
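As a rough illustration of the two instruct formats mentioned above, this is what the prompts typically look like; the system and user text here is placeholder, not part of the model card.

```
ChatML:
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{your message}<|im_end|>
<|im_start|>assistant

Alpaca:
### Instruction:
{your message}

### Response:
```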
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ResplendentAI__Paradigm_7B)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |75.47|
|AI2 Reasoning Challenge (25-Shot)|73.63|
|HellaSwag (10-Shot)              |88.66|
|MMLU (5-Shot)                    |64.02|
|TruthfulQA (0-shot)              |75.19|
|Winogrande (5-shot)              |84.53|
|GSM8k (5-shot)                   |66.79|

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: ChaoticNeutrals/Eris_Remix_7B
parameters:
  normalize: true
models:
  - model: ChaoticNeutrals/Eris_Remix_7B
    parameters:
      weight: 1
  - model: ResplendentAI/Datura_7B
    parameters:
      weight: 1
  - model: liminerity/Multiverse-Experiment-slerp-7b+jeiku/Alpaca_NSFW_Shuffled_Mistral
    parameters:
      weight: 0.33
dtype: float16
```
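For anyone wanting to reproduce the merge: the config above is a standard mergekit DARE-TIES recipe, where the `base+lora` entry is mergekit's syntax for applying a LoRA to a base model before merging. As a minimal sketch, assuming mergekit is installed and the YAML above is saved locally, it can be run with mergekit's CLI; the file and output paths below are placeholders.

```sh
# Install mergekit (or install from the mergekit GitHub repo).
pip install mergekit

# Hypothetical paths: paradigm.yml holds the config above, ./Paradigm_7B is the output directory.
mergekit-yaml paradigm.yml ./Paradigm_7B --cuda
```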