---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- roleplay
- text-generation-inference
- merge
- not-for-all-audiences
model-index:
- name: BigMaid-20B-v1.0
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 61.35
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.26
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 57.15
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 55.29
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 75.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 2.05
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/BigMaid-20B-v1.0
      name: Open LLM Leaderboard
---

> [!TIP]
> **Support the Project:**
> You can send ETH or any BSC-compatible tokens to the following address:
> `0xC37D7670729a5726EA642c7A11C5aaCB36D43dDE`

AWQ quants for [TeeZee/BigMaid-20B-v1.0](https://huggingface.co/TeeZee/BigMaid-20B-v1.0).
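The snippet below is a minimal sketch of loading an AWQ-quantized checkpoint with `transformers` (which dispatches to the `autoawq` backend when that package is installed). The repository id is a placeholder, not this repo's actual name; substitute the id of this AWQ quant before running.

```python
# Minimal sketch, assuming a recent transformers (>= 4.35) plus the autoawq
# package and a CUDA GPU. The repo id is a PLACEHOLDER for this AWQ quant.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/BigMaid-20B-v1.0-AWQ"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",  # spread the quantized weights across available GPUs
)

prompt = "Write a short, lighthearted scene between a maid and her employer."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```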
# Original model information by the author:

# BigMaid-20B-v1.0

![image/png](https://huggingface.co/TeeZee/BigMaid-20B-v1.0/resolve/main/BigMaid-20B-v1.0.jpg)

## Model Details

- A result of interleaving layers of [KatyTheCutie/EstopianMaid-13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B) with itself.
- The resulting model has approximately 20 billion parameters.
- See [mergekit-config.yml](https://huggingface.co/TeeZee/BigMaid-20B-v1.0/resolve/main/mergekit-config.yml) for details on the merge method used.

**Warning: This model can produce NSFW content!**

## Results

- A bigger version of the original, and just as uncensored.
- Retains all the good qualities of the original, with an additional affinity for abstract and lighthearted humor.

All comments are greatly appreciated; download, test, and if you appreciate my work, consider buying me my fuel: Buy Me A Coffee

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__BigMaid-20B-v1.0)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |56.07|
|AI2 Reasoning Challenge (25-Shot)|61.35|
|HellaSwag (10-Shot)              |85.26|
|MMLU (5-Shot)                    |57.15|
|TruthfulQA (0-shot)              |55.29|
|Winogrande (5-shot)              |75.30|
|GSM8k (5-shot)                   | 2.05|
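The headline "Avg." row is the unweighted mean of the six benchmark scores above; a small Python check reproduces it from the table values:

```python
# Quick arithmetic check of the reported leaderboard average,
# using the scores copied from the table above.
scores = {
    "ARC (25-shot)": 61.35,
    "HellaSwag (10-shot)": 85.26,
    "MMLU (5-shot)": 57.15,
    "TruthfulQA (0-shot)": 55.29,
    "Winogrande (5-shot)": 75.30,
    "GSM8k (5-shot)": 2.05,
}
print(round(sum(scores.values()) / len(scores), 2))  # -> 56.07
```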