---
license: apache-2.0
library_name: transformers
datasets:
- NeuralNovel/Neural-Story-v1
- NeuralNovel/Creative-Logic-v1
base_model: mistralai/Mistral-7B-Instruct-v0.2
inference: false
model-index:
- name: Tanuki-7B-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 62.8
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.14
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.54
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 66.33
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 75.85
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 39.8
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
---
![Neural-Story](https://i.ibb.co/FbBrb5H/OIG-4-Ffmd-Jvny-ZJvn.jpg)
# NeuralNovel/Tanuki-7B-v0.1
Designed to generate instructive and narrative text, with a specific focus on roleplay and short storytelling.
This fine-tune has been tailored to provide detailed and creative responses in the context of complex narratives.
A full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2, released under the Apache-2.0 license and suitable for commercial or non-commercial use.
<a href='https://ko-fi.com/S6S2UH2TC' target='_blank'><img height='38' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
<a href='https://discord.gg/FPUHxxCnvm' target='_blank'><img width='140' height='38' style='border:0px;height:36px;' src='https://i.ibb.co/tqwznYM/Discord-button.png' border='0' alt='Join Our Discord!' /></a>
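### Usage

A minimal inference sketch with the `transformers` library. It assumes the chat template inherited from Mistral-7B-Instruct-v0.2; the prompt, dtype, and sampling settings are illustrative, not prescribed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeuralNovel/Tanuki-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; use float16 on GPUs without bf16
    device_map="auto",
)

# The card targets roleplay and short storytelling, so prompt accordingly.
messages = [{"role": "user", "content": "Write a short story about a tanuki who learns to paint."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```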
### Datasets
The model was fine-tuned using the Neural-Story-v1 and Creative-Logic-v1 datasets.
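Both datasets are public on the Hub; a quick sketch for inspecting them with the `datasets` library (split names and columns are whatever the dataset repos define):

```python
from datasets import load_dataset

# Load the two fine-tuning datasets from the Hugging Face Hub.
story = load_dataset("NeuralNovel/Neural-Story-v1")
logic = load_dataset("NeuralNovel/Creative-Logic-v1")
print(story)  # shows available splits, columns, and row counts
```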
### Summary
The model was fine-tuned with the intention of generating creative and narrative text, making it better suited to creative-writing prompts and storytelling.
#### Out-of-Scope Use
The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes.
### Bias, Risks, and Limitations
The model may exhibit biases or limitations inherent in the training data. It is essential to consider these factors when deploying the model to avoid unintended consequences.
This model and its datasets serve as an excellent starting point for testing language models; users are advised to exercise caution, as there may be some inherent genre or writing bias.
### Hardware and Training
Trained using an NVIDIA Tesla T40 24 GB.
```python
# Fine-tuning hyperparameters
n_epochs = 4          # increased from 3
n_checkpoints = 2
batch_size = 6        # decreased from 20
learning_rate = 1e-5
```
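The training framework is not documented in this card. Purely as a hedged illustration, these values could map onto `transformers.TrainingArguments` roughly as follows; treating `save_total_limit` as the equivalent of `n_checkpoints` is an assumption.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters above onto TrainingArguments;
# the actual training stack used for this fine-tune is not specified in the card.
training_args = TrainingArguments(
    output_dir="tanuki-7b-ft",       # illustrative path
    num_train_epochs=4,              # increased from 3
    per_device_train_batch_size=6,   # decreased from 20
    learning_rate=1e-5,
    save_total_limit=2,              # assumption: stands in for n_checkpoints = 2
)
```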
*Sincere appreciation to Techmind for their generous sponsorship.*
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Tanuki-7B-v0.1).
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.74|
|AI2 Reasoning Challenge (25-Shot)|62.80|
|HellaSwag (10-Shot) |83.14|
|MMLU (5-Shot) |60.54|
|TruthfulQA (0-shot) |66.33|
|Winogrande (5-shot) |75.85|
|GSM8k (5-shot) |39.80|
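To reproduce a single row locally, here is a sketch using EleutherAI's lm-evaluation-harness (v0.4+). The leaderboard pins a specific harness version and prompt format, so local numbers may differ slightly.

```python
import lm_eval

# Evaluate one leaderboard task locally; the task name and few-shot count
# follow the ARC row above. Exact leaderboard settings may differ.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=NeuralNovel/Tanuki-7B-v0.1,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```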