|
---
license: apache-2.0
library_name: transformers
datasets:
- NeuralNovel/Neural-Story-v1
- NeuralNovel/Creative-Logic-v1
base_model: mistralai/Mistral-7B-Instruct-v0.2
inference: false
model-index:
- name: Tanuki-7B-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 62.8
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.14
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.54
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 66.33
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 75.85
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 39.8
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1
      name: Open LLM Leaderboard
---
|
![Neural-Story](https://i.ibb.co/FbBrb5H/OIG-4-Ffmd-Jvny-ZJvn.jpg) |
|
|
|
# NeuralNovel/Tanuki-7B-v0.1 |
|
|
|
Designed to generate instructive and narrative text, with a specific focus on roleplay and short storytelling.

This fine-tune is tailored to provide detailed and creative responses within complex narratives.
|
|
|
A full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2, released under the Apache 2.0 license and suitable for commercial or non-commercial use.
|
|
|
<a href='https://ko-fi.com/S6S2UH2TC' target='_blank'><img height='38' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> |
|
<a href='https://discord.gg/FPUHxxCnvm' target='_blank'><img height='38' style='border:0px;height:36px;' src='https://i.ibb.co/tqwznYM/Discord-button.png' border='0' alt='Join Our Discord!' /></a>
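
### Usage

A minimal inference sketch with the `transformers` library is shown below. Since this is a fine-tune of Mistral-7B-Instruct-v0.2, the snippet assumes the model inherits the base model's chat template; verify against the repository's tokenizer configuration before relying on it.

```python
# Minimal sketch: assumes the Mistral-Instruct chat template inherited
# from the base model; verify against the repo's tokenizer config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeuralNovel/Tanuki-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user",
     "content": "Write a short story about a tanuki who guards a mountain shrine."}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```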
|
|
|
|
|
### Datasets

The model was fine-tuned on the Neural-Story-v1 and Creative-Logic-v1 datasets, and both can be pulled for inspection as shown below.
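
Both datasets are public on the Hugging Face Hub and can be loaded with the `datasets` library; the `train` split name used here is an assumption.

```python
# Pull both fine-tuning datasets for inspection
# (the "train" split name is an assumption).
from datasets import load_dataset

story = load_dataset("NeuralNovel/Neural-Story-v1", split="train")
logic = load_dataset("NeuralNovel/Creative-Logic-v1", split="train")
print(story[0])
```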
|
|
|
|
|
### Summary |
|
|
|
Tanuki-7B-v0.1 was fine-tuned to generate creative and narrative text, making it better suited than the base model to creative writing prompts and storytelling.
|
|
|
#### Out-of-Scope Use |
|
|
|
The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes. |
|
|
|
### Bias, Risks, and Limitations |
|
|
|
The model may exhibit biases or limitations inherent in the training data. It is essential to consider these factors when deploying the model to avoid unintended consequences. |
|
|
|
This model and its datasets serve as a starting point for testing language models; users are advised to exercise caution, as outputs may reflect an inherent genre or writing-style bias.
|
|
|
### Hardware and Training |
|
|
|
Trained on an NVIDIA Tesla T40 (24 GB).
|
|
|
```
n_epochs = 4          # increased from 3
n_checkpoints = 2
batch_size = 6        # decreased from 20
learning_rate = 1e-5
```
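
The card does not state which training framework consumed these values. As an illustration only, they might map onto a Hugging Face `TrainingArguments` configuration roughly as follows; `output_dir` and `bf16` are placeholders, and `save_total_limit` approximates `n_checkpoints`.

```python
# Illustrative mapping only: the actual training stack is undocumented.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tanuki-7b-fft",      # hypothetical output path
    num_train_epochs=4,              # n_epochs (increased from 3)
    per_device_train_batch_size=6,   # batch_size (decreased from 20)
    learning_rate=1e-5,
    save_total_limit=2,              # approximates n_checkpoints = 2
    bf16=True,                       # assumption for a 7B full fine-tune
)
```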
|
|
|
*Sincere appreciation to Techmind for their generous sponsorship.* |
|
|
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Tanuki-7B-v0.1).
|
|
|
| Metric |Value| |
|
|---------------------------------|----:| |
|
|Avg. |64.74| |
|
|AI2 Reasoning Challenge (25-Shot)|62.80| |
|
|HellaSwag (10-Shot) |83.14| |
|
|MMLU (5-Shot) |60.54| |
|
|TruthfulQA (0-shot) |66.33| |
|
|Winogrande (5-shot) |75.85| |
|
|GSM8k (5-shot) |39.80| |
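
For reference, the Avg. row is the arithmetic mean of the six benchmark scores:

```python
# Sanity check: the leaderboard average is the mean of the six scores.
scores = [62.80, 83.14, 60.54, 66.33, 75.85, 39.80]
print(round(sum(scores) / len(scores), 2))  # 64.74
```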
|
|
|
|