

NeuralNovel/Tanuki-7B-v0.1

Designed to generate instructive and narrative text, with a specific focus on roleplay and short storytelling. This fine-tune has been tailored to provide detailed and creative responses in the context of complex narratives.

A full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2, released under the Apache-2.0 license and suitable for commercial or non-commercial use.
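
Because this is a full-parameter fine-tune rather than an adapter, the weights load like any other Mistral-architecture causal LM. A minimal sketch of local inference with the transformers library, assuming the chat template inherited from Mistral-7B-Instruct-v0.2 (the prompt and sampling settings are illustrative assumptions, not official recommendations):

    # Minimal inference sketch (untested); requires transformers, torch,
    # and accelerate for device_map="auto".
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "NeuralNovel/Tanuki-7B-v0.1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = [{"role": "user", "content": "Write a short story about a tanuki who learns to paint."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))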

Buy Me a Coffee at ko-fi.com · Join our Discord!

Dataset

The model was fine-tuned on the Neural-Story-v1 and Creative-Logic-v1 datasets.
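
A quick sketch of pulling both datasets with the datasets library, assuming they are hosted under the NeuralNovel namespace on the Hub (repository IDs and split names are assumptions):

    # Hedged sketch: repo IDs and split names are assumptions.
    from datasets import load_dataset

    story = load_dataset("NeuralNovel/Neural-Story-v1", split="train")
    logic = load_dataset("NeuralNovel/Creative-Logic-v1", split="train")

    print(len(story), len(logic))
    print(story[0])  # inspect one raw example before any preprocessing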

Summary

Fine-tuned to generate creative and narrative text, making it better suited to creative-writing prompts and storytelling.

Out-of-Scope Use

The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes.

Bias, Risks, and Limitations

The model may exhibit biases or limitations inherent in the training data. It is essential to consider these factors when deploying the model to avoid unintended consequences.

This model and its datasets serve as an excellent starting point for testing language models. Users are advised to exercise caution, as there may be some inherent genre or writing bias.

Hardware and Training

Trained using an NVIDIA Tesla T40 (24 GB).


    n_epochs = 4           # increased from 3
    n_checkpoints = 2
    batch_size = 6         # decreased from 20
    learning_rate = 1e-5
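
The names above read like keyword arguments to a hosted fine-tuning API; the exact training stack is not stated in the card. As a hedged translation of the same values into Hugging Face TrainingArguments (mapping n_checkpoints to save_total_limit, and the precision choice, are assumptions):

    # Hedged sketch: maps the listed hyperparameters onto TrainingArguments.
    from transformers import TrainingArguments

    training_args = TrainingArguments(
        output_dir="tanuki-7b-fft",
        num_train_epochs=4,             # increased from 3
        per_device_train_batch_size=6,  # decreased from 20
        learning_rate=1e-5,
        save_total_limit=2,             # stand-in for n_checkpoints (assumption)
        bf16=True,                      # assumption; original precision not stated
    )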

Sincere appreciation to Techmind for their generous sponsorship.

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric                               Value
------------------------------------------
Avg.                                 64.74
AI2 Reasoning Challenge (25-shot)    62.80
HellaSwag (10-shot)                  83.14
MMLU (5-shot)                        60.54
TruthfulQA (0-shot)                  66.33
Winogrande (5-shot)                  75.85
GSM8K (5-shot)                       39.80
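
These scores come from the Open LLM Leaderboard's automated evaluation. A hedged sketch of running a comparable check locally with EleutherAI's lm-evaluation-harness, shown for HellaSwag only since each benchmark uses its own few-shot count (task names and result keys may differ across harness versions):

    # Hedged sketch using lm-evaluation-harness's Python entry point.
    import lm_eval

    results = lm_eval.simple_evaluate(
        model="hf",
        model_args="pretrained=NeuralNovel/Tanuki-7B-v0.1,dtype=float16",
        tasks=["hellaswag"],
        num_fewshot=10,  # matches the leaderboard's 10-shot HellaSwag setting
    )
    print(results["results"]["hellaswag"])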