NeuralNovel committed on
Commit 8e9d7c6
1 Parent(s): c2f6184

Update README.md

Files changed (1): README.md +50 -0
README.md CHANGED
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- NeuralNovel/Neural-Story-v1
library_name: transformers
inference: false
---

![Neural-Story](https://i.ibb.co/SdLF3bL/OIG-49.jpg)

# NeuralNovel/Tanuki-7B-v0.2

**NeuralNovel/Tanuki-7B-v0.2** is designed to generate instructive and narrative text, with a specific focus on storytelling. This fine-tune has been tailored to provide detailed and creative responses in a narrative context and is optimised for writing short stories.

Based on Mistral AI's Mistral-7B-Instruct-v0.2 and released under the Apache-2.0 license, the model is suitable for commercial or non-commercial use.
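
Since the card sets `library_name: transformers`, a minimal local-usage sketch with the `transformers` library is given below. The `[INST]` prompt format is carried over from the Mistral-7B-Instruct-v0.2 base model, and the prompt text and generation settings are illustrative assumptions rather than recommended values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeuralNovel/Tanuki-7B-v0.2"

# Download the tokenizer and weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-Instruct-style prompt; the instruction itself is just an example.
prompt = "[INST] Write a short story about a tanuki who learns to paint. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a creative continuation; adjust temperature/top_p for more or less variety.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```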

### Dataset

The model was fine-tuned using the NeuralNovel/Neural-Story-v1 dataset.
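
For reference, the dataset can be pulled with the Hugging Face `datasets` library. The `train` split name below is an assumption; check the dataset card for the actual splits and column names.

```python
from datasets import load_dataset

# Load the fine-tuning corpus from the Hugging Face Hub.
# The "train" split is assumed; inspect the dataset card for the real layout.
story_data = load_dataset("NeuralNovel/Neural-Story-v1", split="train")

print(story_data)     # number of rows and column names
print(story_data[0])  # first example
```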

### Summary

Fine-tuned with the intention of generating creative and narrative text, making it more suitable for creative writing prompts and storytelling.

#### Out-of-Scope Use

The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes.

### Bias, Risks, and Limitations

The model may exhibit biases or limitations inherent in the training data. It is essential to consider these factors when deploying the model to avoid unintended consequences.

While the Neural-Story-v1 dataset serves as an excellent starting point for testing language models, users are advised to exercise caution, as there may be some inherent genre or writing bias.

### Hardware and Training

Trained using an NVIDIA Tesla T40 (24 GB), with the following fine-tuning hyperparameters:

```
n_epochs = 3,
n_checkpoints = 3,
batch_size = 12,
learning_rate = 1e-5,
```
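
The card does not state which training framework produced these settings. As a rough illustration only, they could be expressed with Hugging Face `TrainingArguments` as follows; the output directory and the mapping of `n_checkpoints` to `save_total_limit` are assumptions.

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters onto TrainingArguments;
# the actual training stack used for Tanuki-7B-v0.2 is not documented here.
training_args = TrainingArguments(
    output_dir="tanuki-7b-v0.2-finetune",  # hypothetical output directory
    num_train_epochs=3,                    # n_epochs = 3
    per_device_train_batch_size=12,        # batch_size = 12
    learning_rate=1e-5,                    # learning_rate = 1e-5
    save_total_limit=3,                    # keep at most 3 checkpoints (n_checkpoints = 3)
)
```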

*Sincere appreciation to Techmind for their generous sponsorship.*