---
license: apache-2.0
language:
- en
- ja
tags:
- finetuned
library_name: transformers
pipeline_tag: text-generation
---
<img src="./veteus_logo.svg" width="100%" height="20%" alt="Vecteus logo">

# Our Models

- [Vecteus](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1)
- [Ninja-v1](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1)
- [Ninja-v1-128k](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-128k)
- [Ninja-v1-NSFW-128k](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-128k)

## Model Card for Ninja-v1.0

Ninja is a Mistral-7B-based Large Language Model (LLM): a version of Mistral-7B-v0.1 fine-tuned on a dataset of novels.

Ninja has the following changes compared to Mistral-7B-v0.1:
- High-quality generation in both Japanese and English
- Long-context memory: earlier content is not forgotten even during long-context generation

This model was created with the help of GPUs from the first LocalAI hackathon.

We would like to take this opportunity to thank them.

## List of Creation Methods

- ChatVector applied across multiple models
- Simple linear merging of the resulting models
- Domain and sentence enhancement with LoRA
- Context expansion

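The ChatVector and linear-merging steps above can be sketched as simple parameter arithmetic. This is an illustrative sketch only: real merges operate on full model state dicts (one tensor per parameter, e.g. via a tool such as mergekit), and the toy values, model roles, and merge coefficients below are assumptions, not taken from this card.

```python
def chat_vector(instruct, base):
    """ChatVector: parameter-wise difference between an instruction-tuned
    model and the base model it was tuned from."""
    return {k: instruct[k] - base[k] for k in base}

def apply_chat_vector(target, vector, scale=1.0):
    """Add the chat vector onto another model (e.g. a Japanese-adapted one)."""
    return {k: target[k] + scale * vector[k] for k in target}

def linear_merge(models, coeffs):
    """Simple weighted linear merge of several result models."""
    return {k: sum(c * m[k] for m, c in zip(models, coeffs))
            for k in models[0]}

# Toy two-parameter "models" (illustrative numbers only):
base     = {"w1": 1.0, "w2": 0.0}
instruct = {"w1": 1.4, "w2": 0.2}   # base + instruction tuning
japanese = {"w1": 0.8, "w2": 0.6}   # base + Japanese continual pretraining

cv = chat_vector(instruct, base)            # the "instruction" direction
ja_instruct = apply_chat_vector(japanese, cv)
merged = linear_merge([ja_instruct, instruct], [0.5, 0.5])
print(merged)
```

The idea is that the instruction-tuning "direction" learned on one model can be transplanted onto another model sharing the same architecture, and the resulting models can then be averaged.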
## Instruction format

Ninja is freed from templates: no specific instruction format is required, so you can simply write your prompt as plain text.

## Example prompts to improve (Japanese)

- BAD: あγͺγŸγ―β—‹β—‹γ¨γ—γ¦ζŒ―γ‚‹θˆžγ„γΎγ™ ("You will behave as ...")
- GOOD: あγͺγŸγ―β—‹β—‹γ§γ™ ("You are ...")

- BAD: あγͺγŸγ―β—‹β—‹γŒγ§γγΎγ™ ("You can do ...")
- GOOD: あγͺγŸγ―β—‹β—‹γ‚’γ—γΎγ™ ("You do ...")

## Merge recipe

## Other points to keep in mind

If possible, we recommend running inference with llama.cpp rather than Transformers.
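As a rough sketch of the llama.cpp route, the commands below convert the checkpoint to GGUF and run it with a plain-text prompt. The local paths, output file names, and quantization choice are assumptions for illustration, not from this card; check the repository for any official GGUF releases before converting yourself.

```shell
# Hypothetical workflow; file names and paths below are assumptions.
# 1. Convert the Hugging Face checkpoint to GGUF with llama.cpp's converter:
python convert_hf_to_gguf.py ./Ninja-v1 --outfile ninja-v1-f16.gguf

# 2. (Optional) quantize to reduce memory use:
./llama-quantize ninja-v1-f16.gguf ninja-v1-q4_k_m.gguf Q4_K_M

# 3. Run inference; no instruction template is needed, plain text works:
./llama-cli -m ninja-v1-q4_k_m.gguf -p "あγͺγŸγ―δ½œε?¶γ§γ™" -n 256
```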