---
license: apache-2.0
language:
- en
- ja
tags:
- finetuned
library_name: transformers
pipeline_tag: text-generation
---
<img src="./veteus_logo.svg" width="100%" height="20%" alt="">

# Our Models
- [Vecteus](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1)

- [Ninja-v1](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1)

- [Ninja-v1-NSFW](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW)

- [Ninja-v1-128k](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-128k)

- [Ninja-v1-NSFW-128k](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-128k)

## This is a prototype of Vecteus-v1

## Model Card for VecTeus-Poet

This Mistral-7B-based Large Language Model (LLM) is a version of Mistral-7B-v0.1 fine-tuned on a dataset of novels.

VecTeus offers the following improvements over Mistral-7B-v0.1:
- High-quality generation in both Japanese and English
- Can generate NSFW content
- Retains earlier context without forgetting, even during long-context generation

This model was created with the help of GPUs from the first LocalAI hackathon.

We would like to take this opportunity to thank everyone involved.

## List of Creation Methods

- ChatVector applied across multiple models (a sketch follows this list)
- Simple linear merging of the resulting models
- Domain and sentence enhancement with LoRA
- Context expansion
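
As a rough illustration of the ChatVector step, here is a minimal sketch. The exact checkpoints, weights, and ratios of our recipe are not published, so every model name below is an illustrative placeholder.

```python
import torch
from transformers import AutoModelForCausalLM

# A "chat vector" is the parameter delta between an instruction-tuned model
# and its base. Adding that delta to another model with the same architecture
# transfers the tuned behaviour. All checkpoint names are placeholders.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16)
tuned = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.float16)
target = AutoModelForCausalLM.from_pretrained(
    "Local-Novel-LLM-project/Ninja-v1", torch_dtype=torch.float16)

base_sd = base.state_dict()
tuned_sd = tuned.state_dict()
with torch.no_grad():
    for name, param in target.named_parameters():
        # target += (tuned - base): apply the chat vector to the target model.
        param.add_(tuned_sd[name] - base_sd[name])

target.save_pretrained("./chatvector-applied")
```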

## Instruction format

Freed from templates: no special instruction format is required.

## Example prompts to improve output (Japanese)

- BAD: あなたは○○として振る舞います ("You will act as ○○")
- GOOD: あなたは○○です ("You are ○○")

- BAD: あなたは○○ができます ("You can do ○○")
- GOOD: あなたは○○をします ("You do ○○")

## Performing inference

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Local-Novel-LLM-project/Vecteus-v1"
new_tokens = 1024

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# "You are a professional novelist.\nPlease write a novel."
system_prompt = "あなたはプロの小説家です。\n小説を書いてください\n-------- "

prompt = input("Enter a prompt: ")
system_prompt += prompt + "\n-------- "

# Tokenize the full system prompt, which now includes the user input.
model_inputs = tokenizer([system_prompt], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=new_tokens, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```
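
Note: `attn_implementation="flash_attention_2"` requires the separate `flash-attn` package and a supported GPU. If it is unavailable, drop that argument to fall back to the default attention implementation.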

## Merge recipe

- VT0.1 = Ninja-v1 + original LoRA
- VT0.2 = Ninja-v1-128k + original LoRA
- VT0.2on0.1 = VT0.1 + VT0.2 (simple linear merge; a sketch follows this list)

- VT1 = all VT series + LoRA + Ninja-v1 (128k and normal)
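
A minimal sketch of the simple linear merging step follows. The intermediate VT checkpoints are not published, so the local paths and the 0.5/0.5 weighting are assumptions for illustration only.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical local paths to two intermediate checkpoints; the 0.5/0.5
# weighting is an assumption, not the published recipe.
model_a = AutoModelForCausalLM.from_pretrained("./VT0.1", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained("./VT0.2", torch_dtype=torch.float16)

sd_b = model_b.state_dict()
with torch.no_grad():
    for name, param in model_a.named_parameters():
        # Element-wise weighted average of the corresponding tensors.
        param.mul_(0.5).add_(sd_b[name], alpha=0.5)

model_a.save_pretrained("./VT0.2on0.1")
```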

## Other points to keep in mind
- The training data may be biased. Be careful with the generated sentences.
- Memory usage may be large for long inference runs.
- If possible, we recommend running inference with llama.cpp rather than Transformers; a sketch follows.
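
For example, with the llama-cpp-python bindings, assuming the model has been converted to GGUF format (the file name below is hypothetical):

```python
from llama_cpp import Llama

# Load a hypothetical GGUF conversion of the model; adjust n_ctx as needed.
llm = Llama(model_path="./Vecteus-v1.Q4_K_M.gguf", n_ctx=4096)

# "You are a professional novelist.\nPlease write a novel."
prompt = "あなたはプロの小説家です。\n小説を書いてください\n-------- "
out = llm(prompt, max_tokens=512)
print(out["choices"][0]["text"])
```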