umisetokikaze committed
Commit 1a5c075 • Parent: 4f7a641

Update README.md

Files changed (1):
1. README.md (+30 -1)

README.md CHANGED
@@ -50,8 +50,37 @@ We would like to take this opportunity to thank
- BAD: あなたは○○ができます ("You can do ○○")
- GOOD: あなたは○○をします ("You do ○○")

+ ## Performing inference
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the model and tokenizer from the Hugging Face Hub
+ model = AutoModelForCausalLM.from_pretrained("Local-Novel-LLM-project/Ninja-v1-128k", trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained("Local-Novel-LLM-project/Ninja-v1-128k")
+
+ prompt = "Once upon a time,"
+ input_ids = tokenizer.encode(prompt, return_tensors="pt")
+
+ output = model.generate(input_ids, max_length=100, do_sample=True)
+ # generate() returns a batch of sequences; decode the first one
+ generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
+
+ print(generated_text)
+ ```
+
## Merge recipe

+ - WizardLM2 - mistralai/Mistral-7B-v0.1
+ - NousResearch/Yarn-Mistral-7b-128k - mistralai/Mistral-7B-v0.1
+ - Elizezen/Antler-7B - stabilityai/japanese-stablelm-instruct-gamma-7b
+ - NTQAI/chatntq-ja-7b-v1.0
+
+ The characteristics of each model are as follows:
+
+ - WizardLM2: High-quality multitasking model
+ - Antler-7B: Model specialized for novel writing
+ - NTQAI/chatntq-ja-7b-v1.0: High-quality Japanese-specialized model
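For illustration only, here is a minimal sketch of what a simple linear weight merge between two of the Mistral-7B-family models above could look like. The README does not state the actual merge method, weights, or tooling, so the model pairing, the `alpha` value, and the output directory below are purely hypothetical.

```python
# Hypothetical linear-merge sketch -- NOT the project's actual recipe.
import torch
from transformers import AutoModelForCausalLM

# Both checkpoints share the Mistral-7B architecture, so their state
# dicts have identical keys and tensor shapes.
model_a = AutoModelForCausalLM.from_pretrained("NTQAI/chatntq-ja-7b-v1.0", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained("Elizezen/Antler-7B", torch_dtype=torch.float16)

alpha = 0.5  # hypothetical mixing weight
state_b = model_b.state_dict()
merged = {
    name: alpha * param + (1.0 - alpha) * state_b[name]
    for name, param in model_a.state_dict().items()
}

# Load the averaged weights back into one model and save the result
model_a.load_state_dict(merged)
model_a.save_pretrained("merged-model")
```

In practice a merge like this is usually done with dedicated tooling such as mergekit, which also supports more sophisticated methods (e.g. SLERP or task-arithmetic merges) than the plain average shown here.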
## Other points to keep in mind
- - If possible, we recommend inferring with llamacpp rather than Transformers.
+ - The training data may be biased; be careful with the generated sentences.
+ - Memory usage can be high for long-context inference.
+ - If possible, we recommend running inference with llama.cpp rather than Transformers.
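Since the README recommends llama.cpp over Transformers, the following is a minimal sketch using the llama-cpp-python bindings. It assumes a GGUF conversion of the model is available on disk; the file name and context size below are hypothetical.

```python
from llama_cpp import Llama

# Hypothetical local GGUF file -- convert the checkpoint with llama.cpp's
# conversion script, or use a GGUF distribution if one is published.
llm = Llama(
    model_path="./ninja-v1-128k.Q4_K_M.gguf",
    n_ctx=8192,  # raise toward the 128k limit only if you have the memory
)

result = llm("Once upon a time,", max_tokens=100)
print(result["choices"][0]["text"])
```

llama.cpp runs quantized GGUF weights on CPU (with optional GPU offload), which is why it tends to need far less memory than loading the full-precision checkpoint through Transformers.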