s3nh committed: Commit 1373fa2 (1 parent: 819111c)

Create README.md

---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card

Buy me a coffee if you like this project ;)

<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt="Buy me a coffee"></a>

#### Description

GGML format model files for [Stable-Platypus2-13B](https://huggingface.co/garage-bAInd/Stable-Platypus2-13B/).
### Inference

```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file should point at the downloaded GGML weights
model = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                             gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."

model(manual_input,
      max_new_tokens=256,
      temperature=0.9,
      top_p=0.7)
```
# Original model card

## Model details

The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that excels at both, confirming my theory. (More details to be released at a later time.)

This type of merge cannot be illustrated, as each of its 360 tensors has a unique ratio applied to it. As with my prior merges, gradients were part of these ratios to further finetune its behaviour.
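As an illustration only (the actual per-tensor ratios are not published, and the helper names below are invented for this sketch), a gradient merge of this kind can be pictured as blending each pair of parent tensors with a ratio that slides linearly from favouring one parent's input layers to the other's output layers:

```python
def gradient_ratios(n_tensors, start=0.9, end=0.1):
    """Linearly interpolated per-tensor blend ratio for the first parent.

    Purely illustrative: the real merge used hand-tuned, non-uniform ratios.
    """
    step = (end - start) / (n_tensors - 1)
    return [start + i * step for i in range(n_tensors)]

def merge_tensors(a, b, ratio):
    """Blend two same-shaped tensors (flat lists of weights) element-wise."""
    return [ratio * x + (1 - ratio) * y for x, y in zip(a, b)]

# Toy example: four "tensors" of three weights each.
model_a = [[1.0, 1.0, 1.0] for _ in range(4)]
model_b = [[0.0, 0.0, 0.0] for _ in range(4)]
ratios = gradient_ratios(len(model_a))
merged = [merge_tensors(a, b, r) for a, b, r in zip(model_a, model_b, ratios)]
```

Early tensors in `merged` sit close to `model_a`, late tensors close to `model_b`, which is the intuition behind using one parent for "understanding" and the other for "writing".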
## Prompt Format

This model primarily uses Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>

### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.

### Response:
```
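The template above can be assembled programmatically; a minimal sketch (the `build_prompt` helper and its argument names are illustrative, not part of any library API):

```python
def build_prompt(system: str, instruction: str) -> str:
    """Assemble an Alpaca-style prompt as described above."""
    return (
        f"{system}\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "You are a helpful assistant.",
    "Tell me about your last dream, please.",
)
```

The generated string can then be passed to the model call shown in the inference section, and generation is typically stopped at the next `### Instruction:` marker.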
---
license: other
---