wolfram committed
Commit 7793179 • 1 Parent(s): d769608

Update README.md

Files changed (1)
  1. README.md (+19 -8)
README.md CHANGED
@@ -11,7 +11,6 @@ library_name: transformers
  tags:
  - mergekit
  - merge
- license: other
  ---
  # miqu-1-120b
  
@@ -28,6 +27,8 @@ Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit) -
  
  Thanks for the EXL2 and GGUF quants, [Lone Striker](https://huggingface.co/LoneStriker)!
  
+ Also available: [miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0) – Miqu's younger, fresher sister; a new and improved Goliath-like merge of Miqu and lzlv.
+
  ## Review
  
  u/SomeOddCodeGuy wrote on r/LocalLLaMA:
@@ -44,7 +45,12 @@ u/SomeOddCodeGuy wrote on r/LocalLLaMA:
  
  (Note: All I did was merge this, though, so the credit mostly belongs to [Mistral AI](https://mistral.ai/) (giving proper attribution!) and the creators of [mergekit](https://github.com/arcee-ai/mergekit) as well as [Venus-120b-v1.2](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2) and [MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b) who inspired it.)
  
- ## Prompt template: Mistral
+ ## Model Details
+
+ * Max Context: 32764 tokens (kept the weird number from the original/base model)
+ * Layers: 140
+
+ ### Prompt template: Mistral
  
  ```
  <s>[INST] {prompt} [/INST]
@@ -52,11 +58,6 @@ u/SomeOddCodeGuy wrote on r/LocalLLaMA:
  
  See also: [🐺🐦‍⬛ LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with **17** different instruct templates : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)
  
- ## Model Details
-
- * Max Context: 32764 tokens (kept the weird number from the original/base model)
- * Layers: 140
-
  ## Merge Details
  
  ### Merge Method
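
The Model Details block added above lists a Mistral-style prompt template and a 32764-token context window. As a quick, illustrative aid (not part of this commit), a minimal Python sketch of wrapping a user message in that template might look like the following; the sample prompt is made up, and whether you need to include `<s>` yourself depends on your tokenizer or backend, which the card does not specify:

```python
# Minimal sketch of the Mistral instruct format listed in the Model Details.
# Assumptions: the sample prompt is invented; many tokenizers prepend the <s>
# BOS token automatically, in which case add_bos should be False.
def format_mistral_prompt(prompt: str, add_bos: bool = True) -> str:
    """Wrap a single user message in the <s>[INST] ... [/INST] template."""
    bos = "<s>" if add_bos else ""
    return f"{bos}[INST] {prompt} [/INST]"

# The card lists a max context of 32764 tokens; prompt plus response must fit
# within that budget.
print(format_mistral_prompt("Summarize the idea behind a 120b frankenmerge."))
# -> <s>[INST] Summarize the idea behind a 120b frankenmerge. [/INST]
```
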
@@ -111,4 +112,14 @@ slices:
  
  * [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
  
- #### DISCLAIMER: THIS IS [BASED ON A LEAKED ASSET](https://huggingface.co/miqudev/miqu-1-70b/discussions/10) AND HAS NO LICENSE ASSOCIATED WITH IT. USE AT YOUR OWN RISK.
+ ## Disclaimer
+
+ *This model contains leaked weights and due to its content it should not be used by anyone.* 😜 But seriously:
+
+ ### License
+
+ **What I *know*:** [Weights produced by a machine are not copyrightable](https://www.reddit.com/r/LocalLLaMA/comments/1amc080/psa_if_you_use_miqu_or_a_derivative_please_keep/kpmamte/) so there is no copyright owner who could grant permission or a license to use, or restrict usage, once you have acquired the files.
+
+ ### Ethics
+
+ **What I *believe*:** All generative AI, including LLMs, only exists because it is trained mostly on human data (both public domain and copyright-protected, most likely acquired without express consent) and possibly synthetic data (which is ultimately derived from human data, too). It is only fair if something that is based on everyone's knowledge and data is also freely accessible to the public, the actual creators of the underlying content. Fair use, fair AI!
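
A note on the Merge Details referenced above: the last hunk header shows the change sits just past the `slices:` section of the mergekit config, which this diff does not display. The jump from an 80-layer, Llama-2-70B-style base to the 140 layers listed in Model Details is what an interleaved passthrough-style merge typically produces. The slice ranges below are only an assumed illustration of how overlapping 20-layer slices on a 10-layer stride add up to 140; the authoritative layout is the YAML in the README itself.

```python
# Illustrative sketch only: one common interleaving pattern for 120b-class
# self-merges. The actual slice ranges for miqu-1-120b are defined in the
# mergekit YAML in the README, which this diff does not show.
base_layers = 80  # Llama-2-70B-style models such as miqu-1-70b have 80 layers

# Assumed pattern: 20-layer slices starting every 10 layers (10-layer overlap).
slices = [(start, start + 20) for start in range(0, base_layers - 10, 10)]

total = sum(end - start for start, end in slices)
print(slices)  # [(0, 20), (10, 30), (20, 40), (30, 50), (40, 60), (50, 70), (60, 80)]
print(total)   # 140 -- matching "Layers: 140" in the new Model Details section
```
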