wolfram committed
Commit
210309a
1 Parent(s): 11ec7b9

Update README.md

Files changed (1):
  1. README.md (+9 -9)
README.md CHANGED
@@ -16,7 +16,7 @@ tags:
 
 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6303ca537373aacccd85d8a7/LxO9j7OykuabKLYQHIodG.jpeg)
 
-* HF FP16: [wolfram/miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b)
+- HF FP16: [wolfram/miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b)
 
 This is a 120b frankenmerge of [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with itself using [mergekit](https://github.com/cg123/mergekit).
 
@@ -34,8 +34,8 @@ See also: [🐺🐦‍⬛ LLM Prompt Format Comparison/Test: Mixtral 8x7B Instru
 
 ## Model Details
 
-* Max Context: 32764 tokens (kept the weird number from the original/base model)
-* Layers: 140
+- Max Context: 32764 tokens (kept the weird number from the original/base model)
+- Layers: 140
 
 ## Merge Details
 
@@ -81,15 +81,15 @@ slices:
 
 ## Credits & Special Thanks
 
-* original (unreleased) model: [mistralai (Mistral AI_)](https://huggingface.co/mistralai)
-* leaked model: [miqudev/miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b)
-* f16 model: [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
-* mergekit: [arcee-ai/mergekit: Tools for merging pretrained large language models.](https://github.com/arcee-ai/mergekit)
-* mergekit_config.yml: [nsfwthrowitaway69/Venus-120b-v1.2](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2)
+- original (unreleased) model: [mistralai (Mistral AI_)](https://huggingface.co/mistralai)
+- leaked model: [miqudev/miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b)
+- f16 model: [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
+- mergekit: [arcee-ai/mergekit: Tools for merging pretrained large language models.](https://github.com/arcee-ai/mergekit)
+- mergekit_config.yml: [nsfwthrowitaway69/Venus-120b-v1.2](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2)
 - gguf quantization: [ggerganov/llama.cpp: Port of Facebook's LLaMA model in C/C++](https://github.com/ggerganov/llama.cpp)
 
 ### Support
 
-* [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
+- [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
 
 #### DISCLAIMER: THIS IS [BASED ON A LEAKED ASSET](https://huggingface.co/miqudev/miqu-1-70b/discussions/10) AND HAS NO LICENSE ASSOCIATED WITH IT. USE AT YOUR OWN RISK.
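The README describes the merge only in prose (a 120b frankenmerge made by interleaving layers of miqu-1-70b-sf with itself via mergekit). For readers unfamiliar with mergekit, a passthrough merge of this kind looks roughly like the sketch below. This is an illustrative guess, not the repo's actual `mergekit_config.yml` (which the credits say came from Venus-120b-v1.2); the slice boundaries are chosen only so that 7 overlapping 20-layer windows of the 80-layer base model add up to the 140 layers stated in Model Details.

```yaml
# Hypothetical sketch of a mergekit passthrough config — NOT the actual
# mergekit_config.yml from this repo. It interleaves overlapping 20-layer
# windows of the 80-layer miqu-1-70b-sf, stepping by 10 layers, giving
# 7 * 20 = 140 layers (matching "Layers: 140" above).
merge_method: passthrough
slices:
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [0, 20]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [10, 30]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [20, 40]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [30, 50]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [40, 60]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [50, 70]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [60, 80]
dtype: float16
```

With mergekit installed, a config like this is run via its `mergekit-yaml` command (e.g. `mergekit-yaml config.yml ./output-model`); again, this only illustrates the layer-interleaving idea, not the exact slicing this model used.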