Natkituwu committed
Commit 82cb4f5
1 Parent(s): 8596e8f

Update README.md

Files changed (1): README.md (+41, -0)
README.md CHANGED
@@ -1,3 +1,44 @@
  ---
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - text-generation-inference
+ - instruct
+ - conversational
+ - roleplay
  license: cc-by-4.0
  ---
+
+ <h1 style="text-align: center">Erosumika-7B-v3-0.2</h1>
+ <h2 style="text-align: center">~Mistral 0.2 Edition~</h2>
15
+
16
+ <div style="display: flex; justify-content: center;">
17
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/6512681f4151fb1fa719e033/8YBKfcegQZliRlQNm0oir.gif" alt="Header GIF">
18
+ </div>
19
+
20
+
+ 7.1bpw quant of Erosumika-7B-v3-0.2. Original model: https://huggingface.co/localfultonextractor/Erosumika-7B-v3-0.2
+
+ This quant aims to be the best way to run 16k context within 8GB of VRAM while keeping the weights at as high a precision as possible.
+
+ ## Model Details
+ The Mistral 0.2 version of Erosumika-7B-v3, a DARE TIES merge between Nitral's [Kunocchini-7b](https://huggingface.co/Nitral-AI/Kunocchini-7b), Endevor's [InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and my [FlatErosAlpha](https://huggingface.co/localfultonextractor/FlatErosAlpha), a flattened (in order to keep the vocab size at 32000) version of tavtav's [eros-7B-ALPHA](https://huggingface.co/tavtav/eros-7B-ALPHA). Alpaca and ChatML work best. It is slightly smarter and has better prompt comprehension than the Mistral 0.1 Erosumika-7B-v3. 32k context should work.
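The exact recipe for that DARE TIES merge is not part of this commit. Purely as an illustration, a mergekit config for a three-way DARE TIES merge of those models could look roughly like the sketch below; the base model, weights, and densities are assumed placeholder values, not the author's actual settings.

```yaml
# Hypothetical sketch only - not the recipe actually used for Erosumika-7B-v3.
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1   # assumed base for the original v3 merge
models:
  - model: Nitral-AI/Kunocchini-7b
    parameters:
      weight: 0.4
      density: 0.5
  - model: Endevor/InfinityRP-v1-7B
    parameters:
      weight: 0.3
      density: 0.5
  - model: localfultonextractor/FlatErosAlpha
    parameters:
      weight: 0.3
      density: 0.5
dtype: float16
```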
+
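Since the card recommends Alpaca and ChatML prompting but does not include a template, here is the standard Alpaca-style format for reference (the system line is the usual Alpaca wording, not something specified by the author):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```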
+ [GGUF quants](https://huggingface.co/localfultonextractor/Erosumika-7B-v3-0.2-GGUF)
+
+ ## Limitations and biases
+ The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
+ It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
+
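The yaml added below appears to be the mergekit recipe that moves Erosumika-7B-v3 onto alpindale's Mistral-7B-v0.2-hf base via task arithmetic: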
+ ```yaml
+ merge_method: task_arithmetic
+ base_model: alpindale/Mistral-7B-v0.2-hf
+ models:
+   - model: localfultonextractor/Erosumika-7B-v3
+     parameters:
+       weight: 1.0
+ dtype: float16
+ ```
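Not part of the commit, but for context: mergekit configs in this format are typically run with the `mergekit-yaml` command line tool (roughly `mergekit-yaml config.yml ./output-directory`), assuming mergekit is installed.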