Commit 4b46f8f (parent 14ff21c) by Natkituwu: Update README.md
---
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- instruct
- conversational
- roleplay
license: cc-by-4.0
---
+
13
+ <h1 style="text-align: center">Erosumika-7B-v3</h1>
14
+
15
+ <div style="display: flex; justify-content: center;">
16
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/6512681f4151fb1fa719e033/ZX5NLfB2CctdwuctS9W8A.gif" alt="Header GIF">
17
+ </div>
18
+
19
+
20
+ 4.0bpw exl2 quant. great for 16k+ context on 6GB GPUS!
21
+
22
+ Original Model : (https://huggingface.co/localfultonextractor/Erosumika-7B-v3)
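As a rough sanity check on the 6 GB claim, the arithmetic works out as follows (back-of-envelope only; the ~7.24B parameter count is an assumption based on Mistral-7B-class models):

```python
# Back-of-envelope VRAM estimate for a 4.0bpw EXL2 quant of a ~7B model.
# The parameter count is an assumption (Mistral-7B has ~7.24B parameters).
params = 7.24e9
bits_per_weight = 4.0

weight_gib = params * bits_per_weight / 8 / 2**30
print(f"quantized weights: ~{weight_gib:.2f} GiB")  # ~3.37 GiB

# Whatever remains of a 6 GiB card is left for the KV cache and
# activations, which is why long (16k+) contexts can still fit.
budget_gib = 6.0
print(f"headroom for cache/activations: ~{budget_gib - weight_gib:.2f} GiB")
```

Actual usable headroom is lower once the OS, display, and framework overhead take their share.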

## Model Details
A DARE TIES merge of Nitral's [Kunocchini-7b](https://huggingface.co/Nitral-AI/Kunocchini-7b), Endevor's [InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B), and localfultonextractor's [FlatErosAlpha](https://huggingface.co/localfultonextractor/FlatErosAlpha), a flattened (to keep the vocab size at 32000) version of tavtav's [eros-7B-ALPHA](https://huggingface.co/tavtav/eros-7B-ALPHA). The Alpaca and ChatML prompt formats work best.

[GGUF quants](https://huggingface.co/localfultonextractor/Erosumika-7B-v3-GGUF)
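The Alpaca and ChatML prompt formats recommended above can be sketched as plain strings (a minimal illustration; the system/instruction wording here is my own, not prescribed by the model card):

```python
# Minimal templates for the two prompt formats the card recommends.
# The system and instruction text is illustrative, not prescribed.

def alpaca_prompt(instruction: str) -> str:
    """Classic single-turn Alpaca format."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def chatml_prompt(system: str, user: str) -> str:
    """ChatML format, ending at the open assistant turn for generation."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(alpaca_prompt("Describe a rainy street at night."))
print(chatml_prompt("You are a creative writing assistant.", "Continue the scene."))
```

Most frontends (SillyTavern, text-generation-webui) ship both of these as built-in presets, so hand-building the strings is only needed for custom pipelines.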

## Limitations and Biases
The intended use case for this model is fictional writing for entertainment purposes; any other use is out of scope.
It may produce socially unacceptable or undesirable text, even when the prompt itself contains nothing explicitly offensive. Outputs may often be factually wrong or misleading.

```yaml
base_model: localfultonextractor/FlatErosAlpha
models:
  - model: localfultonextractor/FlatErosAlpha
  - model: Epiculous/InfinityRP-v1-7B
    parameters:
      density: 0.4
      weight: 0.25
  - model: Nitral-AI/Kunocchini-7b
    parameters:
      density: 0.3
      weight: 0.35
merge_method: dare_ties
dtype: bfloat16
```
Note: the tokenizer was copied from InfinityRP-v1-7B.
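To reproduce the merge, a config like the one above can be fed to mergekit's CLI (a sketch; the config filename and output path are placeholders):

```shell
# Install mergekit, then run the DARE TIES config from this card.
# "erosumika-v3.yml" and "./Erosumika-7B-v3" are illustrative names.
pip install mergekit
mergekit-yaml erosumika-v3.yml ./Erosumika-7B-v3 --cuda
```

Running the merge requires enough disk and RAM/VRAM to hold all three source models.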