PotatoOff committed on
Commit
0932e21
1 Parent(s): 3c90c3d

Create README.md

Files changed (1): README.md ADDED (+31 -0)
---
language:
- en
pipeline_tag: text-generation
tags:
- miqu
- 70b model
- cat
- miqu cat
---
## Welcome to Miqu Cat: A 70B Miqu LoRA Fine-Tune

Introducing **Miqu Cat**, an advanced model fine-tuned by Dr. Kal'tsit and then quantized for the ExllamaV2 project, bringing the model down to an impressive 4.8 bits per weight (bpw). This quantization allows those with limited computational resources to explore its capabilities without compromise.
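As a rough sanity check (my own back-of-the-envelope arithmetic, not a figure from the model card), the weight footprint of a 70B-parameter model at 4.8 bpw works out to:

```python
# Back-of-the-envelope estimate of the weight footprint of a
# 70B-parameter model quantized to 4.8 bits per weight (bpw).
# KV cache and activation memory are extra and not counted here.

def weight_footprint_gib(n_params: float, bits_per_weight: float) -> float:
    """Return the approximate size of the weights in GiB."""
    total_bytes = n_params * bits_per_weight / 8  # 8 bits per byte
    return total_bytes / 1024**3

size = weight_footprint_gib(70e9, 4.8)
print(f"~{size:.1f} GiB of weights")  # roughly 39 GiB
```

For comparison, the same formula gives roughly 130 GiB at 16-bit precision, which is why the quantized variant is so much more approachable on consumer hardware.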

### Competitive Edge - *meow!*

Miqu Cat stands out among Miqu fine-tunes, consistently performing admirably in tests and comparisons. It's crafted to be less restrictive and more robust than its predecessors and variants, making it a versatile tool for AI-driven applications.

### How to Use Miqu Cat: The Nitty-Gritty

Miqu Cat operates on the **ChatML** prompt format, designed for straightforward and effective interaction. Whether you're integrating it into existing systems or using it for new projects, its flexible prompt structure facilitates ease of use.
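The card doesn't include a prompt example, so here is a minimal sketch of the ChatML layout. The `<|im_start|>`/`<|im_end|>` delimiters and role names are the standard ChatML convention, not anything specific to Miqu Cat:

```python
# Minimal ChatML prompt builder. ChatML wraps each turn in
# <|im_start|>{role}\n{content}<|im_end|> and leaves an open
# assistant turn at the end for the model to complete.

def build_chatml(messages: list[dict]) -> str:
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # model continues from here
    return "\n".join(parts)

prompt = build_chatml([
    {"role": "system", "content": "You are Miqu Cat, a helpful assistant."},
    {"role": "user", "content": "Say hello."},
])
print(prompt)
```

In practice you would pass the resulting string to whatever inference frontend you use (ExllamaV2, text-generation-webui, etc.), or rely on the tokenizer's built-in chat template if one ships with the model.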
22
+
23
+ ### Training Specs
24
+
25
+ - **Dataset**: 1.5 GB
26
+ - **Compute**: Dual setup of 8xA100 nodes
27
+ - **Duration**: Approximately 1000 hours of intensive training
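Read literally, those specs imply a substantial compute budget. The arithmetic below assumes the ~1000 hours is wall-clock time with both 8xA100 nodes running concurrently; the card doesn't say, so treat this as an upper-bound estimate:

```python
# GPU-hour estimate for the quoted setup: 2 nodes x 8 A100s,
# ~1000 wall-clock hours. Assumes both nodes ran the full duration.
nodes = 2
gpus_per_node = 8
wall_clock_hours = 1000

gpu_hours = nodes * gpus_per_node * wall_clock_hours
print(f"{gpu_hours:,} A100 GPU-hours")  # 16,000 A100 GPU-hours
```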

### Meet the Author

**Dr. Kal'tsit** has been at the forefront of this fine-tuning process, ensuring that Miqu Cat gives the user a unique feel.