Commit 4872e2f by DrNicefellow (parent: ab2be0b): Update README.md

Files changed: README.md (+25 -0)
---
license: apache-2.0
---
# ChatAllInOne_Mixtral-8x7B-v1

## Description
ChatAllInOne_Mixtral-8x7B-v1 is a chat language model fine-tuned on the CHAT-ALL-IN-ONE-v1 dataset using the QLoRA technique. Based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), this version is optimized for diverse and comprehensive chat applications.

## Model Details
- **Base Model**: [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Fine-tuning Technique**: QLoRA (Quantized Low-Rank Adaptation)
- **Dataset**: [CHAT-ALL-IN-ONE-v1](https://huggingface.co/datasets/DrNicefellow/CHAT-ALL-IN-ONE-v1)
- **Tool Used for Fine-tuning**: [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

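An Axolotl QLoRA run is driven by a single YAML config. The exact configuration used for this model was not published, so the sketch below is only illustrative of what such a config typically looks like: the model, dataset, and repository names come from this card, while all hyperparameter values and the dataset `type` are assumptions.

```yaml
# Illustrative Axolotl QLoRA config sketch (hypothetical values; the
# actual training configuration for this model was not published here).
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
load_in_4bit: true            # QLoRA: base weights quantized to 4-bit
adapter: qlora

datasets:
  - path: DrNicefellow/CHAT-ALL-IN-ONE-v1
    type: sharegpt            # assumed dataset format

lora_r: 32                    # LoRA rank (assumed)
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true      # apply LoRA to all linear layers

sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 4
learning_rate: 0.0002
```

With a config like this, training is launched via Axolotl's CLI (`accelerate launch -m axolotl.cli.train config.yml`); only the low-rank adapter weights are updated while the quantized base model stays frozen.
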
## Features
- Enhanced understanding and generation of conversational language.
- Improved performance across diverse chat scenarios, including casual, formal, and domain-specific conversations.
- Fine-tuned to maintain context and coherence over longer dialogues.

## Prompt Format

Vicuna 1.1

See the fine-tuning dataset for examples.

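The Vicuna 1.1 format prefixes the conversation with a fixed system preamble and marks turns with `USER:` / `ASSISTANT:`. A minimal sketch of assembling such a prompt (the helper below is illustrative, not part of this repository; exact spacing and end-of-turn tokens can vary between implementations):

```python
# Minimal sketch of the Vicuna 1.1 prompt format (illustrative helper,
# not part of this repository).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_vicuna_prompt(turns):
    """turns: list of (user_msg, assistant_msg_or_None) pairs.

    Pass None as the assistant message for the final turn to leave an
    open "ASSISTANT:" slot for the model to complete.
    """
    parts = [SYSTEM]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            parts.append("ASSISTANT:")
        else:
            # </s> terminates a completed assistant turn in many
            # Vicuna-style implementations (an assumption here).
            parts.append(f"ASSISTANT: {assistant_msg}</s>")
    return " ".join(parts)

prompt = build_vicuna_prompt([("What is QLoRA?", None)])
```

The resulting string can be passed directly to the model's tokenizer; the model's reply is everything generated after the trailing `ASSISTANT:` marker.
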
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.