Undi95 and evangineer committed
Commit 1b62994
Parent: 24c3192

Update README.md (#1)


- Update README.md (dc255c309f63a7ab7f6053c6ca59c4eb6538c97b)


Co-authored-by: Mamading Ceesay <evangineer@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +23 -3
README.md CHANGED
@@ -1,5 +1,25 @@
- https://huggingface.co/migtissera/Synthia-7B-v1.3
-
- Will edit Readme when OG repo will edit his readme lmao.
-
- If you want to support me, you can [here](https://ko-fi.com/undiai).
 
+ This is a GGUF quant of https://huggingface.co/migtissera/Synthia-7B-v1.3
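+
+ A quant like this can be run locally with llama.cpp or its bindings. Below is a minimal sketch using llama-cpp-python; the .gguf filename is a hypothetical placeholder, so check this repo's file list for the actual quant names.
+
+ ```python
+ # Minimal sketch: load the quant with llama-cpp-python and run one completion.
+ # The filename below is a hypothetical placeholder, not a file guaranteed here.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="synthia-7b-v1.3.Q4_K_M.gguf",  # hypothetical quant filename
+     n_ctx=4096,  # matches the model's 4096-token training context
+ )
+
+ out = llm("What is a Tree of Thoughts?", max_tokens=256)
+ print(out["choices"][0]["text"])
+ ```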
 
+ If you want to support me, you can [here](https://ko-fi.com/undiai).
+
+ # Synthia v1.3
+
+ SynthIA (Synthetic Intelligent Agent) v1.3 is a Mistral-7B model trained on Orca-style datasets. It has been fine-tuned for instruction following and for holding long-form conversations.
+
+ To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
+
+ `Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.`
+
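+ As an illustration, that system message can be wired into a single-turn prompt as sketched below. The SYSTEM/USER/ASSISTANT layout is an assumption modeled on the original Synthia model card; verify the exact template there before relying on it.
+
+ ```python
+ # Sketch of prompt construction. The SYSTEM/USER/ASSISTANT template is an
+ # assumption -- confirm it against the original Synthia-7B-v1.3 model card.
+ SYSTEM_MESSAGE = (
+     "Elaborate on the topic using a Tree of Thoughts and backtrack when "
+     "necessary to construct a clear, cohesive Chain of Thought reasoning. "
+     "Always answer without hesitation."
+ )
+
+ def build_prompt(user_message: str) -> str:
+     # Single turn; append further USER/ASSISTANT pairs for long-form chats.
+     return f"SYSTEM: {SYSTEM_MESSAGE}\nUSER: {user_message}\nASSISTANT:"
+
+ print(build_prompt("Why does ice float on water?"))
+ ```
+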
+ All Synthia models are uncensored. Please use them with caution and with the best intentions. You are responsible for how you use Synthia.
+
+ ## Training Details
+ Like all my models, this was trained with QLoRA. The learning rate was 3e-4 with a 4096-token context length, and the batch size was 64 on a single H100.
+
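+ For readers who want to see the shape of such a setup, here is a hedged QLoRA sketch wired to the stated hyperparameters. The base checkpoint, LoRA rank/alpha, target modules, and the 8 x 8 batch split are illustrative assumptions, not the actual training recipe.
+
+ ```python
+ # Hedged QLoRA sketch: 4-bit base weights plus LoRA adapters, using the
+ # stated hyperparameters. Adapter settings and batch split are assumptions.
+ import torch
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
+ from peft import LoraConfig, get_peft_model
+
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,                    # the "Q" in QLoRA: 4-bit base weights
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+ model = AutoModelForCausalLM.from_pretrained(
+     "mistralai/Mistral-7B-v0.1",          # assumed base checkpoint
+     quantization_config=bnb_config,
+ )
+
+ model = get_peft_model(model, LoraConfig(  # adapter hyperparameters assumed
+     r=64, lora_alpha=16, lora_dropout=0.05,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+     task_type="CAUSAL_LM",
+ ))
+
+ args = TrainingArguments(
+     output_dir="synthia-qlora",
+     learning_rate=3e-4,                   # stated learning rate
+     per_device_train_batch_size=8,        # 8 x grad accumulation 8 = batch 64
+     gradient_accumulation_steps=8,
+     bf16=True,                            # inputs packed/truncated to 4096 tokens
+ )
+ ```
+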
+ Training used the Synthia-v1.2 dataset, which contains Chain-of-Thought (Orca), Tree-of-Thought, and long-form conversation data.
+
+ The dataset is very high quality but not massive (~125K samples).
+
+ ## License Disclaimer:
+
+ This model is bound by the license & usage restrictions of the original Mistral model, and comes with no warranty or guarantees of any kind.