WebraftAI committed on
Commit
d6add3e
1 Parent(s): fa9b91a

Create README.md

README.md ADDED

# SynapseLLM

SynapseLLM is a series of large language models developed by WebraftAI with the aim of building robust, generalized, and decentralized information systems. This repository houses the SynapseLLM finetune of Mistral. The finetuning was performed on a custom dataset that, while limited in scope, covers code and general question-answering scenarios, demonstrating the model's applicability within these domains.

## Model Details

**SynapseLLM** training hyperparameters (a configuration sketch follows the list):
- Parameters: 7B
- Learning rate: 2e-4
- Adapter used: QLoRA
- Precision: float16
- Batch size: 16
- Maximum gradient norm: 0.3
- Optimizer: paged_adamw_32bit
- Warmup ratio: 0.03
- Steps trained: 100
- Epochs trained: 1
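
For illustration, here is how the hyperparameters above might map onto a `transformers`/`peft` QLoRA setup. This is a minimal sketch under assumptions, not the actual training script: the LoRA rank, alpha, and dropout, the 4-bit quantization settings, and the output path are not stated in this card.

```python
# Hedged sketch: maps the listed hyperparameters onto a transformers/peft
# QLoRA configuration. Only the TrainingArguments values come from this
# card; the LoRA and quantization settings are assumptions.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # assumption: standard QLoRA recipe
    bnb_4bit_quant_type="nf4",             # assumption
    bnb_4bit_compute_dtype=torch.float16,  # Precision: float16
)

lora_config = LoraConfig(
    r=64,                                  # assumption: rank not stated
    lora_alpha=16,                         # assumption
    lora_dropout=0.1,                      # assumption
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="synapsellm-7b-qlora",      # placeholder path
    learning_rate=2e-4,                    # Learning rate: 2e-4
    per_device_train_batch_size=16,        # Batch size: 16
    max_grad_norm=0.3,                     # Maximum gradient norm: 0.3
    optim="paged_adamw_32bit",             # Optimizer: paged_adamw_32bit
    warmup_ratio=0.03,                     # Warmup ratio: 0.03
    max_steps=100,                         # Steps trained: 100 (1 epoch reported)
    fp16=True,                             # Precision: float16
)

# bnb_config would be passed to AutoModelForCausalLM.from_pretrained(...,
# quantization_config=bnb_config), and lora_config plus training_args to an
# SFT-style trainer; the dataset wiring is omitted here.
```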

### Model Description

This is a 7B-parameter, decoder-only, transformer-based model finetuned on chat Q/A and code instructions. It is a preview finetune of Mistral 7B v0.1 on a sample dataset of 409k rows, comprising 140k general code, 143k GPT-3.5 Q/A, 63k Python code, and 54k general Q/A (generated through GPT-4); each row contains one instruction and one response. This is the full model with the trained adapters merged in, so it can be loaded directly through the `transformers` library, as shown below.
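
Because the adapters are already merged, loading follows the standard `transformers` path. A minimal sketch, assuming a placeholder repository id (the actual Hugging Face path is not given in this section):

```python
# Minimal loading sketch via the transformers library. The repo id is a
# placeholder (assumption); substitute this model's actual repository path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WebraftAI/synapsellm-7b"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```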

- **Developed by:** WebraftAI
- **Funded by:** Webraft Cloud
- **Shared by:** WebraftAI
- **Model type:** Decoder-only Transformer
- **Language(s):** English only
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7B-v0.1