dsmueller committed on
Commit 608f6aa (parent: 1a10fbd)

Update README.md

Files changed (1): README.md (+29, -1)
README.md CHANGED
@@ -5,4 +5,32 @@ datasets:
  base_model: mistralai/Mistral-7B-Instruct-v0.1
  ---
  First model fine tune trained from https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1.
- Code to create this here: https://colab.research.google.com/drive/1Wsi7q1sBJlXrVZAbxhMRZuKnhSFeU9mu?usp=sharing
+ Code to create this here: https://colab.research.google.com/drive/1Wsi7q1sBJlXrVZAbxhMRZuKnhSFeU9mu?usp=sharing
+
+ Parameters used for fine tuning:
+ `model_params={
+ "project_name": project_name,
+ "model_name": model_name,
+ "repo_id": username+'/'+repo_name,
+ "block_size": block_size,
+ "model_max_length": max_token_length,
+ "logging_steps": -1,
+ "evaluation_strategy": "epoch",
+ "save_total_limit": 1,
+ "save_strategy": "epoch",
+ "mixed_precision": "fp16",
+ "lr": 0.00003,
+ "epochs": 3,
+ "batch_size": 1,
+ "warmup_ratio": 0.1,
+ "gradient_accumulation": 1,
+ "optimizer": "adamw_torch",
+ "scheduler": "linear",
+ "weight_decay": 0,
+ "max_grad_norm": 1,
+ "seed": 42,
+ "quantization": "int4",
+ "lora_r": 16,
+ "lora_alpha": 32,
+ "lora_dropout": 0.05
+ }`
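
The training itself lives in the Colab notebook linked above and is not reproduced in this card. As a rough, non-authoritative illustration only, the sketch below shows how these AutoTrain-style parameters could map onto an equivalent transformers + PEFT + bitsandbytes setup; the output directory, device placement, and dataset handling are assumptions, and the values left unspecified in the card (project_name, block_size, max_token_length, the dataset) remain placeholders.

```python
# Hypothetical sketch of the configuration above as a transformers + PEFT
# + bitsandbytes fine-tune of Mistral-7B-Instruct-v0.1. This is NOT the
# notebook's actual code; dataset, sequence length, and output paths are
# placeholders.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "mistralai/Mistral-7B-Instruct-v0.1"

# "quantization": "int4" -> load the base model in 4-bit via bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # matches "mixed_precision": "fp16"
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# "lora_r": 16, "lora_alpha": 32, "lora_dropout": 0.05
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# The remaining model_params expressed as TrainingArguments.
# block_size / model_max_length would govern tokenization and packing in the
# notebook; they are not represented here.
training_args = TrainingArguments(
    output_dir="mistral-7b-instruct-finetune",  # placeholder for project_name
    learning_rate=3e-5,                 # "lr": 0.00003
    num_train_epochs=3,                 # "epochs": 3
    per_device_train_batch_size=1,      # "batch_size": 1
    gradient_accumulation_steps=1,      # "gradient_accumulation": 1
    warmup_ratio=0.1,                   # "warmup_ratio": 0.1
    optim="adamw_torch",                # "optimizer": "adamw_torch"
    lr_scheduler_type="linear",         # "scheduler": "linear"
    weight_decay=0.0,                   # "weight_decay": 0
    max_grad_norm=1.0,                  # "max_grad_norm": 1
    seed=42,                            # "seed": 42
    fp16=True,                          # "mixed_precision": "fp16"
    evaluation_strategy="epoch",        # "evaluation_strategy": "epoch"
    save_strategy="epoch",              # "save_strategy": "epoch"
    save_total_limit=1,                 # "save_total_limit": 1
)
# A Trainer (or trl's SFTTrainer) would then be constructed with these
# arguments plus the training dataset prepared in the notebook.
```

With int4 quantization of the base weights and a rank-16 LoRA adapter, only the adapter parameters are updated during training, which is what makes a 7B-parameter fine-tune feasible on a single Colab GPU at batch size 1.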