August4293 committed
Commit 7d1b7da · verified · 1 Parent(s): b690bb4

Update README.md

Files changed (1)
  README.md +5 -6
README.md CHANGED
@@ -2,17 +2,16 @@
  library_name: transformers
  tags: []
  ---
-
- # Mistral 7b Self-Alignment SFT Model
+ # Mistral 7b Self-Alignment DPO Model
 
- The Mistral 7b Self-Alignment SFT Model is an adapter fine-tuned specifically for self-alignment and harmlessness. It has been trained using the Mistral Self-Alignment Preference Dataset, which can be accessed [here](https://huggingface.co/datasets/August4293/Preference-Dataset).
+ The Mistral 7b Self-Alignment DPO Model is an adapter fine-tuned for self-alignment and harmlessness using the Direct Preference Optimization (DPO) technique. It has been trained utilizing the Mistral Self-Alignment Preference Dataset, accessible [here](https://huggingface.co/datasets/August4293/Preference-Dataset).
 
- The fine-tuning process is detailed on the corresponding [GitHub page](https://github.com/August-murr/Lab/tree/main/Mistral%20Self%20Alignment), providing insights into the methodology and purpose behind the model's adaptation.
+ Detailed information about the DPO fine-tuning process and its application for self-alignment can be found on the corresponding [GitHub page](https://github.com/August-murr/Lab/tree/main/Mistral%20Self%20Alignment).
 
  ## Model Details:
  - **Base Model:** Mistral 7b
  - **Fine-Tuning Purpose:** Self-Alignment and Harmlessness
- - **Fine-Tuning Dataset:** Mistral Self-Alignment Preference Dataset
-
+ - **Fine-Tuning Method:** Direct Preference Optimization (DPO)
+ - **Fine-Tuning Dataset:** [Mistral Self-Alignment Preference Dataset](https://huggingface.co/datasets/August4293/Preference-Dataset)
 
 
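
Since the card describes an adapter fine-tuned on top of Mistral 7b, a minimal loading sketch may help. It assumes the adapter is in standard PEFT/LoRA format; the base checkpoint id (`mistralai/Mistral-7B-v0.1`) and the adapter repo id used below are illustrative assumptions, not taken from this commit.

```python
# Sketch: load the DPO adapter on top of a Mistral 7B base model.
# Assumptions (not stated in the diff): standard PEFT adapter format,
# base checkpoint "mistralai/Mistral-7B-v0.1", hypothetical adapter repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"                     # assumed base model
adapter_id = "August4293/mistral-7b-self-alignment-dpo"   # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)      # attach the adapter

prompt = "How can I stay safe online?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```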