Update README.md
README.md CHANGED
@@ -1,10 +1,14 @@
 ---
+base_model: TomGrc/FusionNet_7Bx2_MoE_14B
+model_creator: HanNayeoniee
+model_name: LHK_DPO_v1
 license: mit
 language:
 - en
 ---

 # LHK_DPO_v1
+- Original model: [HanNayeoniee/LHK_DPO_v1](https://huggingface.co/HanNayeoniee/LHK_DPO_v1)
 LHK_DPO_v1 is trained via Direct Preference Optimization(DPO) from [TomGrc/FusionNet_7Bx2_MoE_14B](https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B).

 ## Details
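The README states the model is trained via Direct Preference Optimization (DPO). As a minimal illustrative sketch only (not the authors' training code), the per-example DPO objective can be computed from scalar sequence log-probabilities under the policy and a frozen reference model; the function name, arguments, and `beta` value below are all hypothetical:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Sketch of the per-example DPO loss from sequence log-probs.

    All arguments are hypothetical scalar log-probabilities of the
    chosen/rejected responses under the policy and reference models;
    beta controls the implicit KL-penalty strength.
    """
    # Log-ratios of policy vs. frozen reference for each response
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # DPO logit: scaled margin between chosen and rejected log-ratios
    logits = beta * (chosen_logratio - rejected_logratio)
    # -log(sigmoid(x)) rewritten as log(1 + exp(-x))
    return math.log1p(math.exp(-logits))

# Zero margin (policy identical to reference) gives -log(1/2) = log 2
print(dpo_loss(-1.0, -2.0, -1.0, -2.0))  # ≈ 0.6931
```

Training drives the loss below `log 2` by increasing the chosen response's log-ratio relative to the rejected one; libraries such as TRL implement this same objective over batched token-level log-probabilities.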