TheBloke committed on
Commit
ebb3b40
1 Parent(s): 9f0c663

Upload README.md

Files changed (1):
  1. README.md +16 -4
README.md CHANGED
@@ -410,12 +410,14 @@ And thank you again to a16z for their generous grant.
 </p>
 
 
-### Dataset:
-1. Selected from OpenOrca
-2. Intel Orca-DPO-Pairs
-3. Privately Crafted Dataset
 
+## Outperformer GPT3.5turbo & Claude-v1
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/c9btBdopOpM06VuBsvRxq.png)
+
+## Touch nearby GPT4 on MT-Bench
 
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/QhcLDoOGZznkvy0v4FsUY.png)
 
 
 **########## First turn ##########**
@@ -462,6 +464,16 @@ and you carefully consider each step before providing answers.
 \n\n### Instruction:\n{instruction}\n\n### Response:
 
 
+### Dataset:
+1. Selected from OpenOrca
+2. Intel Orca-DPO-Pairs
+3. Privately Crafted Dataset
+
+### Training:
+1. SFT with Mixed dataset from OpenOrca
+2. The Next DPO dataset made by xDAN-AI
+3. The Next DPO Training method by xDAN-AI
+
 ## Created By xDAN-AI at 2023-12-15
 ## Eval by FastChat: https://github.com/lm-sys/FastChat.git
 
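The second hunk's context shows the model's Alpaca-style prompt template (`\n\n### Instruction:\n{instruction}\n\n### Response:`). A minimal sketch of filling it in; the helper name and the example system-prompt wording (echoing the hunk header's context line) are illustrative, not part of the README:

```python
def build_prompt(system_prompt: str, instruction: str) -> str:
    """Assemble an Alpaca-style prompt matching the README's template:
    the system prompt, then '\\n\\n### Instruction:\\n{instruction}\\n\\n### Response:'.
    """
    return f"{system_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:"


# Illustrative usage; the system prompt text here is an assumption.
prompt = build_prompt(
    "You are a helpful assistant, and you carefully consider each step "
    "before providing answers.",
    "Explain what a DPO preference pair is.",
)
```

The generated string ends with `### Response:`, after which the model's completion is sampled.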