TheBloke committed on
Commit
0d75d8f
1 Parent(s): 6d028f3

Upload README.md

Files changed (1)
  1. README.md +16 -4
README.md CHANGED
@@ -373,12 +373,14 @@ And thank you again to a16z for their generous grant.
 </p>
 
 
-### Dataset:
-1. Selected from OpenOrca
-2. Intel Orca-DPO-Pairs
-3. Privately Crafted Dataset
+## Outperformer GPT3.5turbo & Claude-v1
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/c9btBdopOpM06VuBsvRxq.png)
+
+## Touch nearby GPT4 on MT-Bench
 
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/QhcLDoOGZznkvy0v4FsUY.png)
 
 
 
 **########## First turn ##########**
@@ -425,6 +427,16 @@ and you carefully consider each step before providing answers.
 \n\n### Instruction:\n{instruction}\n\n### Response:
 
 
+### Dataset:
+1. Selected from OpenOrca
+2. Intel Orca-DPO-Pairs
+3. Privately Crafted Dataset
+
+### Training:
+1. SFT with Mixed dataset from OpenOrca
+2. The Next DPO dataset made by xDAN-AI
+3. The Next DPO Training method by xDAN-AI
+
 ## Created By xDAN-AI at 2023-12-15
 ## Eval by FastChat: https://github.com/lm-sys/FastChat.git
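The `\n\n### Instruction:\n{instruction}\n\n### Response:` line in the diff is an Alpaca-style prompt template, with the hunk context suggesting a system message ending in "and you carefully consider each step before providing answers." A minimal sketch of filling that template in Python — the full system text and the `build_prompt` helper are assumptions for illustration, not code from this repo:

```python
# Alpaca-style template as quoted in the README diff.
# The exact system message is an assumption based on the hunk context line.
SYSTEM = (
    "You are a helpful assistant, "
    "and you carefully consider each step before providing answers."
)
TEMPLATE = "{system}\n\n### Instruction:\n{instruction}\n\n### Response:"


def build_prompt(instruction: str, system: str = SYSTEM) -> str:
    """Fill the template with a system message and a user instruction.

    build_prompt is a hypothetical helper, not part of the xDAN-AI release.
    """
    return TEMPLATE.format(system=system, instruction=instruction)


prompt = build_prompt("Summarize DPO in one sentence.")
print(prompt)
```

The model's completion would then be generated after the trailing `### Response:` marker.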
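The Training list added in the second hunk describes SFT followed by DPO on preference pairs (e.g. Intel Orca-DPO-Pairs). As a reference point, the standard DPO objective over a chosen/rejected pair can be sketched as follows — this is the generic textbook formulation, not the private "Next DPO" method the commit refers to:

```python
import math


def dpo_loss(pi_chosen: float, pi_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """Generic DPO loss for a single preference pair.

    pi_* are the policy model's summed log-probabilities of the chosen and
    rejected responses; ref_* are the frozen reference model's. beta scales
    the implicit reward. Standard formulation only -- the "Next DPO" training
    method named in the diff is not public, so nothing here reproduces it.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response than the reference model does.
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # Loss is -log(sigmoid(logits)); it falls below log(2) once the policy
    # favors the chosen response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

With `beta=0.1` and log-probs of `(-1, -5)` for the policy against `(-2, -3)` for the reference, the margin is positive and the loss drops below `log(2)`.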