Commit c013894 by munish0838 (parent: a70f14a): Create README.md

Files changed (1): README.md (+57 -0)
---
license: apache-2.0
datasets:
- openbmb/UltraFeedback
- openbmb/UltraInteract_pair
tags:
- reasoning
- preference_learning
- kto
pipeline_tag: text-generation
base_model: openbmb/Eurus-7b-kto
---

# openbmb/Eurus-7b-kto-GGUF

This is a quantized (GGUF) version of [openbmb/Eurus-7b-kto](https://huggingface.co/openbmb/Eurus-7b-kto).

# Model Description

Eurus-7B-KTO is [KTO](https://arxiv.org/abs/2402.01306) fine-tuned from [Eurus-7B-SFT](https://huggingface.co/openbmb/Eurus-7b-sft) on all multi-turn trajectory pairs in [UltraInteract](https://huggingface.co/openbmb/UltraInteract) and all pairs in [UltraFeedback](https://huggingface.co/openbmb/UltraFeedback).

It achieves the best overall performance among open-source models of similar sizes and, in many cases, even outperforms specialized models in the corresponding domains. Notably, Eurus-7B-KTO outperforms baselines that are 5× larger.

## Usage

We apply tailored prompts for coding and math, consistent with UltraInteract data formats:

**Coding**

```
[INST] Write Python code to solve the task:
{Instruction} [/INST]
```

**Math-CoT**

```
[INST] Solve the following math problem step-by-step.
Simplify your answer as much as possible. Present your final answer as \\boxed{Your Answer}.
{Instruction} [/INST]
```

**Math-PoT**

```
[INST] Tool available:
[1] Python interpreter
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment.
Solve the following math problem step-by-step.
Simplify your answer as much as possible.
{Instruction} [/INST]
```
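
As a minimal sketch, the three templates above can be built programmatically before the prompt is handed to a GGUF runtime such as llama.cpp. The helper names below (`build_prompt` and the template constants) are illustrative and not part of this model card; only the template text itself comes from the formats shown above.

```python
# Prompt builders for the three UltraInteract-style formats shown above.
# Helper names are hypothetical; the template text follows the model card.

CODING = "[INST] Write Python code to solve the task:\n{instruction} [/INST]"

MATH_COT = (
    "[INST] Solve the following math problem step-by-step.\n"
    "Simplify your answer as much as possible. "
    "Present your final answer as \\boxed{{Your Answer}}.\n"
    "{instruction} [/INST]"
)

MATH_POT = (
    "[INST] Tool available:\n"
    "[1] Python interpreter\n"
    "When you send a message containing Python code to python, "
    "it will be executed in a stateful Jupyter notebook environment.\n"
    "Solve the following math problem step-by-step.\n"
    "Simplify your answer as much as possible.\n"
    "{instruction} [/INST]"
)

def build_prompt(instruction: str, mode: str = "coding") -> str:
    """Wrap an instruction in the tailored Eurus prompt for the given mode."""
    templates = {"coding": CODING, "math-cot": MATH_COT, "math-pot": MATH_POT}
    return templates[mode].format(instruction=instruction)

print(build_prompt("Sort a list of integers.", "coding"))
```

The resulting string can then be passed as the prompt to whatever GGUF inference tool you use (for example, llama.cpp or a compatible binding).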

## Evaluation

- Both the 7B and 70B variants of Eurus achieve the best overall performance among open-source models of similar sizes, and in many cases even outperform specialized models in the corresponding domains. Notably, Eurus-7B outperforms baselines that are 5× larger, and Eurus-70B achieves better performance than GPT-3.5 Turbo.
- Preference learning with UltraInteract can further improve performance, especially on math and multi-turn tasks.

<img src="figures_main_exp.png" alt="stats" style="zoom: 40%;" />