… Are All You Need
Self-Rewarding …
Tart – A plug-and-play Transformer module for task-agnostic reasoning
💡 Machine …
Training Data Influence Analysis and Estimation: A Survey
Sections: Tuning, Experiments, Tip
Orca: a 13B-parameter model with ChatGPT-level performance, thanks to a huge dataset of 5M samples with step-by-step explanations.
📝 Paper: https://arxiv.org/abs/2306.02707
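Orca builds its dataset by "explanation tuning": each instruction is paired with a system message that elicits step-by-step reasoning from the teacher model. A minimal sketch of how one such sample might be assembled — the system messages are paraphrased (the exact 16 are in the paper's appendix), and `call_teacher` is a hypothetical stand-in for the GPT-4 API call:

```python
import random

# System messages in the spirit of Orca's hand-written ones
# (paraphrased here; the exact wording is in the paper's appendix).
SYSTEM_MESSAGES = [
    "You are a helpful assistant. Think step by step and justify your answer.",
    "Explain your answer like you are teaching a five-year-old.",
    "Describe the task first, then explain your answer in detail.",
]

def call_teacher(system: str, user: str) -> str:
    # Hypothetical stand-in for the GPT-4 call that produced
    # Orca's explanation traces.
    return f"Step 1: restate the task. Step 2: reason it out. Answer to: {user}"

def build_sample(instruction: str) -> dict:
    """Assemble one explanation-tuning sample (system, user, response)."""
    system = random.choice(SYSTEM_MESSAGES)
    return {
        "system": system,
        "user": instruction,
        "response": call_teacher(system, instruction),
    }
```

The key idea is that the student imitates the teacher's reasoning traces, not just its final answers.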
The model will probably never be released by Microsoft, but open-source projects try to replicate it (OpenOrca, Dolphin).
The authors note that while Vicuna-13B displays excellent performance when evaluated by GPT-4, it performs quite poorly on benchmarks like the SAT, LSAT, GRE, and GMAT.
Self-Instruct involves using an initial set of prompts to ask the model to create new instructions. Low-quality or overly similar responses are removed, and the remaining ones are recycled back into the task pool.
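That loop can be sketched in a few lines — a self-contained approximation where `generate_instructions` is a dummy stand-in for the LLM call, and near-duplicates are filtered with a simple `difflib` similarity ratio (the paper itself uses ROUGE-L):

```python
import difflib
import random

def too_similar(candidate: str, pool: list[str], threshold: float = 0.7) -> bool:
    """Reject candidates that overlap too much with any instruction in the pool."""
    return any(
        difflib.SequenceMatcher(None, candidate.lower(), seen.lower()).ratio() > threshold
        for seen in pool
    )

def generate_instructions(pool: list[str], n: int) -> list[str]:
    # Dummy stand-in for the LLM call: the real pipeline prompts the model
    # with a few in-context examples drawn from the pool.
    verbs = ["Summarize", "Explain", "Write a quiz about", "List three facts about"]
    topics = ["sorting algorithms", "the water cycle", "HTTP caching", "French verbs"]
    return [f"{random.choice(verbs)} {random.choice(topics)}." for _ in range(n)]

def self_instruct(seed_prompts: list[str], rounds: int = 3, per_round: int = 8) -> list[str]:
    pool = list(seed_prompts)                   # task pool starts from the seeds
    for _ in range(rounds):
        for candidate in generate_instructions(pool, per_round):
            if len(candidate.split()) < 3:      # crude low-quality filter
                continue
            if too_similar(candidate, pool):    # drop near-duplicates
                continue
            pool.append(candidate)              # recycle survivors into the pool
    return pool
```

Each round, the surviving instructions enlarge the pool that seeds the next round of generation, which is what lets a small hand-written seed set bootstrap a large instruction dataset.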