Memphis outperforms human-data models that are over twice its size, as well as SFT models of its size, and trades blows with the Zephyr DPO model. That said, Zephyr uses synthetic data, and *much* more of it.

It is unclear why Zephyr performs so poorly on BBH. Perhaps it is overfit.

Notes:

- Evaluations were performed using the `agieval` branch of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) (commit `0bef5c9c273b1c2f68e6018d4bb9c32b9aaff298`), using the `vllm` model (see the sketch below).
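
For reference, here is a minimal sketch of how such an evaluation could be launched through the harness's Python API. This is not the exact configuration used: it assumes a recent lm-evaluation-harness install with vLLM available, the checkpoint path and task list are placeholders, and the pinned `agieval` branch commit may expose a slightly different interface (e.g. a `main.py` entry point).

```python
# Sketch only: assumes a recent lm-evaluation-harness with the vLLM backend
# installed; the pinned `agieval` branch commit may differ from this API.
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",                                   # the `vllm` model backend noted above
    model_args="pretrained=euirim/Memphis-CoT-3B",  # placeholder checkpoint; substitute the model under test
    tasks=["bbh"],                                  # illustrative; the reported suite covers more than BBH
)
print(results["results"])
```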