Update README.md
README.md (changed)
@@ -47,7 +47,7 @@ _For GPT-3.5, GPT-4 we used the few-shot approach, while for Jellyfish and Jelly
 [HoloClean](https://arxiv.org/abs/1702.00820) for Data Imputation
 2.
 [Large Language Models as Data Preprocessors](https://arxiv.org/abs/2308.16361)
-3. Jellyfish-13B-1.1 is set to be the next iteration of Jellyfish-13B and is presently under development. We
+3. Jellyfish-13B-1.1 is set to be the next iteration of Jellyfish-13B and is presently under development. We showcase its performance at this stage to highlight its impressive potential. As demonstrated in the table, it has already outperformed Non-LLM methods on the majority of benchmark datasets. We've optimized the training data for this 1.1 version, and its release is on the horizon.


 **Jellyfish paper will be coming soon!**