---
license: mit
---
# TAPEX Zero
## Model description
TAPEX Zero is an approach to instruction tuning that augments the training mixture with synthetic, symbolic tasks generated by programs. Tuning on such tasks improves the performance of language models on unseen tasks, as demonstrated by a case study on zero-shot table reasoning.
## Intended uses
TAPEX Zero can be used to fine-tune language models on instruction-style tasks, especially when labeled data is limited. It can also be applied zero-shot, generalizing to unseen tasks without any labeled examples.
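
Below is a minimal sketch of zero-shot table reasoning with the standard `transformers` seq2seq API. It assumes the hub id `SivilTaram/tapex-zero-large` (inferred from this repository) and the TAPEX-style flattened-table serialization; the exact input format used during training may differ, so treat this as illustrative rather than the canonical usage.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hub id inferred from this repository; adjust if the checkpoint lives elsewhere.
model_id = "SivilTaram/tapex-zero-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Flatten a small table into a linear string (header row, then data rows),
# following the TAPEX-style "col : ... row 1 : ..." convention.
table = {
    "header": ["city", "population"],
    "rows": [["Paris", "2.1M"], ["Berlin", "3.6M"]],
}
flat_table = " col : " + " | ".join(table["header"]) + "".join(
    f" row {i + 1} : " + " | ".join(row) for i, row in enumerate(table["rows"])
)

# Ask a question over the table and generate the answer zero-shot.
question = "Which city has the larger population?"
inputs = tokenizer(question + flat_table, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```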
## Training data
The model is trained on a mixture of datasets that includes symbolic tasks such as SQL query execution, alongside generic instruction-tuning datasets so that the model retains its original performance on those datasets.
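
To make the symbolic-task idea concrete, here is a hedged sketch of how one SQL training instance could be assembled: a query is executed over a table, and the flattened (query, table) pair becomes the input while the execution result becomes the target. The paper's actual generation pipeline and serialization are not documented in this card, so the details below are assumptions for illustration.

```python
import sqlite3

# Toy table and a program-generated SQL query over it.
header = ["city", "population"]
rows = [("Paris", 2100000), ("Berlin", 3600000)]
sql = "SELECT city FROM t WHERE population > 3000000"

# Execute the query to obtain the supervision target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (city TEXT, population INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
answer = ", ".join(str(r[0]) for r in conn.execute(sql).fetchall())

# Flatten the table and pair it with the query as one training example.
flat_table = " col : " + " | ".join(header) + "".join(
    f" row {i + 1} : " + " | ".join(str(v) for v in row) for i, row in enumerate(rows)
)
example = {"input": sql + flat_table, "target": answer}
print(example)
```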
## Limitations and bias
As with any machine learning model, TAPEX Zero may have limitations and biases based on the training data it was exposed to. It may not perform well on tasks that are significantly different from those in the training data. Additionally, since symbolic tasks are generated by programs, they may not fully capture the complexity and variability of human-generated instructions.
## Code base
The code used to train TAPEX Zero is available at https://github.com/openai/symbolic-tasks-instruction-tuning