TAPEX Zero

Model description

TAPEX Zero is a language model trained with symbolic instruction tuning: rather than relying solely on human-written instructions, it is tuned on synthetic symbolic tasks (such as SQL execution) that can be generated programmatically at scale. This approach improves performance on unseen tasks, as demonstrated by a case study on zero-shot table reasoning.
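
The sketch below shows one plausible way to query the model for zero-shot table reasoning, assuming the checkpoint follows the TAPEX (BART-based) seq2seq interface in the transformers library. The model ID "sail/tapex-zero-large" is a placeholder; substitute the actual checkpoint name from this repository.

```python
# A minimal usage sketch, assuming a TAPEX-style (BART-based) checkpoint.
# "sail/tapex-zero-large" is a hypothetical placeholder model ID.
import pandas as pd
from transformers import TapexTokenizer, BartForConditionalGeneration

model_name = "sail/tapex-zero-large"  # placeholder, not confirmed
tokenizer = TapexTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# TapexTokenizer expects a pandas DataFrame with string-valued cells.
table = pd.DataFrame({
    "city": ["Paris", "Berlin", "Madrid"],
    "population_millions": ["2.1", "3.6", "3.3"],
})
query = "which city has the largest population?"

# The tokenizer linearizes the table and the query into one sequence.
inputs = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```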

Training data

The model is trained on a mixture of datasets that includes symbolic tasks, such as executing SQL queries over tables, alongside generic instruction-tuning data so that the model retains its original performance on standard benchmarks.
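
To make the symbolic-task format concrete, the sketch below constructs a hypothetical SQL-execution training instance. The table linearization follows a common TAPEX-style convention, but the exact serialization used for TAPEX Zero may differ, so treat the layout and field names as assumptions.

```python
# Illustrative only: a hypothetical synthetic SQL-execution instance.
# The exact serialization used to train TAPEX Zero may differ; this
# sketch just shows the general "program in, answer out" pattern.
table = {
    "header": ["city", "population_millions"],
    "rows": [["Paris", "2.1"], ["Berlin", "3.6"], ["Madrid", "3.3"]],
}

def flatten(table):
    """Linearize a table into one string (a common TAPEX-style convention)."""
    parts = ["col : " + " | ".join(table["header"])]
    for i, row in enumerate(table["rows"], start=1):
        parts.append(f"row {i} : " + " | ".join(row))
    return " ".join(parts)

# The input pairs a synthetic SQL query with the flattened table;
# the target is the query's execution result.
source = "select city where population_millions = 3.6 " + flatten(table)
target = "Berlin"
print(source)
print(target)
```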

Limitations and bias

As with any machine learning model, TAPEX Zero inherits limitations and biases from its training data. It may not perform well on tasks that differ substantially from those seen during training. In addition, because the symbolic tasks are generated by programs, they may not fully capture the complexity and variability of human-written instructions.

Code base

The code used to train TAPEX Zero is available at https://github.com/sail-sg/symbolic-instruction-tuning.
