SivilTaram committed on
Commit b6043f1 · 1 Parent(s): edc02ed

Update README.md

Files changed (1): README.md (+22, -0)
README.md CHANGED

---
license: mit
---

# TAPEX Zero

## Model description

TAPEX Zero is an approach to instruction tuning that leverages synthetic symbolic tasks to improve the performance of language models on unseen tasks. Its effectiveness is demonstrated by a case study on zero-shot table reasoning.

## Intended uses

TAPEX Zero can be used to fine-tune language models on instruction-formatted tasks, especially when labeled data is limited. It can also be used for zero-shot learning, where the model generalizes to unseen tasks without any labeled data.
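
As a minimal, hypothetical usage sketch (the checkpoint identifier, model class, and table linearization below are assumptions, not confirmed release details), zero-shot table reasoning might look like this:

```python
# Hypothetical sketch only: assumes a seq2seq-style TAPEX Zero checkpoint on the
# Hugging Face Hub; "your-org/tapex-zero" is a placeholder, not a released name.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "your-org/tapex-zero"  # placeholder identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Linearize a small table into text and ask a question about it, zero-shot.
table = "col : city | population row 1 : Paris | 2,148,000 row 2 : Lyon | 513,000"
question = "which city has the larger population?"

inputs = tokenizer(f"{question} {table}", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```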

## Training data

The model is trained on a variety of datasets that include symbolic tasks such as SQL query execution. The training mixture also includes generic datasets so that the model maintains its original performance on them.
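
Purely as an illustration of the idea (this is a made-up instance, not an excerpt from the actual training data), a symbolic SQL-execution example might be structured like this:

```python
# Illustrative, hypothetical training instance for a symbolic SQL-execution task;
# the field names and linearization format are assumptions, not the dataset's schema.
example = {
    "instruction": "Execute the SQL query over the given table and return the answer.",
    "input": (
        "sql: select population from table where city = 'Lyon' "
        "table: col : city | population row 1 : Paris | 2,148,000 row 2 : Lyon | 513,000"
    ),
    "output": "513,000",
}
```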

## Limitations and bias

As with any machine learning model, TAPEX Zero may have limitations and biases based on the training data it was exposed to. It may not perform well on tasks that are significantly different from those in the training data. Additionally, since symbolic tasks are generated by programs, they may not fully capture the complexity and variability of human-generated instructions.

## Code base

The code used to train TAPEX Zero is available at https://github.com/openai/symbolic-tasks-instruction-tuning.