We reuse the CoBSAT benchmark for few-shot image generation tasks. There are 800 support and 200 query examples, organized so that each of the 100 scenarios (defined by the task, e.g. colour, and the choice of the latent variable, e.g. the object value) has 8 support and 2 query examples. When sampling the support examples, we need to ensure that they share the same `task` and the same value of the latent variable `latent`, which can be either the value of `attribute` or the value of `object`. The `question` gives the value of the latent variable and defines what image should be generated, the `image` is the generated image, and the `answer` is a list [value of the latent variable, value of the non-latent variable]. For each image we also store the values of `object` and `attribute`.

Source of data: https://github.com/UW-Madison-Lee-Lab/CoBSAT
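
To make the sampling constraint concrete, here is a minimal Python sketch of drawing an n-shot support set for a given query. The field names follow the description above; the file layout (`cobsat/support.json`, `cobsat/query.json`) is an assumption for illustration and may differ from the actual release.

```python
import json
import random

def sample_support(query, support_pool, n_shot=8, seed=0):
    """Sample n_shot support examples that share the query's task and latent value."""
    # Per the description above, support examples must match the query on
    # both `task` and the value of the latent variable `latent`.
    candidates = [
        ex for ex in support_pool
        if ex["task"] == query["task"] and ex["latent"] == query["latent"]
    ]
    rng = random.Random(seed)
    return rng.sample(candidates, n_shot)

# File names and folder are assumptions; adjust to the released layout.
with open("cobsat/support.json") as f:
    support_pool = json.load(f)
with open("cobsat/query.json") as f:
    queries = json.load(f)

support_set = sample_support(queries[0], support_pool, n_shot=8)
```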
## Text ICL Variations
We have also released text variations of the CLEVR, Operator Induction, and interleaved Operator Induction datasets to reproduce the comparison of multimodal and text ICL (Figure 7). You can either use the `query.json` in the `{dataset}_text/` folder for "text support set + text query", or the `query.json` in the `{dataset}/` folder for "text support set + multimodal query".
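
As a quick illustration of the two configurations, the sketch below loads both query files. The dataset name `operator_induction` and the relative paths are assumptions; adjust them to the released folder layout.

```python
import json

dataset = "operator_induction"  # hypothetical dataset name for illustration

# "Text support set + text query": queries from the `{dataset}_text/` folder.
with open(f"{dataset}_text/query.json") as f:
    text_queries = json.load(f)

# "Text support set + multimodal query": queries from the `{dataset}/` folder.
with open(f"{dataset}/query.json") as f:
    multimodal_queries = json.load(f)
```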