---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) lab imagepair version 14
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: data.jsonl
---
# Version 1

Image sizes 1-10. Compare the color histograms of two images.
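The histogram comparison can be sketched in plain Python. The `color_histogram` helper and the example grids below are illustrative, not the dataset's generator code; ARC-style images are 2D grids of color values 0-9.

```python
from collections import Counter

def color_histogram(image):
    """Count occurrences of each color value in a 2D grid."""
    return Counter(value for row in image for value in row)

# Two small ARC-style grids (values 0-9 are colors).
image_a = [[1, 1, 2],
           [3, 2, 2]]
image_b = [[2, 2, 5],
           [1, 5, 5]]

hist_a = color_histogram(image_a)  # Counter({2: 3, 1: 2, 3: 1})
hist_b = color_histogram(image_b)  # Counter({5: 3, 2: 2, 1: 1})

# Colors present in both images, and colors unique to each.
shared = set(hist_a) & set(hist_b)
only_a = set(hist_a) - set(hist_b)
only_b = set(hist_b) - set(hist_a)
```

Serializing questions and answers about `shared`, `only_a`, and `only_b` for image pairs of varying sizes is the kind of task this dataset trains on.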
# Version 2

Image sizes 1-20. `Histogram.remove_other_colors()` excludes the colors not shared between two histograms. These bigger images caused problems for the model during learning.
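`Histogram.remove_other_colors()` lives in the generator code and isn't shown here; a minimal stand-in, assuming it keeps only the counts for colors that also appear in the other histogram:

```python
from collections import Counter

def remove_other_colors(hist, other):
    """Hypothetical stand-in for Histogram.remove_other_colors():
    keep only the colors of `hist` that also occur in `other`."""
    return Counter({color: count for color, count in hist.items()
                    if color in other})

hist_a = Counter({1: 4, 2: 3, 7: 1})
hist_b = Counter({2: 5, 7: 2, 9: 6})

filtered = remove_other_colors(hist_a, hist_b)  # colors 2 and 7 survive
```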
# Version 3

Smaller image sizes: width 1-20, height 1-5. This trains much better.
# Version 4

Smaller image sizes: width 1-5, height 1-20.
# Version 5

Slightly bigger image sizes: width 1-10, height 1-20.
# Version 6

Slightly bigger image sizes: width 1-15, height 10-30. This was too hard for the LLM to learn.
# Version 7

Slightly smaller image sizes: width 1-15, height 10-20.
# Version 8

Image sizes 10-20. This was too hard for the LLM to learn.
# Version 9

Image width 1-20, height 1-5. This was easy for the LLM to learn.
# Version 10

I want to try adding just one more row of height and see how that impacts the training loss.

Image width 1-20, height 1-6.

Training with that extra row, it was easy for the LLM to learn.
# Version 11

I want to try adding just one more row of height and see how that impacts the training loss.

Image width 1-20, height 1-7.

Training with that extra row, it was easy for the LLM to learn.
# Version 12

I want to try adding just one more row of height and see how that impacts the training loss.

Image width 1-20, height 1-8.

Training with that extra row, it was easy for the LLM to learn.
# Version 13

I want to try adding just one more row of height and see how that impacts the training loss.

Image width 1-20, height 1-9.

Training with that extra row, it took quite some time for the LLM to learn.
# Version 14

I want to try adding just one more row of height and see how that impacts the training loss.

Image width 1-20, height 1-10.

Added a `benchmark` column to the dataset.