---
license: mit
dataset_info:
  features:
  - name: 1 Round Prompt
    dtype: string
  - name: 2 Round Prompt
    dtype: string
  - name: 3 Round Prompt
    dtype: string
  - name: 4 Round Prompt
    dtype: string
  - name: 5 Round Prompt
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: GoEmotion
    num_bytes: 22269392
    num_examples: 500
  - name: BANKING77
    num_bytes: 63903578
    num_examples: 500
  - name: FewNERD
    num_bytes: 117612568
    num_examples: 500
  - name: TacRED
    num_bytes: 35790846
    num_examples: 500
  - name: Discovery
    num_bytes: 353082806
    num_examples: 500
  - name: DialogRE
    num_bytes: 35512103
    num_examples: 118
  download_size: 281420232
  dataset_size: 628171293
configs:
- config_name: default
  data_files:
  - split: GoEmotion
    path: data/GoEmotion-*
  - split: BANKING77
    path: data/BANKING77-*
  - split: FewNERD
    path: data/FewNERD-*
  - split: TacRED
    path: data/TacRED-*
  - split: Discovery
    path: data/Discovery-*
  - split: DialogRE
    path: data/DialogRE-*
---

This is the benchmark we adopt in [Long-context LLMs Struggle with Long In-context Learning](https://arxiv.org/abs/2404.02060). Check out our leaderboard at https://huggingface.co/spaces/TIGER-Lab/LongICL-Leaderboard.
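The metadata above stores each task as its own split, with one column per prompt round plus a `label` column. As a quick sketch of how the declared sizes fit together (the numbers below are copied from the header; the repo id in the comment is an assumption, not confirmed by this card):

```python
# (num_bytes, num_examples) per split, as declared in the YAML header above.
SPLITS = {
    "GoEmotion": (22269392, 500),
    "BANKING77": (63903578, 500),
    "FewNERD": (117612568, 500),
    "TacRED": (35790846, 500),
    "Discovery": (353082806, 500),
    "DialogRE": (35512103, 118),
}

# The per-split byte counts sum to the header's dataset_size.
dataset_size = sum(nbytes for nbytes, _ in SPLITS.values())
print(dataset_size)  # 628171293

# To work with one task, load its split by name, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("<this-repo-id>", split="GoEmotion")  # repo id: see this Hub page
#   prompt, label = ds[0]["5 Round Prompt"], ds[0]["label"]
```

Note that `DialogRE` has 118 examples rather than 500, so per-split evaluation counts are not uniform.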