advABSA

An adversarial aspect-based sentiment analysis (ABSA) benchmark, dubbed advABSA, covering both aspect-based sentiment classification (SC) and opinion extraction (OE).

advABSA (Adversarial ABSA Benchmark)

In response to the concerning robustness issues of ABSA models, Arts was proposed; it contains datasets crafted for adversarial evaluation on SC only, not on OE. We therefore additionally craft datasets for adversarial evaluation on OE, following the same track. Together, these datasets form advABSA. That is, advABSA can be decomposed into two parts: the first, Arts-[domain]-SC, is reused from Arts, and the second, Arts-[domain]-OE, is newly produced by us.
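To experiment with one part of the benchmark, the split files need to be read into a common in-memory shape. The sketch below is hypothetical: the TSV/JSON mix, file contents, and field names (`sentence`, `label`) are stand-in assumptions for illustration, not the confirmed layout of this repository.

```python
import csv
import io
import json

def load_split(raw_text, fmt):
    """Normalize one split to a list of dicts, regardless of its on-disk format."""
    if fmt == "tsv":
        # Tab-separated file with a header row.
        return list(csv.DictReader(io.StringIO(raw_text), delimiter="\t"))
    if fmt == "json":
        # JSON array of records.
        return json.loads(raw_text)
    raise ValueError(f"unknown format: {fmt}")

# Inlined stand-in data; the real files' names, formats, and fields may differ.
train_tsv = "sentence\tlabel\nThe pizza was great\tpositive\n"
valid_json = '[{"sentence": "Service was slow", "label": "negative"}]'

train = load_split(train_tsv, "tsv")
valid = load_split(valid_json, "json")
```

Normalizing every split to the same record shape up front keeps downstream evaluation code format-agnostic.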

stdABSA (Standard ABSA Benchmark)

We also provide stdABSA, which contains datasets from SemEval14 for standard evaluation, namely Sem14-[domain]-SC and Sem14-[domain]-OE, so that the corresponding performance drops under adversarial evaluation can be measured properly.
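Such a drop is simply the gap between a model's score on the standard test set and its score on the adversarial one. The helper and numbers below are illustrative only, not results from the paper:

```python
def robustness_drop(std_acc, adv_acc):
    """Absolute and relative performance drop from standard to adversarial evaluation."""
    absolute = std_acc - adv_acc
    return absolute, absolute / std_acc

# Made-up scores purely for illustration.
abs_drop, rel_drop = robustness_drop(std_acc=0.90, adv_acc=0.72)
```

Reporting the relative drop alongside the absolute one makes models with different standard-set accuracies easier to compare.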

Citation

If you find advABSA useful, please kindly star this repository and cite our paper as follows:

@inproceedings{ma-etal-2022-aspect,
    title = "Aspect-specific Context Modeling for Aspect-based Sentiment Analysis",
    author = "Ma, Fang and Zhang, Chen and Zhang, Bo and Song, Dawei",
    booktitle = "NLPCC",
    month = "sep",
    year = "2022",
    address = "Guilin, China",
    url = "https://arxiv.org/pdf/2207.08099.pdf",
}

Credits

The benchmark is mainly processed by Fang Ma.
