---
license: apache-2.0
---

# DCA-Benchmark

[![arXiv](https://img.shields.io/badge/Arxiv-2406.07275-blueviolet?logo=arxiv)](https://arxiv.org/abs/2406.07275) [![Github](https://img.shields.io/badge/Github-black?logo=github)](https://github.com/TRAIS-Lab/dca-bench)

**DCA-Benchmark** aims to provide a comprehensive benchmark for evaluating LLM agents' capabilities in discovering data quality issues across online dataset platforms, representing the first step of the curation pipeline. Throughout this document, we refer to such an LLM agent as a **"Curator"** to highlight its role in this task. A well-performing Curator can detect and locate existing issues, which is critical for subsequent fixes by human maintainers or other LLM agents.

We collected 91 representative samples from 8 online dataset platforms and classified them into 4 types with 18 tags according to their content and difficulty.

![image](https://huggingface.co/datasets/Jasoncsc/DCA-Bench/resolve/main/dca.png)

## Key Features

- **Real-world Cases with Minimal Simplification**: All test cases in DCA-Benchmark reference real-world sources, allowing benchmark users to better understand them in practical scenarios. To test the Curator's ability in complex real-world environments, DCA-Benchmark includes all relevant dataset files for each test case, not just the flawed ones.
- **Multiple Difficulty Levels**: DCA-Benchmark provides four levels of hints for each test case. With higher-level hints, the Curator gains more information about the content and location of the issue. This approach makes the task more achievable and gauges how much information the Curator needs to detect these issues.
- **Accurate Automatic Evaluation**: Unlike traditional machine learning tasks, dataset curation does not have labels that can be directly evaluated by scripts. Human-level effort is required to rate the Curator's performance, which is not scalable. Therefore, we developed an automatic and accurate evaluation scheme using GPT-4 to replace human annotators.

## Getting Started

Please refer to our [GitHub Repository](https://github.com/TRAIS-Lab/dca-bench?tab=readme-ov-file#contribute-new-datapoint) to get the Benchmark Suite Code.
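If you only need a local copy of the benchmark files hosted in this dataset repository, the sketch below uses `huggingface_hub` to download and list them. The repository id is taken from the image URL above; the target directory name is only an example, and the file layout plus any task-specific loading code should follow the instructions in the GitHub repository.

```python
# Minimal sketch: download the DCA-Bench dataset repository and list its files.
# Assumes the data is hosted at "Jasoncsc/DCA-Bench" (the repo this card lives in);
# "dca_bench_data" is just an example target directory.
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Jasoncsc/DCA-Bench",
    repo_type="dataset",
    local_dir="dca_bench_data",
)

# Inspect what was downloaded before running the Benchmark Suite Code.
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```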
## Statement of Potential Ethical Concerns and Justification

### Dataset Content and Ethical Considerations

Our dataset, comprising 91 data points, includes content that may be considered sensitive or potentially controversial. Specifically:

- 4 data points (4.4% of the dataset) involve ethical or legal risks:
  - 2 instances (IDs: 7488a00f-397f-42fe-80bf-15b0f490b690, 7e8f31cb-8c2a-4676-b3d4-941a64184a26) contain content that may exhibit bias towards specific groups of people.
  - 2 instances (IDs: f192fe08-bb50-46dd-973d-8ba37d338758, 38d3fd72-c2b1-47e2-b64e-b579dc66887c) present potential legal risks.

### Justification for Inclusion

While we acknowledge the sensitive nature of these data points, their inclusion in our dataset is both intentional and necessary for the following reasons:

- **Benchmark Objectives**: A primary goal of our benchmark is to identify and assess potential ethical and legal risks in AI systems. Including these sensitive data points is crucial for thoroughly evaluating whether AI models can recognize and appropriately handle such content.
- **Realistic Representation**: These data points reflect real-world scenarios that AI systems may encounter. By including them, we ensure our benchmark provides a more comprehensive and authentic assessment of AI performance.

### Copyright and Licensing

We have curated a [table](https://docs.google.com/spreadsheets/d/1jweqxg7jZ97Knl1f2Y64RqolDc8x14kLS9iJn8rjkbc/edit?gid=0#gid=0) in which all files involved in DCA-Bench are annotated with their license information. Each data point in DCA-Bench has two types of license:

- The license of the platform that hosts the dataset
- The license of the dataset itself

**Details:**

- Some data points in DCA-Bench involve files from two datasets; in those cases we list all licenses.
- Many datasets do not list a license on their data card. Some of them point to other resources from which their data was collected, and those resources may carry their own licenses; because this makes the situation complicated, we record such datasets as "None", meaning no license is stated.
- We notice that one [dataset](https://www.kaggle.com/datasets/roche-data-science-coalition/uncover/data) with ID 21ca944c-cf82-4764-bb2c-4c8db0cee950 claims "Data files © Original Authors", which is not a standard license. We have reached out to the dataset owner for clarification but have not received a response.

#### How does this secondary usage of user-generated data comply with restrictions?

DCA-Bench involves user-generated data (comments, modified code) collected from dataset repositories hosted on GitHub, Hugging Face, and Kaggle.

**For GitHub**, we collected the comments and modified code generated by users. According to section D.3, paragraph 2 of the [GitHub Terms of Service](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service#c-acceptable-use),

> Because you retain ownership of and responsibility for Your Content, we need you to grant us — and other GitHub Users — certain legal permissions, listed in Sections D.4 — D.7.

According to section [D.5. License Grant to Other Users](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service#5-license-grant-to-other-users), if no specific license is provided, any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others.
However, these terms do not clearly state which uses of such content are permitted and which are not. We have therefore opened a [GitHub Discussion](https://github.com/orgs/community/discussions/135466) and will update this section once we receive a response. We also note that prior works ([The Stack](https://huggingface.co/datasets/bigcode/the-stack/discussions/43), [SWE-Bench](https://arxiv.org/abs/2310.06770)) make similar use of GitHub data, which suggests that this usage is acceptable.

**For Hugging Face**, according to the [Hugging Face Content Policy](https://huggingface.co/content-guidelines), content types may include:

- "ML Artifacts": Code and assets hosted as Hugging Face Repositories, including Models, Datasets, and Spaces;
- "Community Content": Content that can be found in the Community section of the Hugging Face Platform, including discussions, comments, and usernames, as well as related documentation such as READMEs, model cards, data cards, pull requests, and merges.

According to the [Hugging Face Terms of Service](https://huggingface.co/terms-of-service), Section "Your Content",

> If you decide to set your Repository public, you grant each User a perpetual, irrevocable, worldwide, royalty-free, non-exclusive license to use, display, publish, reproduce, distribute, and make derivative works of your Content through our Services and functionalities;

Therefore, we believe our usage of user-generated content from Hugging Face qualifies as "derivative works" here, which is acceptable.

**For Kaggle**, after reviewing their [Terms of Use](https://www.kaggle.com/terms), we found no explicit guidelines regarding the use of user-submitted content for academic research purposes. We have sent Kaggle a request for clarification and will update this section once we receive a response.

**Lastly, our collection and evaluation processes exclude the gathering of any GitHub user information. We commit to removing the content upon request with a valid reason.**