---
license: mit
task_categories:
  - text-generation
language:
  - en
tags:
  - regex
  - redos
  - security
pretty_name: RegexEval
size_categories:
  - n<1K
---

# Dataset Card for RegexEval

Re(gEx|DoS)Eval is a framework that includes a dataset of 762 regex descriptions (prompts) from real users, refined prompts with examples, and a robust set of test strings for each regex.

## Dataset Details

### Dataset Description

  • Curated by: Mohammed Latif Siddiq, Jiahao Zhang, Lindsay Roney, and Joanna C. S. Santos
  • Language(s): English

### Dataset Sources

## Dataset Structure

- `dataset.jsonl`: the dataset in JSON Lines format. Each line is a JSON object with the following fields (a loading sketch follows this list):
  - `id`: unique identifier of the sample.
  - `raw_prompt`: the raw/original prompt from real users, describing the RegEx.
  - `refined_prompt`: the refined prompt describing the RegEx.
  - `matches`: example strings that the RegEx should match.
  - `non-matches`: example strings that the RegEx should not match.
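A minimal sketch of reading the file with the Python standard library; it assumes a local copy of `dataset.jsonl` and only uses the field names listed above:

```python
import json

# One JSON object per line, per the JSON Lines format.
with open("dataset.jsonl", "r", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f if line.strip()]

first = samples[0]
print(first["id"], first["raw_prompt"][:60])
print(len(first["matches"]), "match examples,",
      len(first["non-matches"]), "non-match examples")
```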

## Dataset Creation

### Source Data

We mined (on Aug. 16th, 2023) all the regexes from RegExLib, a regular expression library. We used this library because it contains user-contributed regular expressions. We obtained from RegExLib a list of 4,128 regular expressions along with their ids, descriptions, and lists of expected matching and non-matching strings.

### Data Collection and Processing

For each sample previously collected, we performed a manual validation to (1) filter out incorrect regexes, (2) create more sample test cases (i.e., matching and non-matching string examples), and (3) create refined problem descriptions (i.e., prompts). We excluded any regex that met one or more of the following conditions:

1. it was missing any metadata, i.e., description and/or list of expected matches and non-matches;
2. its description was not written in English;
3. its description included vulgar words;
4. its description did not provide sufficient information to understand the purpose of the regular expression;
5. it aimed to detect just one word;
6. it was incorrect (i.e., the regex matched a string that it was not supposed to match, or it did not match a string that it was expected to match).

After this step, 1,001 regex samples remained.

Each collected regex sample had (on average) only 4 string examples (2 expected matches and 2 expected non-matches). Thus, we manually crafted additional test cases to ensure that each sample has at least 13 matching and 12 non-matching string examples. After creating these additional test strings, we re-evaluated each regex against the new set of test cases and excluded the samples that failed. Hence, we have 762 samples in our final dataset.
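The correctness check above can be expressed as a small validation routine. This is a sketch, not the authors' actual tooling; it assumes full-string matching semantics, which the card does not specify (substring search is equally plausible):

```python
import re

def passes_all_tests(regex_str: str, matches: list[str], non_matches: list[str]) -> bool:
    """True iff the regex accepts every expected match and rejects
    every expected non-match."""
    try:
        pattern = re.compile(regex_str)
    except re.error:
        return False  # malformed regexes fail validation outright
    # Assumption: full-string matching; partial matching would use re.search.
    return (all(pattern.fullmatch(s) for s in matches)
            and not any(pattern.fullmatch(s) for s in non_matches))

# Hypothetical sample: an SQL-style date tester.
print(passes_all_tests(r"\d{4}-\d{2}-\d{2}", ["2023-08-16"], ["16/08/2023"]))  # True
```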

Upon further inspection of the descriptions in the extracted samples, we observed that some of them lacked a more detailed explanation (e.g., ID#84: “SQL date format tester.”) or had extra information unrelated to the regex (e.g., ID#4: “... Other than that, this is just a really really long description of a regular expression that I’m using to test how my front page will look in the case where very long expression descriptions are used”). Thus, we created a refined prompt with a clear description of the regex that includes three match and two non-match string examples.
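For illustration only, a refined prompt could be assembled from a sample's description and example strings as below; the template is an assumption, not the authors' exact wording:

```python
def build_refined_prompt(description: str, matches: list[str], non_matches: list[str]) -> str:
    # Hypothetical template; the actual wording of refined_prompt may differ.
    lines = [description, "", "Strings that should match:"]
    lines += [f"- {s}" for s in matches[:3]]       # three match examples
    lines += ["", "Strings that should not match:"]
    lines += [f"- {s}" for s in non_matches[:2]]   # two non-match examples
    return "\n".join(lines)

print(build_refined_prompt(
    "Match an SQL date in YYYY-MM-DD format.",
    ["2023-08-16", "1999-12-31", "2000-01-01"],
    ["16/08/2023", "2023-8-16"],
))
```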

## Citation

BibTeX:

```bibtex
@inproceedings{siddiq2024regexeval,
  author={Siddiq, Mohammed Latif and Zhang, Jiahao and Roney, Lindsay and Santos, Joanna C. S.},
  booktitle={Proceedings of the 46th International Conference on Software Engineering, NIER Track (ICSE-NIER '24)},
  title={Re(gEx|DoS)Eval: Evaluating Generated Regular Expressions and their Proneness to DoS Attacks},
  year={2024}
}
```

## Dataset Card Authors and Contact

Mohammed Latif Siddiq