---
dataset_info:
  features:
  - name: question_id
    dtype: int64
  - name: title
    dtype: string
  - name: question_body
    dtype: string
  - name: question_type
    dtype: string
  - name: question_date
    dtype: string
  splits:
  - name: train
    num_bytes: 3433758
    num_examples: 3449
  - name: test
    num_bytes: 12055
    num_examples: 14
  download_size: 0
  dataset_size: 3445813
license: cc
task_categories:
- text-classification
language:
- en
tags:
- code
pretty_name: staqt
size_categories:
- 1K<n<10K
---

# Dataset Card for StaQT

## Dataset Description

### Dataset Summary

StaQT contains Stack Overflow questions annotated with the type of question being asked. Each question is assigned a category describing the asker's intent, among which:

- How-to: the user wants to do "x" and wants any solution for solving "x".
- Debug: the user wants to do "x" and has a clear idea/solution "y", but it is not working, and is seeking a correction to "y".
- Seeking-different-solution: the user wants to do "x" and has already found a working solution "y", but is seeking an alternative "z".

It is sometimes hard to discern a user's true intention; the line separating the categories can be thin and subject to interpretation. Naturally, some questions raise multiple concerns (i.e., they could correspond to multiple categories). However, this dataset mainly contains questions to which a single category could be clearly assigned.

Currently, all annotated questions are a subset of the [stackoverflow_python](https://huggingface.co/datasets/koutch/stackoverflow_python) dataset.

### Languages

The currently annotated questions are posts with the *python* tag. The questions are written in *English*.

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

- question_id: the unique id of the Stack Overflow post
- title: the title of the question
- question_body: the (HTML) content of the question
- question_type: the assigned category/type/label, one of:
  - "needtoknow"
  - "howto"
  - "debug"
  - "seeking"
  - "conceptual"
  - "other"
- question_date: the date on which the question was posted

### Data Splits

The train split contains 3,449 annotated questions and the test split 14 (see the loading sketch in the Usage section below).

## Dataset Creation

### Annotations

#### Annotation process

Previous research looked into mining natural language–code pairs from Stack Overflow. Two notable works yielded the [StaQC](https://arxiv.org/abs/1803.09371) and [CoNaLa](https://arxiv.org/abs/1805.08949) datasets. Part of this dataset reuses a subset of the manual annotations provided by the authors of those papers (available at [staqc](https://huggingface.co/datasets/koutch/staqc) and [conala](https://huggingface.co/datasets/neulab/conala)); those questions were annotated as belonging to the "how to do it" category.

To ease the annotation procedure, we used the [Argilla platform](https://docs.argilla.io/en/latest/index.html) and multiple iterations of [few-shot training with a SetFit model](https://docs.argilla.io/en/latest/tutorials/notebooks/labelling-textclassification-setfit-zeroshot.html#%F0%9F%A6%BE-Train-a-few-shot-SetFit-model); a minimal SetFit sketch is given in the Usage section below.

## Considerations for Using the Data

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]
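
## Usage

### Loading the data

The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the repository id `koutch/staqt` (implied by this card but not stated explicitly):

```python
from collections import Counter

from datasets import load_dataset

# Repository id assumed from this card; adjust if it differs.
dataset = load_dataset("koutch/staqt")

print(dataset)  # DatasetDict with a train (3449) and a test (14) split

# Inspect one annotated question.
example = dataset["train"][0]
print(example["title"])
print(example["question_type"])

# Distribution of the assigned categories in the train split.
print(Counter(dataset["train"]["question_type"]))
```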
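
### Few-shot classification sketch

The iterative labelling loop from the annotation process can be approximated with the [SetFit](https://github.com/huggingface/setfit) library. The sketch below follows the `SetFitTrainer` API used in the linked Argilla tutorial (SetFit v0.x); the base checkpoint, subset size, and hyperparameters are illustrative assumptions, not the exact configuration used to build this dataset:

```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

dataset = load_dataset("koutch/staqt")  # repository id assumed, see above

# Generic sentence-transformers checkpoint (illustrative choice).
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    # Few-shot regime: train on a small labelled sample only.
    train_dataset=dataset["train"].shuffle(seed=42).select(range(48)),
    eval_dataset=dataset["test"],
    loss_class=CosineSimilarityLoss,
    num_iterations=20,  # contrastive text pairs generated per example
    # Map this card's fields onto the "text"/"label" columns SetFit expects.
    column_mapping={"question_body": "text", "question_type": "label"},
)
trainer.train()
print(trainer.evaluate())

# The tuned model can then suggest labels for the next annotation round.
print(model(["How do I reverse a list in Python?"]))
```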