---
license: mit
task_categories:
- text-classification
- feature-extraction
language:
- en
size_categories:
- 10K<n<100K
---
|
|
|
# A cleaned dataset from [paperswithcode.com](https://paperswithcode.com/) |
|
*Last dataset update: July 2023* |
|
|
|
This is a cleaned-up dataset obtained from [paperswithcode.com](https://paperswithcode.com/) through their [API](https://paperswithcode.com/api/v1/docs/). It comprises around 56K papers, carefully categorized into roughly 3K tasks and 16 areas. Each paper record includes arXiv and NIPS IDs as well as the title, abstract, and other metadata.
|
It can be used to train text classifiers that focus on specific AI and ML methods and frameworks.
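As a toy illustration of such a classifier, the sketch below assigns a task label to an abstract with a bag-of-words nearest-centroid approach. The abstracts and task labels are made up for the example; the real training data lives in papers_train.csv, whose exact column names are not shown here.

```python
from collections import Counter

def bow(text):
    """Lowercased bag-of-words counts."""
    return Counter(text.lower().split())

# Hypothetical training abstracts grouped by task label (not real rows).
train = {
    "image-classification": [
        "deep convolutional networks for image classification on imagenet",
        "residual learning improves image recognition accuracy",
    ],
    "machine-translation": [
        "neural machine translation with attention over source sentences",
        "transformer models for translation between language pairs",
    ],
}

# One centroid (summed bag-of-words) per task.
centroids = {task: sum((bow(t) for t in texts), Counter())
             for task, texts in train.items()}

def predict(abstract):
    """Assign the task whose centroid has the largest word overlap."""
    q = bow(abstract)
    return max(centroids,
               key=lambda task: sum(q[t] * centroids[task][t] for t in q))

print(predict("attention based translation of sentences"))  # machine-translation
```

A real classifier trained on the full 56K abstracts would of course use a stronger model, but the data layout (abstract text in, task label out) is the same.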
|
|
|
### Contents |
|
It contains the following tables: |
|
|
|
- papers.csv (around 56K rows)

- papers_train.csv (80% of the papers)

- papers_test.csv (20% of the papers)
|
- tasks.csv |
|
- areas.csv |
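An 80/20 train/test split like the one above can be reproduced with a seeded shuffle. The seed and exact procedure the dataset authors used are not documented here, so this is purely illustrative:

```python
import random

def split_rows(rows, test_fraction=0.2, seed=42):
    """Shuffle a list of rows and split it into (train, test)."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # seeded for reproducibility
    n_test = int(len(rows) * test_fraction)
    return rows[n_test:], rows[:n_test]

papers = list(range(56000))          # stand-in for the ~56K paper rows
train_rows, test_rows = split_rows(papers)
print(len(train_rows), len(test_rows))  # 44800 11200
```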
|
|
|
### Special features
|
UUIDs were added to the dataset because the PapersWithCode IDs (pwc_ids) are not guaranteed to be unique. Note that these UUIDs may change in future versions of the dataset.
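One way such a UUID column could be generated is sketched below. Whether the dataset uses random `uuid4` values or name-based ones is an assumption; `uuid5` over the pwc_id is shown because it is reproducible, even though the card notes the actual UUIDs may change between versions.

```python
import uuid

NAMESPACE = uuid.NAMESPACE_URL  # any fixed namespace works for uuid5

def paper_uuid(pwc_id):
    """Deterministic UUID derived from a PapersWithCode ID (illustrative)."""
    return str(uuid.uuid5(NAMESPACE, f"https://paperswithcode.com/paper/{pwc_id}"))

print(paper_uuid("attention-is-all-you-need"))
```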
|
In addition, embeddings were calculated for all 56K papers using the brilliant [SciNCL](https://huggingface.co/malteos/scincl) model, along with dimensionality-reduced 2D coordinates computed with UMAP.
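The precomputed embeddings can be used directly for nearest-neighbour lookup, e.g. to find related papers. The paper IDs and 4-dimensional vectors below are made-up stand-ins for the real SciNCL embeddings, whose column name and dimensionality are not documented in this card:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

# Hypothetical embeddings; real SciNCL vectors are much higher-dimensional.
embeddings = {
    "paper-a": [0.9, 0.1, 0.0, 0.2],
    "paper-b": [0.8, 0.2, 0.1, 0.1],
    "paper-c": [0.0, 0.9, 0.8, 0.0],
}

def most_similar(query_id):
    """Return the other paper with the highest cosine similarity."""
    q = embeddings[query_id]
    others = (p for p in embeddings if p != query_id)
    return max(others, key=lambda p: cosine(q, embeddings[p]))

print(most_similar("paper-a"))  # paper-b
```

For large-scale search over all 56K vectors, an approximate-nearest-neighbour index would be the usual choice, but the similarity computation is the same.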
|
|
|
There is also a simple Python notebook that was used to obtain and restructure the dataset.