---
language:
  - en
license:
  - unknown
---

# JSON Schema Dataset

This dataset is a collection of JSON Schema documents gathered from GitHub using the Sourcegraph code search API.

## Step 1: Find a list of JSON Schema paths

The Sourcegraph code search API is used to find files with a `.json` extension that contain `{\n  "$schema": "https://json-schema.org/"`. This is somewhat restrictive, but it still finds a large number of schemas.

```sh
pipenv run python slurp.py --outfile repos.csv
```
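
For illustration, the core of this step looks roughly like the sketch below, which sends the search to Sourcegraph's GraphQL endpoint and writes one repository/path pair per row. The search string is simplified to a single line, and the CSV columns, token variable, and lack of pagination are assumptions for brevity; see `slurp.py` for the real logic.

```python
# Minimal sketch of a Sourcegraph GraphQL search for JSON Schema files.
# The search string is simplified; slurp.py's actual query may differ.
import csv
import os

import requests

SEARCH = r'file:\.json$ "$schema": "https://json-schema.org/" patterntype:literal count:all'
GRAPHQL = """
query ($query: String!) {
  search(query: $query, version: V2) {
    results {
      results {
        ... on FileMatch {
          repository { name }
          file { path }
        }
      }
    }
  }
}
"""

resp = requests.post(
    "https://sourcegraph.com/.api/graphql",
    json={"query": GRAPHQL, "variables": {"query": SEARCH}},
    headers={"Authorization": f"token {os.environ['SRC_ACCESS_TOKEN']}"},
)
resp.raise_for_status()

with open("repos.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["repository", "path"])
    for match in resp.json()["data"]["search"]["results"]["results"]:
        writer.writerow([match["repository"]["name"], match["file"]["path"]])
```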

## Step 2: Fetch the history information for each file

For each JSON Schema file, we fetch every revision. Before downloading the files themselves, we use the GitHub API to get the list of commit hashes that touched each file. The resulting data is saved to `commits.json`.

```sh
pipenv run python fetch_history.py > commits.json
```
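
The relevant GitHub endpoint is the commit-list API filtered by path. A minimal sketch, assuming `repos.csv` from the previous step has `repository` and `path` columns and ignoring pagination beyond the first 100 commits (the actual `fetch_history.py` may handle these differently):

```python
# Sketch: list every commit hash that touched each schema file.
# Assumes repository names look like "github.com/owner/repo".
import csv
import json
import os

import requests

HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}
commits = {}

with open("repos.csv") as f:
    for row in csv.DictReader(f):
        owner_repo = row["repository"].removeprefix("github.com/")
        resp = requests.get(
            f"https://api.github.com/repos/{owner_repo}/commits",
            params={"path": row["path"], "per_page": 100},  # first page only
            headers=HEADERS,
        )
        resp.raise_for_status()
        commits[f"{owner_repo}/{row['path']}"] = [c["sha"] for c in resp.json()]

print(json.dumps(commits, indent=2))
```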

## Step 3: Download the JSON Schema files

This script downloads every revision of each schema from GitHub and saves it into subfolders of the `data` directory.

```sh
./fetch_files.sh
```
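
Each revision is available from GitHub's raw content host at a URL built from the repository, commit hash, and path. A sketch of fetching a single revision follows; the directory layout under `data` is an assumption, and `fetch_files.sh` may organize files differently.

```python
# Sketch: download one revision of one schema from raw.githubusercontent.com.
import pathlib

import requests

def fetch_revision(owner_repo: str, sha: str, path: str,
                   out_dir: str = "data") -> None:
    url = f"https://raw.githubusercontent.com/{owner_repo}/{sha}/{path}"
    resp = requests.get(url)
    resp.raise_for_status()
    # One subfolder per repository and commit so revisions never collide.
    dest = pathlib.Path(out_dir, owner_repo.replace("/", "__"), sha,
                        path.replace("/", "__"))
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(resp.content)
```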

## Step 4: Validate each JSON Schema

The following script reads each schema in the `data` directory and confirms that it is a valid JSON Schema. A copy of every valid schema is placed in the `valid_data` directory. Note that schemas are parsed as JSON5 to be more permissive about the accepted syntax, but the final schemas are written as standard JSON.

```sh
pipenv run python validate_schemas.py
```
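
The core check can be done with the `json5` and `jsonschema` packages: parse permissively, validate the document against the metaschema for its declared draft, then re-serialize as plain JSON. A sketch under those assumptions (error handling and the exact output layout in `validate_schemas.py` may differ):

```python
# Sketch: validate one schema file and copy it to valid_data if it passes.
import json
import pathlib

import json5
import jsonschema

def validate_one(src: pathlib.Path) -> bool:
    try:
        schema = json5.loads(src.read_text())  # permissive JSON5 parse
        # validator_for picks the draft from "$schema"; check_schema
        # validates the document against that draft's metaschema.
        jsonschema.validators.validator_for(schema).check_schema(schema)
    except Exception:
        return False
    dest = pathlib.Path("valid_data") / src.relative_to("data")
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(json.dumps(schema, indent=2))  # standard JSON output
    return True
```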

## Step 5: Retrieve additional metadata

We also collect language information using fastText and fetch each repository's license from the GitHub API.

```sh
pipenv run python get_languages.py > languages.json
pipenv run python get_licenses.py > licenses.json
```
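
Roughly, these two scripts combine fastText language identification with GitHub's license endpoint, as in the sketch below. The model file name, token variable, and how text is extracted from each schema are assumptions; `get_languages.py` and `get_licenses.py` may differ.

```python
# Sketch: detect a schema's natural language with fastText and look up the
# repository license via the GitHub API.
import os

import fasttext
import requests

# Pretrained language-identification model from https://fasttext.cc
model = fasttext.load_model("lid.176.bin")

def detect_language(text: str) -> str:
    labels, _probs = model.predict(text.replace("\n", " "))
    return labels[0].removeprefix("__label__")  # e.g. "en"

def fetch_license(owner_repo: str) -> str | None:
    resp = requests.get(
        f"https://api.github.com/repos/{owner_repo}/license",
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
    )
    if resp.status_code == 404:
        return None  # no detectable license in the repository
    resp.raise_for_status()
    return resp.json()["license"]["spdx_id"]
```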

## Step 6: Split into train, test, and validation

Finally, the data is split into training, test, and validation sets. Schemas from the same GitHub organization are always grouped into the same set. Schemas can also be checked for similarity so that very similar schemas end up grouped together.

```sh
pipenv run python train_split.py
```
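
The grouping constraint amounts to splitting at the organization level rather than the file level. A sketch of that idea follows; the 80/10/10 ratios, the seed, and the similarity grouping in `train_split.py` are assumptions.

```python
# Sketch: split schemas so that all files from one GitHub organization land
# in the same set.
import random
from collections import defaultdict

def split_by_org(paths: list[str], seed: int = 42) -> dict[str, list[str]]:
    by_org = defaultdict(list)
    for path in paths:
        by_org[path.split("/")[0]].append(path)  # "org/repo/..." layout assumed

    orgs = sorted(by_org)
    random.Random(seed).shuffle(orgs)
    cut_train, cut_test = int(0.8 * len(orgs)), int(0.9 * len(orgs))

    splits = {"train": [], "test": [], "validation": []}
    for i, org in enumerate(orgs):
        name = "train" if i < cut_train else "test" if i < cut_test else "validation"
        splits[name].extend(by_org[org])
    return splits
```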