Source data files are not accessible: 403 Forbidden error

#3
by albertvillanova (HF staff) - opened
code-search-net org

The CodeSearchNet repo has been archived (1 Apr 2023) and its maintainers have removed access to their S3 data files.

For example, the URL https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/go.zip
gives

<Error>
 <Code>AccessDenied</Code>
 <Message>Access Denied</Message>
 <RequestId>7HBEECDGJYHA1V44</RequestId>
 <HostId>
  A/g5WejF+7h2Y9tF1Rl0YS34yu8gw2s8p7GzLgFJ6+KD4o2qmw/vVN0u4UTN6n3h+wNKYZBbOBk=
 </HostId>
</Error>
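(For reference, a minimal way to reproduce the error from Python; using requests here is just one option:)

import requests

resp = requests.get("https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/go.zip")
print(resp.status_code)  # 403
print(resp.text)         # the AccessDenied XML shown above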

Hey, is there a workaround to this issue? We started a project using this dataset but only downloaded the Python examples, so we're missing all the other examples.

Edit: BTW, I believe we did find a workaround by downloading the data from Kaggle: https://www.kaggle.com/datasets/omduggineni/codesearchnet

Here are some functions I hacked together to effectively load the same dataset:

import glob
from pathlib import Path

import kaggle
from datasets import load_dataset


def download_dataset_from_kaggle(path="data"):
    """
    Download the CodeSearchNet dataset from Kaggle.
    Make sure to have the Kaggle API token in ~/.kaggle/kaggle.json

    Args:
        path (str, optional): Directory to download the dataset into. Defaults to "data".
    """
    path = Path(path)
    path.mkdir(parents=True, exist_ok=True)  # make sure the target directory exists
    kaggle.api.authenticate()
    kaggle.api.dataset_download_files(
        "omduggineni/codesearchnet", path=path, unzip=True
    )


def load_local_dataset(lang="all", path="data"):
    """
    Load a local dataset from the downloaded Kaggle dataset.

    Args:
        lang (str): The language to be used for the dataset.
        path (str, optional): Path to the downloaded dataset. Defaults to "data".

    Returns:
        Dataset: dataset loaded from local files
    """
    path = Path(path)

    if lang != "all":
        # Read the downloaded dataset
        path = path / lang / lang / "final/jsonl"
        dataset = load_dataset(
            "json",
            data_files={
                "train": glob.glob(path.as_posix() + "/train/*.jsonl"),
                "validation": glob.glob(path.as_posix() + "/valid/*.jsonl"),
                "test": glob.glob(path.as_posix() + "/test/*.jsonl"),
            },
        )
    else:
        train_files = glob.glob(path.as_posix() + "/**/train/*.jsonl", recursive=True)
        valid_files = glob.glob(path.as_posix() + "/**/valid/*.jsonl", recursive=True)
        test_files = glob.glob(path.as_posix() + "/**/test/*.jsonl", recursive=True)
        dataset = load_dataset(
            "json",
            data_files={
                "train": train_files,
                "validation": valid_files,
                "test": test_files,
            },
        )

    return dataset

Be warned, though: some of the features might have different labels now.
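For completeness, a minimal usage sketch of the two helpers above (assuming the Kaggle API token is configured as described in the docstring):

# Hypothetical end-to-end usage of the helpers above
download_dataset_from_kaggle(path="data")                  # fetch and unzip the Kaggle mirror
dataset = load_local_dataset(lang="python", path="data")   # or lang="all" for every language
print(dataset)              # DatasetDict with train/validation/test splits
print(dataset["train"][0])  # inspect one example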

code-search-net org

We have contacted one of the authors of the dataset to ask them if we could get their source data files and host them here on the Hugging Face Hub.

I'll keep you informed.

CC: @hamel

Good work, bro

@albertvillanova I found the new URLs. They were updated in Microsoft's CodeXGLUE repo.

The URL pattern is:
https://zenodo.org/record/7857872/files/{go,java,javascript,php,python,ruby}.zip

There is no "all.zip". The downloads have been taking forever so far, so it would sure be nice if HF itself hosted copies.
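For reference, a small sketch to fetch all six archives from that Zenodo record (urlretrieve is just one way to do it; the file names follow the pattern above):

from urllib.request import urlretrieve

langs = ["go", "java", "javascript", "php", "python", "ruby"]
for lang in langs:
    url = f"https://zenodo.org/record/7857872/files/{lang}.zip"
    print(f"Downloading {url} ...")
    urlretrieve(url, f"{lang}.zip")  # the archives are large, so this can take a while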

@albertvillanova: I've created a PR (https://huggingface.co/datasets/code_search_net/discussions/6) that updates the link from S3 to Zenodo, as Marc pointed out above. On my local machine, the entire dataset (6 files) downloads in about 35 minutes. I am not sure how long it took to download from the S3 bucket, nor how long it will take if the dataset is hosted on the Hub itself, but this seems to be an easy immediate fix.

For now, we could do this:

!wget https://zenodo.org/record/7857872/files/python.zip
!unzip python.zip

from datasets import load_dataset

data_files = {
    "train": [
        "python/final/jsonl/train/python_train_0.jsonl.gz",
        "python/final/jsonl/train/python_train_1.jsonl.gz",
    ]
}
raw_dataset = load_dataset("json", data_files=data_files)
raw_dataset["train"][0]

code-search-net org

We are finally hosting the data files on the Hugging Face Hub.

You can now load the dataset as usual:

ds = load_dataset("code_search_net", "python")

This issue is fixed by: #7

albertvillanova changed discussion status to closed

A question regarding the licenses:
Since a different license exists for each data source in the python_licenses.pkl file, are the source data free to use?
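(For context, this is roughly how I'm reading that file; the location of the pickle and the exact structure of the pickled object are assumptions on my part:)

import pickle

# Adjust the path to wherever python_licenses.pkl ends up after unzipping python.zip
with open("python/python_licenses.pkl", "rb") as f:
    licenses = pickle.load(f)

print(type(licenses))
# If it is a dict-like mapping of repository -> license info, peek at a few entries
if hasattr(licenses, "items"):
    for repo, lic in list(licenses.items())[:5]:
        print(repo, lic)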
