
Assemblage vcpkg DLL Dataset

This repository holds the public dataset for Assemblage. This copy of the dataset covers vcpkg DLL binaries along with their PDB files; please note that it does not include source code or comments about functions. Also note that the Assemblage code is published under the MIT license, while vcpkg distributes the source code and build scripts, so please obey the original source code's licenses.

About Assemblage

Assemblage is a cloud-based distributed system for building large, diverse corpora of binaries (x86-64 ELF and Windows PE executables) and the datasets it generates. You can find the paper at this link.

Dataset Details

This public copy of Assemblage data consists of 55k vcpkg DLL binaries, with the accompanying information stored in binaries.csv and functions.csv. Because binary files cannot be stored in CSVs, a separate multi-part archive (binaries.tar.xz.part**) is included; each file can be indexed by either its SHA-256 hash or the binary_path column.

We are no longer offering CSV files, as the data has grown too large to load on machines with less than 128 GB of memory.

The binaries.csv file lists each binary file's detailed provenance, e.g., the compiler version, optimization level, and source code link. You can index a binary file by the binary_path column, or look up its dataframe entry in reverse by the binary's hash.
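The two-way lookup described above can be sketched with pandas. The binary_path column comes from the card; the "sha256" column name is an assumption for illustration (check the CSV header for the actual hash column):

```python
import pandas as pd


def lookup_binary(df: pd.DataFrame, *, path: str = None, sha256: str = None) -> pd.DataFrame:
    """Find a binary's metadata row by binary_path, or in reverse by hash.

    The "sha256" column name is assumed; substitute the real hash column.
    """
    if path is not None:
        return df[df["binary_path"] == path]
    return df[df["sha256"] == sha256]


# df = pd.read_csv("binaries.csv")
# row = lookup_binary(df, path="some/dir/example.dll")
```

For a very large CSV, `pd.read_csv(..., chunksize=...)` lets the same filter run chunk by chunk without loading everything into memory.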

The functions.csv file records all functions indicated by each binary's PDB file, extracted with dia2dump. You can find detailed function-level information in this file, such as RVA addresses, source code, and comments.

Please use our SQLite database instead, which records every detail about the binary files, including function source code and comments.
