Dataset Card for Possible Police Agency Homepage URLs
This dataset aggregates potential homepage URLs for police agencies, paired with Google Search snippets that describe each homepage. It aims to facilitate research, development, and verification tasks related to digital public safety resources.
Dataset Details
This dataset compiles ten pairs of URLs and corresponding Google Search snippets for each police agency investigated.
Dataset Description
- Curated by: Police Data Accessibility Project
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
Dataset Sources [optional]
Uses
Direct Use
This dataset is suitable for use in projects that require the identification or verification of official police agency homepages, such as data enrichment in research databases, verification tasks for public safety applications, and training datasets for machine learning models focused on URL classification or information retrieval.
Out-of-Scope Use
This dataset is not intended for use in operational systems without further verification of URL authenticity. It should not be used as a sole source for critical applications that require up-to-date and officially verified data.
Dataset Structure
Each entry in the dataset represents a police agency, identified by a unique agency ID and name, and includes a list of ten URL and snippet pairs that potentially correspond to the agency's official homepage.
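Concretely, an entry can be pictured as the following sketch. The field names used here (`agency_id`, `agency_name`, `results`) are illustrative assumptions, not guaranteed to match the actual column names in the released files:

```python
# Illustrative sketch of one dataset entry. Field names and values are
# hypothetical; the real schema should be checked against the data files.
example_entry = {
    "agency_id": "example-0001",              # unique agency identifier
    "agency_name": "Example Police Department",
    "results": [                              # up to ten (url, snippet) pairs
        {
            "url": "https://www.example-pd.gov/",
            "snippet": "Official homepage of the Example Police Department...",
        },
        # ... nine more url/snippet pairs in the full record ...
    ],
}

# Basic shape checks on the sketch.
assert all(key in example_entry for key in ("agency_id", "agency_name", "results"))
assert {"url", "snippet"} <= set(example_entry["results"][0])
```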
Dataset Creation
Curation Rationale
The dataset was created to address the need for a comprehensive and accessible repository of potential police agency homepage URLs, to support research, development, and verification efforts in public safety and law enforcement domains.
Source Data
Data Collection and Processing
Data was collected using automated scripts that performed Google Searches for each police agency and extracted the top ten URLs and their corresponding snippets.
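The pairing step described above can be sketched as follows. This is a minimal illustration of turning ranked search results into flat rows, assuming a hypothetical `build_rows` helper; it is not the curators' actual collection script:

```python
from typing import Iterable, List, Tuple


def build_rows(agency_id: str, agency_name: str,
               search_results: Iterable[Tuple[str, str]],
               n: int = 10) -> List[dict]:
    """Turn ranked (url, snippet) search results into flat dataset rows,
    keeping only the top ``n`` results per agency."""
    rows = []
    for rank, (url, snippet) in enumerate(search_results, start=1):
        if rank > n:
            break
        rows.append({
            "agency_id": agency_id,
            "agency_name": agency_name,
            "rank": rank,
            "url": url,
            "snippet": snippet,
        })
    return rows


# Hypothetical results for one agency; real snippets come from Google Search.
results = [
    ("https://www.example-pd.gov/", "Official homepage ..."),
    ("https://en.wikipedia.org/wiki/Example_PD", "Wikipedia article ..."),
]
rows = build_rows("example-0001", "Example Police Department", results)
```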
Who are the source data producers?
The data was produced by automated scripts designed and implemented by the dataset curators, with manual oversight to ensure quality and relevance.
Annotations [optional]
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
The dataset does not contain personal or sensitive information. URLs and snippets were collected from public Google Search results.
Bias, Risks, and Limitations
The dataset may reflect biases inherent in Google Search ranking, and URLs change over time. Users should be aware that a listed candidate may no longer point to an agency's current official homepage.
Recommendations
Users are encouraged to verify that URLs are current and authentic before relying on this dataset in critical applications, and to account for the potential biases of search engine results.
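One cheap way to triage candidates before manual review is an offline heuristic on the URL itself. The sketch below assumes that official US agency sites commonly use a handful of TLDs; this is only a pre-filter, not a substitute for a liveness check or manual verification:

```python
from urllib.parse import urlparse

# Assumption: TLDs commonly used by US government agency sites.
OFFICIAL_TLDS = (".gov", ".us", ".org")


def looks_official(url: str) -> bool:
    """Offline heuristic: well-formed https URL whose host ends in a TLD
    commonly used by US agencies. A True result still needs verification."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.netloc:
        return False
    return parsed.netloc.lower().endswith(OFFICIAL_TLDS)


print(looks_official("https://www.example-pd.gov/"))     # True
print(looks_official("http://example-pd.gov/"))          # False: not https
print(looks_official("https://example-pd.example.com"))  # False: unlikely TLD
```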
Citation [optional]
BibTeX:
@misc{possible_police_agency_homepage_urls,
  author    = {Police Data Accessibility Project},
  title     = {Possible Police Agency Homepage URLs Dataset},
  year      = {2024},
  publisher = {GitHub/HuggingFace}
}
APA:
Police Data Accessibility Project. (2024). Possible Police Agency Homepage URLs Dataset. GitHub/HuggingFace.
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Dataset Card Authors [optional]
[More Information Needed]
Dataset Card Contact
[More Information Needed]