---
language:
- en
size_categories:
- 1B<n<10B
task_categories:
- text-generation
pretty_name: AgentSearch-V1
configs:
- config_name: default
  data_files:
  - split: train
    path: "**/*.parquet"
---
# Important Notice
**This dataset is just a sample. The full dataset will be uploaded after New Year's 2024. This early release accompanies today's launch of Agent Search, but the data is not yet finalized.**
### Getting Started
The AgentSearch-V1 dataset includes over one billion embeddings sourced from more than 50 million high-quality documents. This collection covers the majority of content from sources such as arXiv, Wikipedia, and Project Gutenberg, along with quality-filtered Common Crawl data.
To access the AgentSearch-V1 dataset, you can stream it from HuggingFace with the following Python code:
```python
from datasets import load_dataset

# To stream the entire dataset:
ds = load_dataset("SciPhi/AgentSearch-V1", data_files="**/*", streaming=True)

# Optional: stream just the "arxiv" subset
ds = load_dataset("SciPhi/AgentSearch-V1", data_files="arxiv/*", streaming=True)
```
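A streaming dataset is an ordinary Python iterable, so standard itertools patterns apply. Below is a minimal sketch of pulling the first few records; a stand-in generator replaces the real `load_dataset(..., streaming=True)` call so the snippet runs offline, and the field names follow the record structure documented in the Dataset Structure section.

```python
from itertools import islice


def take(stream, n):
    """Collect the first n records from a (possibly unbounded) record stream."""
    return list(islice(stream, n))


# Stand-in for the real streamed split, e.g.
# load_dataset("SciPhi/AgentSearch-V1", streaming=True)["train"]
def fake_stream():
    i = 0
    while True:
        yield {"url": f"https://example.com/{i}", "title": f"doc {i}"}
        i += 1


first_three = take(fake_stream(), 3)
print([rec["title"] for rec in first_three])  # ['doc 0', 'doc 1', 'doc 2']
```

With the real dataset, `take(ds["train"], 3)` would fetch only the first few parquet shards rather than downloading everything.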
---
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/SciPhi-AI/agent-search). [Synthesizer](https://github.com/SciPhi-AI/synthesizer) offers direct integration with AgentSearch and top LLM providers.
### Dataset Summary
We take a similar approach to RedPajama-v1 and divide AgentSearch into a number of categories.

| Dataset        | Token Count |
|----------------|-------------|
| Books          | TBD         |
| ArXiv          | TBD         |
| Wikipedia      | TBD         |
| StackExchange  | TBD         |
| OpenMath       | TBD         |
| Filtered Crawl | TBD         |
| Total          | TBD         |
### Languages
English.
## Dataset Structure
The raw dataset structure is as follows:
```json
{
    "url": ...,
    "title": ...,
    "metadata": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
    "text_chunks": ...,
    "embeddings": ...,
    "dataset": "github" | "books" | "arxiv" | "wikipedia" | "stackexchange" | "open-math" | "filtered-rp2"
}
```
The indexed dataset can be downloaded directly and is structured as a Qdrant database dump; each entry carries the metadata `{"url", "vector"}`. In addition, there is a corresponding SQLite database that maps each URL onto its embeddings, text chunks, and other metadata.
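The pairing of the two artifacts can be approximated locally: a vector lookup over `{url, vector}` entries, followed by a SQLite query mapping the winning URL back to its text. Below is a minimal sketch using pure-Python cosine similarity and the stdlib `sqlite3` module; the table and column names are assumptions for illustration, not the dump's actual schema.

```python
import math
import sqlite3


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


# Toy stand-in for the Qdrant dump: {url, vector} entries.
index = [
    {"url": "https://arxiv.org/abs/1", "vector": [1.0, 0.0]},
    {"url": "https://en.wikipedia.org/A", "vector": [0.0, 1.0]},
]

# Toy stand-in for the SQLite mapping from URLs to text chunks.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chunks (url TEXT, chunk TEXT)")
conn.executemany("INSERT INTO chunks VALUES (?, ?)", [
    ("https://arxiv.org/abs/1", "an arXiv abstract chunk"),
    ("https://en.wikipedia.org/A", "a Wikipedia paragraph chunk"),
])


def search(query_vec, k=1):
    """Rank index entries by similarity, then resolve each URL to its text chunk."""
    hits = sorted(index, key=lambda e: cosine(query_vec, e["vector"]), reverse=True)[:k]
    return [(h["url"],
             conn.execute("SELECT chunk FROM chunks WHERE url = ?",
                          (h["url"],)).fetchone()[0])
            for h in hits]


print(search([0.9, 0.1]))  # best match is the arXiv entry
```

In practice Qdrant performs the approximate nearest-neighbor step itself; this sketch only illustrates how the vector index and the URL-keyed SQLite store fit together.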
## Dataset Creation
This dataset was created as a step towards making humanity's most important knowledge locally searchable and LLM-optimal. It was built by filtering, cleaning, and augmenting publicly available datasets.
To cite our work, please use the following:
```
@software{SciPhi2023AgentSearch,
  author = {SciPhi},
  title = {AgentSearch [ΨΦ]: A Comprehensive Agent-First Framework and Dataset for Webscale Search},
  year = {2023},
  url = {https://github.com/SciPhi-AI/agent-search}
}
```
### Source Data
```
@ONLINE{wikidump,
  author = "Wikimedia Foundation",
  title = "Wikimedia Downloads",
  url = "https://dumps.wikimedia.org"
}
```
```
@misc{paster2023openwebmath,
  title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
  author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
  year={2023},
  eprint={2310.06786},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
```
```
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = {April},
  year = {2023},
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
### License
Please refer to the licenses of the data subsets you use.
* [Open-Web (Common Crawl Foundation Terms of Use)](https://commoncrawl.org/terms-of-use/full/)
* Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)