This dataset is a curated subset of the original arXiv dataset, with each entry enriched by a 256-dimensional embedding vector. The embeddings are generated with OpenAI's "text-embedding-3-small" model. For each data point, the title, author(s), and abstract are concatenated into a single string, which is then passed to the embedding model. This approach captures the semantic essence of each document, supporting tasks such as similarity search, clustering, and content-based recommendation. The dataset is designed for advanced machine learning applications that require an understanding of document content at a granular level.
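The pipeline above can be sketched as follows. This is a minimal, hedged illustration, not the exact generation script: the field names, the `build_input_text` helper, and the use of the OpenAI Python SDK (with its `dimensions` parameter, which shortens the model's native 1536-dimensional output to 256) are assumptions consistent with the description. The cosine-similarity helper shows how the resulting vectors might be compared for similarity search.

```python
import math


def build_input_text(title: str, authors: str, abstract: str) -> str:
    # Concatenate title, author(s), and abstract into one string,
    # as described above. The exact separator used originally is unknown;
    # a single space is assumed here.
    return " ".join([title, authors, abstract])


def embed(text: str) -> list[float]:
    # Hypothetical embedding call; requires the `openai` package and an
    # API key in the environment. Imported lazily so the rest of this
    # sketch runs without the SDK installed.
    from openai import OpenAI

    client = OpenAI()
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=text,
        dimensions=256,  # shorten the native 1536-dim output to 256
    )
    return response.data[0].embedding


def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity for comparing two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

A similarity search over the dataset would then embed a query string the same way and rank entries by `cosine_similarity` against their stored 256-dimensional vectors.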