---
license: other
language:
- code
- en
task_categories:
- question-answering
- text-generation
- text2text-generation
tags:
- code
viewer: true
pretty_name: StackOverflow Posts Markdown
size_categories:
- 10M<n<100M
---

# StackOverflow Posts Markdown

![StackOverflow Logo](https://stackoverflow.design/assets/img/logos/so/logo-stackoverflow.png)

## Dataset Summary

This dataset contains all posts submitted to StackOverflow before the 14th of June 2023, formatted as **Markdown text**.
The dataset contains ~60 million posts, totaling ~35 GB in size and ~65 billion characters of text.
The data is sourced from the [Internet Archive StackExchange Data Dump](https://archive.org/download/stackexchange).

## Dataset Structure

Each record corresponds to one post of a particular type.
The original ordering from the data dump is not exactly preserved, due to parallelism in the script used to process the dump.
The Markdown content of each post is contained in the `Body` field; the license for a particular post is contained in the `ContentLicense` field.

### Data Fields

```typescript
{
    Id: long,
    PostTypeId: long, // 1=Question, 2=Answer, 3=Orphaned tag wiki, 4=Tag wiki excerpt, 5=Tag wiki, 6=Moderator nomination, 7=Wiki placeholder, 8=Privilege wiki
    AcceptedAnswerId: long | null, // only present if PostTypeId=1
    ParentId: long | null, // only present if PostTypeId=2
    Score: long,
    ViewCount: long | null,
    Body: string | null,
    Title: string | null,
    ContentLicense: string | null,
    FavoriteCount: long | null,
    CreationDate: string | null,
    LastActivityDate: string | null,
    LastEditDate: string | null,
    LastEditorUserId: long | null,
    OwnerUserId: long | null,
    Tags: array<string> | null
}
```

Also consider the [StackExchange Datadump Schema Documentation](https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede), as all fields have analogs in the original dump format.

## How to use?

```python
from datasets import load_dataset

# predownload full dataset
ds = load_dataset('mikex86/stackoverflow-posts', split='train')

# dataset streaming (will only download the data as needed)
ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)

for sample in iter(ds):
    print(sample["Body"])
```

A sketch of filtering the stream by post type is shown at the end of this card.

## How is the text stored?

The original Data Dump formats the `Body` field as HTML, using tags such as `<code>`, `<ul>`, and `<p>`. For this dataset, that HTML was converted to the Markdown text found in the `Body` field.
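Since every record carries a `PostTypeId`, and answers point back to their question via `ParentId`, the stream can be sliced without downloading the full dump. Below is a minimal sketch using the `datasets` streaming API; the predicate and the sample size are illustrative, not part of this dataset card:

```python
from datasets import load_dataset

ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)

# Keep only questions (PostTypeId == 1). Under streaming, `filter` is lazy:
# records are dropped as they are downloaded, nothing is materialized.
questions = ds.filter(lambda post: post["PostTypeId"] == 1)

# Inspect a handful of question titles and the start of their Markdown bodies.
for q in questions.take(3):
    print(q["Title"])
    print((q["Body"] or "")[:200])  # Body is nullable per the schema above
```

Pairing answers with their questions works the same way, by matching `ParentId` (or `AcceptedAnswerId`) against question `Id`s.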