Infinite Blue Skies

A streaming dataset providing real-time access to public posts from the Bluesky social network via the AtProto API.

Dataset Summary

Infinite Blue Skies provides streaming access to public posts from the Bluesky social network through the AtProto API. The dataset is particularly useful for researchers and developers interested in social media analysis, content moderation, language modeling, and trend detection.

Supported Tasks and Leaderboards

The dataset can be used for various tasks including:

  • Text Generation: Training language models on social media content
  • Text Classification: Content moderation, topic classification, sentiment analysis
  • Social Media Analysis: Trend detection, user behavior analysis
  • Content Analysis: Hashtag analysis, URL pattern analysis (see the sketch below)
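
As a concrete example of the content-analysis tasks above, hashtags and URLs can be pulled out of a post's text field with a couple of regular expressions. This is a minimal sketch; the patterns are illustrative only and do not reproduce Bluesky's own facet parsing:

import re

# Illustrative patterns only; Bluesky's official facet grammar is richer
HASHTAG_RE = re.compile(r"#\w+")
URL_RE = re.compile(r"https?://\S+")

def extract_features(text: str) -> dict:
    """Pull hashtags and URLs out of a post's text."""
    return {
        "hashtags": HASHTAG_RE.findall(text),
        "urls": URL_RE.findall(text),
    }

print(extract_features("Clear skies today #weather https://bsky.app"))
# {'hashtags': ['#weather'], 'urls': ['https://bsky.app']}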

Dataset Structure

Data Instances

Each instance in the dataset represents a Bluesky post with the following fields:

{
    'uri': 'at://did:plc:..../app.bsky.feed.post/...',
    'cid': 'baf...',
    'text': 'The content of the post...',
    'created_at': '2024-03-21T12:34:56.789Z',
    'author_did': 'did:plc:...',
}

Data Fields

  • uri: AT URI that uniquely identifies the post
  • cid: Content identifier (CID) of the post record
  • text: Text content of the post
  • created_at: ISO 8601 timestamp of when the post was created (see the parsing sketch below)
  • author_did: Decentralized identifier (DID) of the post's author
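
These fields can be consumed with the standard library alone. Below is a minimal sketch, assuming the formats shown in the sample instance above; the record itself is made up for illustration:

from datetime import datetime

# Hypothetical record in the shape yielded by the dataset
post = {
    "uri": "at://did:plc:abc123/app.bsky.feed.post/3kexample",
    "cid": "baf...",
    "text": "Hello Bluesky! #intro",
    "created_at": "2024-03-21T12:34:56.789Z",
    "author_did": "did:plc:abc123",
}

# created_at is ISO 8601 with a trailing "Z"; rewrite it as "+00:00" so that
# datetime.fromisoformat also works on Python versions before 3.11
created = datetime.fromisoformat(post["created_at"].replace("Z", "+00:00"))

# An AT URI has the form at://<author DID>/<collection>/<record key>,
# so the record key is the last path segment
record_key = post["uri"].rstrip("/").split("/")[-1]

print(created.isoformat(), record_key)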

Data Splits

This is a streaming dataset and does not have traditional splits. Data is accessed in real-time through an iterator.

How to Use

This dataset is designed to be used with the Hugging Face Datasets library. Here's how to get started:

from datasets import load_dataset

dataset = load_dataset(
    "serpxe/infinite_blue_skies",
    streaming=True,           # stream posts live instead of downloading a fixed snapshot
    trust_remote_code=True,   # required: this repo uses a custom loading script
    split="train",
    batch_size=5,             # forwarded to the loading script; posts fetched per API request
)

# Create the iterator once; calling iter(dataset) inside the loop would
# restart the stream on every iteration
iterable_dataset = iter(dataset)

for i in range(10):
    print(next(iterable_dataset))
    # Prints 10 posts in total; batch_size above only controls how many
    # posts the loader requests from the API at a time
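
Because streaming=True yields an IterableDataset, lazy transforms such as filter and take can be chained onto the stream before iterating. A small sketch, reusing the dataset object from the snippet above (the filter predicate is purely illustrative):

# Lazily keep only posts whose text contains a hashtag, then pull a
# bounded sample from the live stream
hashtag_posts = dataset.filter(lambda post: "#" in post["text"])

for post in hashtag_posts.take(5):
    print(post["created_at"], post["text"][:80])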