---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 1929035
    num_examples: 5000
  - name: validation
    num_bytes: 1926717
    num_examples: 5000
  - name: test
    num_bytes: 1926477
    num_examples: 5000
  download_size: 5840409
  dataset_size: 5782229
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# Random ASCII Dataset
This dataset contains random sequences of ASCII characters, with "train," "validation," and "test" splits, designed to simulate text-like structures using all printable ASCII characters. Each sequence consists of pseudo-randomly generated "words" of various lengths, separated by spaces to mimic natural language text.
## Dataset Details
- Splits: Train, Validation, and Test
- Number of sequences:
  - Train: 5000 sequences
  - Validation: 5000 sequences
  - Test: 5000 sequences
- Sequence length: a target of 512 characters per sequence (actual lengths vary, since each sequence is assembled from words of 3-10 random characters)
- Character pool: all printable ASCII characters (Python's `string.printable`): letters, digits, punctuation, and whitespace; see the snippet below.
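For reference, the character pool is exactly Python's `string.printable`, as used in the generation script further down. The snippet below is an illustrative aside (not part of the original card) that just inspects that pool:

```python
import string

# string.printable is the character pool used to generate the dataset
print(len(string.printable))  # 100 characters
print(string.printable)       # digits, letters, punctuation, and whitespace
```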
## Sample Usage
To load this dataset in Python, you can use the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("brando/random-ascii-dataset")

# Access the train, validation, and test splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]

# Print a sample
print(train_data[0]["text"])
```
## Example Data
Below are examples of random sequences generated in this dataset:

```text
# Example 1:
"!Q4$^V3w L@#12 Vd&$%4B+ (k#yFw! [7*9z"

# Example 2:
"T^&3xR f$xH&ty ^23M* qW@# Lm5&"

# Example 3:
"b7$W %&6Zn!!R xT&8N z#G m93T +%^0"
```
## License
This dataset is released under the Apache License 2.0. You are free to use, modify, and distribute it under the terms of that license.
## Citation
```bibtex
@misc{miranda2021ultimateutils,
  title={Ultimate Utils - the Ultimate Utils Library for Machine Learning and Artificial Intelligence},
  author={Brando Miranda},
  year={2021},
  url={https://github.com/brando90/ultimate-utils},
  note={Available at: \url{https://www.ideals.illinois.edu/handle/2142/112797}},
  abstract={Ultimate Utils is a comprehensive library providing utility functions and tools to facilitate efficient machine learning and AI research, including efficient tensor manipulations and gradient handling with methods such as `detach()` for creating gradient-free tensors.}
}
```
## Code that generated it
```python
# ref: https://chatgpt.com/c/671ff56a-563c-8001-afd5-94632fe63d67
import os
import random
import string

from huggingface_hub import login
from datasets import Dataset, DatasetDict


# Function to load the Hugging Face API token from a file
def load_token(file_path: str) -> str:
    """Load API token from a specified file path."""
    with open(os.path.expanduser(file_path)) as f:
        return f.read().strip()


# Function to log in to Hugging Face using a token
def login_to_huggingface(token: str) -> None:
    """Authenticate with Hugging Face Hub."""
    login(token=token)
    print("Login successful")


# Function to generate a random word of a given length
def generate_random_word(length: int, character_pool: str) -> str:
    """Generate a random word of specified length from a character pool."""
    return "".join(random.choice(character_pool) for _ in range(length))


# Function to generate a single random sentence with "words" of random lengths
def generate_random_sentence(sequence_length: int, character_pool: str) -> str:
    """Generate a random sentence of approximately sequence_length characters."""
    words = [
        generate_random_word(random.randint(3, 10), character_pool)
        for _ in range(sequence_length // 10)  # Estimate number of words to fit length
    ]
    sentence = " ".join(words)
    # print(f"Generated sentence length: {len(sentence)}\a")  # Print length and sound alert
    return sentence


# Function to create a dataset of random "sentences"
def create_random_text_dataset(num_sequences: int, sequence_length: int, character_pool: str) -> Dataset:
    """Create a dataset with random text sequences."""
    data = {
        "text": [generate_random_sentence(sequence_length, character_pool) for _ in range(num_sequences)]
    }
    return Dataset.from_dict(data)


# Main function to generate, inspect, and upload dataset with train, validation, and test splits
def main() -> None:
    # Step 1: Load token and log in
    key_file_path: str = "/lfs/skampere1/0/brando9/keys/brandos_hf_token.txt"
    token: str = load_token(key_file_path)
    login_to_huggingface(token)

    # Step 2: Dataset parameters
    num_sequences_train: int = 5000
    num_sequences_val: int = 5000
    num_sequences_test: int = 5000
    sequence_length: int = 512
    character_pool: str = string.printable  # All printable ASCII characters (letters, digits, punctuation, whitespace)

    # Step 3: Create datasets for each split
    train_dataset = create_random_text_dataset(num_sequences_train, sequence_length, character_pool)
    val_dataset = create_random_text_dataset(num_sequences_val, sequence_length, character_pool)
    test_dataset = create_random_text_dataset(num_sequences_test, sequence_length, character_pool)

    # Step 4: Combine into a DatasetDict with train, validation, and test splits
    dataset_dict = DatasetDict({
        "train": train_dataset,
        "validation": val_dataset,
        "test": test_dataset
    })

    # Step 5: Print a sample of the train dataset for verification
    print("Sample of train dataset:", train_dataset[:5])

    # Step 6: Push the dataset to Hugging Face Hub
    dataset_name: str = "brando/random-ascii-dataset"
    dataset_dict.push_to_hub(dataset_name)
    print(f"Dataset uploaded to https://huggingface.co/datasets/{dataset_name}")


# Run the main function
if __name__ == "__main__":
    main()
```
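For reproducibility, a minimal variation of the script above can seed Python's `random` module and save the splits locally instead of pushing to the Hub. This is an illustrative sketch: the seed value and output path are assumptions, not part of the original script, and it assumes `create_random_text_dataset` from above is already defined or importable.

```python
import random
import string

from datasets import DatasetDict

random.seed(0)  # illustrative seed; the original script does not fix one

character_pool = string.printable
dataset_dict = DatasetDict({
    "train": create_random_text_dataset(5000, 512, character_pool),
    "validation": create_random_text_dataset(5000, 512, character_pool),
    "test": create_random_text_dataset(5000, 512, character_pool),
})

# Write the splits to a local directory instead of calling push_to_hub
dataset_dict.save_to_disk("random-ascii-dataset")
```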