brando committed on
Commit 7339033 · verified · 1 Parent(s): ba655c1

Update README.md

Files changed (1): README.md +149 -0
README.md CHANGED

@@ -25,3 +25,152 @@ configs:
  - split: test
    path: data/test-*
---

# Random ASCII Dataset

This dataset contains random sequences of ASCII characters, with "train", "validation", and "test" splits, designed to simulate text-like structure using all printable ASCII characters. Each sequence consists of pseudo-randomly generated "words" of varying lengths, separated by spaces to mimic natural-language text.

## Dataset Details

- **Splits**: Train, Validation, and Test
- **Number of sequences**:
  - Train: 5000 sequences
  - Validation: 5000 sequences
  - Test: 5000 sequences
- **Sequence length**: approximately 512 characters per sequence (512 is the generator's target length; realized lengths are somewhat shorter, see the generation code below)
- **Character pool**: all printable ASCII characters, including letters, digits, punctuation, and whitespace

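The character pool is Python's `string.printable` (as used in the generation code at the bottom of this card), so its exact contents can be inspected directly:

```python
import string

# string.printable is the generation character pool:
# 10 digits + 52 ASCII letters + 32 punctuation marks + 6 whitespace characters.
pool = string.printable
print(len(pool))        # 100
print(repr(pool[-6:]))  # the whitespace tail, including '\n' and '\t'
```

Note that because the pool includes `'\n'`, `'\t'`, and other whitespace, "words" may themselves contain whitespace characters.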
## Sample Usage

To load this dataset in Python, you can use the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("brando/random-ascii-dataset")

# Access the train, validation, and test splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]

# Print a sample
print(train_data[0]["text"])
```

## Example Data

Below are examples of random sequences generated in this dataset (shortened here for display; full sequences are roughly 512 characters):

```python
# Example 1:
"!Q4$^V3w L@#12 Vd&$%4B+ (k#yFw! [7*9z"

# Example 2:
"T^&3xR f$xH&ty ^23M* qW@# Lm5&"

# Example 3:
"b7$W %&6Zn!!R xT&8N z#G m93T +%^0"
```

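As a quick sanity check, the example strings above draw only from the documented character pool, Python's `string.printable`:

```python
import string

examples = [
    "!Q4$^V3w L@#12 Vd&$%4B+ (k#yFw! [7*9z",
    "T^&3xR f$xH&ty ^23M* qW@# Lm5&",
    "b7$W %&6Zn!!R xT&8N z#G m93T +%^0",
]

# Every character in every example is printable ASCII.
for s in examples:
    assert all(c in string.printable for c in s)
print("all examples are printable ASCII")
```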
## License

This dataset is released under the Apache License 2.0. You are free to use, modify, and distribute this dataset under the terms of the Apache License.

## Citation

```bibtex
@misc{miranda2021ultimateutils,
  title={Ultimate Utils - the Ultimate Utils Library for Machine Learning and Artificial Intelligence},
  author={Brando Miranda},
  year={2021},
  url={https://github.com/brando90/ultimate-utils},
  note={Available at: \url{https://www.ideals.illinois.edu/handle/2142/112797}},
  abstract={Ultimate Utils is a comprehensive library providing utility functions and tools to facilitate efficient machine learning and AI research, including efficient tensor manipulations and gradient handling with methods such as `detach()` for creating gradient-free tensors.}
}
```

## Code that generated it

```python
# ref: https://chatgpt.com/c/671ff56a-563c-8001-afd5-94632fe63d67
import os
import random
import string

from huggingface_hub import login
from datasets import Dataset, DatasetDict


def load_token(file_path: str) -> str:
    """Load a Hugging Face API token from a file."""
    with open(os.path.expanduser(file_path)) as f:
        return f.read().strip()


def login_to_huggingface(token: str) -> None:
    """Authenticate with the Hugging Face Hub."""
    login(token=token)
    print("Login successful")


def generate_random_word(length: int, character_pool: str) -> str:
    """Generate a random word of the specified length from a character pool."""
    return "".join(random.choice(character_pool) for _ in range(length))


def generate_random_sentence(sequence_length: int, character_pool: str) -> str:
    """Generate a random sentence of approximately sequence_length characters."""
    words = [
        generate_random_word(random.randint(3, 10), character_pool)
        for _ in range(sequence_length // 10)  # estimate number of words to fit length
    ]
    return " ".join(words)


def create_random_text_dataset(num_sequences: int, sequence_length: int, character_pool: str) -> Dataset:
    """Create a dataset of random text sequences."""
    data = {
        "text": [generate_random_sentence(sequence_length, character_pool) for _ in range(num_sequences)]
    }
    return Dataset.from_dict(data)


def main() -> None:
    # Step 1: Load token and log in
    key_file_path: str = "/lfs/skampere1/0/brando9/keys/brandos_hf_token.txt"
    token: str = load_token(key_file_path)
    login_to_huggingface(token)

    # Step 2: Dataset parameters
    num_sequences_train: int = 5000
    num_sequences_val: int = 5000
    num_sequences_test: int = 5000
    sequence_length: int = 512
    character_pool: str = string.printable  # all printable ASCII characters (letters, digits, punctuation, whitespace)

    # Step 3: Create datasets for each split
    train_dataset = create_random_text_dataset(num_sequences_train, sequence_length, character_pool)
    val_dataset = create_random_text_dataset(num_sequences_val, sequence_length, character_pool)
    test_dataset = create_random_text_dataset(num_sequences_test, sequence_length, character_pool)

    # Step 4: Combine into a DatasetDict with train, validation, and test splits
    dataset_dict = DatasetDict({
        "train": train_dataset,
        "validation": val_dataset,
        "test": test_dataset,
    })

    # Step 5: Print a sample of the train dataset for verification
    print("Sample of train dataset:", train_dataset[:5])

    # Step 6: Push the dataset to the Hugging Face Hub
    dataset_name: str = "brando/random-ascii-dataset"
    dataset_dict.push_to_hub(dataset_name)
    print(f"Dataset uploaded to https://huggingface.co/datasets/{dataset_name}")


if __name__ == "__main__":
    main()
```
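A note on realized lengths: the generator above builds `512 // 10 = 51` words of 3-10 characters joined by 50 spaces, so sequences are bounded between 203 and 560 characters and in practice cluster well under the 512-character target. A standalone sketch (reimplementing just the sentence generator) illustrates this:

```python
import random
import string

def generate_random_word(length: int, pool: str) -> str:
    return "".join(random.choice(pool) for _ in range(length))

def generate_random_sentence(sequence_length: int, pool: str) -> str:
    # 512 // 10 = 51 words, each 3-10 characters, joined by single spaces.
    words = [generate_random_word(random.randint(3, 10), pool)
             for _ in range(sequence_length // 10)]
    return " ".join(words)

random.seed(0)  # for reproducibility of this sketch
lengths = [len(generate_random_sentence(512, string.printable)) for _ in range(100)]

# Realized lengths are bounded by 51*3 + 50 = 203 and 51*10 + 50 = 560.
assert all(203 <= n <= 560 for n in lengths)
print(min(lengths), max(lengths))
```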