---
license: mit
dataset_info:
  features:
  - name: type
    dtype: string
  - name: text
    dtype: string
  - name: created_at
    dtype: string
  - name: author
    dtype: string
  - name: author_did
    dtype: string
  - name: uri
    dtype: string
  - name: embedded_array
    list:
    - name: alt
      dtype: string
    - name: blob
      dtype: string
    - name: type
      dtype: string
  - name: langs
    sequence: string
  - name: reply_to
    dtype: string
  splits:
  - name: train
    num_bytes: 3500528518
    num_examples: 10000000
  download_size: 1428374810
  dataset_size: 3500528518
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Ten Million Bluesky Posts

![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/eGAoFJuzuSVPhOl__n64P.png)

<!-- Provide a quick summary of the dataset. -->

This dataset contains 10 million public posts collected from Bluesky Social's firehose API, intended for machine learning research and experimentation with social media data.

This dataset was inspired by Alpindale's original two-million-post dataset and expands on it with much more data.

Alpindale's dataset did not include author handles or the image URLs and metadata attached to posts. The images and their captions could potentially be invaluable for training, so they have been collected here.

This is the smaller version of a larger dataset to come, intended for testing formatting and for smaller projects.

This dataset is my own and is unaffiliated with Bluesky or any potential employer.

## Dataset Structure

<!-- Provide a longer summary of what this dataset is. -->

![image/png](https://cdn-uploads.huggingface.co/production/uploads/674783a7c6317bfd72b33659/9FA7LTPkffQDwrSL4F2z5.png)

- **Curated by:** Roro
- **License:** MIT
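
As a quick way to verify the schema shown above, the features can be inspected directly once the dataset is loaded (a minimal sketch; installation and loading are covered under Uses below):

```python
from datasets import load_dataset

ds = load_dataset("Roronotalt/bluesky", split="train")
print(ds.features)  # column names and dtypes, matching the structure above
print(ds[0])        # a single example row
```
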
## Uses

<!-- Address questions around how the dataset is intended to be used. -->

The dataset could be used for:
- Studying social media trends
- Researching social media content moderation
- Studying conversation structures and reply networks

I have not been able to figure out how to parse the atproto image ref bytes into an image or blob URL. I would appreciate a PR for that.
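
That said, if the `blob` column holds the blob's CID (an assumption I have not verified against the ref format), one possible approach is to fetch the raw bytes through the public `com.atproto.sync.getBlob` XRPC endpoint. A minimal sketch, with hypothetical values standing in for a real row:

```python
import requests

# Hypothetical values; in practice take these from a row's
# `author_did` and `blob` columns.
did = "did:plc:example"
cid = "bafkreiexample"  # assumed to be the blob CID

# com.atproto.sync.getBlob serves raw blob bytes; the host is assumed
# here to be bsky.social, which proxies for most Bluesky-hosted accounts.
resp = requests.get(
    "https://bsky.social/xrpc/com.atproto.sync.getBlob",
    params={"did": did, "cid": cid},
)
resp.raise_for_status()

with open("image.jpg", "wb") as f:
    f.write(resp.content)
```
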
The dataset is meant to be downloaded with the Hugging Face `load_dataset()` function. From there you can either process the dataset as an iterable stream, so you do not have to worry about memory, or convert it to a pandas dataframe.

Note that you will need to install the following libraries:

```bash
pip install pandas pyarrow datasets huggingface_hub
```

To download/load the Hugging Face dataset:

```python
from datasets import load_dataset

dataset = load_dataset("Roronotalt/bluesky")
```

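If memory is a concern, the same call can instead stream records lazily (a minimal sketch using the `datasets` streaming mode):

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset, so records are fetched
# on demand rather than materialized in memory.
stream = load_dataset("Roronotalt/bluesky", split="train", streaming=True)

for post in stream:
    print(post["author"], post["text"])
    break  # inspect only the first record
```
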
To pandas:

```python
# load_dataset returns a DatasetDict, so select the split first
new_dataset = dataset["train"].to_pandas()
```

You can then save the pandas dataframe as a CSV.
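
For example, with an illustrative output filename:

```python
new_dataset.to_csv("bluesky_posts.csv", index=False)
```
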
Alternatively, if you download the provided dataset parquet file in /data, you can convert the file to a CSV using the following Python code:

```bash
python -c "import pandas as pd; df = pd.read_parquet('train-0000.parquet', engine='pyarrow'); df.to_csv('output_file.csv', index=False)"
```

Credit to @TyrantsMuse on Twitter for the code snippet.

## Dataset Curation

The dataset is not filtered; sorting or moderating it for quality may make it more valuable for your use cases. The dataset is provided as-is, and no liability is assumed.

Deduping was done based on the post URIs. The dataset is sorted by the author column.
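
If you merge or modify the data and want to redo that cleanup in pandas, a minimal sketch mirroring those two steps (not the original curation pipeline) would be:

```python
# Drop duplicate posts by their unique atproto URI, then sort by author.
deduped = new_dataset.drop_duplicates(subset="uri").sort_values("author")
```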
|