tamnd committed · Commit 94d3f22 (verified) · Parent: e2171d1

Sync facebook/react: 5.1K rows (2026-03-28 12:37 UTC)

facebook/react: 999 issues, 0 PRs, 0 comments, 0 reviews, 4.1K timeline, 0 pr_files

README.md ADDED

---
license: odc-by
task_categories:
- feature-extraction
language:
- en
- mul
pretty_name: OpenGitHub Meta
size_categories:
- 1K<n<10K
tags:
- github
- metadata
- issues
- pull-requests
- code-review
- open-source
- software-engineering
configs:
- config_name: issues
  data_files: "data/issues/**/*.parquet"
- config_name: pull_requests
  data_files: "data/pull_requests/**/*.parquet"
- config_name: comments
  data_files: "data/comments/**/*.parquet"
- config_name: review_comments
  data_files: "data/review_comments/**/*.parquet"
- config_name: reviews
  data_files: "data/reviews/**/*.parquet"
- config_name: timeline_events
  data_files: "data/timeline_events/**/*.parquet"
- config_name: pr_files
  data_files: "data/pr_files/**/*.parquet"
- config_name: commit_statuses
  data_files: "data/commit_statuses/**/*.parquet"
---

# OpenGitHub Meta

## What is it?

The full development metadata of one public GitHub repository, fetched from the [GitHub REST API](https://docs.github.com/en/rest) and [GraphQL API](https://docs.github.com/en/graphql), converted to Parquet, and hosted here for easy access.

Right now the archive has **5.1K rows** across 8 tables in **956.7 KB** of Zstd-compressed Parquet. Every issue, pull request, comment, code review, timeline event, file change, and CI status check is stored as a separate table that you can load individually or query together.

This is the companion to [OpenGitHub](https://huggingface.co/datasets/open-index/open-github), which mirrors the real-time GitHub event stream via [GH Archive](https://www.gharchive.org/). That dataset tells you what happened across all of GitHub. This one gives you the full picture for specific repos: complete issue threads, full PR review conversations, and the state machine from open to close.

People use it for:

- **Code review research** with inline comments attached to specific diff lines
- **Project health metrics** like merge rates, review turnaround, and label usage
- **Issue triage and classification** with full text, labels, and timeline
- **Software engineering process mining** from timeline event sequences
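
As a small illustration of the metrics angle, here is a stdlib-only sketch that computes median time-to-close from `created_at`/`closed_at` pairs. The field names match the `issues` table below; the sample rows themselves are made up.

```python
from datetime import datetime
from statistics import median

# Hypothetical sample rows shaped like the `issues` table (closed issues only).
issues = [
    {"number": 1, "created_at": "2026-01-01T00:00:00Z", "closed_at": "2026-01-03T00:00:00Z"},
    {"number": 2, "created_at": "2026-01-01T00:00:00Z", "closed_at": "2026-01-08T00:00:00Z"},
    {"number": 3, "created_at": "2026-01-02T00:00:00Z", "closed_at": "2026-01-04T00:00:00Z"},
]

def parse_ts(ts: str) -> datetime:
    # GitHub timestamps are UTC with a trailing "Z".
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Days between opening and closing, per issue.
days_to_close = [
    (parse_ts(i["closed_at"]) - parse_ts(i["created_at"])).total_seconds() / 86400
    for i in issues
]
print(f"median days to close: {median(days_to_close):.1f}")  # → 2.0
```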

Last updated: **2026-03-28**.

## Repositories

| Repository | Issues | PRs | Comments | Reviews | Timeline | Total |
|---|---:|---:|---:|---:|---:|---:|
| **facebook/react** | 999 | 0 | 0 | 0 | 4.1K | 5.1K |

## How to download and use this dataset

Data lives at `data/{table}/{owner}/{repo}/0.parquet`. Load a single table, a single repo, or everything at once. It is a standard Hugging Face Parquet layout that works with DuckDB, `datasets`, `pandas`, and `huggingface_hub` out of the box.

### Using DuckDB

DuckDB reads Parquet directly from Hugging Face, so no download step is needed. Save any query below as a `.sql` file and run it with `duckdb < query.sql`.

```sql
-- Top issue authors across all repos
SELECT
  author,
  COUNT(*) as issue_count,
  COUNT(*) FILTER (WHERE state = 'open') as open,
  COUNT(*) FILTER (WHERE state = 'closed') as closed
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/issues/**/0.parquet')
WHERE is_pull_request = false
GROUP BY author
ORDER BY issue_count DESC
LIMIT 20;
```

```sql
-- PR merge rate by repo
SELECT
  split_part(filename, '/', 8) || '/' || split_part(filename, '/', 9) as repo,
  COUNT(*) as total_prs,
  COUNT(*) FILTER (WHERE merged) as merged,
  ROUND(COUNT(*) FILTER (WHERE merged) * 100.0 / COUNT(*), 1) as merge_pct
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/pull_requests/**/0.parquet', filename=true)
GROUP BY repo
ORDER BY total_prs DESC;
```

```sql
-- Most reviewed PRs by number of review submissions
SELECT
  r.pr_number,
  COUNT(*) as review_count,
  COUNT(*) FILTER (WHERE r.state = 'APPROVED') as approvals,
  COUNT(*) FILTER (WHERE r.state = 'CHANGES_REQUESTED') as changes_requested
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/reviews/**/0.parquet') r
GROUP BY r.pr_number
ORDER BY review_count DESC
LIMIT 20;
```

```sql
-- Label activity over time (monthly)
SELECT
  date_trunc('month', created_at) as month,
  COUNT(*) as label_events
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/timeline_events/**/0.parquet')
WHERE event_type = 'LabeledEvent'
GROUP BY month
ORDER BY month;
```

```sql
-- Largest PRs by lines changed
SELECT
  number,
  additions,
  deletions,
  changed_files,
  additions + deletions as total_lines
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/pull_requests/**/0.parquet')
ORDER BY total_lines DESC
LIMIT 20;
```

### Using Python (`uv run`)

These scripts use [PEP 723](https://peps.python.org/pep-0723/) inline metadata. Save each as a `.py` file and run it with `uv run script.py`. No virtualenv or `pip install` needed.

**Stream issues:**

```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["datasets"]
# ///
from datasets import load_dataset

ds = load_dataset("open-index/open-github-meta", "issues", streaming=True)
for i, row in enumerate(ds["train"]):
    print(f"#{row['number']}: [{row['state']}] {row['title']} (by {row['author']})")
    if i >= 19:
        break
```

**Load a specific repo:**

```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["datasets"]
# ///
from datasets import load_dataset

ds = load_dataset(
    "open-index/open-github-meta",
    "pull_requests",
    data_files="data/pull_requests/facebook/react/0.parquet",
)
df = ds["train"].to_pandas()
print(f"Loaded {len(df)} pull requests")
print(f"Merged: {df['merged'].sum()} ({df['merged'].mean()*100:.1f}%)")
print("\nTop 10 by lines changed:")
df["total_lines"] = df["additions"] + df["deletions"]
print(df.nlargest(10, "total_lines")[["number", "additions", "deletions", "total_lines"]].to_string(index=False))
```

**Download files:**

```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["huggingface-hub"]
# ///
from huggingface_hub import snapshot_download

# Download only the issues table
snapshot_download(
    "open-index/open-github-meta",
    repo_type="dataset",
    local_dir="./open-github-meta/",
    allow_patterns="data/issues/**/*.parquet",
)
print("Downloaded issues parquet files to ./open-github-meta/")
```

For faster downloads, run `pip install "huggingface_hub[hf_transfer]"` and set `HF_HUB_ENABLE_HF_TRANSFER=1`.

## Dataset structure

### `issues`

Both issues and PRs live in this table (check `is_pull_request`). Join with `pull_requests` on `number` for PR-specific fields like merge status and diff stats.

| Column | Type | Description |
|---|---|---|
| `number` | int32 | Issue/PR number (primary key) |
| `node_id` | string | GitHub GraphQL node ID |
| `is_pull_request` | bool | True if this is a PR |
| `title` | string | Title |
| `body` | string | Full body text in Markdown |
| `state` | string | `open` or `closed` |
| `state_reason` | string | `completed`, `not_planned`, or `reopened` |
| `author` | string | Username of the creator |
| `created_at` | timestamp | When opened |
| `updated_at` | timestamp | Last activity |
| `closed_at` | timestamp | When closed (null if open) |
| `labels` | string (JSON) | Array of label names |
| `assignees` | string (JSON) | Array of assignee usernames |
| `milestone_title` | string | Milestone name |
| `milestone_number` | int32 | Milestone number |
| `reactions` | string (JSON) | Reaction counts (`{"+1": 5, "heart": 2}`) |
| `comment_count` | int32 | Number of comments |
| `locked` | bool | Whether the conversation is locked |
| `lock_reason` | string | Lock reason |

### `pull_requests`

PR-specific fields. Join with `issues` on `number` for title, body, labels, and other shared fields.

| Column | Type | Description |
|---|---|---|
| `number` | int32 | PR number (matches `issues.number`) |
| `merged` | bool | Whether the PR was merged |
| `merged_at` | timestamp | When merged |
| `merged_by` | string | Username who merged |
| `merge_commit_sha` | string | Merge commit SHA |
| `base_ref` | string | Target branch (e.g. `main`) |
| `head_ref` | string | Source branch |
| `head_sha` | string | Head commit SHA |
| `additions` | int32 | Lines added |
| `deletions` | int32 | Lines deleted |
| `changed_files` | int32 | Number of files changed |
| `draft` | bool | Whether the PR is a draft |
| `maintainer_can_modify` | bool | Whether maintainers can push to the head branch |
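
The `issues`/`pull_requests` join keyed on `number` can be sketched in plain Python; the rows below are invented examples, and real code would read them from the Parquet files instead.

```python
# Hypothetical rows from the two tables, sharing the `number` key.
issues = [
    {"number": 101, "title": "Fix hydration bug", "is_pull_request": True, "state": "closed"},
    {"number": 102, "title": "Docs question", "is_pull_request": False, "state": "open"},
]
pull_requests = [
    {"number": 101, "merged": True, "additions": 40, "deletions": 12},
]

# Index PR-specific fields by number for O(1) lookup.
pr_by_number = {pr["number"]: pr for pr in pull_requests}

# Enrich each PR-flavoured issue row with its PR-specific fields.
enriched = [
    {**issue, **pr_by_number[issue["number"]]}
    for issue in issues
    if issue["is_pull_request"] and issue["number"] in pr_by_number
]
print(enriched[0]["title"], enriched[0]["merged"])  # → Fix hydration bug True
```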

### `comments`

Conversation comments on issues and PRs. These are the threaded discussion comments, not the inline code review comments (those are in `review_comments`).

| Column | Type | Description |
|---|---|---|
| `id` | int64 | Comment ID (primary key) |
| `issue_number` | int32 | Parent issue/PR number |
| `author` | string | Username |
| `body` | string | Comment body in Markdown |
| `created_at` | timestamp | When posted |
| `updated_at` | timestamp | Last edit |
| `reactions` | string (JSON) | Reaction counts |
| `author_association` | string | `OWNER`, `MEMBER`, `CONTRIBUTOR`, `NONE`, etc. |

### `review_comments`

Inline code review comments on PR diffs. Each comment is attached to a specific file and line in the diff.

| Column | Type | Description |
|---|---|---|
| `id` | int64 | Comment ID (primary key) |
| `pr_number` | int32 | Parent PR number |
| `review_id` | int64 | Parent review ID |
| `author` | string | Reviewer username |
| `body` | string | Comment body in Markdown |
| `path` | string | File path in the diff |
| `line` | int32 | Line number |
| `side` | string | `LEFT` (old code) or `RIGHT` (new code) |
| `diff_hunk` | string | Surrounding diff context |
| `created_at` | timestamp | When posted |
| `updated_at` | timestamp | Last edit |
| `in_reply_to_id` | int64 | Parent comment ID (for threaded replies) |

### `reviews`

PR review decisions. One row per review action on a PR.

| Column | Type | Description |
|---|---|---|
| `id` | int64 | Review ID (primary key) |
| `pr_number` | int32 | Parent PR number |
| `author` | string | Reviewer username |
| `state` | string | `APPROVED`, `CHANGES_REQUESTED`, `COMMENTED`, `DISMISSED` |
| `body` | string | Review summary in Markdown |
| `submitted_at` | timestamp | When submitted |
| `commit_id` | string | Commit SHA that was reviewed |

### `timeline_events`

The full lifecycle of every issue and PR: every label change, assignment, cross-reference, merge, force-push, lock, and other state transition.

| Column | Type | Description |
|---|---|---|
| `id` | string | Event ID (node_id or synthesized) |
| `issue_number` | int32 | Parent issue/PR number |
| `event_type` | string | Event type (see below) |
| `actor` | string | Username who triggered the event |
| `created_at` | timestamp | When it happened |
| `data` | string (JSON) | Full event payload |

Event types include `LabeledEvent`, `UnlabeledEvent`, `ClosedEvent`, `ReopenedEvent`, `AssignedEvent`, `UnassignedEvent`, `MilestonedEvent`, `DemilestonedEvent`, `RenamedTitleEvent`, `CrossReferencedEvent`, `ReferencedEvent`, `LockedEvent`, `UnlockedEvent`, `PinnedEvent`, `MergedEvent`, `ReviewRequestedEvent`, `HeadRefForcePushedEvent`, `HeadRefDeletedEvent`, `ReadyForReviewEvent`, `ConvertToDraftEvent`, and more.

The `data` column contains the raw event payload as JSON. Its shape depends on `event_type`. See the [GitHub GraphQL timeline items documentation](https://docs.github.com/en/graphql/reference/unions#issuetimelineitems) for the full type catalog.
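
A typical consumption pattern is to dispatch on `event_type` before parsing `data`. The payload shapes below are illustrative placeholders, not the exact schemas GitHub returns; check the GraphQL docs linked above for the real fields.

```python
import json

# Hypothetical timeline rows; the `data` payload shapes are illustrative only.
events = [
    {"event_type": "LabeledEvent", "data": json.dumps({"label": {"name": "bug"}})},
    {"event_type": "ClosedEvent", "data": json.dumps({"state_reason": "completed"})},
]

labels_added = []
for ev in events:
    payload = json.loads(ev["data"])  # `data` is stored as a JSON string
    if ev["event_type"] == "LabeledEvent":
        labels_added.append(payload["label"]["name"])

print(labels_added)  # → ['bug']
```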

### `pr_files`

Every file touched by each pull request, with per-file diff statistics.

| Column | Type | Description |
|---|---|---|
| `pr_number` | int32 | Parent PR number |
| `path` | string | File path |
| `additions` | int32 | Lines added |
| `deletions` | int32 | Lines deleted |
| `status` | string | `added`, `removed`, `modified`, `renamed` |
| `previous_filename` | string | Original path (for renames) |

### `commit_statuses`

CI/CD status checks and GitHub Actions results for each commit.

| Column | Type | Description |
|---|---|---|
| `sha` | string | Commit SHA |
| `context` | string | Check name (e.g. `ci/circleci`, `check:build`) |
| `state` | string | `success`, `failure`, `pending`, `error` |
| `description` | string | Status description |
| `target_url` | string | Link to CI details |
| `created_at` | timestamp | When reported |
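
One obvious use of this table is a per-check pass rate. A stdlib-only sketch, with made-up status rows in the shape of the columns above:

```python
from collections import Counter, defaultdict

# Hypothetical status rows shaped like the `commit_statuses` table.
statuses = [
    {"context": "ci/circleci", "state": "success"},
    {"context": "ci/circleci", "state": "failure"},
    {"context": "ci/circleci", "state": "success"},
    {"context": "check:build", "state": "success"},
]

# Tally outcome counts per check name.
by_context = defaultdict(Counter)
for s in statuses:
    by_context[s["context"]][s["state"]] += 1

for context, counts in sorted(by_context.items()):
    total = sum(counts.values())
    print(f"{context}: {counts['success'] / total:.0%} success over {total} runs")
```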

## Dataset statistics

| Table | Rows | Description |
|-------|-----:|-------------|
| `issues` | 999 | Issues and pull requests (shared metadata) |
| `timeline_events` | 4.1K | Activity timeline (labels, closes, merges, assignments) |
| **Total** | **5.1K** | |

## How it's built

The sync pipeline uses both GitHub APIs. The [REST API](https://docs.github.com/en/rest) handles bulk listing: issues, comments, and review comments are fetched repo-wide with `since`-based incremental pagination and parallel page fetching across multiple tokens. The [GraphQL API](https://docs.github.com/en/graphql) handles per-item detail: one query grabs reviews, timeline events, file changes, and commit statuses in a single round trip, with an automatic REST fallback for PRs with more than 100 files or reviews.

Multiple GitHub Personal Access Tokens rotate round-robin to spread the rate-limit load. The pipeline is fully incremental and idempotent: re-running picks up only what changed since the last sync.
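
The round-robin rotation amounts to cycling through a token pool on each request. The actual pipeline code is not published; this is just a minimal sketch of the idea with placeholder token strings.

```python
from itertools import cycle

# Hypothetical token pool; real tokens would come from the environment.
tokens = ["ghp_tokenA", "ghp_tokenB", "ghp_tokenC"]
next_token = cycle(tokens)

def auth_header() -> dict:
    """Return an Authorization header using the next token in the pool."""
    return {"Authorization": f"Bearer {next(next_token)}"}

# Each request draws the next token, spreading rate-limit usage evenly.
picked = [auth_header()["Authorization"].split()[-1] for _ in range(4)]
print(picked)  # tokens repeat in order: A, B, C, A
```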

Everything lands in per-repo [DuckDB](https://duckdb.org/) files first, then gets exported to Parquet with Zstd compression for publishing here. There is no filtering, deduplication, or content rewriting: bot activity, automated PRs, CI noise, and Dependabot upgrades are all preserved, because that's how repos actually work.

## Known limitations

- **Point-in-time snapshot.** Data reflects the state at the last sync, not real time. Incremental updates capture everything that changed since the previous sync.
- **Bot activity included.** Comments and PRs from bots (Dependabot, Renovate, GitHub Actions, etc.) are included without filtering. This is intentional. Filter on `author` if you want humans only.
- **JSON columns.** `labels`, `assignees`, `reactions`, and `data` contain JSON strings. Use `json_extract()` in DuckDB or `json.loads()` in Python.
- **Body text can be large.** Issue and comment bodies contain full Markdown, sometimes with embedded images. Project only the columns you need for memory-constrained workloads.
- **Timeline data varies by event type.** The `data` field in `timeline_events` contains the raw event payload as JSON. The schema depends on `event_type`.
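
The bot-filtering and JSON-column caveats combine naturally. A stdlib-only sketch with invented rows; note that the `[bot]` suffix convention covers most, but not all, GitHub bot accounts, so it is a heuristic rather than a guarantee.

```python
import json

# Hypothetical issue rows; `labels` is a JSON string, as in the real table.
rows = [
    {"author": "gaearon", "labels": json.dumps(["Type: Bug"])},
    {"author": "dependabot[bot]", "labels": json.dumps([])},
    {"author": "octocat", "labels": json.dumps(["CLA Signed"])},
]

# Drop accounts ending in "[bot]" (heuristic: some bots use plain usernames).
humans = [r for r in rows if not r["author"].endswith("[bot]")]

# JSON columns must be parsed before use.
labels = [label for r in humans for label in json.loads(r["labels"])]
print(len(humans), labels)  # → 2 ['Type: Bug', 'CLA Signed']
```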

## Personal and sensitive information

Usernames, user IDs, and author associations are included as they appear in the GitHub API. All of this data was already publicly accessible on GitHub. Email addresses do not appear in this dataset (they exist only in git commit objects, which are in the separate code archive, not here). No private repository data is present.

## License

Released under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/). The underlying data is sourced from GitHub's public API. [GitHub's Terms of Service](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service) apply to the original data.

## Thanks

All the data here comes from [GitHub](https://github.com/)'s public [REST API](https://docs.github.com/en/rest) and [GraphQL API](https://docs.github.com/en/graphql). We are grateful to the open-source maintainers and contributors whose work is represented in these tables.

- **[OpenGitHub](https://huggingface.co/datasets/open-index/open-github)**, our companion dataset covering the full GitHub event stream via [GH Archive](https://www.gharchive.org/) by [Ilya Grigorik](https://www.igvita.com/)
- Built with [DuckDB](https://duckdb.org/) (Go driver) and [Apache Parquet](https://parquet.apache.org/) (Zstd compression), published via the [Hugging Face Hub](https://huggingface.co/)

Questions, feedback, or issues? Open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/open-github-meta/discussions).
code/download.py ADDED
# /// script
# requires-python = ">=3.11"
# dependencies = ["huggingface-hub"]
# ///
"""Download dataset files from Hugging Face Hub."""

from huggingface_hub import snapshot_download

# Download only the issues table
snapshot_download(
    "open-index/open-github-meta",
    repo_type="dataset",
    local_dir="./open-github-meta/",
    allow_patterns="data/issues/**/*.parquet",
)
print("Downloaded issues parquet files to ./open-github-meta/")
code/label_activity.sql ADDED
-- Label activity over time (monthly)
SELECT
  date_trunc('month', created_at) as month,
  COUNT(*) as label_events
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/timeline_events/**/0.parquet')
WHERE event_type = 'LabeledEvent'
GROUP BY month
ORDER BY month;
code/largest_prs.sql ADDED
-- Largest PRs by lines changed
SELECT
  number,
  additions,
  deletions,
  changed_files,
  additions + deletions as total_lines
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/pull_requests/**/0.parquet')
ORDER BY total_lines DESC
LIMIT 20;
code/load_repo.py ADDED
# /// script
# requires-python = ">=3.11"
# dependencies = ["datasets"]
# ///
"""Load pull requests for a specific repo."""

from datasets import load_dataset

ds = load_dataset(
    "open-index/open-github-meta",
    "pull_requests",
    data_files="data/pull_requests/facebook/react/0.parquet",
)
df = ds["train"].to_pandas()
print(f"Loaded {len(df)} pull requests")
print(f"Merged: {df['merged'].sum()} ({df['merged'].mean()*100:.1f}%)")
print("\nTop 10 by lines changed:")
df["total_lines"] = df["additions"] + df["deletions"]
print(df.nlargest(10, "total_lines")[["number", "additions", "deletions", "total_lines"]].to_string(index=False))
code/most_reviewed_prs.sql ADDED
-- Most reviewed PRs by number of review submissions
SELECT
  r.pr_number,
  COUNT(*) as review_count,
  COUNT(*) FILTER (WHERE r.state = 'APPROVED') as approvals,
  COUNT(*) FILTER (WHERE r.state = 'CHANGES_REQUESTED') as changes_requested
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/reviews/**/0.parquet') r
GROUP BY r.pr_number
ORDER BY review_count DESC
LIMIT 20;
code/pr_merge_rate.sql ADDED
-- PR merge rate by repo
SELECT
  split_part(filename, '/', 8) || '/' || split_part(filename, '/', 9) as repo,
  COUNT(*) as total_prs,
  COUNT(*) FILTER (WHERE merged) as merged,
  ROUND(COUNT(*) FILTER (WHERE merged) * 100.0 / COUNT(*), 1) as merge_pct
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/pull_requests/**/0.parquet', filename=true)
GROUP BY repo
ORDER BY total_prs DESC;
code/stream_issues.py ADDED
# /// script
# requires-python = ">=3.11"
# dependencies = ["datasets"]
# ///
"""Stream issues from the dataset without downloading everything."""

from datasets import load_dataset

ds = load_dataset("open-index/open-github-meta", "issues", streaming=True)
for i, row in enumerate(ds["train"]):
    print(f"#{row['number']}: [{row['state']}] {row['title']} (by {row['author']})")
    if i >= 19:
        break
code/top_issue_authors.sql ADDED
-- Top issue authors across all repos
SELECT
  author,
  COUNT(*) as issue_count,
  COUNT(*) FILTER (WHERE state = 'open') as open,
  COUNT(*) FILTER (WHERE state = 'closed') as closed
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/issues/**/0.parquet')
WHERE is_pull_request = false
GROUP BY author
ORDER BY issue_count DESC
LIMIT 20;
data/issues/facebook/react/0.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:6894052a3d98ebff08dbba66d4c52affe748a0b79b122133ca75a5b2f66fc80b
size 532672
data/timeline_events/facebook/react/0.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:dda82af3ebb6c614f3570902c62c0126b9390cfb6dc3fad16e25de36b70cba01
size 447017
stats.csv ADDED
repository,table,rows
facebook/react,issues,999
facebook/react,timeline_events,4107
_total,issues,999
_total,timeline_events,4107