---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: The-Stack-Metadata
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []

extra_gated_prompt: |-
  ## Terms of Use for The Stack
  The Stack Metadata is a collection of additional information for, and is part of, The Stack dataset, a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:
  1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
  2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.
  3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to them.

  By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.
    
extra_gated_fields:
 Email: text
 I have read the License and agree with its terms: checkbox
---

# Dataset Card for The Stack Metadata

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Changelog](#changelog)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Usage Example](#usage-example)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
- [Terms of Use for The Stack](#terms-of-use-for-the-stack)

## Dataset Description

- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** https://arxiv.org/abs/2211.15533
- **Leaderboard:** N/A
- **Point of Contact:** contact@bigcode-project.org

### Changelog

|Release|Description|
|-|-|
|v1.1| This is the first release of the metadata. It is for The Stack v1.1|
|v1.2| Metadata dataset matching The Stack v1.2|


### Dataset Summary

This is a set of additional information for the repositories used in The Stack. It contains file paths, detected licenses, and other repository-level information.

### Supported Tasks and Leaderboards

The main task is to recreate repository structures from the files of The Stack. The dataset can also be used to compute statistics and to perform custom filtering or aggregation operations on The Stack.

## Dataset Structure

### Data Fields
![set structure](images/structure.png)
The set is split into 944 buckets by repository. In addition to the fields shown in the image, `ri` contains `min_repo_event_datetime`, which is the earliest date and time of an event for a repository after January 1, 2015.
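As a toy sketch of how the two tables in a bucket relate (the column names here are assumptions based on the figure above; on disk, each bucket's `ri.parquet` and `fi.parquet` are read with `pd.read_parquet`):

```python
import pandas as pd

# Toy stand-ins for one bucket's ri (repository info) and fi (file info) tables;
# in the real dataset these live under data/<bucket>/ri.parquet and fi.parquet.
ri = pd.DataFrame({
    'id': [1, 2],
    'name': ['numpy/numpy', 'pandas-dev/pandas'],
    'min_repo_event_datetime': pd.to_datetime(['2015-03-01', '2016-07-15']),
})
fi = pd.DataFrame({
    'ri_id': [1, 1, 2],
    'hexsha': ['aaa', 'bbb', 'ccc'],
    'path': ['setup.py', 'numpy/core/umath.py', 'setup.py'],
})

# Join files to their repository names via the ri_id foreign key.
joined = fi.merge(ri, left_on='ri_id', right_on='id')
print(sorted(joined.loc[joined['name'] == 'numpy/numpy', 'path']))
# ['numpy/core/umath.py', 'setup.py']
```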


![set usage](images/usage.png)

As an example of an aggregation operation on The Stack, the image above conceptually shows the selection of star counts (as well as issue and PR counts) for a file. Each unique file can be part of multiple repositories, so The Stack releases unique files and aggregates meta information (e.g., stars) from all repositories a file belongs to. For example, `max_stars_count` is the maximum number of stars across all repositories the file is part of.
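The max-style aggregation described above can be sketched with made-up toy data (the column names here are illustrative, not the real schema):

```python
import pandas as pd

# Toy data: the file 'abc123' (identified by its hexsha) appears in three
# repositories with different star counts; The Stack keeps the file once
# and aggregates the stars across all repositories containing it.
file_repo = pd.DataFrame({
    'hexsha':      ['abc123', 'abc123', 'abc123', 'def456'],
    'repo_name':   ['org/a', 'org/b', 'org/c', 'org/a'],
    'stars_count': [5, 120, 42, 5],
})

agg = file_repo.groupby('hexsha')['stars_count'].max().rename('max_stars_count')
print(agg.to_dict())  # {'abc123': 120, 'def456': 5}
```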


The metadata allows you to reconstruct repository directory structures. For each repository in the `ri` table, take all of its files from the `fi` table, find each file in The Stack by its `hexsha`, and save the file's content under the path given in the `fi` table. For speed, it is preferable to index The Stack by `hexsha` first.

### Usage Example

Restore the folder structure for the Python files in the numpy repository:
```python
import datasets
from pathlib import Path
from tqdm.auto import tqdm
import pandas as pd

# assuming the metadata is cloned into /data/hf_repos/the-stack-metadata,
#          The Stack is cloned into /data/hf_repos/the-stack-v1.1,
#          and the destination folder is /repo_workdir/numpy_restored
the_stack_meta_path = Path('/data/hf_repos/the-stack-metadata')
the_stack_path = Path('/data/hf_repos/the-stack-v1.1')
repo_dst_root = Path('/repo_workdir/numpy_restored')
repo_name = 'numpy/numpy'

# Find the bucket with the numpy repo info; the general search below is
# commented out because the bucket for numpy/numpy is already known.
# meta_bucket_path = None
# for fn in tqdm(list((the_stack_meta_path / 'data').glob('*/ri.parquet'))):
#     df = pd.read_parquet(fn)
#     if any(df['name'] == repo_name):
#         meta_bucket_path = fn.parent
#         break
meta_bucket_path = the_stack_meta_path / 'data/255_944'


# Get repository id from repo name
ri_id = pd.read_parquet(
    meta_bucket_path / 'ri.parquet'
).query(
    f'`name` == "{repo_name}"'
)['id'].to_list()[0]

# Get files information for the repository
files_info = pd.read_parquet(
    meta_bucket_path / 'fi.parquet'
).query(
    f'`ri_id` == {ri_id} and `size` != 0 and `is_deleted` == False'
)

# Convert the DF with files information to a dictionary keyed by language and then
#   by file hexsha; there can be more than one file with the same hexsha in a repo,
#   so we gather all paths per unique hexsha
files_info_dict = {
    k: v[['hexsha', 'path']].groupby('hexsha').apply(lambda x: list(x['path'])).to_dict()
    for k, v in files_info.groupby('lang_ex')
}

# Load Python part of The Stack
ds = datasets.load_dataset(
    str(the_stack_path/'data/python'),
    num_proc=10, ignore_verifications=True
)

# Save the content of the Python files in the numpy repository to their appropriate locations
def save_file_content(example, files_info_dict, repo_dst_root):
    if example['hexsha'] in files_info_dict:
        for el in files_info_dict[example['hexsha']]:
            path = repo_dst_root / el
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text(example['content'])
ds.map(
    save_file_content,
    fn_kwargs={'files_info_dict': files_info_dict['Python'], 'repo_dst_root': repo_dst_root},
    num_proc=10
)
```

## Dataset Creation

Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#dataset-creation) in The Stack.

## Considerations for Using the Data

Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#considerations-for-using-the-data) in The Stack.

## Additional Information

Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#additional-information) in The Stack.


## Terms of Use for The Stack

Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) in The Stack.