parquet-converter committed on
Commit 40aa09b
1 Parent(s): 8757c84

Update parquet files

.gitattributes DELETED
@@ -1,28 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- *.json filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,204 +0,0 @@
- ---
- language:
- - en
- tags:
- - huggingartists
- - lyrics
- ---
-
- # Dataset Card for "huggingartists/bob-dylan"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [How to use](#how-to-use)
- - [Dataset Structure](#dataset-structure)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
- - [About](#about)
-
- ## Dataset Description
-
- - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of the generated dataset:** 2.91167 MB
-
-
- <div class="inline-flex flex-col" style="line-height: 1.5;">
-     <div class="flex">
-         <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/22306423b6ad8777d1ed5b33ad8b0d0b.1000x1000x1.jpg&#39;)">
-         </div>
-     </div>
-     <a href="https://huggingface.co/huggingartists/bob-dylan">
-         <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
-     </a>
-     <div style="text-align: center; font-size: 16px; font-weight: 800">Bob Dylan</div>
-     <a href="https://genius.com/artists/bob-dylan">
-         <div style="text-align: center; font-size: 14px;">@bob-dylan</div>
-     </a>
- </div>
-
- ### Dataset Summary
-
- The lyrics dataset was parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
- The model is available [here](https://huggingface.co/huggingartists/bob-dylan).
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- en
-
- ## How to use
-
- Load this dataset directly with the datasets library:
-
- ```python
- from datasets import load_dataset
-
- dataset = load_dataset("huggingartists/bob-dylan")
- ```
-
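Once loaded, the splits behave like ordinary `datasets` objects; a minimal sketch of inspecting the result (record counts and column access as provided by the library):

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/bob-dylan")

# The repository exposes a single "train" split with one string column, "text".
print(dataset["train"].num_rows)          # number of songs (2241 per the card below)
print(dataset["train"][0]["text"][:100])  # first 100 characters of the first song
```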
- ## Dataset Structure
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- - `text`: a `string` feature.
-
-
- ### Data Splits
-
- | train | validation | test |
- |------:|-----------:|-----:|
- |  2241 |          - |    - |
-
- The 'train' split can easily be divided into 'train', 'validation' and 'test' with a few lines of code:
-
- ```python
- from datasets import load_dataset, Dataset, DatasetDict
- import numpy as np
-
- datasets = load_dataset("huggingartists/bob-dylan")
-
- train_percentage = 0.9
- validation_percentage = 0.07
- test_percentage = 0.03
-
- train, validation, test = np.split(
-     datasets['train']['text'],
-     [
-         int(len(datasets['train']['text']) * train_percentage),
-         int(len(datasets['train']['text']) * (train_percentage + validation_percentage)),
-     ],
- )
-
- datasets = DatasetDict(
-     {
-         'train': Dataset.from_dict({'text': list(train)}),
-         'validation': Dataset.from_dict({'text': list(validation)}),
-         'test': Dataset.from_dict({'text': list(test)})
-     }
- )
- ```
-
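The same three-way split can also be expressed with the library's built-in `train_test_split`, which avoids the manual index arithmetic; a sketch under the same 90/7/3 proportions (the `seed` value is arbitrary):

```python
from datasets import load_dataset, DatasetDict

ds = load_dataset("huggingartists/bob-dylan")["train"]

# Carve off 10% as a holdout, then split the holdout 70/30 so the final
# proportions are 90/7/3 of the original data.
holdout = ds.train_test_split(test_size=0.10, seed=42)
val_test = holdout["test"].train_test_split(test_size=0.30, seed=42)

splits = DatasetDict(
    {
        "train": holdout["train"],
        "validation": val_test["train"],
        "test": val_test["test"],
    }
)
```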
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @InProceedings{huggingartists,
-     author = {Aleksey Korshuk},
-     year = {2021}
- }
- ```
-
-
- ## About
-
- *Built by Aleksey Korshuk*
-
- [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk)
-
- [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
-
- [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
-
- For more details, visit the project repository.
-
- [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
 
bob-dylan.py DELETED
@@ -1,107 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """Lyrics dataset parsed from Genius"""
-
-
- import json
-
- import datasets
-
-
- _CITATION = """\
- @InProceedings{huggingartists:dataset,
-     title = {Lyrics dataset},
-     author = {Aleksey Korshuk},
-     year = {2021}
- }
- """
-
-
- _DESCRIPTION = """\
- This dataset is designed to generate lyrics with HuggingArtists.
- """
-
- # Official homepage for the dataset
- _HOMEPAGE = "https://github.com/AlekseyKorshuk/huggingartists"
-
- # License for the dataset
- _LICENSE = "All rights belong to copyright holders"
-
- _URL = "https://huggingface.co/datasets/huggingartists/bob-dylan/resolve/main/datasets.json"
-
-
- class LyricsDataset(datasets.GeneratorBasedBuilder):
-     """Lyrics dataset"""
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         # This method specifies the datasets.DatasetInfo object, which contains information and typings for the dataset
-         features = datasets.Features(
-             {
-                 "text": datasets.Value("string"),
-             }
-         )
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features,
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage=_HOMEPAGE,
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # This method downloads/extracts the data and defines the splits.
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs; it accepts any type or nested list/dict and
-         # returns the same structure with each URL replaced by a local file path.
-         data_dir = dl_manager.download_and_extract(_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": data_dir,
-                     "split": "train",
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath, split):
-         """Yields examples as (key, example) tuples."""
-         # Reads the JSON downloaded in _split_generators and yields one example per song.
-         with open(filepath, encoding="utf-8") as f:
-             data = json.load(f)
-         for idx, pred in enumerate(data[split]):
-             yield idx, {"text": pred}
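The builder above simply maps a JSON file of pre-scraped lyrics onto the `datasets` API. A minimal standalone sketch of the same logic, assuming `datasets.json` has the structure the script expects, i.e. `{"train": ["lyrics of song 1", "lyrics of song 2", ...]}` (note that this commit renames that file to Parquet, so the `_URL` above may no longer resolve):

```python
import json

# Hypothetical local copy of the JSON the script consumed.
with open("datasets.json", encoding="utf-8") as f:
    data = json.load(f)

# Equivalent of _generate_examples for the train split: one record per song,
# with a single string column named "text".
for idx, text in enumerate(data["train"][:3]):
    print(idx, {"text": text[:60]})
```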
 
datasets.json → default/bob-dylan-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:df84541347f8396a218b99987b538bedb40bc8bdd7041e94205b45f9325df8f6
- size 2883914
+ oid sha256:07e4993464940cdd89bc55168a2432bbf6677266d3c9c1616d23b98caa93ba34
+ size 1483495
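Both sides of the rename are Git LFS pointer files (spec line, SHA-256 of the content, byte size); the actual payload is the converted Parquet file, which shrank from about 2.9 MB of JSON to about 1.5 MB. With the loading script removed, the data can be read straight from Parquet; a minimal sketch, assuming the file is public at the path introduced by this rename:

```python
from datasets import load_dataset

# "default/bob-dylan-train.parquet" is the path created by the rename above.
ds = load_dataset(
    "parquet",
    data_files="https://huggingface.co/datasets/huggingartists/bob-dylan/resolve/main/default/bob-dylan-train.parquet",
)
print(ds["train"].num_rows)
```

Plain `load_dataset("huggingartists/bob-dylan")` should also keep working, since the Hub now resolves the repository to its Parquet files directly.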