parquet-converter committed
Commit e0548b4
1 parent: 9f5b323

Update parquet files

Files changed (5)
  1. .gitattributes +0 -17
  2. README.md +0 -188
  3. datasets.json +0 -3
  4. default/nirvana-train.parquet +0 -0
  5. nirvana.py +0 -107
.gitattributes DELETED
@@ -1,17 +0,0 @@
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tar.gz filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.json filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,188 +0,0 @@
- ---
- language:
- - en
- tags:
- - huggingartists
- - lyrics
- ---
-
- # Dataset Card for "huggingartists/nirvana"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [How to use](#how-to-use)
- - [Dataset Structure](#dataset-structure)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of the generated dataset:** 0.336531 MB
-
-
- <div class="inline-flex flex-col" style="line-height: 1.5;">
-     <div class="flex">
-         <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/4c1373962cfc3a668a3e30da9a76a34c.640x640x1.jpg')">
-         </div>
-     </div>
-     <a href="https://huggingface.co/huggingartists/nirvana">
-         <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
-     </a>
-     <div style="text-align: center; font-size: 16px; font-weight: 800">Nirvana</div>
-     <a href="https://genius.com/artists/nirvana">
-         <div style="text-align: center; font-size: 14px;">@nirvana</div>
-     </a>
- </div>
-
- ### Dataset Summary
-
- A lyrics dataset parsed from Genius, designed for generating lyrics with HuggingArtists.
- The model is available [here](https://huggingface.co/huggingartists/nirvana).
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- en
-
- ## How to use
-
- To load this dataset directly with the datasets library:
-
- ```python
- from datasets import load_dataset
-
- dataset = load_dataset("huggingartists/nirvana")
- ```
-
- ## Dataset Structure
-
- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- - `text`: a `string` feature.
-
-
- ### Data Splits
-
- | train | validation | test |
- |------:|-----------:|-----:|
- | TRAIN_0.336531 | - | - |
-
- The 'train' split can easily be divided into 'train', 'validation' and 'test' with a few lines of code:
-
- ```python
- from datasets import load_dataset, Dataset, DatasetDict
- import numpy as np
-
- datasets = load_dataset("huggingartists/nirvana")
-
- train_percentage = 0.9
- validation_percentage = 0.07
- test_percentage = 0.03
-
- train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
-
- datasets = DatasetDict(
-     {
-         'train': Dataset.from_dict({'text': list(train)}),
-         'validation': Dataset.from_dict({'text': list(validation)}),
-         'test': Dataset.from_dict({'text': list(test)})
-     }
- )
- ```
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @InProceedings{huggingartists,
-     author = {Aleksey Korshuk},
-     year = {2021}
- }
- ```
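
The `np.split` recipe in the deleted card slices a plain list of strings. An alternative that stays inside the `datasets` API is the library's built-in `train_test_split` method, applied twice. This is a minimal sketch reproducing the card's 90/7/3 proportions; the seed value is an arbitrary choice, not part of the original example:

```python
from datasets import DatasetDict, load_dataset

ds = load_dataset("huggingartists/nirvana")["train"]

# Carve off 10% as a holdout, then split the holdout 70/30 so that
# validation is 7% and test is 3% of the full dataset.
holdout = ds.train_test_split(test_size=0.1, seed=42)
val_test = holdout["test"].train_test_split(test_size=0.3, seed=42)

datasets = DatasetDict(
    {
        "train": holdout["train"],
        "validation": val_test["train"],
        "test": val_test["test"],
    }
)
```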
datasets.json DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:36df4cc030f1e60a8122940980cf202dc5ebd7765eca40297d7c5e846dfc8b27
- size 309320
default/nirvana-train.parquet ADDED
Binary file (114 kB).
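
After this conversion, consumers no longer need the loading script deleted below: the Parquet file can be read directly. A minimal sketch, assuming only that the file sits at the path shown above in this dataset repo (`hf_hub_download` comes from `huggingface_hub`; pandas with pyarrow is one possible reader):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download the auto-converted Parquet file from the dataset repo.
path = hf_hub_download(
    repo_id="huggingartists/nirvana",
    filename="default/nirvana-train.parquet",
    repo_type="dataset",
)

df = pd.read_parquet(path)  # a single "text" column of lyrics
print(df.shape)
```

`load_dataset("huggingartists/nirvana")` keeps working as well; with the script gone, the library reads the Parquet file instead of executing `nirvana.py`.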
 
nirvana.py DELETED
@@ -1,107 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """Lyrics dataset parsed from Genius"""
-
-
- import csv
- import json
- import os
- import gzip
-
- import datasets
-
-
- _CITATION = """\
- @InProceedings{huggingartists:dataset,
-     title = {Lyrics dataset},
-     author = {Aleksey Korshuk},
-     year = {2021}
- }
- """
-
-
- _DESCRIPTION = """\
- This dataset is designed to generate lyrics with HuggingArtists.
- """
-
- # Add a link to an official homepage for the dataset here
- _HOMEPAGE = "https://github.com/AlekseyKorshuk/huggingartists"
-
- # Add the licence for the dataset here if you can find it
- _LICENSE = "All rights belong to copyright holders"
-
- _URL = "https://huggingface.co/datasets/huggingartists/nirvana/resolve/main/datasets.json"
-
- # Name of the dataset
- class LyricsDataset(datasets.GeneratorBasedBuilder):
-     """Lyrics dataset"""
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         # This method specifies the datasets.DatasetInfo object, which contains information and typings for the dataset
-         features = datasets.Features(
-             {
-                 "text": datasets.Value("string"),
-             }
-         )
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features,
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage=_HOMEPAGE,
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # This method downloads/extracts the data and defines the splits, depending on the configuration.
-         # If several configurations are possible (listed in BUILDER_CONFIGS), the one selected by the user is in self.config.name.
-
-         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs.
-         # It accepts any type or nested list/dict and returns the same structure with each URL replaced by a path to a local file.
-         # By default, archives are extracted and a path to the cached extraction folder is returned instead of the archive.
-
-         data_dir = dl_manager.download_and_extract(_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": data_dir,
-                     "split": "train",
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath, split):
-         """Yields examples as (key, example) tuples."""
-         # This method handles input defined in _split_generators and yields (key, example) tuples from the dataset.
-         with open(filepath, encoding="utf-8") as f:
-             data = json.load(f)
-             for id, pred in enumerate(data[split]):
-                 yield id, {"text": pred}
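
The deleted script also pins down the shape of the removed `datasets.json`: `_generate_examples` iterates over `data[split]`, so the file must have been a JSON object keyed by split name, with each value a list of raw lyric strings. A hypothetical round-trip illustrating that inferred layout (the strings here are placeholders, not the real data):

```python
import json

# Hypothetical stand-in for datasets.json: split name -> list of lyrics.
sample = {"train": ["First song lyrics...", "Second song lyrics..."]}

with open("datasets.json", "w", encoding="utf-8") as f:
    json.dump(sample, f)

# Mirrors what _generate_examples did with the downloaded file.
with open("datasets.json", encoding="utf-8") as f:
    data = json.load(f)
for key, text in enumerate(data["train"]):
    print(key, text)
```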