phongdtd commited on
Commit
a71e4b6
·
1 Parent(s): d3e4084

custom_common_voice dataset.

Files changed (5)
  1. .gitattributes +1 -0
  2. README.md +367 -0
  3. common_voice.py +204 -0
  4. dataset_infos.json +3 -0
  5. main.py +16 -0
.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+dataset_infos.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,367 @@
---
pretty_name: Common Voice
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- th
- tr
- tt
- uk
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
licenses:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
  ab:
  - n<1K
  ar:
  - 10K<n<100K
  as:
  - n<1K
  br:
  - 10K<n<100K
  ca:
  - 100K<n<1M
  cnh:
  - 1K<n<10K
  cs:
  - 10K<n<100K
  cv:
  - 10K<n<100K
  cy:
  - 10K<n<100K
  de:
  - 100K<n<1M
  dv:
  - 1K<n<10K
  el:
  - 10K<n<100K
  en:
  - 100K<n<1M
  eo:
  - 10K<n<100K
  es:
  - 100K<n<1M
  et:
  - 10K<n<100K
  eu:
  - 10K<n<100K
  fa:
  - 10K<n<100K
  fi:
  - 1K<n<10K
  fr:
  - 100K<n<1M
  fy-NL:
  - 10K<n<100K
  ga-IE:
  - 1K<n<10K
  hi:
  - n<1K
  hsb:
  - 1K<n<10K
  hu:
  - 1K<n<10K
  ia:
  - 1K<n<10K
  id:
  - 10K<n<100K
  it:
  - 100K<n<1M
  ja:
  - 1K<n<10K
  ka:
  - 1K<n<10K
  kab:
  - 100K<n<1M
  ky:
  - 10K<n<100K
  lg:
  - 1K<n<10K
  lt:
  - 1K<n<10K
  lv:
  - 1K<n<10K
  mn:
  - 1K<n<10K
  mt:
  - 10K<n<100K
  nl:
  - 10K<n<100K
  or:
  - 1K<n<10K
  pa-IN:
  - 1K<n<10K
  pl:
  - 10K<n<100K
  pt:
  - 10K<n<100K
  rm-sursilv:
  - 1K<n<10K
  rm-vallader:
  - 1K<n<10K
  ro:
  - 1K<n<10K
  ru:
  - 10K<n<100K
  rw:
  - 100K<n<1M
  sah:
  - 1K<n<10K
  sl:
  - 1K<n<10K
  sv-SE:
  - 1K<n<10K
  ta:
  - 10K<n<100K
  th:
  - 10K<n<100K
  tr:
  - 1K<n<10K
  tt:
  - 10K<n<100K
  uk:
  - 10K<n<100K
  vi:
  - 1K<n<10K
  vot:
  - n<1K
  zh-CN:
  - 10K<n<100K
  zh-HK:
  - 10K<n<100K
  zh-TW:
  - 10K<n<100K
source_datasets:
- extended|common_voice
task_categories:
- speech-processing
task_ids:
- automatic-speech-recognition
paperswithcode_id: common-voice
---

# Dataset Card for common_voice

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

The Common Voice dataset consists of unique MP3 recordings and corresponding text files. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent, which can help improve the accuracy of speech recognition engines.

The dataset currently consists of 7,335 validated hours in 60 languages, but we're always adding more voices and languages. Take a look at our Languages page to request a language or start contributing.

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

English

## Dataset Structure

### Data Instances

A typical data point comprises the path to the audio file (called `path`) and its `sentence`. Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.

```
{'accent': 'netherlands', 'age': 'fourties', 'client_id': 'bbbcb732e0f422150c30ff3654bbab572e2a617da107bca22ff8b89ab2e4f124d03b6a92c48322862f60bd0179ae07baf0f9b4f9c4e11d581e0cec70f703ba54', 'down_votes': 0, 'gender': 'male', 'locale': 'nl', 'path': 'nl/clips/common_voice_nl_23522441.mp3', 'segment': "''", 'sentence': 'Ik vind dat een dubieuze procedure.', 'up_votes': 2, 'audio': {'path': 'nl/clips/common_voice_nl_23522441.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000}}
```
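Once loaded, such a data point behaves like a plain Python dict. A minimal sketch of field access, using the example above as a literal dict (the `audio` array is elided and the `client_id` is truncated for readability, so this dict is illustrative only):

```python
# Illustrative record mirroring the example data point above.
example = {
    "accent": "netherlands",
    "age": "fourties",
    "client_id": "bbbcb732e0f42215...",  # truncated for readability
    "down_votes": 0,
    "gender": "male",
    "locale": "nl",
    "path": "nl/clips/common_voice_nl_23522441.mp3",
    "segment": "''",
    "sentence": "Ik vind dat een dubieuze procedure.",
    "up_votes": 2,
}

# Fields are accessed by name, exactly as with a dataset row.
prompted_sentence = example["sentence"]
net_review_score = example["up_votes"] - example["down_votes"]
```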

### Data Fields

`client_id`: An id for the client (voice) that made the recording

`path`: The path to the audio file

`audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`) the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.

`sentence`: The sentence the user was prompted to speak

`up_votes`: How many upvotes the audio file has received from reviewers

`down_votes`: How many downvotes the audio file has received from reviewers

`age`: The age of the speaker

`gender`: The gender of the speaker

`accent`: The accent of the speaker

`locale`: The locale of the speaker

`segment`: Usually an empty field
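The row-before-column advice for the `audio` field can be illustrated with a toy stand-in for a lazily decoded column. This is a sketch of the access pattern only, not the `datasets` implementation; `ToyAudioDataset` and its decode counter are invented for illustration:

```python
class ToyAudioDataset:
    """Toy model of a dataset with a lazily decoded 'audio' column."""

    def __init__(self, paths):
        self.paths = paths
        self.decodes = 0  # counts how many files would be decoded

    def _decode(self, path):
        self.decodes += 1
        return {"path": path, "array": [0.0], "sampling_rate": 48000}

    def __getitem__(self, key):
        if isinstance(key, int):  # dataset[i] -> decode only that row
            return {"path": self.paths[key], "audio": self._decode(self.paths[key])}
        if key == "audio":        # dataset["audio"] -> decode every row
            return [self._decode(p) for p in self.paths]
        raise KeyError(key)


ds = ToyAudioDataset([f"clips/clip_{i}.mp3" for i in range(1000)])

ds[0]["audio"]      # row first: decodes exactly one file
one_row = ds.decodes

ds["audio"][0]      # column first: materializes all 1000 files before indexing
```

In this toy model the row-first access decodes a single file, while the column-first access decodes the entire column, which is why `dataset[0]["audio"]` is preferred.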

### Data Splits

The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.

The validated data has been reviewed and received enough upvotes to be considered high quality.

The invalidated data has been reviewed and received enough downvotes to be considered low quality.

The reported data has been reported by users, for various reasons.

The other data has not yet been reviewed.

The dev, test and train portions all consist of data that has been reviewed and deemed high quality, split into dev, test and train sets.
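As a rough sketch of how vote counts relate to these buckets: the exact Common Voice thresholds are not stated here, so the two-concurring-votes rule below is an assumption for illustration only, and `bucket` is a hypothetical helper:

```python
def bucket(up_votes, down_votes):
    # Illustrative heuristic only: assume a clip needs two concurring
    # reviewer votes to be validated or invalidated; anything else is
    # still awaiting review ("other").
    if up_votes >= 2 and up_votes > down_votes:
        return "validated"
    if down_votes >= 2 and down_votes > up_votes:
        return "invalidated"
    return "other"


assert bucket(2, 0) == "validated"
assert bucket(0, 2) == "invalidated"
assert bucket(1, 0) == "other"
```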

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree not to attempt to determine the identity of speakers in the Common Voice dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset consists of people who have donated their voice online. You agree not to attempt to determine the identity of speakers in the Common Voice dataset.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)

### Citation Information

```
@inproceedings{commonvoice:2020,
  author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
  title = {Common Voice: A Massively-Multilingual Speech Corpus},
  booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
  pages = {4211--4215},
  year = 2020
}
```

### Contributions

Thanks to [@BirgerMoell](https://github.com/BirgerMoell) for adding this dataset.
common_voice.py ADDED
@@ -0,0 +1,204 @@
# coding=utf-8
# Copyright 2021 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Common Voice Dataset"""


import datasets
from datasets.tasks import AutomaticSpeechRecognition


_DATA_URL = "https://drive.google.com/uc?export=download&id=13PIifohVjBB5ffSCm4eXyF8O9I30HOj2"

_DESCRIPTION = """\
Common Voice is Mozilla's initiative to help teach machines how real people speak.
The dataset currently consists of 7,335 validated hours of speech in 60 languages, but we’re always adding more voices and languages.
"""

_LANGUAGES = {
    "vi": {
        "Language": "Vietnamese",
        "Date": "2020-12-11",
        "Size": "50 MB",
        "Version": "vi_1h_2020-12-11",
        "Validated_Hr_Total": 0.74,
        "Overall_Hr_Total": 1,
        "Number_Of_Voice": 62,
    },
}


class CustomCommonVoiceConfig(datasets.BuilderConfig):
    """BuilderConfig for CommonVoice."""

    def __init__(self, name, sub_version, **kwargs):
        """
        Args:
          name: `string`, name of the configuration (the language id)
          sub_version: `string`, version of the language snapshot
          **kwargs: keyword arguments forwarded to super; `language`, `date`,
            `size`, `val_hrs`, `total_hrs` and `num_of_voice` are popped and
            stored as metadata on the config.
        """
        self.sub_version = sub_version
        self.language = kwargs.pop("language", None)
        self.date_of_snapshot = kwargs.pop("date", None)
        self.size = kwargs.pop("size", None)
        self.validated_hr_total = kwargs.pop("val_hrs", None)
        self.total_hr_total = kwargs.pop("total_hrs", None)
        self.num_of_voice = kwargs.pop("num_of_voice", None)
        description = (
            f"Common Voice speech-to-text dataset in {self.language}, version "
            f"{self.sub_version} of {self.date_of_snapshot}. "
            f"The dataset comprises {self.validated_hr_total} hours of validated transcribed speech from "
            f"{self.num_of_voice} speakers. The dataset has a size of {self.size}."
        )
        super(CustomCommonVoiceConfig, self).__init__(
            name=name, version=datasets.Version("1.0", ""), description=description, **kwargs
        )


class CustomCommonVoice(datasets.GeneratorBasedBuilder):

    DEFAULT_WRITER_BATCH_SIZE = 1000
    BUILDER_CONFIGS = [
        CustomCommonVoiceConfig(
            name=lang_id,
            language=_LANGUAGES[lang_id]["Language"],
            sub_version=_LANGUAGES[lang_id]["Version"],
            date=_LANGUAGES[lang_id]["Date"],
            size=_LANGUAGES[lang_id]["Size"],
            val_hrs=_LANGUAGES[lang_id]["Validated_Hr_Total"],
            total_hrs=_LANGUAGES[lang_id]["Overall_Hr_Total"],
            num_of_voice=_LANGUAGES[lang_id]["Number_Of_Voice"],
        )
        for lang_id in _LANGUAGES.keys()
    ]

    def _info(self):
        features = datasets.Features(
            {
                "client_id": datasets.Value("string"),
                "path": datasets.Value("string"),
                "audio": datasets.Audio(sampling_rate=48_000),
                "sentence": datasets.Value("string"),
                "up_votes": datasets.Value("int64"),
                "down_votes": datasets.Value("int64"),
                "age": datasets.Value("string"),
                "gender": datasets.Value("string"),
                "accent": datasets.Value("string"),
                "locale": datasets.Value("string"),
                "segment": datasets.Value("string"),
            }
        )

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            task_templates=[
                AutomaticSpeechRecognition(audio_file_path_column="path", transcription_column="sentence")
            ],
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        archive = dl_manager.download(_DATA_URL)
        path_to_data = "data_1"  # root folder inside the downloaded archive
        path_to_clips = "/".join([path_to_data, "audio"])
        path_to_script = "/".join([path_to_data, "script"])

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "files": dl_manager.iter_archive(archive),
                    "filepath": "/".join([path_to_data, "train_custom_common_voice.csv"]),
                    "path_to_clips": path_to_clips,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "files": dl_manager.iter_archive(archive),
                    "filepath": "/".join([path_to_data, "test_custom_common_voice.csv"]),
                    "path_to_clips": path_to_clips,
                },
            ),
            # datasets.SplitGenerator(
            #     name=datasets.Split.VALIDATION,
            #     gen_kwargs={
            #         "files": dl_manager.iter_archive(archive),
            #         "filepath": "/".join([path_to_data, "dev.tsv"]),
            #         "path_to_clips": path_to_clips,
            #     },
            # ),
            # datasets.SplitGenerator(
            #     name="other",
            #     gen_kwargs={
            #         "files": dl_manager.iter_archive(archive),
            #         "filepath": "/".join([path_to_data, "other.tsv"]),
            #         "path_to_clips": path_to_clips,
            #     },
            # ),
            # datasets.SplitGenerator(
            #     name="invalidated",
            #     gen_kwargs={
            #         "files": dl_manager.iter_archive(archive),
            #         "filepath": "/".join([path_to_data, "invalidated.tsv"]),
            #         "path_to_clips": path_to_clips,
            #     },
            # ),
        ]

    def _generate_examples(self, files, filepath, path_to_clips):
        """Yields examples."""
        data_fields = list(self._info().features.keys())

        # audio is not a header of the csv files
        data_fields.remove("audio")
        path_idx = data_fields.index("path")

        all_field_values = {}
        metadata_found = False
        for path, f in files:
            if path == filepath:
                metadata_found = True
                lines = f.readlines()
                headline = lines[0].decode("utf-8")

                column_names = headline.strip().split("\t")
                assert (
                    column_names == data_fields
                ), f"The file should have {data_fields} as column names, but has {column_names}"
                for line in lines[1:]:
                    field_values = line.decode("utf-8").strip().split("\t")
                    # set full path for mp3 audio file
                    audio_path = "/".join([path_to_clips, field_values[path_idx]])
                    all_field_values[audio_path] = field_values
            elif path.startswith(path_to_clips):
                assert metadata_found, "Found audio clips before the metadata TSV file."
                if not all_field_values:
                    break
                if path in all_field_values:
                    field_values = all_field_values[path]

                    # if data is incomplete, fill with empty values
                    if len(field_values) < len(data_fields):
                        field_values += (len(data_fields) - len(field_values)) * ["''"]

                    result = {key: value for key, value in zip(data_fields, field_values)}

                    # set audio feature
                    result["audio"] = {"path": path, "bytes": f.read()}

                    yield path, result
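The metadata-then-clips logic in `_generate_examples` can be sketched in isolation: parse the tab-separated header, check it against the expected columns, then key each row by its full clip path. This is a stdlib-only sketch of the parsing step, not the full generator; `DATA_FIELDS` is hardcoded here (the script derives it from `self._info().features`) and `parse_metadata` is a hypothetical helper:

```python
DATA_FIELDS = ["client_id", "path", "sentence", "up_votes", "down_votes",
               "age", "gender", "accent", "locale", "segment"]


def parse_metadata(raw_bytes, path_to_clips):
    """Parse a tab-separated metadata file into {full_clip_path: field_values}."""
    lines = raw_bytes.splitlines()
    column_names = lines[0].decode("utf-8").strip().split("\t")
    assert column_names == DATA_FIELDS, f"unexpected columns: {column_names}"
    path_idx = DATA_FIELDS.index("path")
    table = {}
    for line in lines[1:]:
        field_values = line.decode("utf-8").strip().split("\t")
        # key each row by the full path of its mp3 clip, as the generator does
        table["/".join([path_to_clips, field_values[path_idx]])] = field_values
    return table


# Tiny in-memory metadata file: a header line plus one data row.
raw = b"\t".join(f.encode() for f in DATA_FIELDS) + b"\n" + \
      b"id1\tclip_1.mp3\tXin chao\t2\t0\t\t\t\tvi\t''"
table = parse_metadata(raw, "data_1/audio")
```

Keying rows by full clip path is what lets the second branch of the generator match each audio member of the tar archive back to its metadata in a single pass.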
dataset_infos.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b538721de160b475f8b25377e577d8e8c3dd490c59fdb6839275afc8d5e4c61
size 2787
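`dataset_infos.json` is stored as a Git LFS pointer rather than as the JSON itself: three `key value` lines naming the spec version, the content hash, and the byte size of the real file. A minimal sketch of parsing such a pointer with the stdlib (`parse_lfs_pointer` is a hypothetical helper):

```python
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:3b538721de160b475f8b25377e577d8e8c3dd490c59fdb6839275afc8d5e4c61
size 2787
"""


def parse_lfs_pointer(text):
    """Split each 'key value' line of a git-lfs pointer file into a dict."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())


info = parse_lfs_pointer(POINTER)
# info["size"] is the byte size of the real file stored in LFS
```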
main.py ADDED
@@ -0,0 +1,16 @@
# This is a sample Python script.

# Press Shift+F10 to execute it or replace it with your code.
# Press Double Shift to search everywhere for classes, files, tool windows, actions, and settings.


def print_hi(name):
    # Use a breakpoint in the code line below to debug your script.
    print(f'Hi, {name}')  # Press Ctrl+F8 to toggle the breakpoint.


# Press the green button in the gutter to run the script.
if __name__ == '__main__':
    print_hi('PyCharm')

# See PyCharm help at https://www.jetbrains.com/help/pycharm/