parquet-converter committed on
Commit c606423
1 Parent(s): 0ce1cc2

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,231 +0,0 @@
- ---
- pretty_name: Arabic Speech Corpus
- annotations_creators:
- - expert-generated
- language_creators:
- - crowdsourced
- language:
- - ar
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- paperswithcode_id: arabic-speech-corpus
- size_categories:
- - 1K<n<10K
- source_datasets:
- - original
- task_categories:
- - automatic-speech-recognition
- task_ids: []
- train-eval-index:
- - config: clean
-   task: automatic-speech-recognition
-   task_id: speech_recognition
-   splits:
-     train_split: train
-     eval_split: test
-   col_mapping:
-     file: path
-     text: text
-   metrics:
-   - type: wer
-     name: WER
-   - type: cer
-     name: CER
- dataset_info:
-   features:
-   - name: file
-     dtype: string
-   - name: text
-     dtype: string
-   - name: audio
-     dtype:
-       audio:
-         sampling_rate: 48000
-   - name: phonetic
-     dtype: string
-   - name: orthographic
-     dtype: string
-   config_name: clean
-   splits:
-   - name: train
-     num_bytes: 1002365
-     num_examples: 1813
-   - name: test
-     num_bytes: 65784
-     num_examples: 100
-   download_size: 1192302846
-   dataset_size: 1068149
- ---
-
- # Dataset Card for Arabic Speech Corpus
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [Arabic Speech Corpus](http://en.arabicspeechcorpus.com/)
- - **Repository:** [Needs More Information]
- - **Paper:** [Modern standard Arabic phonetics for speech synthesis](http://en.arabicspeechcorpus.com/Nawar%20Halabi%20PhD%20Thesis%20Revised.pdf)
- - **Leaderboard:** [Needs More Information]
- - **Point of Contact:** [Nawar Halabi](mailto:nawar.halabi@gmail.com)
-
- ### Dataset Summary
-
- This speech corpus was developed as part of PhD work carried out by Nawar Halabi at the University of Southampton. The corpus was recorded in south Levantine Arabic (Damascene accent) in a professional studio. Speech synthesized from this corpus has a high-quality, natural voice.
-
- ### Supported Tasks and Leaderboards
-
- [Needs More Information]
-
- ### Languages
-
- The audio is in Arabic.
-
- ## Dataset Structure
-
- ### Data Instances
-
- A typical data point comprises the path to the audio file, usually called `file`, and its transcription, called `text`.
- An example from the dataset is:
- ```
- {
- 'file': '/Users/username/.cache/huggingface/datasets/downloads/extracted/baebe85e2cb67579f6f88e7117a87888c1ace390f4f14cb6c3e585c517ad9db0/arabic-speech-corpus/wav/ARA NORM 0002.wav',
- 'audio': {'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/baebe85e2cb67579f6f88e7117a87888c1ace390f4f14cb6c3e585c517ad9db0/arabic-speech-corpus/wav/ARA NORM 0002.wav',
- 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
- 'sampling_rate': 48000},
- 'orthographic': 'waraj~aHa Alt~aqoriyru Al~a*iy >aEad~ahu maEohadu >aboHaA^i haDabapi Alt~ibiti fiy Alo>akaAdiymiy~api AlS~iyniy~api liloEuluwmi - >ano tasotamir~a darajaAtu AloHaraArapi wamusotawayaAtu Alr~uTuwbapi fiy Alo<irotifaAEi TawaAla ha*aA Aloqarono',
- 'phonetic': "sil w a r a' jj A H a tt A q r ii0' r u0 ll a * i0 < a E a' dd a h u0 m a' E h a d u0 < a b H aa' ^ i0 h A D A' b a t i0 tt i1' b t i0 f i0 l < a k aa d ii0 m ii0' y a t i0 SS II0 n ii0' y a t i0 l u0 l E u0 l uu0' m i0 sil < a' n t a s t a m i0' rr a d a r a j aa' t u0 l H a r aa' r a t i0 w a m u0 s t a w a y aa' t u0 rr U0 T UU0' b a t i0 f i0 l Ah i0 r t i0 f aa' E i0 T A' w A l a h aa' * a l q A' r n sil",
- 'text': '\ufeffwaraj~aHa Alt~aqoriyru Al~aTHiy >aEad~ahu maEohadu >aboHaA^i haDabapi Alt~ibiti fiy Alo>akaAdiymiy~api AlS~iyniy~api liloEuluwmi - >ano tasotamir~a darajaAtu AloHaraArapi wamusotawayaAtu Alr~uTuwbapi fiy Alo<irotifaAEi TawaAla haTHaA Aloqarono'
- }
- ```
-
- ### Data Fields
-
- - file: A path to the downloaded audio file in .wav format.
-
- - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`) the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the sketch after this list).
-
- - text: The transcription of the audio file.
-
- - phonetic: The transcription of the audio file in phonetic format.
-
- - orthographic: The transcription of the audio file in orthographic format.
-
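- As a quick orientation, here is a minimal sketch of the access pattern described above for the `audio` field. It assumes the `datasets` library and that the corpus is loaded under its Hub name `arabic_speech_corpus` with the `clean` configuration:
-
- ```python
- from datasets import load_dataset
-
- # Load the "clean" configuration of the corpus.
- ds = load_dataset("arabic_speech_corpus", "clean", split="train")
-
- # Preferred: index the row first, then the "audio" column.
- # Only this one file is decoded and resampled to the feature's sampling rate.
- sample = ds[0]["audio"]
- print(sample["sampling_rate"], sample["array"].shape)
-
- # Avoid ds["audio"][0]: it decodes every audio file in the split first.
- ```
-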
- ### Data Splits
-
- |          | Train | Test |
- | -------- | ----- | ---- |
- | Examples | 1813  | 100  |
-
-
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- The corpus was created with speech synthesis as the main application in mind, although it has also been used as part of a larger corpus for speech recognition and speech denoising. Here are some explanations of why the corpus was built the way it is:
-
- * Corpus size: Budget limitations and the research goal resulted in the decision not to gather more data. The goal was to show that high-quality speech synthesis is possible with smaller corpora.
- * Phonetic diversity: As with many corpora, phonetic diversity was achieved using a greedy method: start with a core set of utterances and iteratively add the utterances that contribute the most additional phonetic diversity. The measure of diversity is based on diphone frequency.
- * Content: Fully diacritised news, sports, and economics content was gathered from the internet. The choice of utterances was random to avoid copyright issues. Because of the corpus size, achieving diversity of content type was difficult and was not the goal.
- * Nonsense utterances: The corpus contains a large set of utterances that were generated computationally to compensate for the diphones missing from the main part of the corpus. The usefulness of nonsense utterances was not proven in the PhD thesis.
- * The talent: The voice talent had a Syrian dialect from Damascus and spoke in formal Arabic.
-
- Please refer to the [PhD thesis](#citation-information) for more detailed information.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- Fully diacritised news, sports, and economics content was gathered from the internet. The choice of utterances was random to avoid copyright issues. Because of the corpus size, achieving diversity of content type was difficult and was not the goal. We were restricted to content which was fully diacritised to make the annotation process easier.
-
- As with many corpora, phonetic diversity was achieved using a greedy method: start with a core set of utterances and iteratively add the utterances that contribute the most additional phonetic diversity, where diversity is measured from diphone frequency (see the sketch below).
-
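- As an illustration only, a minimal sketch of the kind of greedy, diphone-driven selection described above (the data layout and function names are assumed here, not taken from the thesis):
-
- ```python
- def diphones(phonemes):
-     """Adjacent phoneme pairs of one utterance."""
-     return set(zip(phonemes, phonemes[1:]))
-
- def greedy_select(candidates, budget):
-     """candidates maps utterance id -> phoneme list; pick up to `budget`
-     utterances, each time taking the one that adds the most uncovered diphones."""
-     covered, chosen = set(), []
-     remaining = dict(candidates)
-     while remaining and len(chosen) < budget:
-         best = max(remaining, key=lambda u: len(diphones(remaining[u]) - covered))
-         gain = diphones(remaining.pop(best)) - covered
-         if not gain:
-             break  # no remaining utterance adds new diphones
-         chosen.append(best)
-         covered |= gain
-     return chosen
- ```
-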
- Please refer to the [PhD thesis](#citation-information).
-
- #### Who are the source language producers?
-
- Please refer to the [PhD thesis](#citation-information).
-
- ### Annotations
-
- #### Annotation process
-
- Three annotators aligned audio with phonemes with the help of HTK forced alignment. They also worked on overlapping parts to assess annotator agreement and the quality of the annotations. The entire corpus was checked by human annotators.
-
- Please refer to the [PhD thesis](#citation-information).
-
- #### Who are the annotators?
-
- Nawar Halabi and two anonymous Arabic language teachers.
-
- ### Personal and Sensitive Information
-
- The dataset consists of recordings from people who have donated their voice online. You agree not to attempt to determine the identity of speakers in this dataset. The voice talent agreed in writing for their voice to be used in speech technologies as long as they stay anonymous.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [Needs More Information]
-
- ## Additional Information
-
- ### Dataset Curators
-
- The corpus was recorded in south Levantine Arabic (Damascene accent) in a professional studio by Nawar Halabi.
-
- ### Licensing Information
-
- [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
-
- ### Citation Information
-
- ```
- @phdthesis{halabi2016modern,
-   title={Modern standard Arabic phonetics for speech synthesis},
-   author={Halabi, Nawar},
-   year={2016},
-   school={University of Southampton}
- }
- ```
-
- ### Contributions
-
- This dataset was created by:
- * Nawar Halabi [@nawarhalabi](https://github.com/nawarhalabi), main creator and annotator.
- * Two anonymous Arabic language teachers as annotators.
- * One anonymous voice talent.
- * Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
 
arabic_speech_corpus.py DELETED
@@ -1,145 +0,0 @@
- # coding=utf-8
- # Copyright 2021 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """Arabic Speech Corpus"""
-
-
- import os
-
- import datasets
- from datasets.tasks import AutomaticSpeechRecognition
-
-
- _CITATION = """\
- @phdthesis{halabi2016modern,
-   title={Modern standard Arabic phonetics for speech synthesis},
-   author={Halabi, Nawar},
-   year={2016},
-   school={University of Southampton}
- }
- """
-
- _DESCRIPTION = """\
- This speech corpus was developed as part of PhD work carried out by Nawar Halabi at the University of Southampton.
- The corpus was recorded in south Levantine Arabic
- (Damascene accent) in a professional studio. Speech synthesized from this corpus has a high-quality, natural voice.
- Note that in order to limit the required storage for preparing this dataset, the audio
- is stored in the .wav format and is not converted to a float32 array. To convert the audio
- file to a float32 array, please make use of the `.map()` function as follows:
-
-
- ```python
- import soundfile as sf
-
- def map_to_array(batch):
-     speech_array, _ = sf.read(batch["file"])
-     batch["speech"] = speech_array
-     return batch
-
- dataset = dataset.map(map_to_array, remove_columns=["file"])
- ```
- """
-
- _URL = "http://en.arabicspeechcorpus.com/arabic-speech-corpus.zip"
-
-
- class ArabicSpeechCorpusConfig(datasets.BuilderConfig):
-     """BuilderConfig for ArabicSpeechCorpus."""
-
-     def __init__(self, **kwargs):
-         """
-         Args:
-           data_dir: `string`, the path to the folder containing the files in the
-             downloaded .zip
-           citation: `string`, citation for the data set
-           url: `string`, url for information about the data set
-           **kwargs: keyword arguments forwarded to super.
-         """
-         super(ArabicSpeechCorpusConfig, self).__init__(version=datasets.Version("2.1.0", ""), **kwargs)
-
-
- class ArabicSpeechCorpus(datasets.GeneratorBasedBuilder):
-     """ArabicSpeechCorpus dataset."""
-
-     BUILDER_CONFIGS = [
-         ArabicSpeechCorpusConfig(name="clean", description="'Clean' speech."),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "file": datasets.Value("string"),
-                     "text": datasets.Value("string"),
-                     "audio": datasets.Audio(sampling_rate=48_000),
-                     "phonetic": datasets.Value("string"),
-                     "orthographic": datasets.Value("string"),
-                 }
-             ),
-             supervised_keys=("file", "text"),
-             homepage=_URL,
-             citation=_CITATION,
-             task_templates=[AutomaticSpeechRecognition(audio_column="audio", transcription_column="text")],
-         )
-
-     def _split_generators(self, dl_manager):
-         archive_path = dl_manager.download_and_extract(_URL)
-         archive_path = os.path.join(archive_path, "arabic-speech-corpus")
-         return [
-             datasets.SplitGenerator(name="train", gen_kwargs={"archive_path": archive_path}),
-             datasets.SplitGenerator(name="test", gen_kwargs={"archive_path": os.path.join(archive_path, "test set")}),
-         ]
-
-     def _generate_examples(self, archive_path):
-         """Generate examples from an Arabic Speech Corpus archive_path."""
-         lab_dir = os.path.join(archive_path, "lab")
-         wav_dir = os.path.join(archive_path, "wav")
-         if "test set" in archive_path:
-             phonetic_path = os.path.join(archive_path, "phonetic-transcript.txt")
-         else:
-             # The train archive names this file "phonetic-transcipt.txt" (sic).
-             phonetic_path = os.path.join(archive_path, "phonetic-transcipt.txt")
-
-         orthographic_path = os.path.join(archive_path, "orthographic-transcript.txt")
-
-         phonetics = {}
-         orthographics = {}
-
-         with open(phonetic_path, "r", encoding="utf-8") as f:
-             for line in f:
-                 # Each line holds two quoted fields: "<wav name>" "<transcript>".
-                 wav_file, phonetic = line.split('"')[1::2]
-                 phonetics[wav_file] = phonetic
-
-         with open(orthographic_path, "r", encoding="utf-8") as f:
-             for line in f:
-                 wav_file, orthographic = line.split('"')[1::2]
-                 orthographics[wav_file] = orthographic
-
-         for _id, lab_name in enumerate(sorted(os.listdir(lab_dir))):
-             lab_path = os.path.join(lab_dir, lab_name)
-             lab_text = open(lab_path, "r", encoding="utf-8").read()
-
-             wav_name = lab_name[:-4] + ".wav"
-             wav_path = os.path.join(wav_dir, wav_name)
-
-             example = {
-                 "file": wav_path,
-                 "audio": wav_path,
-                 "text": lab_text,
-                 "phonetic": phonetics[wav_name],
-                 "orthographic": orthographics[wav_name],
-             }
-             yield str(_id), example
 
clean/arabic_speech_corpus-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a98b3ef01eba685d92b40bdf8cfcd13a2022509eb8e56be53c2760f3a863b13
+ size 90899032
clean/arabic_speech_corpus-train-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e1ca67abf844f17e0f9f2c4f04e7559f6eb84e005e15acd75d845a73c731aa0
+ size 816551803
clean/arabic_speech_corpus-train-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c19a2529f94814f2dbacd7f6bd15daf6bd6dc6ed790d617593ea4ad84ad2b960
+ size 440020041
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"clean": {"description": "This Speech corpus has been developed as part of PhD work carried out by Nawar Halabi at the University of Southampton.\nThe corpus was recorded in south Levantine Arabic\n(Damascian accent) using a professional studio. Synthesized speech as an output using this corpus has produced a high quality, natural voice.\nNote that in order to limit the required storage for preparing this dataset, the audio\nis stored in the .flac format and is not converted to a float32 array. To convert, the audio\nfile to a float32 array, please make use of the `.map()` function as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@phdthesis{halabi2016modern,\n title={Modern standard Arabic phonetics for speech synthesis},\n author={Halabi, Nawar},\n year={2016},\n school={University of Southampton}\n}\n", "homepage": "http://en.arabicspeechcorpus.com/arabic-speech-corpus.zip", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "audio": {"sampling_rate": 48000, "mono": true, "decode": true, "id": null, "_type": "Audio"}, "phonetic": {"dtype": "string", "id": null, "_type": "Value"}, "orthographic": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "text"}, "task_templates": [{"task": "automatic-speech-recognition", "audio_column": "audio", "transcription_column": "text"}], "builder_name": "arabic_speech_corpus", "config_name": "clean", "version": {"version_str": "2.1.0", "description": "", "major": 2, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1002365, "num_examples": 1813, "dataset_name": "arabic_speech_corpus"}, "test": {"name": "test", "num_bytes": 65784, "num_examples": 100, "dataset_name": "arabic_speech_corpus"}}, "download_checksums": {"http://en.arabicspeechcorpus.com/arabic-speech-corpus.zip": {"num_bytes": 1192302846, "checksum": "1df85219370fb1ebe8bfc46aa886265586411d04e7c1caa5a5b9847b3ad5f9de"}}, "download_size": 1192302846, "post_processing_size": null, "dataset_size": 1068149, "size_in_bytes": 1193370995}}