matallanas committed on
Commit fa49242
1 Parent(s): 1210e32

Update README.md

Updated all the dataset information

Files changed (1)
  1. README.md +78 -6
README.md CHANGED
@@ -1,4 +1,6 @@
 ---
 dataset_info:
   features:
   - name: id
@@ -27,11 +29,81 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes: 102530760
-    num_examples: 346
-  download_size: 57264732
-  dataset_size: 102530760
 ---
-# Dataset Card for "lex-fridman-podcast"
-
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -1,4 +1,6 @@
 ---
+task_categories:
+- automatic-speech-recognition
 dataset_info:
   features:
   - name: id
@@ -27,11 +29,81 @@ dataset_info:
     dtype: string
   splits:
   - name: train
+    num_bytes: 65356108140.0
+    num_examples: 333
+  download_size: 64386861854
+  dataset_size: 65356108140.0
+tags:
+- whisper
+- whispering
+- medium
 ---
# Dataset Card for "lexFridmanPodcast-transcript-audio"

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Whispering-GPT](https://github.com/matallanas/whisper_gpt_pipeline)
- **Repository:** [whisper_gpt_pipeline](https://github.com/matallanas/whisper_gpt_pipeline)
- **Paper:** [Whisper](https://cdn.openai.com/papers/whisper.pdf) and [GPT](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
- **Point of Contact:** [Whispering-GPT organization](https://huggingface.co/Whispering-GPT)

### Dataset Summary

This dataset was created by running Whisper on the videos of the YouTube channel [Lex Fridman Podcast](https://www.youtube.com/watch?v=FhfmGM6hswI&list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4&ab_channel=LexFridman). The transcripts were generated with the medium-size Whisper model.

### Languages

- **Language**: English

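Whisper transcribes audio in roughly 30-second windows, which is where the timed segments in this dataset come from. Purely as an illustration of that windowing (this is not the actual pipeline code; see the repository linked above for that):

```python
def windows(duration_s: float, window_s: float = 30.0) -> list[tuple[float, float]]:
    """Split a recording of duration_s seconds into consecutive windows.

    Mirrors how a long episode is cut into fixed-size chunks before
    transcription; the final window may be shorter than window_s.
    """
    out, t = [], 0.0
    while t < duration_s:
        out.append((t, min(t + window_s, duration_s)))
        t += window_s
    return out
```

For a 75-second clip this yields three windows, the last one truncated to the clip's end.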
## Dataset Structure

The dataset contains the transcript plus the audio of each video of the Lex Fridman Podcast.

### Data Fields

Each example is composed of the following fields:
- **id**: ID of the YouTube video.
- **channel**: Name of the channel.
- **channel_id**: ID of the YouTube channel.
- **title**: Title given to the video.
- **categories**: Category of the video.
- **description**: Description added by the author.
- **text**: Whole transcript of the video.
- **segments**: A list with the timing and transcription of the video. Each segment contains:
  - **start**: Time at which the transcribed segment starts.
  - **end**: Time at which the transcribed segment ends.
  - **text**: The transcribed text of the segment.

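The full transcript can be rebuilt from the timed segments. The record below is a made-up example matching this schema (all values are invented for illustration, not taken from the dataset):

```python
# A made-up record matching the dataset schema above (all values are invented).
record = {
    "id": "abc123XYZ",            # hypothetical YouTube video id
    "channel": "Lex Fridman",
    "channel_id": "UCxxxxxxxx",   # hypothetical channel id
    "title": "Example episode",
    "categories": ["People & Blogs"],
    "description": "An example description.",
    "text": "Welcome to the podcast. Today we talk about AI.",
    "segments": [
        {"start": 0.0, "end": 2.5, "text": "Welcome to the podcast."},
        {"start": 2.5, "end": 6.0, "text": "Today we talk about AI."},
    ],
}

def transcript_from_segments(segments):
    """Rebuild the full transcript by joining timed segments in start order."""
    ordered = sorted(segments, key=lambda s: s["start"])
    return " ".join(s["text"].strip() for s in ordered)
```

Joining the segments of the sample record in start order reproduces its `text` field.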
### Data Splits

- Train split (333 examples).

## Dataset Creation

### Source Data

The transcriptions are taken from the videos of the [Lex Fridman Podcast](https://www.youtube.com/watch?v=FhfmGM6hswI&list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4&ab_channel=LexFridman).

### Contributions

Thanks to the [Whispering-GPT](https://huggingface.co/Whispering-GPT) organization for adding this dataset.