---
task_categories:
  - text-generation
  - summarization
language:
  - en
  - hi
  - ja
  - fr
tags:
  - textdataset
  - text
  - youtube
  - webscraped data
  - youtube transcripts
  - llm training
  - transformer models
size_categories:
  - 1B<n<10B
  - 100M<n<1B
---

# Dataset Card for YouTubeTranscriptData

## Dataset Details

### Dataset Description

This dataset contains transcripts of around 167K YouTube videos, including coding lectures, podcasts, interviews, news videos, commentary, and song lyrics. It also includes several files generated by web scraping.

### Dataset Sources

## Uses

- Training Transformer models and BPE tokenizers from scratch
- Learning and research purposes
- Anything else you can think of, no restrictions
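Since tokenizer training is listed as a primary use, here is a toy sketch of the character-level BPE merge loop such a tokenizer is built on. This is an illustration only, not the tokenizer code actually used with this dataset:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get) if pairs else None

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with one merged symbol."""
    merged, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

def bpe_train(text, num_merges):
    """Learn `num_merges` BPE merges from a character-level corpus."""
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(tokens)
        if pair is None:
            break
        merges.append(pair)
        tokens = merge_pair(tokens, pair)
    return tokens, merges
```

In practice you would run something like this over the .txt transcript files; a production tokenizer would also handle byte fallback and a vocabulary size target.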

### Direct Use

Used to train a 76-million-parameter transformer model.

Github repo

### Out-of-Scope Use

Not suitable for fine-tuning existing base or pre-trained models. Intended only for NLP experiments and for training base models from scratch.

## Dataset Structure

I'll add some fine-tuning data and then update this section.

## Dataset Creation

### Curation Rationale

I wanted to create an app that would help me write scripts for my YouTube videos. I messed around a little with GPT-3.5 fine-tuning, LangChain, and the YouTube/Google APIs, and got the idea to build a model and train it from scratch, all by myself.

Youtube video

### Source Data

YouTube videos:
- Podcasts such as Lex Fridman, Waveform, Joe Rogan, The Vergecast, Bill Gates, etc.
- Videos from The Canadian Lad, Aevy TV, SNL, LEMMiNO, Mrwhosetheboss, Johnny Harris, and many more
- News videos from Vox, The Wall Street Journal, The New York Times, The Guardian, etc.
- Interviews from Variety, WIRED, Y Combinator, EO
- Lectures from MIT OpenCourseWare, CS50, freeCodeCamp, CrashCourse, etc.
- Tech and science from Kurzgesagt, Real Engineering, Arvin Ash, Vsauce, Veritasium, etc.

Britannica.com:
- Articles on various topics such as Covid, nuclear reactions, Antarctica, the Nobel Prize, great leaders, countries, etc.

#### Data Collection and Processing

Used the YouTube Data API v3 to fetch video IDs from a given YouTube channel and generate a target URL for each video. Then used the YouTube Transcript API to fetch transcripts for those videos and write them to a .txt file. Built a JSON file containing the channel IDs of around 45 channels and fetched transcripts from around 167K videos.
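The URL-building and flattening steps described above could look roughly like the helpers below. This is a hypothetical sketch: the function names are my own, and the network calls to the YouTube Data API v3 and the YouTube Transcript API are deliberately omitted.

```python
def watch_url(video_id):
    """Build the target URL for a fetched video id."""
    return f"https://www.youtube.com/watch?v={video_id}"

def transcript_to_text(segments):
    """Flatten transcript segments (dicts with a 'text' key, as a
    transcript API typically returns them) into one block of text."""
    return " ".join(seg["text"].replace("\n", " ") for seg in segments)

def save_transcripts(transcripts, path):
    """Append each flattened transcript to a single .txt training file."""
    with open(path, "a", encoding="utf-8") as f:
        for t in transcripts:
            f.write(t + "\n\n")
```

In the real pipeline, `segments` would come from the transcript API response for each video ID listed in the 45-channel JSON file.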

The web-scraped data was generated with a scraper that pulled articles from britannica.com and from sites returned by the Google Custom Search API.
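A minimal, stdlib-only version of the paragraph-extraction step of such a scraper might look like this. This is an assumption about the approach; the original scraper's code is not part of this card, and a fetch step (e.g. via `urllib.request`) would precede it:

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collect the text inside <p> tags, the way a simple article
    scraper might pull prose out of a fetched page."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True
            self._buf = []

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False
            text = "".join(self._buf).strip()
            if text:
                self.paragraphs.append(text)

    def handle_data(self, data):
        if self.in_p:
            self._buf.append(data)

def extract_paragraphs(html):
    """Return the text of every <p> element in an HTML document."""
    parser = ParagraphExtractor()
    parser.feed(html)
    return parser.paragraphs
```

The extracted paragraphs would then be appended to the same kind of .txt files as the transcripts.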

More Information Needed