{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# ETL to get the text data from the playlist\n", "\n", "This notebook shows the process of building the corpus of transcripts from the YouTube playlist.\n", "\n", "**Extract**: Pull data (transcripts) from each video. \n", "**Transform**: \n", "**Load**: Load data into our database where it will be retrieved from. " ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from models import etl\n", "import json" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we load the video information. This includes the video IDs and titles." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "with open('data/single_video.json') as f:\n", " video_info = json.load(f)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we must extract the transcripts using the YouTube Transcript API. This is done over all of the videos. \n", "This produces a list of video segments with timestamps. \n", "Next, we format the transcript by adding metadata so that the segments are easily identified for retreival later. \n", "Since the original segments are small, they are batched with overlap to preserve semantic meaning." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "get_video_transcript took 0.84 seconds.\n", "Transcript for video 5sLYAQS9sWQ fetched.\n" ] } ], "source": [ "videos = []\n", "for video in video_info:\n", " video_id = video[\"id\"]\n", " video_title = video[\"title\"]\n", " transcript = etl.get_video_transcript(video_id)\n", " print(f\"Transcript for video {video_id} fetched.\")\n", " if transcript:\n", " formatted_transcript = etl.format_transcript(transcript, video_id, video_title, batch_size=5, overlap=2)\n", " \n", " videos.extend(formatted_transcript)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last step is to load the data into a database. We will use a Chromadb database. \n", "The embedding function is the ____ model from HuggingFace." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Database created at data/single_video.db\n" ] } ], "source": [ "# Initialize the database\n", "from utils.embedding_utils import MyEmbeddingFunction\n", "import chromadb\n", "\n", "embed_text = MyEmbeddingFunction()\n", "\n", "db_path = \"data/single_video.db\"\n", "client = chromadb.PersistentClient(path=db_path)\n", "\n", "client.create_collection(\n", " name=\"huberman_videos\",\n", " embedding_function=embed_text,\n", " metadata={\"hnsw:space\": \"cosine\"}\n", ")\n", "\n", "print(f\"Database created at {db_path}\")" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Data loaded to database at data/single_video.db.\n" ] } ], "source": [ "# Add the data to the database\n", "client = chromadb.PersistentClient(path=db_path)\n", " \n", "collection = client.get_collection(\"huberman_videos\")\n", "\n", "documents = [segment['text'] for segment in videos]\n", "metadata = [segment['metadata'] for segment in videos]\n", "ids = [segment['metadata']['segment_id'] for segment in videos]\n", "\n", "collection.add(\n", " documents=documents,\n", " metadatas=metadata,\n", " ids=ids\n", ")\n", "\n", "print(f\"Data loaded to database at {db_path}.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is some of the data:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of segments: 26\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
idsembeddingsmetadatasdocumentsurisdata
05sLYAQS9sWQ__0[-0.11489544063806534, -0.03262839838862419, -...{'segment_id': '5sLYAQS9sWQ__0', 'source': 'ht...GPT, or Generative Pre-trained Transformer, is...NoneNone
15sLYAQS9sWQ__12[0.094169981777668, -0.10430295020341873, 0.02...{'segment_id': '5sLYAQS9sWQ__12', 'source': 'h...Now foundation models are pre-trained on large...NoneNone
25sLYAQS9sWQ__15[0.042587604373693466, -0.061460819095373154, ...{'segment_id': '5sLYAQS9sWQ__15', 'source': 'h...I'm talking about things like code. Now, large...NoneNone
35sLYAQS9sWQ__18[-0.0245895367115736, -0.058405470103025436, -...{'segment_id': '5sLYAQS9sWQ__18', 'source': 'h...these models can be tens of gigabytes in size ...NoneNone
45sLYAQS9sWQ__21[0.05348338559269905, -0.016104578971862793, -...{'segment_id': '5sLYAQS9sWQ__21', 'source': 'h...So to put that into perspective, a text file t...NoneNone
55sLYAQS9sWQ__24[0.07004527002573013, -0.08996045589447021, -0...{'segment_id': '5sLYAQS9sWQ__24', 'source': 'h...A lot of words just in one Gb. And how many gi...NoneNone
65sLYAQS9sWQ__27[0.0283487681299448, -0.11020224541425705, -0....{'segment_id': '5sLYAQS9sWQ__27', 'source': 'h...Yeah, that's truly a lot of text. And LLMs are...NoneNone
75sLYAQS9sWQ__3[-0.0700172707438469, -0.061202701181173325, -...{'segment_id': '5sLYAQS9sWQ__3', 'source': 'ht...And I've been using GPT in its various forms f...NoneNone
85sLYAQS9sWQ__30[-0.04904637485742569, -0.1277533322572708, -0...{'segment_id': '5sLYAQS9sWQ__30', 'source': 'h...and the more parameters a model has, the more ...NoneNone
95sLYAQS9sWQ__33[0.03286760300397873, -0.041724931448698044, 0...{'segment_id': '5sLYAQS9sWQ__33', 'source': 'h...All right, so how do they work? Well, we can t...NoneNone
\n", "
" ], "text/plain": [ " ids embeddings \\\n", "0 5sLYAQS9sWQ__0 [-0.11489544063806534, -0.03262839838862419, -... \n", "1 5sLYAQS9sWQ__12 [0.094169981777668, -0.10430295020341873, 0.02... \n", "2 5sLYAQS9sWQ__15 [0.042587604373693466, -0.061460819095373154, ... \n", "3 5sLYAQS9sWQ__18 [-0.0245895367115736, -0.058405470103025436, -... \n", "4 5sLYAQS9sWQ__21 [0.05348338559269905, -0.016104578971862793, -... \n", "5 5sLYAQS9sWQ__24 [0.07004527002573013, -0.08996045589447021, -0... \n", "6 5sLYAQS9sWQ__27 [0.0283487681299448, -0.11020224541425705, -0.... \n", "7 5sLYAQS9sWQ__3 [-0.0700172707438469, -0.061202701181173325, -... \n", "8 5sLYAQS9sWQ__30 [-0.04904637485742569, -0.1277533322572708, -0... \n", "9 5sLYAQS9sWQ__33 [0.03286760300397873, -0.041724931448698044, 0... \n", "\n", " metadatas \\\n", "0 {'segment_id': '5sLYAQS9sWQ__0', 'source': 'ht... \n", "1 {'segment_id': '5sLYAQS9sWQ__12', 'source': 'h... \n", "2 {'segment_id': '5sLYAQS9sWQ__15', 'source': 'h... \n", "3 {'segment_id': '5sLYAQS9sWQ__18', 'source': 'h... \n", "4 {'segment_id': '5sLYAQS9sWQ__21', 'source': 'h... \n", "5 {'segment_id': '5sLYAQS9sWQ__24', 'source': 'h... \n", "6 {'segment_id': '5sLYAQS9sWQ__27', 'source': 'h... \n", "7 {'segment_id': '5sLYAQS9sWQ__3', 'source': 'ht... \n", "8 {'segment_id': '5sLYAQS9sWQ__30', 'source': 'h... \n", "9 {'segment_id': '5sLYAQS9sWQ__33', 'source': 'h... \n", "\n", " documents uris data \n", "0 GPT, or Generative Pre-trained Transformer, is... None None \n", "1 Now foundation models are pre-trained on large... None None \n", "2 I'm talking about things like code. Now, large... None None \n", "3 these models can be tens of gigabytes in size ... None None \n", "4 So to put that into perspective, a text file t... None None \n", "5 A lot of words just in one Gb. And how many gi... None None \n", "6 Yeah, that's truly a lot of text. And LLMs are... None None \n", "7 And I've been using GPT in its various forms f... None None \n", "8 and the more parameters a model has, the more ... None None \n", "9 All right, so how do they work? Well, we can t... None None " ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import pandas as pd\n", "\n", "client = chromadb.PersistentClient('data/single_video.db')\n", "collection= client.get_collection('huberman_videos')\n", "\n", "num_segments = collection.count()\n", "sample_data = collection.peek()\n", "\n", "transcript_df = pd.DataFrame(sample_data)\n", "\n", "print(f\"Number of segments: {num_segments}\")\n", "transcript_df" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.1" } }, "nbformat": 4, "nbformat_minor": 2 }