{ "cells": [ { "cell_type": "markdown", "id": "b1b28232-b65d-41ce-88de-fd70b93a528d", "metadata": {}, "source": [ "# Imports" ] }, { "cell_type": "code", "execution_count": 1, "id": "abb5186b-ee67-4e1e-882d-3d8d5b4575d4", "metadata": { "tags": [] }, "outputs": [], "source": [ "import json\n", "from pathlib import Path\n", "import pickle\n", "from tqdm.auto import tqdm\n", "\n", "from haystack.nodes.preprocessor import PreProcessor" ] }, { "cell_type": "code", "execution_count": 2, "id": "c4b82ea2-8b30-4c2e-99f0-9a30f2f1bfb7", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/home/ec2-user/arabic-wiki\n" ] } ], "source": [ "proj_dir = Path.cwd().parent\n", "print(proj_dir)" ] }, { "cell_type": "markdown", "id": "76119e74-f601-436d-a253-63c5a19d1c83", "metadata": {}, "source": [ "# Config" ] }, { "cell_type": "code", "execution_count": 3, "id": "f6f74545-54a7-4f41-9f02-96964e1417f0", "metadata": { "tags": [] }, "outputs": [], "source": [ "files_in = list((proj_dir / 'data/consolidated').glob('*.ndjson'))\n", "folder_out = proj_dir / 'data/processed'\n", "folder_out_str = str(folder_out)" ] }, { "cell_type": "markdown", "id": "509f41f9-a59f-4171-b61f-ae0cf756fc92", "metadata": {}, "source": [ "# Analysis" ] }, { "cell_type": "code", "execution_count": 4, "id": "f0cbd1c9-3105-4940-85dc-c01ccaa217c7", "metadata": { "tags": [] }, "outputs": [], "source": [ "with open(files_in[0], 'r') as f:\n", " articles = [json.loads(line) for line in f]" ] }, { "cell_type": "code", "execution_count": 5, "id": "004aae7b-1a2f-4a0b-9450-5d80475258b1", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'content': 'الماء مادةٌ شفافةٌ عديمة اللون والرائحة، وهو المكو...',\n", " 'meta': {'id': '7',\n", " 'revid': '2080427',\n", " 'title': 'ماء',\n", " 'url': 'https://ar.wikipedia.org/wiki?curid=7'}}\n" ] } ], "source": [ "from pprint import pprint\n", "article = articles[0].copy()\n", "article['content'] = article['content'][:50] + '...'\n", "pprint(article)" ] }, { "cell_type": "markdown", "id": "6a643cf2-abce-48a9-b4e0-478bcbee28c3", "metadata": {}, "source": [ "# Preprocessing" ] }, { "cell_type": "markdown", "id": "a8f9630e-447e-423e-9f6c-e1dbc654f2dd", "metadata": {}, "source": [ "Its important to choose good pre-processing options. \n", "\n", "Clean whitespace helps each stage of RAG. It adds noise to the embeddings, and wastes space when we prompt with it.\n", "\n", "I chose to split by word as it would be tedious to tokenize here, and that doesnt scale well. The context length for most embedding models ends up being 512 tokens. We saw this within a good z-score is ~225 token.\n", "\n", "I like to respect the sentence boundary, thats why I gave a ~50 word buffer." ] }, { "cell_type": "code", "execution_count": 6, "id": "18807aea-24e4-4d74-bf10-55b24f3cb52c", "metadata": { "tags": [] }, "outputs": [], "source": [ "pp = PreProcessor(clean_whitespace = True,\n", " clean_header_footer = False,\n", " clean_empty_lines = True,\n", " remove_substrings = None,\n", " split_by='word',\n", " split_length = 225,\n", " split_overlap = 50,\n", " split_respect_sentence_boundary = True,\n", " tokenizer_model_folder = None,\n", " id_hash_keys = None,\n", " progress_bar = False,\n", " add_page_number = False,\n", " max_chars_check = 10_000)" ] }, { "cell_type": "markdown", "id": "3c1ab000-6574-485e-87f6-cc210f6e8a61", "metadata": {}, "source": [ "When we break a wikipedia article up, we lose some of the context. 
{ "cell_type": "code", "execution_count": 7, "id": "63871bdd-0369-4dd7-a65e-ccba29baed44", "metadata": {}, "outputs": [], "source": [ "with open(files_in[0], 'r', encoding='utf-8') as f:\n", " articles = [json.loads(line) for line in f]" ] },
{ "cell_type": "code", "execution_count": 8, "id": "5c3b48b7-3c0f-41ba-a423-b716649efcaa", "metadata": { "tags": [] }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "We found one or more sentences whose word count is higher than the split length.\n", "Document e3e2bf8b3399979cb16219b175041b4d is 11336 characters long after preprocessing, where the maximum length should be 10000. Something might be wrong with the splitting, check the document affected to prevent issues at query time. This document will be now hard-split at 10000 chars recursively.\n", "Document 91ad1d1a24e93abacabd5a5478a96977 is 14251 characters long after preprocessing, where the maximum length should be 10000. Something might be wrong with the splitting, check the document affected to prevent issues at query time. This document will be now hard-split at 10000 chars recursively.\n", "Document 1625c431c0fcfaf81c13e0da59071a81 is 13395 characters long after preprocessing, where the maximum length should be 10000. Something might be wrong with the splitting, check the document affected to prevent issues at query time. This document will be now hard-split at 10000 chars recursively.\n", "Document 790d3b2d94a68cbec6d77f3c15d0e679 is 13484 characters long after preprocessing, where the maximum length should be 10000. Something might be wrong with the splitting, check the document affected to prevent issues at query time. This document will be now hard-split at 10000 chars recursively.\n", "Document e2dcf80a1f9dfc118aed059255f9b90b is 13217 characters long after preprocessing, where the maximum length should be 10000. Something might be wrong with the splitting, check the document affected to prevent issues at query time. This document will be now hard-split at 10000 chars recursively.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 3min 31s, sys: 95.1 ms, total: 3min 31s\n", "Wall time: 3min 31s\n" ] } ], "source": [ "%%time\n", "documents = pp.process(articles)" ] },
{ "cell_type": "code", "execution_count": 9, "id": "de6e1690-131a-41d1-a473-c908c2e40939", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Document 91ad1d1a24e93abacabd5a5478a96977 is 14251 characters long after preprocessing, where the maximum length should be 10000. Something might be wrong with the splitting, check the document affected to prevent issues at query time. This document will be now hard-split at 10000 chars recursively.\n", "Document e3e2bf8b3399979cb16219b175041b4d is 11336 characters long after preprocessing, where the maximum length should be 10000. Something might be wrong with the splitting, check the document affected to prevent issues at query time. This document will be now hard-split at 10000 chars recursively.\n", "Document 1625c431c0fcfaf81c13e0da59071a81 is 13395 characters long after preprocessing, where the maximum length should be 10000. Something might be wrong with the splitting, check the document affected to prevent issues at query time. This document will be now hard-split at 10000 chars recursively.\n", "Document 790d3b2d94a68cbec6d77f3c15d0e679 is 13484 characters long after preprocessing, where the maximum length should be 10000. Something might be wrong with the splitting, check the document affected to prevent issues at query time. This document will be now hard-split at 10000 chars recursively.\n", "Document e2dcf80a1f9dfc118aed059255f9b90b is 13217 characters long after preprocessing, where the maximum length should be 10000. Something might be wrong with the splitting, check the document affected to prevent issues at query time. This document will be now hard-split at 10000 chars recursively.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 6.86 s, sys: 1.31 s, total: 8.16 s\n", "Wall time: 1min 33s\n" ] } ], "source": [ "%%time\n", "import os\n", "import concurrent.futures\n", "\n", "def parallel_preprocessing(articles):\n", "    # Utility function to divide the articles into smaller chunks\n", "    def chunkify(lst, n):\n", "        \"\"\"Yield successive n-sized chunks from lst.\"\"\"\n", "        for i in range(0, len(lst), n):\n", "            yield lst[i:i + n]\n", "\n", "    # Size of each chunk. Adjust based on your needs.\n", "    CHUNK_SIZE = 10_000\n", "    article_chunks = list(chunkify(articles, CHUNK_SIZE))\n", "\n", "    # Number of processes to run in parallel.\n", "    # Use all available CPUs, but you can reduce the number if you wish to leave some CPUs free.\n", "    NUM_PROCESSES = os.cpu_count()\n", "\n", "    with concurrent.futures.ProcessPoolExecutor(max_workers=NUM_PROCESSES) as executor:\n", "        documents_list = list(executor.map(pp.process, article_chunks))\n", "\n", "    # Flatten the documents_list to get a single list of documents\n", "    documents = [doc for sublist in documents_list for doc in sublist]\n", "    return documents\n", "\n", "documents = parallel_preprocessing(articles)\n" ] },
{ "cell_type": "code", "execution_count": 10, "id": "dab1658a-79a7-40f2-9a8c-1798e0d124bf", "metadata": { "tags": [] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a4d4ade8158144c6a06f072b550157c3", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/23 [00:00" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "documents[0]" ] },
{ "cell_type": "code", "execution_count": 12, "id": "b34890bf-9dba-459a-9b0d-aa4b5929cbe8", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "documents[1]" ] },
{ "cell_type": "code", "execution_count": 13, "id": "e6f50c27-a486-47e9-ba60-d567f5e530db", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "documents[10102]" ] },
{ "cell_type": "code", "execution_count": 14, "id": "5485cc27-3d3f-4b96-8884-accf5324da2d", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2094596\n" ] } ], "source": [ "!cat \"$folder_out_str\"/*.ndjson | wc -l" ] },
{ "cell_type": "code", "execution_count": null, "id": "c5833dba-1bf6-48aa-be6f-0d70c71e54aa", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.13" } }, "nbformat": 4, "nbformat_minor": 5 }