lucahue gordon-posit committed on
Commit
0f711c1
0 Parent(s):

Duplicate from posit/quarto-template


Co-authored-by: Gordon Shotwell <gordon-posit@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,35 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
12
+ *.model filter=lfs diff=lfs merge=lfs -text
13
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
14
+ *.npy filter=lfs diff=lfs merge=lfs -text
15
+ *.npz filter=lfs diff=lfs merge=lfs -text
16
+ *.onnx filter=lfs diff=lfs merge=lfs -text
17
+ *.ot filter=lfs diff=lfs merge=lfs -text
18
+ *.parquet filter=lfs diff=lfs merge=lfs -text
19
+ *.pb filter=lfs diff=lfs merge=lfs -text
20
+ *.pickle filter=lfs diff=lfs merge=lfs -text
21
+ *.pkl filter=lfs diff=lfs merge=lfs -text
22
+ *.pt filter=lfs diff=lfs merge=lfs -text
23
+ *.pth filter=lfs diff=lfs merge=lfs -text
24
+ *.rar filter=lfs diff=lfs merge=lfs -text
25
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
26
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
27
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
28
+ *.tar filter=lfs diff=lfs merge=lfs -text
29
+ *.tflite filter=lfs diff=lfs merge=lfs -text
30
+ *.tgz filter=lfs diff=lfs merge=lfs -text
31
+ *.wasm filter=lfs diff=lfs merge=lfs -text
32
+ *.xz filter=lfs diff=lfs merge=lfs -text
33
+ *.zip filter=lfs diff=lfs merge=lfs -text
34
+ *.zst filter=lfs diff=lfs merge=lfs -text
35
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,5 @@
1
+ /.quarto/
2
+
3
+ .DS_Store
4
+ .venv/**
5
+ src/_site/
Dockerfile ADDED
@@ -0,0 +1,18 @@
1
+ ARG QUARTO_VERSION="1.4.550"
2
+
3
+ # Use the Quarto base image
4
+ FROM ghcr.io/quarto-dev/quarto:${QUARTO_VERSION} AS builder
5
+
6
+ COPY src /app
7
+ WORKDIR /app
8
+
9
+ # Install Python requirements
10
+ USER root
11
+ RUN apt-get update && apt-get install -y python3 python3-pip
12
+ COPY requirements.txt /app/
13
+ RUN pip3 install -r requirements.txt
14
+
15
+ RUN quarto render .
16
+
17
+ EXPOSE 7860
18
+ CMD ["python3", "-m", "http.server", "7860", "--directory", "_site"]
README.md ADDED
@@ -0,0 +1,48 @@
1
+ ---
2
+ title: Quarto Template
3
+ emoji: 🌖
4
+ colorFrom: green
5
+ colorTo: pink
6
+ sdk: docker
7
+ pinned: false
8
+ ---
9
+
10
+ To get started working with Quarto, we recommend you first [install Quarto](https://quarto.org/docs/get-started/) locally so that you can render the site without Docker.
11
+ We also recommend the [Quarto VS Code Extension](https://marketplace.visualstudio.com/items?itemName=quarto.quarto), which provides syntax highlighting, code completion, a preview button, and more.
12
+
13
+ The Quarto source is located in `src`, and you can preview the site with:
14
+
15
+ ```
16
+ quarto preview src
17
+ ```
18
+
19
+ A web browser should open up with a live preview of the site.
20
+
21
+ ## Making changes
22
+
23
+ The `src/_quarto.yml` file contains the site-level configuration for the Quarto website and tells Quarto which files to render and how they should be organized.
24
+ For example, if you want to modify the [site navigation](https://quarto.org/docs/reference/site-navigation.html), you should edit this file.
25
+
26
+ Quarto can render Markdown (`.md`), Jupyter notebook (`.ipynb`), and `.qmd` files, and you can mix these formats within a single project.
27
+
28
+ ## Executing code
29
+
30
+ One of the main virtues of Quarto is that it lets you combine code and text in a single document.
31
+ By default, if you include a code chunk in your document, Quarto will execute that code and include the output in the rendered document.
32
+ This is great for reproducibility and for creating documents that are always up-to-date.
33
+
34
+ ```{python}
35
+ import seaborn as sns
36
+ import matplotlib.pyplot as plt
37
+
38
+ # Sample data
39
+ tips = sns.load_dataset("tips")
40
+
41
+ # Create a seaborn plot
42
+ sns.set_style("whitegrid")
43
+ g = sns.lmplot(x="total_bill", y="tip", data=tips, aspect=2)
44
+ g = (g.set_axis_labels("Total bill (USD)", "Tip").set(xlim=(0, 60), ylim=(0, 12)))
45
+
46
+ plt.title("Tip by Total Bill")
47
+ plt.show()
48
+ ```
requirements.txt ADDED
@@ -0,0 +1,3 @@
1
+ pandas
2
+ seaborn
3
+ jupyter
serve.py ADDED
File without changes
src/.gitignore ADDED
@@ -0,0 +1 @@
1
+ /.quarto/
src/_quarto.yml ADDED
@@ -0,0 +1,37 @@
1
+ project:
2
+ type: website
3
+ website:
4
+ title: "Open-Source AI Cookbook"
5
+ sidebar:
6
+ style: "docked"
7
+ search: true
8
+ collapse-level: 3
9
+ contents:
10
+ - section: "About"
11
+ contents:
12
+ - href: index.qmd
13
+ text: About Quarto
14
+ - section: "Open-Source AI Cookbook"
15
+ contents:
16
+ - section: "RAG Techniques"
17
+ contents:
18
+ - href: notebooks/rag_zephyr_langchain.qmd
19
+ text: "RAG Zephyr & LangChain"
20
+ - href: notebooks/advanced_rag.qmd
21
+ text: "Advanced RAG"
22
+ - href: notebooks/rag_evaluation.qmd
23
+ text: "RAG Evaluation"
24
+ - section: "Additional Techniques"
25
+ contents:
26
+ - href: notebooks/automatic_embedding.ipynb
27
+ text: "Automatic Embedding"
28
+ - href: notebooks/faiss.ipynb
29
+ text: "FAISS for Efficient Search"
30
+ - href: notebooks/single_gpu.ipynb
31
+ text: "Single GPU Optimization"
32
+
33
+ format:
34
+ html:
35
+ theme: cosmo
36
+ css: styles.css
37
+ toc: true
src/about.qmd ADDED
@@ -0,0 +1,5 @@
1
+ ---
2
+ title: "About"
3
+ ---
4
+
5
+ About this site
src/index.qmd ADDED
@@ -0,0 +1,78 @@
1
+ ---
2
+ title: "About Quarto"
3
+ ---
4
+
5
+ [Quarto](https://quarto.org/) is a Markdown-based documentation system that lets you write documents in Markdown or Jupyter Notebooks, and render them to a variety of formats including HTML, PDF, PowerPoint, and more.
6
+ You can also use Quarto to write [books](https://quarto.org/docs/books/), create [dashboards](https://quarto.org/docs/dashboards/), and embed web applications with [Observable](https://quarto.org/docs/interactive/ojs/) and [Shinylive](https://quarto.org/docs/blog/posts/2022-10-25-shinylive-extension/).
7
+
8
+ ## Getting started with Quarto
9
+
10
+ Once you've created the space, click on the `Files` tab in the top right to take a look at the files which make up this Space.
11
+ There are a couple of important files which you should pay attention to:
12
+
13
+ - `Dockerfile`: This contains the system setup to build and serve the Quarto site on Hugging Face. You probably won't need to change this file that
14
+ often unless you need to add additional system dependencies or modify the Quarto version.
15
+ - `requirements.txt`: This is where you should include any Python dependencies which you need for your website.
16
+ These are installed when the Dockerfile builds.
17
+ - The `src` directory contains the source files for the Quarto website. You can include Jupyter notebooks or markdown (`.qmd` or `.md`) files.
18
+ - `src/_quarto.yml` defines the navigation for your website. If you want to add new pages or reorganize the existing ones, you'll need to change this file.
19
+
20
+
21
+ ## Recommended Workflow
22
+
23
+ 1. **Clone the space locally**
24
+ 2. **Install Quarto**: In order to render your Quarto site without Docker, we recommend installing Quarto by following the instructions on the [official Quarto website](https://quarto.org/docs/get-started/).
25
+ 3. **Install Quarto VS Code extension**: The [Quarto VS Code Extension](https://quarto.org/docs/tools/vscode.html) includes a number of productivity tools including YAML Autocomplete, a preview button, and a visual editor. Quarto works great with VS Code, but the extension does make it easier to get the most out of Quarto.
26
+ 4. **Edit the site**: The website files are contained in the `src` directory, and the site navigation is defined in `src/_quarto.yml`. Try editing these files and either clicking the "Preview" button in VS Code, or calling `quarto preview src` from the command line.
27
+ 5. **Learn more about Quarto**: You can do a lot of things with Quarto, and they are all documented on the [Quarto Website](https://quarto.org/guide/). In particular, you may be interested in:
28
+
29
+ - All about building [websites](https://quarto.org/docs/websites/)
30
+ - Building Static [Dashboards](https://quarto.org/docs/dashboards/)
31
+ - How to write [books](https://quarto.org/docs/books/index.html) and [manuscripts](https://quarto.org/docs/manuscripts/)
32
+ - Reproducible [presentations](https://quarto.org/docs/presentations/)
33
+ - Including [Observable](https://quarto.org/docs/interactive/ojs/) or [Shiny](https://quarto.org/docs/interactive/shiny/) applications in your Quarto site
34
+
35
+ ::: {.callout-warning}
36
+ It can take a couple of minutes for the Space to deploy to Hugging Face after the Docker build process completes. To see your changes, you will need to do two things:
37
+
38
+ 1) Wait for your Space's status to go from 'Building' to 'Running' (this is visible in the status bar above the Space).
39
+ 2) Force-reload the web page by holding Shift and hitting the reload button in your browser.
40
+ :::
41
+
42
+ ## Code Execution
43
+
44
+ One of the main virtues of Quarto is that it lets you combine code and text in a single document.
45
+ By default, if you include a code chunk in your document, Quarto will execute that code and include the output in the rendered document.
46
+ This is great for reproducibility and for creating documents that are always up-to-date.
47
+ For example, you can include code that generates a plot like this:
48
+
49
+ ```{python}
50
+ import seaborn as sns
51
+ import matplotlib.pyplot as plt
52
+
53
+ # Sample data
54
+ tips = sns.load_dataset("tips")
55
+ # Create a seaborn plot
56
+ sns.set_style("whitegrid")
57
+ g = sns.lmplot(x="total_bill", y="tip", data=tips, aspect=2)
58
+ g = g.set_axis_labels("Total bill (USD)", "Tip").set(xlim=(0, 60), ylim=(0, 12))
59
+
60
+ plt.title("Tip by Total Bill")
61
+ plt.show()
62
+ ```
63
+
64
+ When the website is built, the Python code will run and the output will be included in the document.
65
+
66
+ You can also include [inline code](https://quarto.org/docs/computations/inline-code.html) to insert computed values into text.
67
+ For example, we can include the maximum tip value in the `tips` data frame like this: ``{python} tips['tip'].max()``.
68
+ You can control [code execution](https://quarto.org/docs/computations/execution-options.html), or [freeze code output](https://quarto.org/docs/projects/code-execution.html#freeze) to capture the output of long-running computations.
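For instance, execution can be controlled per cell with `#|` options at the top of a code chunk. The snippet below is only illustrative (the `run_long_computation()` call is a made-up placeholder):

```python
#| eval: false
#| echo: true
# With `eval: false`, Quarto shows this cell in the rendered page but does not run it.
expensive_result = run_long_computation()  # hypothetical long-running call
```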
69
+
70
+
71
+ ## About the Open-Source AI Cookbook
72
+
73
+ To provide a realistic example of how Quarto can help you organize long-form documentation,
74
+ we've implemented the Hugging Face [Open-Source AI Cookbook](https://github.com/huggingface/cookbook) in Quarto.
75
+ The Open-Source AI Cookbook is a collection of notebooks illustrating practical aspects of building AI applications and solving various machine learning tasks using open-source tools and models.
76
+ You can read more about it, or contribute your own notebook, on the [GitHub repo](https://github.com/huggingface/cookbook).
77
+
78
+
src/notebooks/advanced_rag.qmd ADDED
@@ -0,0 +1,588 @@
1
+ ---
2
+ title: Advanced RAG
3
+ jupyter: python3
4
+ eval: false
5
+ code-annotations: hover
6
+ ---
7
+
8
+ This notebook demonstrates how you can build an advanced RAG (Retrieval Augmented Generation) system for answering a user's questions about a specific knowledge base (here, the Hugging Face documentation), using LangChain.
9
+
10
+ For an introduction to RAG, you can check [this other cookbook](rag_zephyr_langchain.qmd)!
11
+
12
+ RAG systems are complex, with many moving parts: here is a RAG diagram, where we have noted in blue all the possibilities for system enhancement:
13
+
14
+ <img src="https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/RAG_workflow.png" height="700">
15
+
16
+ ::: callout-note
17
+ 💡 As you can see, there are many steps to tune in this architecture: tuning the system properly will yield significant performance gains.
18
+ :::
19
+
20
+ In this notebook, we will take a look at many of these blue notes to see how to tune your RAG system and get the best performance.
21
+
22
+ __Let's dig into the model building!__ First, we install the required dependencies.
23
+
24
+ ```{python}
25
+ !pip install -q torch transformers accelerate bitsandbytes langchain sentence-transformers faiss-gpu openpyxl pacmap
26
+ ```
27
+
28
+ ```{python}
29
+ %reload_ext dotenv
30
+ %dotenv
31
+ ```
32
+
33
+ ```{python}
34
+ from tqdm.notebook import tqdm
35
+ import pandas as pd
36
+ from typing import Optional, List, Tuple
37
+ from datasets import Dataset
38
+ import matplotlib.pyplot as plt
39
+
40
+ pd.set_option(
41
+ "display.max_colwidth", None # <1>
42
+ )
43
+ ```
44
+ 1. This will be helpful when visualizing retriever outputs
45
+
46
+ ### Load your knowledge base
47
+
48
+ ```{python}
49
+ import datasets
50
+
51
+ ds = datasets.load_dataset("m-ric/huggingface_doc", split="train")
52
+ ```
53
+
54
+ ```{python}
55
+ from langchain.docstore.document import Document as LangchainDocument
56
+
57
+ RAW_KNOWLEDGE_BASE = [
58
+ LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]})
59
+ for doc in tqdm(ds)
60
+ ]
61
+ ```
62
+
63
+ # 1. Retriever - embeddings 🗂️
64
+ The __retriever acts like an internal search engine__: given the user query, it returns a few relevant snippets from your knowledge base.
65
+
66
+ These snippets will then be fed to the Reader Model to help it generate its answer.
67
+
68
+ So __our objective here is, given a user question, to find the most relevant snippets from our knowledge base to answer that question.__
69
+
70
+ This is a broad objective, and it leaves some questions open. How many snippets should we retrieve? This parameter will be named `top_k`.
71
+
72
+ How long should these snippets be? This is called the `chunk size`. There's no one-size-fits-all answer, but here are a few elements:
73
+ - 🔀 Your `chunk size` is allowed to vary from one snippet to the other.
74
+ - Since there will always be some noise in your retrieval, increasing the `top_k` increases the chance of getting relevant elements among your retrieved snippets. 🎯 Shooting more arrows increases your probability of hitting the target.
75
+ - Meanwhile, the summed length of your retrieved documents should not be too high: for instance, for most current models, 16k tokens will probably drown your Reader model in information due to the [lost-in-the-middle phenomenon](https://huggingface.co/papers/2307.03172). 🎯 Give your reader model only the most relevant insights, not a huge pile of books!
76
+
77
+ ::: callout-note
78
+ In this notebook, we use the LangChain library since __it offers a huge variety of options for vector databases and allows us to keep document metadata throughout the processing__.
79
+ :::
80
+
81
+ ### 1.1 Split the documents into chunks
82
+
83
+ - In this part, __we split the documents from our knowledge base into smaller chunks__ which will be the snippets on which the reader LLM will base its answer.
84
+ - The goal is to prepare a collection of **semantically relevant snippets**. So their size should be adapted to express precise ideas: chunks that are too small will truncate ideas, while chunks that are too large will dilute them.
85
+
86
+ ::: callout-tip
87
+ 💡 Many options exist for text splitting: splitting on words, on sentence boundaries, recursive chunking that processes documents in a tree-like way to preserve structure information... To learn more about chunking, I recommend you read [this great notebook](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/5_Levels_Of_Text_Splitting.ipynb) by Greg Kamradt.
88
+ :::
89
+
90
+
91
+ - **Recursive chunking** breaks down the text into smaller parts step by step, using a given list of separators sorted from the most important to the least important separator. If the first split doesn't give chunks of the right size or shape, the method repeats itself on the new chunks using the next separator. For instance, with the list of separators `["\n\n", "\n", ".", ""]`:
92
+ - The method will first break down the document wherever there is a double line break `"\n\n"`.
93
+ - Resulting documents will be split again on simple line breaks `"\n"`, then on sentence ends `"."`.
94
+ - And finally, if some chunks are still too big, they will be split whenever they overflow the maximum size.
95
+
96
+ - With this method, the global structure is well preserved, at the expense of getting slight variations in chunk size (a minimal sketch of this logic follows below).
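To make the recursive idea concrete, here is a minimal, hypothetical sketch of the logic in plain Python. It is not LangChain's actual implementation (which also handles chunk overlap and separator retention); the function name and the hard-cut fallback are illustrative assumptions.

```python
# Illustrative sketch of recursive chunking (not LangChain's real implementation).
def recursive_split(text: str, separators: list[str], chunk_size: int) -> list[str]:
    """Split `text` into chunks of at most `chunk_size` characters,
    trying separators from most important to least important."""
    if len(text) <= chunk_size:
        return [text]
    if not separators:
        # No separators left: fall back to a hard cut every `chunk_size` characters.
        return [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]

    sep, *remaining = separators
    parts = text.split(sep) if sep else list(text)

    chunks, current = [], ""
    for part in parts:
        candidate = f"{current}{sep}{part}" if current else part
        if len(candidate) <= chunk_size:
            current = candidate  # keep filling the current chunk
        else:
            if current:
                chunks.append(current)
                current = ""
            if len(part) <= chunk_size:
                current = part
            else:
                # This part alone is too long: recurse with the remaining separators.
                chunks.extend(recursive_split(part, remaining, chunk_size))
    if current:
        chunks.append(current)
    return chunks


print(recursive_split(
    "First paragraph.\n\nSecond paragraph. It has two sentences.",
    separators=["\n\n", "\n", ". ", ""],
    chunk_size=30,
))
```

Note that this simplified version drops the separator at each split point, which is one of the details a production splitter has to handle.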
97
+
98
+ > [This space](https://huggingface.co/spaces/A-Roucher/chunk_visualizer) lets you visualize how different splitting options affect the chunks you get.
99
+
100
+ 🔬 Let's experiment a bit with chunk sizes, beginning with an arbitrary size, and see how splits work. We use LangChain's implementation of recursive chunking with `RecursiveCharacterTextSplitter`.
101
+ - Parameter `chunk_size` controls the length of individual chunks: this length is counted by default as the number of characters in the chunk.
102
+ - Parameter `chunk_overlap` lets adjacent chunks overlap each other a bit. This reduces the probability that an idea is cut in half by the split between two adjacent chunks. We somewhat arbitrarily set this to 1/10th of the chunk size; you could try different values!
103
+
104
+ ```{python}
105
+ from langchain.text_splitter import RecursiveCharacterTextSplitter
106
+
107
+ # We use a hierarchical list of separators specifically tailored for splitting Markdown documents
108
+ # This list is taken from LangChain's MarkdownTextSplitter class.
109
+ MARKDOWN_SEPARATORS = [
110
+ "\n#{1,6} ",
111
+ "```\n",
112
+ "\n\\*\\*\\*+\n",
113
+ "\n---+\n",
114
+ "\n___+\n",
115
+ "\n\n",
116
+ "\n",
117
+ " ",
118
+ "",
119
+ ]
120
+
121
+ text_splitter = RecursiveCharacterTextSplitter(
122
+ chunk_size=1000, # <1>
123
+ chunk_overlap=100, # <2>
124
+ add_start_index=True, # <3>
125
+ strip_whitespace=True, # <4>
126
+ separators=MARKDOWN_SEPARATORS,
127
+ )
128
+
129
+ docs_processed = []
130
+ for doc in RAW_KNOWLEDGE_BASE:
131
+ docs_processed += text_splitter.split_documents([doc])
132
+ ```
133
+ 1. The maximum number of characters in a chunk: we selected this value arbitrarily
134
+ 2. The number of characters to overlap between chunks
135
+ 3. If `True`, includes chunk's start index in metadata
136
+ 4. If `True`, strips whitespace from the start and end of every document
137
+
138
+
139
+ We also have to keep in mind that when embedding documents, we will use an embedding model that accepts a certain maximum sequence length, `max_seq_length`.
140
+
141
+ So we should make sure that our chunk sizes are below this limit, because any longer chunk will be truncated before processing, thus losing relevant information.
142
+
143
+ ```{python}
144
+ #| colab: {referenced_widgets: [ae043feeb0914c879e2a9008b413d952]}
145
+ from sentence_transformers import SentenceTransformer
146
+
147
+ # To get the value of the maximum sequence length, we query the `SentenceTransformer` object for our embedding model.
148
+ print(
149
+ f"Model's maximum sequence length: {SentenceTransformer('thenlper/gte-small').max_seq_length}"
150
+ )
151
+
152
+ from transformers import AutoTokenizer
153
+
154
+ tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small")
155
+ lengths = [len(tokenizer.encode(doc.page_content)) for doc in tqdm(docs_processed)]
156
+
157
+ # Plot the distribution of document lengths, counted as the number of tokens
158
+ fig = pd.Series(lengths).hist()
159
+ plt.title("Distribution of document lengths in the knowledge base (in count of tokens)")
160
+ plt.show()
161
+ ```
162
+
163
+ 👀 As you can see, __the chunk lengths are not aligned with our limit of 512 tokens__, and some documents are above the limit, thus some part of them will be lost in truncation!
164
+ - So we should change the `RecursiveCharacterTextSplitter` class to count length in number of tokens instead of number of characters.
165
+ - Then we can choose a specific chunk size; here we choose a threshold lower than 512:
166
+ - smaller documents could allow the split to focus more on specific ideas.
167
+ - But chunks that are too small would split sentences in half, thus losing meaning again: the proper tuning is a matter of balance.
168
+
169
+ ```{python}
170
+ #| colab: {referenced_widgets: [f900cf4ab3a94f45bfa7298f433566ed]}
171
+ from langchain.text_splitter import RecursiveCharacterTextSplitter
172
+ from transformers import AutoTokenizer
173
+
174
+ EMBEDDING_MODEL_NAME = "thenlper/gte-small"
175
+
176
+
177
+ def split_documents(
178
+ chunk_size: int,
179
+ knowledge_base: List[LangchainDocument],
180
+ tokenizer_name: Optional[str] = EMBEDDING_MODEL_NAME,
181
+ ) -> List[LangchainDocument]:
182
+ """
183
+ Split documents into chunks of maximum size `chunk_size` tokens and return a list of documents.
184
+ """
185
+ text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
186
+ AutoTokenizer.from_pretrained(tokenizer_name),
187
+ chunk_size=chunk_size,
188
+ chunk_overlap=int(chunk_size / 10),
189
+ add_start_index=True,
190
+ strip_whitespace=True,
191
+ separators=MARKDOWN_SEPARATORS,
192
+ )
193
+
194
+ docs_processed = []
195
+ for doc in knowledge_base:
196
+ docs_processed += text_splitter.split_documents([doc])
197
+
198
+ # Remove duplicates
199
+ unique_texts = {}
200
+ docs_processed_unique = []
201
+ for doc in docs_processed:
202
+ if doc.page_content not in unique_texts:
203
+ unique_texts[doc.page_content] = True
204
+ docs_processed_unique.append(doc)
205
+
206
+ return docs_processed_unique
207
+
208
+
209
+ docs_processed = split_documents(
210
+ 512, # We choose a chunk size adapted to our model
211
+ RAW_KNOWLEDGE_BASE,
212
+ tokenizer_name=EMBEDDING_MODEL_NAME,
213
+ )
214
+
215
+ # Let's visualize the chunk sizes we would have in tokens from a common model
216
+ from transformers import AutoTokenizer
217
+
218
+ tokenizer = AutoTokenizer.from_pretrained(EMBEDDING_MODEL_NAME)
219
+ lengths = [len(tokenizer.encode(doc.page_content)) for doc in tqdm(docs_processed)]
220
+ fig = pd.Series(lengths).hist()
221
+ plt.title("Distribution of document lengths in the knowledge base (in count of tokens)")
222
+ plt.show()
223
+ ```
224
+
225
+ ➡️ Now the chunk length distribution looks better!
226
+
227
+ ### 1.2 Building the vector database
228
+
229
+ We want to compute the embeddings for all the chunks of our knowledge base: to learn more about sentence embeddings, we recommend reading [this guide](https://osanseviero.github.io/hackerllama/blog/posts/sentence_embeddings/).
230
+
231
+ #### How does retrieval work?
232
+
233
+ Once the chunks are all embedded, we store them in a vector database. When the user types in a query, it gets embedded by the same model previously used, and a similarity search returns the closest documents from the vector database.
234
+
235
+ The technical challenge is thus, given a query vector, to quickly find the nearest neighbors of this vector in the vector database. To do this, we need to choose two things: a distance metric, and a search algorithm to find the nearest neighbors quickly within a database of thousands of records.
236
+
237
+ ##### Nearest Neighbor search algorithm
238
+
239
+ There are plenty of choices for the nearest-neighbor search algorithm: we go with Facebook's [FAISS](https://github.com/facebookresearch/faiss), since FAISS is performant enough for most use cases, and it is well known and thus widely implemented.
240
+
241
+ ##### Distances
242
+
243
+ Regarding distances, you can find a good guide [here](https://osanseviero.github.io/hackerllama/blog/posts/sentence_embeddings/#distance-between-embeddings). In short:
244
+
245
+ - **Cosine similarity** computes the similarity between two vectors as the cosine of their relative angle: it lets us compare vector directions regardless of their magnitude. Using it requires normalizing all vectors to rescale them to unit norm.
246
+ - **Dot product** takes into account magnitude, with the sometimes undesirable effect that increasing a vector's length will make it more similar to all others.
247
+ - **Euclidean distance** is the distance between the ends of vectors.
248
+
249
+ You can try [this small exercise](https://developers.google.com/machine-learning/clustering/similarity/check-your-understanding) to check your understanding of these concepts. But once vectors are normalized, [the choice of a specific distance does not matter much](https://platform.openai.com/docs/guides/embeddings/which-distance-function-should-i-use).
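As a quick sanity check on that last claim, the short sketch below (illustrative only, not part of the original pipeline) shows that for unit-norm vectors the dot product equals the cosine similarity, and the squared Euclidean distance is simply `2 - 2 * cosine`, so all three metrics rank neighbors identically.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=384), rng.normal(size=384)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)  # rescale both vectors to unit norm

cosine = float(a @ b)                     # cosine of the angle, since both have unit norm
dot = float(np.dot(a, b))                 # identical to the cosine for unit vectors
euclidean_sq = float(np.sum((a - b) ** 2))

print(cosine, dot, euclidean_sq, 2 - 2 * cosine)  # the last two values coincide
```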
250
+
251
+ Our particular model works well with cosine similarity, so we choose this distance and set it up both in the embedding model and in the `distance_strategy` argument of our FAISS index. With cosine similarity, we have to normalize our embeddings.
252
+
253
+ ::: {.callout-warning}
254
+ 🚨👇 The cell below takes a few minutes to run on A10G!
255
+ :::
256
+
257
+ ```{python}
258
+ from langchain.vectorstores import FAISS
259
+ from langchain_community.embeddings import HuggingFaceEmbeddings
260
+ from langchain_community.vectorstores.utils import DistanceStrategy
261
+
262
+ embedding_model = HuggingFaceEmbeddings(
263
+ model_name=EMBEDDING_MODEL_NAME,
264
+ multi_process=True,
265
+ model_kwargs={"device": "cuda"},
266
+ encode_kwargs={"normalize_embeddings": True}, # set True for cosine similarity
267
+ )
268
+
269
+ KNOWLEDGE_VECTOR_DATABASE = FAISS.from_documents(
270
+ docs_processed, embedding_model, distance_strategy=DistanceStrategy.COSINE
271
+ )
272
+ ```
273
+
274
+ 👀 To visualize the search for the closest documents, let's project our embeddings from 384 dimensions down to 2 dimensions using PaCMAP.
275
+
276
+ ::: {.callout-note}
277
+ 💡 We chose PaCMAP rather than other techniques such as t-SNE or UMAP, since [it is efficient (preserves local and global structure), robust to initialization parameters and fast](https://www.nature.com/articles/s42003-022-03628-x#Abs1).
278
+ :::
279
+
280
+
281
+ ```{python}
282
+ # embed a user query in the same space
283
+ user_query = "How to create a pipeline object?"
284
+ query_vector = embedding_model.embed_query(user_query)
285
+ ```
286
+
287
+ ```{python}
288
+ import pacmap
289
+ import numpy as np
290
+ import plotly.express as px
291
+
292
+ embedding_projector = pacmap.PaCMAP(
293
+ n_components=2, n_neighbors=None, MN_ratio=0.5, FP_ratio=2.0, random_state=1
294
+ )
295
+
296
+ embeddings_2d = [
297
+ list(KNOWLEDGE_VECTOR_DATABASE.index.reconstruct_n(idx, 1)[0])
298
+ for idx in range(len(docs_processed))
299
+ ] + [query_vector]
300
+
301
+ # fit the data (The index of transformed data corresponds to the index of the original data)
302
+ documents_projected = embedding_projector.fit_transform(np.array(embeddings_2d), init="pca")
303
+ ```
304
+
305
+ ```{python}
306
+ df = pd.DataFrame.from_dict(
307
+ [
308
+ {
309
+ "x": documents_projected[i, 0],
310
+ "y": documents_projected[i, 1],
311
+ "source": docs_processed[i].metadata["source"].split("/")[1],
312
+ "extract": docs_processed[i].page_content[:100] + "...",
313
+ "symbol": "circle",
314
+ "size_col": 4,
315
+ }
316
+ for i in range(len(docs_processed))
317
+ ]
318
+ + [
319
+ {
320
+ "x": documents_projected[-1, 0],
321
+ "y": documents_projected[-1, 1],
322
+ "source": "User query",
323
+ "extract": user_query,
324
+ "size_col": 100,
325
+ "symbol": "star",
326
+ }
327
+ ]
328
+ )
329
+
330
+ # visualize the embedding
331
+ fig = px.scatter(
332
+ df,
333
+ x="x",
334
+ y="y",
335
+ color="source",
336
+ hover_data="extract",
337
+ size="size_col",
338
+ symbol="symbol",
339
+ color_discrete_map={"User query": "black"},
340
+ width=1000,
341
+ height=700,
342
+ )
343
+ fig.update_traces(
344
+ marker=dict(opacity=1, line=dict(width=0, color="DarkSlateGrey")), selector=dict(mode="markers")
345
+ )
346
+ fig.update_layout(
347
+ legend_title_text="<b>Chunk source</b>",
348
+ title="<b>2D Projection of Chunk Embeddings via PaCMAP</b>",
349
+ )
350
+ fig.show()
351
+ ```
352
+
353
+ <img src="https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/PaCMAP_embeddings.png" height="700">
354
+
355
+
356
+ ➡️ On the graph above, you can see a spatial representation of the knowledge base documents. As the vector embeddings represent each document's meaning, documents with close meanings should have embeddings that are close together.
357
+
358
+ The user query's embedding is also shown: we want to find the `k` documents that have the closest meaning, so we pick the `k` closest vectors.
359
+
360
+ In the LangChain vector database implementation, this search operation is performed by the method `vector_database.similarity_search(query)`.
361
+
362
+ Here is the result:
363
+
364
+ ```{python}
365
+ print(f"\nStarting retrieval for {user_query=}...")
366
+ retrieved_docs = KNOWLEDGE_VECTOR_DATABASE.similarity_search(query=user_query, k=5)
367
+ print("\n==================================Top document==================================")
368
+ print(retrieved_docs[0].page_content)
369
+ print("==================================Metadata==================================")
370
+ print(retrieved_docs[0].metadata)
371
+ ```
372
+
373
+ # 2. Reader - LLM 💬
374
+
375
+ In this part, the __LLM Reader reads the retrieved context to formulate its answer.__
376
+
377
+ There are actually substeps that can all be tuned:
378
+ 1. The content of the retrieved documents is aggregated together into the "context", with many processing options like _prompt compression_.
379
+ 2. The context and the user query are aggregated into a prompt then given to the LLM to generate its answer.
380
+
381
+ ### 2.1. Reader model
382
+
383
+ The choice of a reader model is important in a few respects:
384
+ - the reader model's `max_seq_length` must accommodate our prompt, which includes the context output by the retriever call: the context consists of 5 documents of 512 tokens each, so we aim for a context length of at least 4k tokens.
385
+ - the reader model itself should be strong enough to produce a useful answer from the retrieved context.
386
+
387
+ For this example, we chose [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a small but powerful model.
388
+
389
+ ::: callout-note
390
+ With many models being released every week, you may want to substitute this model with the latest and greatest. The best way to keep track of open-source LLMs is to check the [Open-source LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
391
+ :::
392
+
393
+ To make inference faster, we will load the quantized version of the model:
394
+
395
+ ```{python}
396
+ #| colab: {referenced_widgets: [db31fd28d3604e78aead26af87b0384f]}
397
+ from transformers import pipeline
398
+ import torch
399
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
400
+
401
+ READER_MODEL_NAME = "HuggingFaceH4/zephyr-7b-beta"
402
+
403
+ bnb_config = BitsAndBytesConfig(
404
+ load_in_4bit=True,
405
+ bnb_4bit_use_double_quant=True,
406
+ bnb_4bit_quant_type="nf4",
407
+ bnb_4bit_compute_dtype=torch.bfloat16,
408
+ )
409
+ model = AutoModelForCausalLM.from_pretrained(READER_MODEL_NAME, quantization_config=bnb_config)
410
+ tokenizer = AutoTokenizer.from_pretrained(READER_MODEL_NAME)
411
+
412
+ READER_LLM = pipeline(
413
+ model=model,
414
+ tokenizer=tokenizer,
415
+ task="text-generation",
416
+ do_sample=True,
417
+ temperature=0.2,
418
+ repetition_penalty=1.1,
419
+ return_full_text=False,
420
+ max_new_tokens=500,
421
+ )
422
+ ```
423
+
424
+ ```{python}
425
+ READER_LLM("What is 4+4? Answer:")
426
+ ```
427
+
428
+ ### 2.2. Prompt
429
+
430
+ The RAG prompt template below is what we will feed to the Reader LLM: it is important to have it formatted in the Reader LLM's chat template.
431
+
432
+ We give it our context and the user's question.
433
+
434
+ ```{python}
435
+ prompt_in_chat_format = [
436
+ {
437
+ "role": "system",
438
+ "content": """Using the information contained in the context,
439
+ give a comprehensive answer to the question.
440
+ Respond only to the question asked, response should be concise and relevant to the question.
441
+ Provide the number of the source document when relevant.
442
+ If the answer cannot be deduced from the context, do not give an answer.""",
443
+ },
444
+ {
445
+ "role": "user",
446
+ "content": """Context:
447
+ {context}
448
+ ---
449
+ Now here is the question you need to answer.
450
+
451
+ Question: {question}""",
452
+ },
453
+ ]
454
+ RAG_PROMPT_TEMPLATE = tokenizer.apply_chat_template(
455
+ prompt_in_chat_format, tokenize=False, add_generation_prompt=True
456
+ )
457
+ print(RAG_PROMPT_TEMPLATE)
458
+ ```
459
+
460
+ Let's test our Reader on our previously retrieved documents!
461
+
462
+ ```{python}
463
+ retrieved_docs_text = [
464
+ doc.page_content for doc in retrieved_docs
465
+ ] # we only need the text of the documents
466
+ context = "\nExtracted documents:\n"
467
+ context += "".join([f"Document {str(i)}:::\n" + doc for i, doc in enumerate(retrieved_docs_text)])
468
+
469
+ final_prompt = RAG_PROMPT_TEMPLATE.format(
470
+ question="How to create a pipeline object?", context=context
471
+ )
472
+
473
+ # Generate an answer
474
+ answer = READER_LLM(final_prompt)[0]["generated_text"]
475
+ print(answer)
476
+ ```
477
+
478
+ ### 2.3. Reranking
479
+
480
+ A good option for RAG is to retrieve more documents than you want in the end, then rerank the results with a more powerful retrieval model before keeping only the `top_k`.
481
+
482
+ For this, [Colbertv2](https://arxiv.org/abs/2112.01488) is a great choice: instead of a bi-encoder like our classical embedding models, it is a cross-encoder that computes more fine-grained interactions between the query tokens and each document's tokens.
483
+
484
+ It is easily usable thanks to [the RAGatouille library](https://github.com/bclavie/RAGatouille).
485
+
486
+ ```{python}
487
+ from ragatouille import RAGPretrainedModel
488
+
489
+ RERANKER = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
490
+ ```
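Before wiring the reranker into the full pipeline, here is a quick illustrative check on the documents we retrieved earlier for `user_query`: RAGatouille's `rerank` returns a list of dicts with a `content` key. The `k=3` and the 80-character preview are arbitrary choices for this sketch.

```python
# Rerank the previously retrieved chunks and keep only the top 3.
reranked = RERANKER.rerank(user_query, [doc.page_content for doc in retrieved_docs], k=3)
for rank, doc in enumerate(reranked):
    print(f"#{rank}: {doc['content'][:80]}...")
```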
491
+
492
+ # 3. Assembling it all!
493
+
494
+ ```{python}
495
+ from transformers import Pipeline
496
+
497
+
498
+ def answer_with_rag(
499
+ question: str,
500
+ llm: Pipeline,
501
+ knowledge_index: FAISS,
502
+ reranker: Optional[RAGPretrainedModel] = None,
503
+ num_retrieved_docs: int = 30,
504
+ num_docs_final: int = 5,
505
+ ) -> Tuple[str, List[LangchainDocument]]:
506
+ # Gather documents with retriever
507
+ print("=> Retrieving documents...")
508
+ relevant_docs = knowledge_index.similarity_search(query=question, k=num_retrieved_docs)
509
+ relevant_docs = [doc.page_content for doc in relevant_docs] # keep only the text
510
+
511
+ # Optionally rerank results
512
+ if reranker:
513
+ print("=> Reranking documents...")
514
+ relevant_docs = reranker.rerank(question, relevant_docs, k=num_docs_final)
515
+ relevant_docs = [doc["content"] for doc in relevant_docs]
516
+
517
+ relevant_docs = relevant_docs[:num_docs_final]
518
+
519
+ # Build the final prompt
520
+ context = "\nExtracted documents:\n"
521
+ context += "".join([f"Document {str(i)}:::\n" + doc for i, doc in enumerate(relevant_docs)])
522
+
523
+ final_prompt = RAG_PROMPT_TEMPLATE.format(question=question, context=context)
524
+
525
+ # Generate an answer
526
+ print("=> Generating answer...")
527
+ answer = llm(final_prompt)[0]["generated_text"]
528
+
529
+ return answer, relevant_docs
530
+ ```
531
+
532
+ Let's see how our RAG pipeline answers a user query.
533
+
534
+ ```{python}
535
+ question = "how to create a pipeline object?"
536
+
537
+ answer, relevant_docs = answer_with_rag(
538
+ question, READER_LLM, KNOWLEDGE_VECTOR_DATABASE, reranker=RERANKER
539
+ )
540
+ ```
541
+
542
+ ```{python}
543
+ print("==================================Answer==================================")
544
+ print(f"{answer}")
545
+ print("==================================Source docs==================================")
546
+ for i, doc in enumerate(relevant_docs):
547
+ print(f"Document {i}------------------------------------------------------------")
548
+ print(doc)
549
+ ```
550
+
551
+ ✅ We now have a fully functional, performant RAG system. That's it for today! Congratulations on making it to the end 🥳
552
+
553
+
554
+ # To go further 🗺️
555
+
556
+ This is not the end of the journey! You can try many steps to improve your RAG system. We recommend doing so in an iterative way: make small changes to the system and see what improves performance.
557
+
558
+ ### Setting up an evaluation pipeline
559
+
560
+ - 💬 "You cannot improve the model performance that you do not measure", said Gandhi... or at least Llama2 told me he said it. Anyway, you should absolutely start by measuring performance: this means building a small evaluation dataset and then monitoring the performance of your RAG system on it, as sketched below.
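As a starting point, here is a minimal sketch of such a loop, reusing the `answer_with_rag` function and the `embedding_model` defined above. The two question/reference pairs are made up for illustration, and the crude embedding-similarity score is only a stand-in for a proper metric such as LLM-as-judge (see the RAG Evaluation notebook for a fuller treatment).

```python
import numpy as np

# Hypothetical, hand-written evaluation set: (question, reference answer) pairs.
eval_dataset = [
    ("How to create a pipeline object?", "Instantiate it with the transformers `pipeline()` function."),
    ("How can I split documents into chunks?", "Use a text splitter such as RecursiveCharacterTextSplitter."),
]

scores = []
for question, reference in eval_dataset:
    answer, _ = answer_with_rag(question, READER_LLM, KNOWLEDGE_VECTOR_DATABASE, reranker=RERANKER)
    # Proxy metric: cosine similarity between the generated and reference answers.
    # The embeddings are already normalized (see section 1.2), so a dot product suffices.
    emb_answer = np.array(embedding_model.embed_query(answer))
    emb_reference = np.array(embedding_model.embed_query(reference))
    scores.append(float(emb_answer @ emb_reference))

print(f"Mean answer similarity over {len(scores)} questions: {np.mean(scores):.3f}")
```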
561
+
562
+ ### Improving the retriever
563
+
564
+ 🛠️ __You can use these options to tune the results:__
565
+
566
+ - Tune the chunking method:
567
+ - Size of the chunks
568
+ - Method: split on different separators, use [semantic chunking](https://python.langchain.com/docs/modules/data_connection/document_transformers/semantic-chunker)...
569
+ - Change the embedding model
570
+
571
+ 👷‍♀️ __More could be considered:__
572
+ - Try another chunking method, like semantic chunking
573
+ - Change the index used (here, FAISS)
574
+ - Query expansion: reformulate the user query in slightly different ways to retrieve more documents (a small sketch follows below).
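A hypothetical sketch of query expansion, reusing the `READER_LLM` and `KNOWLEDGE_VECTOR_DATABASE` objects defined above: we ask the reader LLM for a few paraphrases of the question, retrieve with each variant, and deduplicate the pooled chunks. The function name, prompt wording, and default values are assumptions made for illustration.

```python
def expand_and_retrieve(question: str, n_variants: int = 3, k_per_query: int = 5):
    # Ask the reader LLM for paraphrases of the question, one per line.
    prompt = (
        f"Rewrite the following question in {n_variants} different ways, one per line, "
        f"without answering it:\n{question}"
    )
    variants = [question] + [
        line.strip() for line in READER_LLM(prompt)[0]["generated_text"].splitlines() if line.strip()
    ][:n_variants]

    # Retrieve with each variant and deduplicate identical chunks.
    pooled, seen = [], set()
    for query in variants:
        for doc in KNOWLEDGE_VECTOR_DATABASE.similarity_search(query=query, k=k_per_query):
            if doc.page_content not in seen:
                seen.add(doc.page_content)
                pooled.append(doc)
    return pooled

docs = expand_and_retrieve("how to create a pipeline object?")
print(f"Retrieved {len(docs)} unique chunks across query variants")
```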
575
+
576
+ ### Improving the reader
577
+
578
+ 🛠️ __Here you can try the following options to improve results:__
579
+ - Tune the prompt
580
+ - Switch reranking on/off
581
+ - Choose a more powerful reader model
582
+
583
+ 💡 __Many options could be considered here to further improve the results:__
584
+ - Compress the retrieved context to keep only the most relevant parts to answer the query.
585
+ - Extend the RAG system to make it more user-friendly:
586
+ - cite sources
587
+ - make it conversational
588
+
src/notebooks/automatic_embedding.ipynb ADDED
@@ -0,0 +1,825 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "id": "5d9aca72-957a-4ee2-862f-e011b9cd3a62",
6
+ "metadata": {},
7
+ "source": [
8
+ "---\n",
9
+ "title: \"Inference Endpoints\"\n",
10
+ "---\n",
11
+ "\n",
12
+ "# How to use Inference Endpoints to Embed Documents\n",
13
+ "\n",
14
+ "_Authored by: [Derek Thomas](https://huggingface.co/derek-thomas)_\n",
15
+ "\n",
16
+ "## Goal\n",
17
+ "I have a dataset I want to embed for semantic search (or QA, or RAG), and I want the easiest way to embed it and put it in a new dataset.\n",
18
+ "\n",
19
+ "## Approach\n",
20
+ "I'm using a dataset from my favorite subreddit [r/bestofredditorupdates](https://www.reddit.com/r/bestofredditorupdates/). Because it has long entries, I will use the new [jinaai/jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) since it has an 8k context length. I will deploy this using [Inference Endpoint](https://huggingface.co/inference-endpoints) to save time and money. To follow this tutorial, you will need to **have already added a payment method**. If you haven't, you can add one here in [billing](https://huggingface.co/docs/hub/billing#billing). To make it even easier, I'll make this fully API based.\n",
21
+ "\n",
22
+ "To make this MUCH faster I will use the [Text Embeddings Inference](https://github.com/huggingface/text-embeddings-inference) image. This has many benefits like:\n",
23
+ "- No model graph compilation step\n",
24
+ "- Small docker images and fast boot times. Get ready for true serverless!\n",
25
+ "- Token based dynamic batching\n",
26
+ "- Optimized transformers code for inference using Flash Attention, Candle and cuBLASLt\n",
27
+ "- Safetensors weight loading\n",
28
+ "- Production ready (distributed tracing with Open Telemetry, Prometheus metrics)\n",
29
+ "\n",
30
+ "![img](https://media.githubusercontent.com/media/huggingface/text-embeddings-inference/main/assets/bs1-tp.png)"
31
+ ]
32
+ },
33
+ {
34
+ "cell_type": "markdown",
35
+ "id": "3c830114-dd88-45a9-81b9-78b0e3da7384",
36
+ "metadata": {},
37
+ "source": [
38
+ "## Requirements"
39
+ ]
40
+ },
41
+ {
42
+ "cell_type": "code",
43
+ "execution_count": null,
44
+ "id": "35386f72-32cb-49fa-a108-3aa504e20429",
45
+ "metadata": {
46
+ "tags": []
47
+ },
48
+ "outputs": [],
49
+ "source": [
50
+ "!pip install -q aiohttp==3.8.3 datasets==2.14.6 pandas==1.5.3 requests==2.31.0 tqdm==4.66.1 huggingface-hub>=0.20"
51
+ ]
52
+ },
53
+ {
54
+ "cell_type": "markdown",
55
+ "id": "b6f72042-173d-4a72-ade1-9304b43b528d",
56
+ "metadata": {},
57
+ "source": [
58
+ "## Imports"
59
+ ]
60
+ },
61
+ {
62
+ "cell_type": "code",
63
+ "execution_count": 3,
64
+ "id": "e2beecdd-d033-4736-bd45-6754ec53b4ac",
65
+ "metadata": {
66
+ "tags": []
67
+ },
68
+ "outputs": [],
69
+ "source": [
70
+ "import asyncio\n",
71
+ "from getpass import getpass\n",
72
+ "import json\n",
73
+ "from pathlib import Path\n",
74
+ "import time\n",
75
+ "from typing import Optional\n",
76
+ "\n",
77
+ "from aiohttp import ClientSession, ClientTimeout\n",
78
+ "from datasets import load_dataset, Dataset, DatasetDict\n",
79
+ "from huggingface_hub import notebook_login, create_inference_endpoint, list_inference_endpoints, whoami\n",
80
+ "import numpy as np\n",
81
+ "import pandas as pd\n",
82
+ "import requests\n",
83
+ "from tqdm.auto import tqdm"
84
+ ]
85
+ },
86
+ {
87
+ "cell_type": "markdown",
88
+ "id": "5eece903-64ce-435d-a2fd-096c0ff650bf",
89
+ "metadata": {},
90
+ "source": [
91
+ "## Config\n",
92
+ "`DATASET_IN` is where your text data is\n",
93
+ "`DATASET_OUT` is where your embeddings will be stored\n",
94
+ "\n",
95
+ "Note I used 5 for the `MAX_WORKERS` since `jina-embeddings-v2` is quite memory hungry. "
96
+ ]
97
+ },
98
+ {
99
+ "cell_type": "code",
100
+ "execution_count": 4,
101
+ "id": "df2f79f0-9f28-46e6-9fc7-27e9537ff5be",
102
+ "metadata": {
103
+ "tags": []
104
+ },
105
+ "outputs": [],
106
+ "source": [
107
+ "DATASET_IN = 'derek-thomas/dataset-creator-reddit-bestofredditorupdates'\n",
108
+ "DATASET_OUT = \"processed-subset-bestofredditorupdates\"\n",
109
+ "ENDPOINT_NAME = \"boru-jina-embeddings-demo-ie\"\n",
110
+ "\n",
111
+ "MAX_WORKERS = 5 # This is for how many async workers you want. Choose based on the model and hardware \n",
112
+ "ROW_COUNT = 100 # Choose None to use all rows, I'm using 100 just for a demo"
113
+ ]
114
+ },
115
+ {
116
+ "cell_type": "markdown",
117
+ "id": "1e680f3d-4900-46cc-8b49-bb6ba3e27e2b",
118
+ "metadata": {},
119
+ "source": [
120
+ "Hugging Face offers a number of GPUs that you can choose from in Inference Endpoints. Here they are in table form:\n",
121
+ "\n",
122
+ "| GPU | instanceType | instanceSize | vRAM |\n",
123
+ "|---------------------|----------------|--------------|-------|\n",
124
+ "| 1x Nvidia Tesla T4 | g4dn.xlarge | small | 16GB |\n",
125
+ "| 4x Nvidia Tesla T4 | g4dn.12xlarge | large | 64GB |\n",
126
+ "| 1x Nvidia A10G | g5.2xlarge | medium | 24GB |\n",
127
+ "| 4x Nvidia A10G | g5.12xlarge | xxlarge | 96GB |\n",
128
+ "| 1x Nvidia A100* | p4de | xlarge | 80GB |\n",
129
+ "| 2x Nvidia A100* | p4de | 2xlarge | 160GB |\n",
130
+ "\n",
131
+ "\\*Note that for A100s you might get a note to email us to get access."
132
+ ]
133
+ },
134
+ {
135
+ "cell_type": "code",
136
+ "execution_count": 4,
137
+ "id": "3c2106c1-2e5a-443a-9ea8-a3cd0e9c5a94",
138
+ "metadata": {
139
+ "tags": []
140
+ },
141
+ "outputs": [],
142
+ "source": [
143
+ "# GPU Choice\n",
144
+ "VENDOR=\"aws\"\n",
145
+ "REGION=\"us-east-1\"\n",
146
+ "INSTANCE_SIZE=\"medium\"\n",
147
+ "INSTANCE_TYPE=\"g5.2xlarge\""
148
+ ]
149
+ },
150
+ {
151
+ "cell_type": "code",
152
+ "execution_count": 5,
153
+ "id": "0ca1140c-3fcc-4b99-9210-6da1505a27b7",
154
+ "metadata": {
155
+ "tags": []
156
+ },
157
+ "outputs": [
158
+ {
159
+ "data": {
160
+ "application/vnd.jupyter.widget-view+json": {
161
+ "model_id": "ee80821056e147fa9cabf30f64dc85a8",
162
+ "version_major": 2,
163
+ "version_minor": 0
164
+ },
165
+ "text/plain": [
166
+ "VBox(children=(HTML(value='<center> <img\\nsrc=https://huggingface.co/front/assets/huggingface_logo-noborder.sv…"
167
+ ]
168
+ },
169
+ "metadata": {},
170
+ "output_type": "display_data"
171
+ }
172
+ ],
173
+ "source": [
174
+ "notebook_login()"
175
+ ]
176
+ },
177
+ {
178
+ "cell_type": "markdown",
179
+ "id": "5f4ba0a8-0a6c-4705-a73b-7be09b889610",
180
+ "metadata": {},
181
+ "source": [
182
+ "Some users might have payment registered in an organization. This allows you to connect to an organization (that you are a member of) with a payment method.\n",
183
+ "\n",
184
+ "Leave it blank if you want to use your username."
185
+ ]
186
+ },
187
+ {
188
+ "cell_type": "code",
189
+ "execution_count": 6,
190
+ "id": "88cdbd73-5923-4ae9-9940-b6be935f70fa",
191
+ "metadata": {
192
+ "tags": []
193
+ },
194
+ "outputs": [
195
+ {
196
+ "name": "stdout",
197
+ "output_type": "stream",
198
+ "text": [
199
+ "What is your Hugging Face 🤗 username or organization? (with an added payment method) ········\n"
200
+ ]
201
+ }
202
+ ],
203
+ "source": [
204
+ "who = whoami()\n",
205
+ "organization = getpass(prompt=\"What is your Hugging Face 🤗 username or organization? (with an added payment method)\")\n",
206
+ "\n",
207
+ "namespace = organization or who['name']"
208
+ ]
209
+ },
210
+ {
211
+ "cell_type": "markdown",
212
+ "id": "b972a719-2aed-4d2e-a24f-fae7776d5fa4",
213
+ "metadata": {},
214
+ "source": [
215
+ "## Get Dataset"
216
+ ]
217
+ },
218
+ {
219
+ "cell_type": "code",
220
+ "execution_count": 7,
221
+ "id": "27835fa4-3a4f-44b1-a02a-5e31584a1bba",
222
+ "metadata": {
223
+ "tags": []
224
+ },
225
+ "outputs": [
226
+ {
227
+ "data": {
228
+ "application/vnd.jupyter.widget-view+json": {
229
+ "model_id": "4041cedd3b3f4f8db3e29ec102f46a3a",
230
+ "version_major": 2,
231
+ "version_minor": 0
232
+ },
233
+ "text/plain": [
234
+ "Downloading readme: 0%| | 0.00/1.73k [00:00<?, ?B/s]"
235
+ ]
236
+ },
237
+ "metadata": {},
238
+ "output_type": "display_data"
239
+ },
240
+ {
241
+ "data": {
242
+ "text/plain": [
243
+ "Dataset({\n",
244
+ " features: ['id', 'content', 'score', 'date_utc', 'title', 'flair', 'poster', 'permalink', 'new', 'updated'],\n",
245
+ " num_rows: 10042\n",
246
+ "})"
247
+ ]
248
+ },
249
+ "execution_count": 7,
250
+ "metadata": {},
251
+ "output_type": "execute_result"
252
+ }
253
+ ],
254
+ "source": [
255
+ "dataset = load_dataset(DATASET_IN)\n",
256
+ "dataset['train']"
257
+ ]
258
+ },
259
+ {
260
+ "cell_type": "code",
261
+ "execution_count": 8,
262
+ "id": "8846087e-4d0d-4c0e-8aeb-ea95d9e97126",
263
+ "metadata": {
264
+ "tags": []
265
+ },
266
+ "outputs": [
267
+ {
268
+ "data": {
269
+ "text/plain": [
270
+ "(100,\n",
271
+ " {'id': '10004zw',\n",
272
+ " 'content': '[removed]',\n",
273
+ " 'score': 1,\n",
274
+ " 'date_utc': Timestamp('2022-12-31 18:16:22'),\n",
275
+ " 'title': 'To All BORU contributors, Thank you :)',\n",
276
+ " 'flair': 'CONCLUDED',\n",
277
+ " 'poster': 'IsItAcOnSeQuEnCe',\n",
278
+ " 'permalink': '/r/BestofRedditorUpdates/comments/10004zw/to_all_boru_contributors_thank_you/',\n",
279
+ " 'new': False,\n",
280
+ " 'updated': False})"
281
+ ]
282
+ },
283
+ "execution_count": 8,
284
+ "metadata": {},
285
+ "output_type": "execute_result"
286
+ }
287
+ ],
288
+ "source": [
289
+ "documents = dataset['train'].to_pandas().to_dict('records')[:ROW_COUNT]\n",
290
+ "len(documents), documents[0]"
291
+ ]
292
+ },
293
+ {
294
+ "cell_type": "markdown",
295
+ "id": "93096cbc-81c6-4137-a283-6afb0f48fbb9",
296
+ "metadata": {},
297
+ "source": [
298
+ "# Inference Endpoints\n",
299
+ "## Create Inference Endpoint\n",
300
+ "We are going to use the [API](https://huggingface.co/docs/inference-endpoints/api_reference) to create an [Inference Endpoint](https://huggingface.co/inference-endpoints). This should provide a few main benefits:\n",
301
+ "- It's convenient (No clicking)\n",
302
+ "- It's repeatable (We have the code to run it easily)\n",
303
+ "- It's cheaper (no time spent waiting for it to load, and we can shut it down automatically)\n",
304
+ "\n"
305
+ ]
306
+ },
307
+ {
308
+ "cell_type": "code",
309
+ "execution_count": 9,
310
+ "id": "9e59de46-26b7-4bb9-bbad-8bba9931bde7",
311
+ "metadata": {
312
+ "tags": []
313
+ },
314
+ "outputs": [],
315
+ "source": [
316
+ "try:\n",
317
+ " endpoint = create_inference_endpoint(\n",
318
+ " ENDPOINT_NAME,\n",
319
+ " repository=\"jinaai/jina-embeddings-v2-base-en\",\n",
320
+ " revision=\"7302ac470bed880590f9344bfeee32ff8722d0e5\",\n",
321
+ " task=\"sentence-embeddings\",\n",
322
+ " framework=\"pytorch\",\n",
323
+ " accelerator=\"gpu\",\n",
324
+ " instance_size=INSTANCE_SIZE,\n",
325
+ " instance_type=INSTANCE_TYPE,\n",
326
+ " region=REGION,\n",
327
+ " vendor=VENDOR,\n",
328
+ " namespace=namespace,\n",
329
+ " custom_image={\n",
330
+ " \"health_route\": \"/health\",\n",
331
+ " \"env\": {\n",
332
+ " \"MAX_BATCH_TOKENS\": str(MAX_WORKERS * 2048),\n",
333
+ " \"MAX_CONCURRENT_REQUESTS\": \"512\",\n",
334
+ " \"MODEL_ID\": \"/repository\"\n",
335
+ " },\n",
336
+ " \"url\": \"ghcr.io/huggingface/text-embeddings-inference:0.5.0\",\n",
337
+ " },\n",
338
+ " type=\"protected\",\n",
339
+ " )\n",
340
+ "except:\n",
341
+ " endpoint = [ie for ie in list_inference_endpoints(namespace=namespace) if ie.name == ENDPOINT_NAME][0]\n",
342
+ " print('Loaded endpoint')"
343
+ ]
344
+ },
345
+ {
346
+ "cell_type": "markdown",
347
+ "id": "0f2c97dc-34e8-49e9-b60e-f5b7366294c0",
348
+ "metadata": {},
349
+ "source": [
350
+ "There are a few design choices here:\n",
351
+ "- As discussed before we are using `jinaai/jina-embeddings-v2-base-en` as our model. \n",
352
+ " - For reproducibility we are pinning it to a specific revision.\n",
353
+ "- If you are interested in more models, check out the supported list [here](https://huggingface.co/docs/text-embeddings-inference/supported_models). \n",
354
+ " - Note that most embedding models are based on the BERT architecture.\n",
355
+ "- `MAX_BATCH_TOKENS` is chosen based on our number of workers and the context window of our embedding model.\n",
356
+ "- `type=\"protected\"` utilizes the security from Inference Endpoints detailed here.\n",
357
+ "- I'm using **1x Nvidia A10** since `jina-embeddings-v2` is memory hungry (remember the 8k context length). \n",
358
+ "- You should consider further tuning `MAX_BATCH_TOKENS` and `MAX_CONCURRENT_REQUESTS` if you have high workloads\n"
359
+ ]
360
+ },
361
+ {
362
+ "cell_type": "markdown",
363
+ "id": "96d173b2-8980-4554-9039-c62843d3fc7d",
364
+ "metadata": {},
365
+ "source": [
366
+ "## Wait until it's running"
367
+ ]
368
+ },
369
+ {
370
+ "cell_type": "code",
371
+ "execution_count": 10,
372
+ "id": "5f3a8bd2-753c-49a8-9452-899578beddc5",
373
+ "metadata": {
374
+ "tags": []
375
+ },
376
+ "outputs": [
377
+ {
378
+ "name": "stdout",
379
+ "output_type": "stream",
380
+ "text": [
381
+ "CPU times: user 48.1 ms, sys: 15.7 ms, total: 63.8 ms\n",
382
+ "Wall time: 52.6 s\n"
383
+ ]
384
+ },
385
+ {
386
+ "data": {
387
+ "text/plain": [
388
+ "InferenceEndpoint(name='boru-jina-embeddings-demo-ie', namespace='HF-test-lab', repository='jinaai/jina-embeddings-v2-base-en', status='running', url='https://k7l1xeok1jwnpbx5.us-east-1.aws.endpoints.huggingface.cloud')"
389
+ ]
390
+ },
391
+ "execution_count": 10,
392
+ "metadata": {},
393
+ "output_type": "execute_result"
394
+ }
395
+ ],
396
+ "source": [
397
+ "%%time\n",
398
+ "endpoint.wait()"
399
+ ]
400
+ },
401
+ {
402
+ "cell_type": "markdown",
403
+ "id": "a906645e-60de-4eb6-b8b6-3ec98a9d9b00",
404
+ "metadata": {},
405
+ "source": [
406
+ "When we use `endpoint.client.post` we get a bytes string back. This is a little tedious because we need to convert this to an `np.array`, but it's just a couple of quick lines in Python."
407
+ ]
408
+ },
409
+ {
410
+ "cell_type": "code",
411
+ "execution_count": 12,
412
+ "id": "e09253d5-70ff-4d0e-8888-0022ce0adf7b",
413
+ "metadata": {
414
+ "tags": []
415
+ },
416
+ "outputs": [
417
+ {
418
+ "data": {
419
+ "text/plain": [
420
+ "array([-0.05630935, -0.03560849, 0.02789049, 0.02792823, -0.02800371,\n",
421
+ " -0.01530391, -0.01863454, -0.0077982 , 0.05374297, 0.03672185,\n",
422
+ " -0.06114018, -0.06880157, -0.0093503 , -0.03174005, -0.03206085,\n",
423
+ " 0.0610647 , 0.02243694, 0.03217408, 0.04181686, 0.00248854])"
424
+ ]
425
+ },
426
+ "execution_count": 12,
427
+ "metadata": {},
428
+ "output_type": "execute_result"
429
+ }
430
+ ],
431
+ "source": [
432
+ "response = endpoint.client.post(json={\"inputs\": 'This sound track was beautiful! It paints the senery in your mind so well I would recomend it even to people who hate vid. game music!', 'truncate': True}, task=\"feature-extraction\")\n",
433
+ "response = np.array(json.loads(response.decode()))\n",
434
+ "response[0][:20]"
435
+ ]
436
+ },
437
+ {
438
+ "cell_type": "markdown",
439
+ "id": "0d024788-6e6e-4a8d-b192-36ee3dacca13",
440
+ "metadata": {},
441
+ "source": [
442
+ "You may have inputs that exceed the model's context length. In such scenarios, it's up to you to handle them. In my case, I'd rather truncate than get an error. Let's test that this works."
443
+ ]
444
+ },
445
+ {
446
+ "cell_type": "code",
447
+ "execution_count": 13,
448
+ "id": "a4a1cd15-dda3-4cfa-8bda-788d8c1b9e32",
449
+ "metadata": {
450
+ "tags": []
451
+ },
452
+ "outputs": [
453
+ {
454
+ "name": "stdout",
455
+ "output_type": "stream",
456
+ "text": [
457
+ "The length of the embedding_input is: 300000\n"
458
+ ]
459
+ },
460
+ {
461
+ "data": {
462
+ "text/plain": [
463
+ "array([-0.03088215, -0.0351537 , 0.05749275, 0.00983467, 0.02108356,\n",
464
+ " 0.04539965, 0.06107162, -0.02536954, 0.03887688, 0.01998681,\n",
465
+ " -0.05391388, 0.01529677, -0.1279156 , 0.01653782, -0.01940958,\n",
466
+ " 0.0367411 , 0.0031748 , 0.04716022, -0.00713609, -0.00155313])"
467
+ ]
468
+ },
469
+ "execution_count": 13,
470
+ "metadata": {},
471
+ "output_type": "execute_result"
472
+ }
473
+ ],
474
+ "source": [
475
+ "embedding_input = 'This input will get multiplied' * 10000\n",
476
+ "print(f'The length of the embedding_input is: {len(embedding_input)}')\n",
477
+ "response = endpoint.client.post(json={\"inputs\": embedding_input, 'truncate': True}, task=\"feature-extraction\")\n",
478
+ "response = np.array(json.loads(response.decode()))\n",
479
+ "response[0][:20]"
480
+ ]
481
+ },
482
+ {
483
+ "cell_type": "markdown",
484
+ "id": "f7186126-ef6a-47d0-b158-112810649cd9",
485
+ "metadata": {},
486
+ "source": [
487
+ "# Get Embeddings"
488
+ ]
489
+ },
490
+ {
491
+ "cell_type": "markdown",
492
+ "id": "1dadfd68-6d46-4ce8-a165-bfeb43b1f114",
493
+ "metadata": {},
494
+ "source": [
495
+ "Here I send a document, update it with its embedding, and return it. These requests run in parallel, with concurrency capped at `MAX_WORKERS`."
496
+ ]
497
+ },
498
+ {
499
+ "cell_type": "code",
500
+ "execution_count": 14,
501
+ "id": "ad3193fb-3def-42a8-968e-c63f2b864ca8",
502
+ "metadata": {
503
+ "tags": []
504
+ },
505
+ "outputs": [],
506
+ "source": [
507
+ "async def request(document, semaphore):\n",
508
+ " # Semaphore guard\n",
509
+ " async with semaphore:\n",
510
+ " result = await endpoint.async_client.post(json={\"inputs\": document['content'], 'truncate': True}, task=\"feature-extraction\")\n",
511
+ " result = np.array(json.loads(result.decode()))\n",
512
+ " document['embedding'] = result[0] # Assuming the API's output can be directly assigned\n",
513
+ " return document\n",
514
+ "\n",
515
+ "async def main(documents):\n",
516
+ " # Semaphore to limit concurrent requests. Adjust the number as needed.\n",
517
+ " semaphore = asyncio.BoundedSemaphore(MAX_WORKERS)\n",
518
+ "\n",
519
+ " # Creating a list of tasks\n",
520
+ " tasks = [request(document, semaphore) for document in documents]\n",
521
+ " \n",
522
+ " # Using tqdm to show progress. It's been integrated into the async loop.\n",
523
+ " for f in tqdm(asyncio.as_completed(tasks), total=len(documents)):\n",
524
+ " await f"
525
+ ]
526
+ },
527
+ {
528
+ "cell_type": "code",
529
+ "execution_count": 15,
530
+ "id": "ec4983af-65eb-4841-808a-3738fb4d682d",
531
+ "metadata": {
532
+ "tags": []
533
+ },
534
+ "outputs": [
535
+ {
536
+ "data": {
537
+ "application/vnd.jupyter.widget-view+json": {
538
+ "model_id": "48a2affdee8d46f3b0c1f691eaac4b89",
539
+ "version_major": 2,
540
+ "version_minor": 0
541
+ },
542
+ "text/plain": [
543
+ " 0%| | 0/100 [00:00<?, ?it/s]"
544
+ ]
545
+ },
546
+ "metadata": {},
547
+ "output_type": "display_data"
548
+ },
549
+ {
550
+ "name": "stdout",
551
+ "output_type": "stream",
552
+ "text": [
553
+ "Embeddings = 100 documents = 100\n",
554
+ "0 min 21.33 sec\n"
555
+ ]
556
+ }
557
+ ],
558
+ "source": [
559
+ "start = time.perf_counter()\n",
560
+ "\n",
561
+ "# Get embeddings\n",
562
+ "await main(documents)\n",
563
+ "\n",
564
+ "# Make sure we got it all\n",
565
+ "count = 0\n",
566
+ "for document in documents:\n",
567
+ " if 'embedding' in document.keys() and len(document['embedding']) == 768:\n",
568
+ " count += 1\n",
569
+ "print(f'Embeddings = {count} documents = {len(documents)}')\n",
570
+ "\n",
571
+ " \n",
572
+ "# Print elapsed time\n",
573
+ "elapsed_time = time.perf_counter() - start\n",
574
+ "minutes, seconds = divmod(elapsed_time, 60)\n",
575
+ "print(f\"{int(minutes)} min {seconds:.2f} sec\")"
576
+ ]
577
+ },
578
+ {
579
+ "cell_type": "markdown",
580
+ "id": "bab97c7b-7bac-4bf5-9752-b528294dadc7",
581
+ "metadata": {},
582
+ "source": [
583
+ "## Pause Inference Endpoint\n",
584
+ "Now that we have finished, let's pause the endpoint so we don't incur any extra charges; this will also allow us to analyze the cost."
585
+ ]
586
+ },
587
+ {
588
+ "cell_type": "code",
589
+ "execution_count": 16,
590
+ "id": "540a0978-7670-4ce3-95c1-3823cc113b85",
591
+ "metadata": {
592
+ "tags": []
593
+ },
594
+ "outputs": [
595
+ {
596
+ "name": "stdout",
597
+ "output_type": "stream",
598
+ "text": [
599
+ "Endpoint Status: paused\n"
600
+ ]
601
+ }
602
+ ],
603
+ "source": [
604
+ "endpoint = endpoint.pause()\n",
605
+ "\n",
606
+ "print(f\"Endpoint Status: {endpoint.status}\")"
607
+ ]
608
+ },
609
+ {
610
+ "cell_type": "markdown",
611
+ "id": "45ad65b7-3da2-4113-9b95-8fb4e21ae793",
612
+ "metadata": {},
613
+ "source": [
614
+ "# Push updated dataset to Hub\n",
615
+ "We now have our documents updated with the embeddings we wanted. First we need to convert them back to a `Dataset` format. I find it easiest to go from a list of dicts -> `pd.DataFrame` -> `Dataset`."
616
+ ]
617
+ },
618
+ {
619
+ "cell_type": "code",
620
+ "execution_count": 17,
621
+ "id": "9bb993f8-d624-4192-9626-8e9ed9888a1b",
622
+ "metadata": {
623
+ "tags": []
624
+ },
625
+ "outputs": [],
626
+ "source": [
627
+ "df = pd.DataFrame(documents)\n",
628
+ "dd = DatasetDict({'train': Dataset.from_pandas(df)})"
629
+ ]
630
+ },
631
+ {
632
+ "cell_type": "markdown",
633
+ "id": "129760c8-cae1-4b1e-8216-f5152df8c536",
634
+ "metadata": {},
635
+ "source": [
636
+ "I'm uploading it to the user's account by default (as opposed to uploading to an organization), but feel free to push it wherever you want by setting the user in the `repo_id`, or in the config by setting `DATASET_OUT`."
637
+ ]
638
+ },
639
+ {
640
+ "cell_type": "code",
641
+ "execution_count": 18,
642
+ "id": "f48e7c55-d5b7-4ed6-8516-272ae38716b1",
643
+ "metadata": {
644
+ "tags": []
645
+ },
646
+ "outputs": [
647
+ {
648
+ "data": {
649
+ "application/vnd.jupyter.widget-view+json": {
650
+ "model_id": "d3af2e864770481db5adc3968500b5d3",
651
+ "version_major": 2,
652
+ "version_minor": 0
653
+ },
654
+ "text/plain": [
655
+ "Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]"
656
+ ]
657
+ },
658
+ "metadata": {},
659
+ "output_type": "display_data"
660
+ },
661
+ {
662
+ "data": {
663
+ "application/vnd.jupyter.widget-view+json": {
664
+ "model_id": "4e063c42d8f4490c939bc64e626b507a",
665
+ "version_major": 2,
666
+ "version_minor": 0
667
+ },
668
+ "text/plain": [
669
+ "Downloading metadata: 0%| | 0.00/823 [00:00<?, ?B/s]"
670
+ ]
671
+ },
672
+ "metadata": {},
673
+ "output_type": "display_data"
674
+ }
675
+ ],
676
+ "source": [
677
+ "dd.push_to_hub(repo_id=DATASET_OUT)"
678
+ ]
679
+ },
680
+ {
681
+ "cell_type": "code",
682
+ "execution_count": 19,
683
+ "id": "85ea2244-a4c6-4f04-b187-965a2fc356a8",
684
+ "metadata": {
685
+ "tags": []
686
+ },
687
+ "outputs": [
688
+ {
689
+ "name": "stdout",
690
+ "output_type": "stream",
691
+ "text": [
692
+ "Dataset is at https://huggingface.co/datasets/derek-thomas/processed-subset-bestofredditorupdates\n"
693
+ ]
694
+ }
695
+ ],
696
+ "source": [
697
+ "print(f'Dataset is at https://huggingface.co/datasets/{who[\"name\"]}/{DATASET_OUT}')"
698
+ ]
699
+ },
700
+ {
701
+ "cell_type": "markdown",
702
+ "id": "41abea64-379d-49de-8d9a-355c2f4ce1ac",
703
+ "metadata": {},
704
+ "source": [
705
+ "# Analyze Usage\n",
706
+ "1. Go to your `dashboard_url` printed below\n",
707
+ "1. Click on the Usage & Cost tab\n",
708
+ "1. See how much you have spent"
709
+ ]
710
+ },
711
+ {
712
+ "cell_type": "code",
713
+ "execution_count": 20,
714
+ "id": "16815445-3079-43da-b14e-b54176a07a62",
715
+ "metadata": {},
716
+ "outputs": [
717
+ {
718
+ "name": "stdout",
719
+ "output_type": "stream",
720
+ "text": [
721
+ "https://ui.endpoints.huggingface.co/HF-test-lab/endpoints/boru-jina-embeddings-demo-ie\n"
722
+ ]
723
+ }
724
+ ],
725
+ "source": [
726
+ "dashboard_url = f'https://ui.endpoints.huggingface.co/{namespace}/endpoints/{ENDPOINT_NAME}'\n",
727
+ "print(dashboard_url)"
728
+ ]
729
+ },
730
+ {
731
+ "cell_type": "code",
732
+ "execution_count": 21,
733
+ "id": "81096c6f-d12f-4781-84ec-9066cfa465b3",
734
+ "metadata": {},
735
+ "outputs": [
736
+ {
737
+ "name": "stdout",
738
+ "output_type": "stream",
739
+ "text": [
740
+ "Hit enter to continue with the notebook \n"
741
+ ]
742
+ },
743
+ {
744
+ "data": {
745
+ "text/plain": [
746
+ "''"
747
+ ]
748
+ },
749
+ "execution_count": 21,
750
+ "metadata": {},
751
+ "output_type": "execute_result"
752
+ }
753
+ ],
754
+ "source": [
755
+ "input(\"Hit enter to continue with the notebook\")"
756
+ ]
757
+ },
758
+ {
759
+ "cell_type": "markdown",
760
+ "id": "847d524e-9aa6-4a6f-a275-8a552e289818",
761
+ "metadata": {},
762
+ "source": [
763
+ "We can see that this entire run only cost `$0.04`!\n"
764
+ ]
765
+ },
766
+ {
767
+ "cell_type": "markdown",
768
+ "id": "b953d5be-2494-4ff8-be42-9daf00c99c41",
769
+ "metadata": {},
770
+ "source": [
771
+ "\n",
772
+ "# Delete Endpoint\n",
773
+ "Now that we are done, we don't need our endpoint anymore, so we can delete it programmatically.\n",
774
+ "\n",
775
+ "![Cost](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/automatic_embedding_tei_inference_endpoints.png)"
776
+ ]
777
+ },
778
+ {
779
+ "cell_type": "code",
780
+ "execution_count": 22,
781
+ "id": "c310c0f3-6f12-4d5c-838b-3a4c1f2e54ad",
782
+ "metadata": {
783
+ "tags": []
784
+ },
785
+ "outputs": [
786
+ {
787
+ "name": "stdout",
788
+ "output_type": "stream",
789
+ "text": [
790
+ "Endpoint deleted successfully\n"
791
+ ]
792
+ }
793
+ ],
794
+ "source": [
795
+ "endpoint = endpoint.delete()\n",
796
+ "\n",
797
+ "if not endpoint:\n",
798
+ " print('Endpoint deleted successfully')\n",
799
+ "else:\n",
800
+ "    print('Delete the endpoint manually')"
801
+ ]
802
+ }
803
+ ],
804
+ "metadata": {
805
+ "kernelspec": {
806
+ "display_name": "Python 3 (ipykernel)",
807
+ "language": "python",
808
+ "name": "python3"
809
+ },
810
+ "language_info": {
811
+ "codemirror_mode": {
812
+ "name": "ipython",
813
+ "version": 3
814
+ },
815
+ "file_extension": ".py",
816
+ "mimetype": "text/x-python",
817
+ "name": "python",
818
+ "nbconvert_exporter": "python",
819
+ "pygments_lexer": "ipython3",
820
+ "version": "3.10.8"
821
+ }
822
+ },
823
+ "nbformat": 4,
824
+ "nbformat_minor": 5
825
+ }
src/notebooks/faiss.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
src/notebooks/rag_evaluation.qmd ADDED
@@ -0,0 +1,786 @@
1
+ ---
2
+ title: RAG Evaluation
3
+ jupyter: python3
4
+ eval: false
5
+ ---
6
+
7
+ ```{python}
8
+ !pip install -q torch transformers langchain sentence-transformers faiss-gpu openpyxl openai
9
+ ```
10
+
11
+ ```{python}
12
+ %reload_ext autoreload
13
+ %autoreload 2
14
+ %reload_ext dotenv
15
+ %dotenv
16
+ ```
17
+
18
+ ```{python}
19
+ from tqdm.notebook import tqdm
20
+ import pandas as pd
21
+ from typing import Optional, List, Tuple
22
+ from langchain_core.language_models import BaseChatModel
23
+ import json
24
+ import datasets
25
+
26
+ pd.set_option("display.max_colwidth", None)
27
+ ```
28
+
29
+ ### Load your knowledge base
30
+
31
+ ```{python}
32
+ ds = datasets.load_dataset("m-ric/huggingface_doc", split="train")
33
+ ```
34
+
35
+ # 1. Build a synthetic dataset for evaluation
36
+ We first build a synthetic dataset of questions and associated contexts. The method is to get elements from our knowledge base, and ask an LLM to generate questions based on these documents.
37
+
38
+ Then we setup other LLM agents to act as quality filters for the generated QA couples: each of them will act as the filter for a specific flaw.
39
+
40
+ ### 1.1. Prepare source documents
41
+
42
+ ```{python}
43
+ from langchain.text_splitter import RecursiveCharacterTextSplitter
44
+ from langchain.docstore.document import Document as LangchainDocument
45
+
46
+ langchain_docs = [
47
+ LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]})
48
+ for doc in tqdm(ds)
49
+ ]
50
+
51
+
52
+ text_splitter = RecursiveCharacterTextSplitter(
53
+ chunk_size=2000,
54
+ chunk_overlap=200,
55
+ add_start_index=True,
56
+ separators=["\n\n", "\n", ".", " ", ""],
57
+ )
58
+
59
+ docs_processed = []
60
+ for doc in langchain_docs:
61
+ docs_processed += text_splitter.split_documents([doc])
62
+ ```
63
+
64
+ ### 1.2. Setup agents for question generation
65
+
66
+ We use [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for QA couple generation because it has excellent performance in leaderboards such as [Chatbot Arena](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
67
+
68
+ ```{python}
69
+ from langchain_community.llms import HuggingFaceHub
70
+
71
+ repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
72
+
73
+ llm = HuggingFaceHub(
74
+ repo_id=repo_id,
75
+ task="text-generation",
76
+ model_kwargs={
77
+ "max_new_tokens": 512,
78
+ "top_k": 30,
79
+ "temperature": 0.1,
80
+ "repetition_penalty": 1.03,
81
+ },
82
+ )
83
+ ```
84
+
85
+ ```{python}
86
+ from langchain_community.chat_models import ChatHuggingFace
87
+
88
+ chat_model = ChatHuggingFace(llm=llm)
89
+ ```
90
+
91
+ ```{python}
92
+ from langchain.prompts import ChatPromptTemplate
93
+
94
+ QA_generation_prompt = """
95
+ Your task is to write a factoid question and an answer given a context.
96
+ Your factoid question should be answerable with a specific, concise piece of factual information from the context.
97
+ Your factoid question should be formulated in the same style as questions users could ask in a search engine.
98
+ This means that your factoid question MUST NOT mention something like "according to the passage" or "context".
99
+
100
+ Provide your answer as follows:
101
+
102
+ Output:::
103
+ Factoid question: (your factoid question)
104
+ Answer: (your answer to the factoid question)
105
+
106
+ Now here is the context.
107
+
108
+ Context: {context}\n
109
+ Output:::"""
110
+
111
+ QA_generation_prompt = ChatPromptTemplate.from_template(QA_generation_prompt)
112
+ QA_generation_agent = QA_generation_prompt | chat_model
113
+ ```
114
+
115
+ Now let's generate our QA couples.
116
+ For this example, we generate only 10 QA couples and will load the rest from the Hub.
117
+
118
+ But for your specific knowledge base, given that you want to get at least ~100 test samples, and accounting for the fact that we will filter out around half of these with our critique agents later on, you should generate many more: aim for over 200 samples.
119
+
120
+ ```{python}
121
+ import random
122
+
123
+ N_GENERATIONS = (
124
+ 10 # We intentionally generate only 10 QA couples here for cost and time considerations
125
+ )
126
+
127
+ print(f"Generating {N_GENERATIONS} QA couples...")
128
+ outputs = []
129
+ for context in tqdm(random.sample(langchain_docs, N_GENERATIONS)):
130
+ # Generate QA couple
131
+ output_QA_couple = QA_generation_agent.invoke({"context": context.page_content}).content
132
+ try:
133
+ question = output_QA_couple.split("Factoid question: ")[1].split("Answer: ")[0]
134
+ answer = output_QA_couple.split("Answer: ")[1]
135
+ outputs.append(
136
+ {
137
+ "context": context.page_content,
138
+ "question": question,
139
+ "answer": answer,
140
+ "source_doc": context.metadata["source"],
141
+ }
142
+ )
143
+ except:
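+         # The model occasionally doesn't follow the "Factoid question:/Answer:" output format: skip those generations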
144
+ continue
145
+ ```
146
+
147
+ ```{python}
148
+ display(pd.DataFrame(outputs).head(1))
149
+ ```
150
+
151
+ ### 1.3. Setup critique agents
152
+
153
+ The questions generated by the previous agent can have many flaws: we should do a quality check before validating these questions.
154
+
155
+ We thus build critique agents that will rate each question on several criteria, given in [this paper](https://huggingface.co/papers/2312.10003):
156
+ - **Groundedness:** can the question be answered from the given context?
157
+ - **Relevance:** is the question relevant to users? For instance, `"What is the date when transformers 4.29.1 was released?"` is not relevant for ML practitioners.
158
+
159
+ One last failure case we've noticed is when a question is tailored to the particular setting where it was generated, but is undecipherable by itself, like `"What is the name of the function used in this guide?"`.
160
+ We also build a critique agent for this criterion:
161
+ - **Stand-alone**: is the question understandable on its own, free of any context, for someone with domain knowledge/Internet access? The opposite of this would be `What is the function used in this article?` for a question generated from a specific blog article.
162
+
163
+ We systematically score questions with all these agents, and whenever the score is too low for any one of the agents, we eliminate the question from our eval dataset.
164
+
165
+ 💡 ___When asking the agents to output a score, we first ask them to produce a rationale. This will help us verify the scores, but most importantly, outputting the rationale first gives the model more tokens to think and elaborate an answer before summarizing it into a single score token.___
166
+
167
+ We now build and run these critique agents.
168
+
169
+ ```{python}
170
+ question_groundedness_critique_prompt = """
171
+ You will be given a context and a question.
172
+ Your task is to provide a 'total rating' scoring how well one can answer the given question unambiguously with the given context.
173
+ Give your answer on a scale of 1 to 5, where 1 means that the question is not answerable at all given the context, and 5 means that the question is clearly and unambiguously answerable with the context.
174
+
175
+ Provide your answer as follows:
176
+
177
+ Answer:::
178
+ Evaluation: (your rationale for the rating)
179
+ Total rating: (your rating)
180
+
181
+ Now here are the question and context.
182
+
183
+ Question: {question}\n
184
+ Context: {context}\n
185
+ Answer::: """
186
+
187
+ question_relevance_critique_prompt = """
188
+ You will be given a question.
189
+ Your task is to provide a 'total rating' representing how useful this question can be to machine learning developers building NLP applications with the Hugging Face ecosystem.
190
+ Give your answer on a scale of 1 to 5, where 1 means that the question is not useful at all, and 5 means that the question is extremely useful.
191
+
192
+ Provide your answer as follows:
193
+
194
+ Answer:::
195
+ Evaluation: (your rationale for the rating)
196
+ Total rating: (your rating)
197
+
198
+ Now here is the question.
199
+
200
+ Question: {question}\n
201
+ Answer::: """
202
+
203
+ question_standalone_critique_prompt = """
204
+ You will be given a question.
205
+ Your task is to provide a 'total rating' representing how context-independent this question is.
206
+ Give your answer on a scale of 1 to 5, where 1 means that the question only makes sense in a specific context, and 5 means that the question makes sense by itself.
207
+ For instance, if the question refers to a particular setting, like 'in the context' or 'in the document', the rating must be 1.
208
+ The questions can contain obscure technical nouns or acronyms like Gradio, Hub, Hugging Face or Space and still be a 5: it must simply be clear to an operator with access to documentation what the question is about.
209
+
210
+ Provide your answer as follows:
211
+
212
+ Answer:::
213
+ Evaluation: (your rationale for the rating)
214
+ Total rating: (your rating)
215
+
216
+ Now here is the question.
217
+
218
+ Question: {question}\n
219
+ Answer::: """
220
+
221
+ question_groundedness_critique_prompt = ChatPromptTemplate.from_template(
222
+ question_groundedness_critique_prompt
223
+ )
224
+ question_groundedness_critique_agent = question_groundedness_critique_prompt | chat_model
225
+
226
+ question_relevance_critique_prompt = ChatPromptTemplate.from_template(
227
+ question_relevance_critique_prompt
228
+ )
229
+ question_relevance_critique_agent = question_relevance_critique_prompt | chat_model
230
+
231
+ question_standalone_critique_prompt = ChatPromptTemplate.from_template(
232
+ question_standalone_critique_prompt
233
+ )
234
+ question_standalone_critique_agent = question_standalone_critique_prompt | chat_model
235
+ ```
236
+
237
+ ```{python}
238
+ print("Generating critique for each QA couple...")
239
+ for output in tqdm(outputs):
240
+ # Critique the generated QA couple
241
+ question_groundedness_evaluation = question_groundedness_critique_agent.invoke(
242
+ {"context": output["context"], "question": output["question"]}
243
+ ).content
244
+ question_relevance_evaluation = question_relevance_critique_agent.invoke(
245
+ {"question": output["question"]}
246
+ ).content
247
+ question_standalone_evaluation = question_standalone_critique_agent.invoke(
248
+ {"question": output["question"]}
249
+ ).content
250
+
251
+ try:
252
+ groundedness_score = int(question_groundedness_evaluation.split("Total rating: ")[1][0])
253
+ groundedness_eval = question_groundedness_evaluation.split("Total rating: ")[0].split(
254
+ "Evaluation: "
255
+ )[1]
256
+ relevance_score = int(question_relevance_evaluation.split("Total rating: ")[1][0])
257
+ relevance_eval = question_relevance_evaluation.split("Total rating: ")[0].split(
258
+ "Evaluation: "
259
+ )[1]
260
+ standalone_score = int(question_standalone_evaluation.split("Total rating: ")[1][0])
261
+ standalone_eval = question_standalone_evaluation.split("Total rating: ")[0].split(
262
+ "Evaluation: "
263
+ )[1]
264
+ output.update(
265
+ {
266
+ "groundedness_score": groundedness_score,
267
+ "groundedness_eval": groundedness_eval,
268
+ "relevance_score": relevance_score,
269
+ "relevance_eval": relevance_eval,
270
+ "standalone_score": standalone_score,
271
+ "standalone_eval": standalone_eval,
272
+ }
273
+ )
274
+ except:
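+         # If the critique output doesn't follow the expected format, leave this sample unscored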
275
+ continue
276
+ ```
277
+
278
+ Now let us filter out bad questions based on our critique agent scores:
279
+
280
+ ```{python}
281
+ import pandas as pd
282
+
283
+ pd.set_option("display.max_colwidth", None)
284
+
285
+ generated_questions = pd.DataFrame.from_dict(outputs)
286
+
287
+ print("Evaluation dataset before filtering:")
288
+ display(
289
+ generated_questions[
290
+ ["question", "answer", "groundedness_score", "relevance_score", "standalone_score"]
291
+ ]
292
+ )
293
+ generated_questions = generated_questions.loc[
294
+ (generated_questions["groundedness_score"] >= 4)
295
+ & (generated_questions["relevance_score"] >= 4)
296
+ & (generated_questions["standalone_score"] >= 4)
297
+ ]
298
+ print("============================================")
299
+ print("Final evaluation dataset:")
300
+ display(
301
+ generated_questions[
302
+ ["question", "answer", "groundedness_score", "relevance_score", "standalone_score"]
303
+ ]
304
+ )
305
+
306
+ eval_dataset = datasets.Dataset.from_pandas(
307
+ generated_questions, split="train", preserve_index=False
308
+ )
309
+ ```
310
+
311
+ Now our synthetic evaluation dataset is complete! We can evaluate different RAG systems on this evaluation dataset.
312
+
313
+ We have generated only a few QA couples here to reduce time and cost. But let's kick start the next part by loading a pre-generated dataset:
314
+
315
+ ```{python}
316
+ eval_dataset = datasets.load_dataset("m-ric/huggingface_doc_qa_eval", split="train")
317
+ ```
318
+
319
+ # 2. Build our RAG System
320
+
321
+ ### 2.1. Preprocessing documents to build our vector database
322
+
323
+ - In this part, __we split the documents from our knowledge base into smaller chunks__: these will be the snippets that are picked by the Retriever, to then be ingested by the Reader LLM as supporting elements for its answer.
324
+ - The goal is to build semantically relevant snippets: large enough to support an answer, but not so large that individual ideas get diluted.
325
+
326
+ Many options exist for text splitting:
327
+ - split every `n` words / characters, but this risks cutting paragraphs or even sentences in half
328
+ - split after `n` words / characters, but only on sentence boundaries
329
+ - **recursive split** tries to preserve even more of the document structure, by processing it in a tree-like way: splitting first on the largest units (chapters), then recursively splitting on smaller units (paragraphs, sentences).
330
+
331
+ To learn more about chunking, I recommend you read [this great notebook](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/5_Levels_Of_Text_Splitting.ipynb) by Greg Kamradt.
332
+
333
+ [This space](https://huggingface.co/spaces/m-ric/chunk_visualizer) lets you visualize how different splitting options affect the chunks you get.
334
+
335
+ > In the following, we use Langchain's `RecursiveCharacterTextSplitter`.
336
+
337
+ 💡 _To measure chunk length in our Text Splitter, our length function will not count characters but tokens in the tokenized text: since the subsequent embedder processes tokens, measuring length in tokens is more relevant and empirically performs better._
338
+
339
+ ```{python}
340
+ from langchain.docstore.document import Document as LangchainDocument
341
+
342
+ RAW_KNOWLEDGE_BASE = [
343
+ LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]})
344
+ for doc in tqdm(ds)
345
+ ]
346
+ ```
347
+
348
+ ```{python}
349
+ from langchain.text_splitter import RecursiveCharacterTextSplitter
350
+ from transformers import AutoTokenizer
351
+
352
+
353
+ def split_documents(
354
+ chunk_size: int,
355
+ knowledge_base: List[LangchainDocument],
356
+ tokenizer_name: str,
357
+ ) -> List[LangchainDocument]:
358
+ """
359
+     Split documents into chunks of at most `chunk_size` tokens and return a list of documents.
360
+ """
361
+ text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
362
+ AutoTokenizer.from_pretrained(tokenizer_name),
363
+ chunk_size=chunk_size,
364
+ chunk_overlap=int(chunk_size / 10),
365
+ add_start_index=True,
366
+ strip_whitespace=True,
367
+ separators=["\n\n", "\n", ".", " ", ""],
368
+ )
369
+
370
+ docs_processed = []
371
+ for doc in knowledge_base:
372
+ docs_processed += text_splitter.split_documents([doc])
373
+
374
+ # Remove duplicates
375
+ unique_texts = {}
376
+ docs_processed_unique = []
377
+ for doc in docs_processed:
378
+ if doc.page_content not in unique_texts:
379
+ unique_texts[doc.page_content] = True
380
+ docs_processed_unique.append(doc)
381
+
382
+ return docs_processed_unique
383
+ ```
384
+
385
+ ### 2.2. Retriever - embeddings 🗂️
386
+ The __retriever acts like an internal search engine__: given the user query, it returns the most relevant documents from your knowledge base.
387
+
388
+ > For the knowledge base, we use Langchain vector stores since __they offer a convenient [FAISS](https://github.com/facebookresearch/faiss) index and allow us to keep document metadata throughout the processing__.
389
+
390
+ 🛠️ __Options included:__
391
+
392
+ - Tune the chunking method:
393
+ - Size of the chunks
394
+ - Method: split on different separators, use [semantic chunking](https://python.langchain.com/docs/modules/data_connection/document_transformers/semantic-chunker)...
395
+ - Change the embedding model
396
+
397
+ ```{python}
398
+ from langchain.vectorstores import FAISS
399
+ from langchain_community.embeddings import HuggingFaceEmbeddings
400
+ from langchain_community.vectorstores.utils import DistanceStrategy
401
+ import os
402
+
403
+
404
+ def load_embeddings(
405
+ langchain_docs: List[LangchainDocument],
406
+ chunk_size: int,
407
+ embedding_model_name: Optional[str] = "thenlper/gte-small",
408
+ ) -> FAISS:
409
+ """
410
+ Creates a FAISS index from the given embedding model and documents. Loads the index directly if it already exists.
411
+
412
+ Args:
413
+ langchain_docs: list of documents
414
+ chunk_size: size of the chunks to split the documents into
415
+ embedding_model_name: name of the embedding model to use
416
+
417
+ Returns:
418
+ FAISS index
419
+ """
420
+ # load embedding_model
421
+ embedding_model = HuggingFaceEmbeddings(
422
+ model_name=embedding_model_name,
423
+ multi_process=True,
424
+ model_kwargs={"device": "cuda"},
425
+ encode_kwargs={"normalize_embeddings": True}, # set True to compute cosine similarity
426
+ )
427
+
428
+ # Check if embeddings already exist on disk
429
+ index_name = f"index_chunk:{chunk_size}_embeddings:{embedding_model_name.replace('/', '~')}"
430
+ index_folder_path = f"./data/indexes/{index_name}/"
431
+ if os.path.isdir(index_folder_path):
432
+ return FAISS.load_local(
433
+ index_folder_path,
434
+ embedding_model,
435
+ distance_strategy=DistanceStrategy.COSINE,
436
+ )
437
+
438
+ else:
439
+ print("Index not found, generating it...")
440
+ docs_processed = split_documents(
441
+ chunk_size,
442
+ langchain_docs,
443
+ embedding_model_name,
444
+ )
445
+ knowledge_index = FAISS.from_documents(
446
+ docs_processed, embedding_model, distance_strategy=DistanceStrategy.COSINE
447
+ )
448
+ knowledge_index.save_local(index_folder_path)
449
+ return knowledge_index
450
+ ```
451
+
452
+ ### 2.3. Reader - LLM 💬
453
+
454
+ In this part, the __LLM Reader reads the retrieved documents to formulate its answer.__
455
+
456
+ 🛠️ Here we tried the following options to improve results:
457
+ - Switch reranking on/off
458
+ - Change the reader model
459
+
460
+ ```{python}
461
+ RAG_PROMPT_TEMPLATE = """
462
+ <|system|>
463
+ Using the information contained in the context,
464
+ give a comprehensive answer to the question.
465
+ Respond only to the question asked, response should be concise and relevant to the question.
466
+ Provide the number of the source document when relevant.
467
+ If the answer cannot be deduced from the context, do not give an answer.</s>
468
+ <|user|>
469
+ Context:
470
+ {context}
471
+ ---
472
+ Now here is the question you need to answer.
473
+
474
+ Question: {question}
475
+ </s>
476
+ <|assistant|>
477
+ """
478
+ ```
479
+
480
+ ```{python}
481
+ from langchain_community.llms import HuggingFaceHub
482
+
483
+ repo_id = "HuggingFaceH4/zephyr-7b-beta"
484
+ READER_MODEL_NAME = "zephyr-7b-beta"
485
+
486
+ READER_LLM = HuggingFaceHub(
487
+ repo_id=repo_id,
488
+ task="text-generation",
489
+ model_kwargs={
490
+ "max_new_tokens": 512,
491
+ "top_k": 30,
492
+ "temperature": 0.1,
493
+ "repetition_penalty": 1.03,
494
+ },
495
+ )
496
+ ```
497
+
498
+ ```{python}
499
+ from ragatouille import RAGPretrainedModel
500
+ from langchain_core.vectorstores import VectorStore
501
+ from langchain_core.language_models.llms import LLM
502
+
503
+
504
+ def answer_with_rag(
505
+ question: str,
506
+ llm: LLM,
507
+ knowledge_index: VectorStore,
508
+ reranker: Optional[RAGPretrainedModel] = None,
509
+ num_retrieved_docs: int = 30,
510
+ num_docs_final: int = 7,
511
+ ) -> Tuple[str, List[LangchainDocument]]:
512
+ """Answer a question using RAG with the given knowledge index."""
513
+ # Gather documents with retriever
514
+ relevant_docs = knowledge_index.similarity_search(query=question, k=num_retrieved_docs)
515
+ relevant_docs = [doc.page_content for doc in relevant_docs] # keep only the text
516
+
517
+ # Optionally rerank results
518
+ if reranker:
519
+ relevant_docs = reranker.rerank(question, relevant_docs, k=num_docs_final)
520
+ relevant_docs = [doc["content"] for doc in relevant_docs]
521
+
522
+ relevant_docs = relevant_docs[:num_docs_final]
523
+
524
+ # Build the final prompt
525
+ context = "\nExtracted documents:\n"
526
+ context += "".join([f"Document {str(i)}:::\n" + doc for i, doc in enumerate(relevant_docs)])
527
+
528
+ final_prompt = RAG_PROMPT_TEMPLATE.format(question=question, context=context)
529
+
530
+     # Generate an answer
531
+ answer = llm(final_prompt)
532
+
533
+ return answer, relevant_docs
534
+ ```
535
+
536
+ # 3. Benchmarking the RAG system
537
+
538
+ The RAG system and the evaluation datasets are now ready. The last step is to judge the RAG system's output on this evaluation dataset.
539
+
540
+ To this end, __we setup a judge agent__. ⚖️🤖
541
+
542
+ Out of [the different RAG evaluation metrics](https://docs.ragas.io/en/latest/concepts/metrics/index.html), we choose to focus only on faithfulness since it is the best end-to-end metric of our system's performance.
543
+
544
+ > We use GPT4 as a judge for its empirically good performance, but you could try with other models such as [kaist-ai/prometheus-13b-v1.0](https://huggingface.co/kaist-ai/prometheus-13b-v1.0) or [BAAI/JudgeLM-33B-v1.0](https://huggingface.co/BAAI/JudgeLM-33B-v1.0).
545
+
546
+ 💡 _In the evaluation prompt, we give a detailed description of each grade on the 1-5 scale, as is done in [Prometheus's prompt template](https://huggingface.co/kaist-ai/prometheus-13b-v1.0): this helps the model ground its ratings precisely. If instead you give the judge LLM a vague scale to work with, the outputs will not be consistent enough between different examples._
547
+
548
+ 💡 _Again, prompting the LLM to output rationale before giving its final score gives it more tokens to help it formalize and elaborate a judgement._
549
+
550
+ ```{python}
551
+ def run_rag_tests(
552
+ eval_dataset: datasets.Dataset,
553
+ llm: BaseChatModel,
554
+ knowledge_index: VectorStore,
555
+ output_file: str,
556
+ reranker: Optional[RAGPretrainedModel] = None,
557
+ verbose: Optional[bool] = True,
558
+ test_settings: Optional[str] = None, # To document the test settings used
559
+ ):
560
+ """Runs RAG tests on the given dataset and saves the results to the given output file."""
561
+ try: # load previous generations if they exist
562
+ with open(output_file, "r") as f:
563
+ outputs = json.load(f)
564
+ except:
565
+ outputs = []
566
+
567
+ for example in tqdm(eval_dataset):
568
+ question = example["question"]
569
+ if question in [output["question"] for output in outputs]:
570
+ continue
571
+
572
+ answer, relevant_docs = answer_with_rag(question, llm, knowledge_index, reranker=reranker)
573
+ if verbose:
574
+ print("=======================================================")
575
+ print(f"Question: {question}")
576
+ print(f"Answer: {answer}")
577
+ print(f'True answer: {example["answer"]}')
578
+ result = {
579
+ "question": question,
580
+ "true_answer": example["answer"],
581
+ "source_doc": example["source_doc"],
582
+ "generated_answer": answer,
583
+ "retrieved_docs": [doc for doc in relevant_docs],
584
+ }
585
+ if test_settings:
586
+ result["test_settings"] = test_settings
587
+ outputs.append(result)
588
+
589
+ with open(output_file, "w") as f:
590
+ json.dump(outputs, f)
591
+ ```
592
+
593
+ ```{python}
594
+ EVALUATION_PROMPT = """###Task Description:
595
+ An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
596
+ 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
597
+ 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
598
+ 3. The output format should look as follows: \"Feedback: {{write a feedback for criteria}} [RESULT] {{an integer number between 1 and 5}}\"
599
+ 4. Please do not generate any other opening, closing, and explanations. Be sure to include [RESULT] in your output.
600
+
601
+ ###The instruction to evaluate:
602
+ {instruction}
603
+
604
+ ###Response to evaluate:
605
+ {response}
606
+
607
+ ###Reference Answer (Score 5):
608
+ {reference_answer}
609
+
610
+ ###Score Rubrics:
611
+ [Is the response correct, accurate, and factual based on the reference answer?]
612
+ Score 1: The response is completely incorrect, inaccurate, and/or not factual.
613
+ Score 2: The response is mostly incorrect, inaccurate, and/or not factual.
614
+ Score 3: The response is somewhat correct, accurate, and/or factual.
615
+ Score 4: The response is mostly correct, accurate, and factual.
616
+ Score 5: The response is completely correct, accurate, and factual.
617
+
618
+ ###Feedback:"""
619
+
620
+ from langchain.prompts.chat import (
621
+ ChatPromptTemplate,
622
+ HumanMessagePromptTemplate,
623
+ )
624
+ from langchain.schema import SystemMessage
625
+
626
+
627
+ evaluation_prompt_template = ChatPromptTemplate.from_messages(
628
+ [
629
+ SystemMessage(content="You are a fair evaluator language model."),
630
+ HumanMessagePromptTemplate.from_template(EVALUATION_PROMPT),
631
+ ]
632
+ )
633
+ ```
634
+
635
+ ```{python}
636
+ from langchain.chat_models import ChatOpenAI
637
+
638
+ eval_chat_model = ChatOpenAI(model="gpt-4-1106-preview", temperature=0)
639
+ evaluator_name = "GPT4"
640
+
641
+
642
+ def evaluate_answers(
643
+ answer_path: str,
644
+ eval_chat_model: BaseChatModel,
645
+ evaluator_name: str,
646
+ evaluation_prompt_template: ChatPromptTemplate,
647
+ ) -> None:
648
+ """Evaluates generated answers. Modifies the given answer file in place for better checkpointing."""
649
+ answers = []
650
+ if os.path.isfile(answer_path): # load previous generations if they exist
651
+ answers = json.load(open(answer_path, "r"))
652
+
653
+ for experiment in tqdm(answers):
654
+ if f"eval_score_{evaluator_name}" in experiment:
655
+ continue
656
+
657
+ eval_prompt = evaluation_prompt_template.format_messages(
658
+ instruction=experiment["question"],
659
+ response=experiment["generated_answer"],
660
+ reference_answer=experiment["true_answer"],
661
+ )
662
+ eval_result = eval_chat_model.invoke(eval_prompt)
663
+ feedback, score = [item.strip() for item in eval_result.content.split("[RESULT]")]
664
+ experiment[f"eval_score_{evaluator_name}"] = score
665
+ experiment[f"eval_feedback_{evaluator_name}"] = feedback
666
+
667
+ with open(answer_path, "w") as f:
668
+ json.dump(answers, f)
669
+ ```
670
+
671
+ 🚀 Let's run the tests and evaluate answers!👇
672
+
673
+ ```{python}
674
+ if not os.path.exists("./output"):
675
+ os.mkdir("./output")
676
+
677
+ for chunk_size in [200]: # Add other chunk sizes (in tokens) as needed
678
+ for embeddings in ["thenlper/gte-small"]: # Add other embeddings as needed
679
+ for rerank in [True, False]:
680
+ settings_name = f"chunk:{chunk_size}_embeddings:{embeddings.replace('/', '~')}_rerank:{rerank}_reader-model:{READER_MODEL_NAME}"
681
+ output_file_name = f"./output/rag_{settings_name}.json"
682
+
683
+ print(f"Running evaluation for {settings_name}:")
684
+
685
+ print("Loading knowledge base embeddings...")
686
+ knowledge_index = load_embeddings(
687
+ RAW_KNOWLEDGE_BASE,
688
+ chunk_size=chunk_size,
689
+ embedding_model_name=embeddings,
690
+ )
691
+
692
+ print("Running RAG...")
693
+ reranker = (
694
+ RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0") if rerank else None
695
+ )
696
+ run_rag_tests(
697
+ eval_dataset=eval_dataset,
698
+ llm=READER_LLM,
699
+ knowledge_index=knowledge_index,
700
+ output_file=output_file_name,
701
+ reranker=reranker,
702
+ verbose=False,
703
+ test_settings=settings_name,
704
+ )
705
+
706
+ print("Running evaluation...")
707
+ evaluate_answers(
708
+ output_file_name,
709
+ eval_chat_model,
710
+ evaluator_name,
711
+ evaluation_prompt_template,
712
+ )
713
+ ```
714
+
715
+ ### Inspect results
716
+
717
+ ```{python}
718
+ import glob
719
+
720
+ outputs = []
721
+ for file in glob.glob("./output/*.json"):
722
+ output = pd.DataFrame(json.load(open(file, "r")))
723
+ output["settings"] = file
724
+ outputs.append(output)
725
+ result = pd.concat(outputs)
726
+ ```
727
+
728
+ ```{python}
729
+ result["eval_score_GPT4"] = result["eval_score_GPT4"].apply(
730
+ lambda x: int(x) if isinstance(x, str) else 1
731
+ )
732
+ result["eval_score_GPT4"] = (result["eval_score_GPT4"] - 1) / 4
733
+ ```
734
+
735
+ ```{python}
736
+ average_scores = result.groupby("settings")["eval_score_GPT4"].mean()
737
+ average_scores.sort_values()
738
+ ```
739
+
740
+ ## Example results
741
+
742
+ Let us load the results that I obtained by tweaking the different options available in this notebook.
743
+ For more detail on why these options could work or not, see the notebook on [advanced_RAG](advanced_rag).
744
+
745
+ As you can see in the graph below, some tweaks do not bring any improvement, while others give huge performance boosts.
746
+
747
+ ➡️ ___There is no single good recipe: you should try several different directions when tuning your RAG systems.___
748
+
749
+ ```{python}
750
+ import plotly.express as px
751
+
752
+ scores = datasets.load_dataset("m-ric/rag_scores_cookbook", split="train")
753
+ scores = pd.Series(scores["score"], index=scores["settings"])
754
+ ```
755
+
756
+ ```{python}
757
+ fig = px.bar(
758
+ scores,
759
+ color=scores,
760
+ labels={
761
+ "value": "Accuracy",
762
+ "settings": "Configuration",
763
+ },
764
+ color_continuous_scale="bluered",
765
+ )
766
+ fig.update_layout(
767
+ width=1000,
768
+ height=600,
769
+ barmode="group",
770
+ yaxis_range=[0, 100],
771
+ title="<b>Accuracy of different RAG configurations</b>",
772
+ xaxis_title="RAG settings",
773
+ font=dict(size=15),
774
+ )
775
+ fig.layout.yaxis.ticksuffix = "%"
776
+ fig.update_coloraxes(showscale=False)
777
+ fig.update_traces(texttemplate="%{y:.1f}", textposition="outside")
778
+ fig.show()
779
+ ```
780
+
781
+ <img src="https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/RAG_settings_accuracy.png" height="500" width="800">
782
+
783
+ As you can see, these had varying impact on performance. In particular, tuning the chunk size is both easy and very impactful.
784
+
785
+ But this is just our case; your results could be very different. Now that you have a robust evaluation pipeline, you can set out to explore other options! 🗺️
786
+
src/notebooks/rag_zephyr_langchain.qmd ADDED
@@ -0,0 +1,232 @@
1
+ ---
2
+ title: Simple RAG
3
+ jupyter: python3
4
+ eval: false
5
+ code-annotations: hover
6
+
7
+ ---
8
+
9
+ ```{python}
10
+ !pip install -q torch transformers accelerate bitsandbytes sentence-transformers faiss-gpu
11
+ ```
12
+
13
+ ```{python}
14
+ !pip install -q langchain
15
+ ```
16
+
17
+ ::: callout-note
18
+ If running in Google Colab, you may need to run this cell to make sure you're using a UTF-8 locale to install LangChain
19
+ ```{python}
20
+ import locale
21
+ locale.getpreferredencoding = lambda: "UTF-8"
22
+ ```
23
+ :::
24
+
25
+
26
+ ## Prepare the data
27
+
28
+ In this example, we'll load all of the issues (both open and closed) from [PEFT library's repo](https://github.com/huggingface/peft).
29
+
30
+ First, you need to acquire a [GitHub personal access token](https://github.com/settings/tokens?type=beta) to access the GitHub API.
31
+
32
+ ```{python}
33
+ from getpass import getpass
34
+
35
+ ACCESS_TOKEN = getpass("YOUR_GITHUB_PERSONAL_TOKEN") # <1>
36
+ ```
37
+ 1. You can also use an environment variable to store your token.
38
+
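+ For example, here is a minimal sketch of reading the token from an environment variable instead (the `GITHUB_PERSONAL_TOKEN` variable name is just an assumption; use whatever name you exported):
+ 
+ ```{python}
+ import os
+ from getpass import getpass
+ 
+ # Prefer the environment variable; fall back to an interactive prompt if it isn't set
+ ACCESS_TOKEN = os.environ.get("GITHUB_PERSONAL_TOKEN") or getpass("YOUR_GITHUB_PERSONAL_TOKEN")
+ ```
+ 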
39
+ Next, we'll load all of the issues in the [huggingface/peft](https://github.com/huggingface/peft) repo:
40
+ - By default, pull requests are considered issues as well; here we chose to exclude them from the data by setting `include_prs=False`.
41
+ - Setting `state = "all"` means we will load both open and closed issues.
42
+
43
+ ```{python}
44
+ from langchain.document_loaders import GitHubIssuesLoader
45
+
46
+ loader = GitHubIssuesLoader(
47
+ repo="huggingface/peft",
48
+ access_token=ACCESS_TOKEN,
49
+ include_prs=False,
50
+ state="all"
51
+ )
52
+
53
+ docs = loader.load()
54
+ ```
55
+
56
+ The content of individual GitHub issues may be longer than what an embedding model can take as input. If we want to embed all of the available content, we need to chunk the documents into appropriately sized pieces.
57
+
58
+ The most common and straightforward approach to chunking is to define a fixed size of chunks and whether there should be any overlap between them. Keeping some overlap between chunks allows us to preserve some semantic context between the chunks.
59
+
60
+ Other approaches are typically more involved and take into account the documents' structure and context. For example, one may want to split a document based on sentences or paragraphs, or create chunks based on the
61
+
62
+ The fixed-size chunking, however, works well for most common cases, so that is what we'll do here.
63
+
64
+ ```{python}
65
+ from langchain.text_splitter import CharacterTextSplitter
66
+
67
+ splitter = CharacterTextSplitter(chunk_size=512, chunk_overlap=30)
68
+
69
+ chunked_docs = splitter.split_documents(docs)
70
+ ```
71
+
72
+ ## Create the embeddings + retriever
73
+
74
+ To create document chunk embeddings we'll use the `HuggingFaceEmbeddings` wrapper and the [`BAAI/bge-base-en-v1.5`](https://huggingface.co/BAAI/bge-base-en-v1.5) embeddings model. To create the vector database, we'll use `FAISS`, a library developed by Facebook AI. This library offers efficient similarity search and clustering of dense vectors, which is what we need here. FAISS is currently one of the most used libraries for nearest-neighbor search in massive datasets.
75
+
76
+ To create document chunk embeddings we'll use the `HuggingFaceEmbeddings` and the [`BAAI/bge-base-en-v1.5`](https://huggingface.co/BAAI/bge-base-en-v1.5) embeddings model. To create the vector database, we'll use `FAISS`, a library developed by Facebook AI. This library offers efficient similarity search and clustering of dense vectors, which is what we need here. FAISS is currently one of the most used libraries for NN search in massive datasets.
77
+
78
+ ::: callout-tip
79
+ There are many other embeddings models available on the Hub, and you can keep an eye on the best performing ones by checking the [Massive Text Embedding Benchmark (MTEB) Leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
80
+ :::
81
+
82
+ We'll access both the embeddings model and FAISS via LangChain API.
83
+
84
+ ```{python}
85
+ from langchain.vectorstores import FAISS
86
+ from langchain.embeddings import HuggingFaceEmbeddings
87
+
88
+ db = FAISS.from_documents(chunked_docs,
89
+ HuggingFaceEmbeddings(model_name='BAAI/bge-base-en-v1.5'))
90
+ ```
91
+
92
+ We need a way to return (retrieve) the documents given an unstructured query. For that, we'll use the `as_retriever` method with the `db` as a backbone:
93
+ - `search_type="similarity"` means we want to perform similarity search between the query and documents
94
+ - `search_kwargs={'k': 4}` instructs the retriever to return the top 4 results.
95
+
96
+ ```{python}
97
+ retriever = db.as_retriever(
98
+ search_type="similarity", # <1>
99
+ search_kwargs={'k': 4} # <1>
100
+ )
101
+ ```
102
+ 1. The ideal search type is context dependent, and you should experiment to find the best one for your data.
103
+
104
+ The vector database and retriever are now set up; next we need to set up the next piece of the chain: the model.
105
+
106
+ ## Load quantized model
107
+
108
+ For this example, we chose [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), a small but powerful model.
109
+ To make inference faster, we will load the quantized version of the model:
110
+
111
+ ::: {.callout-tip}
112
+ With many models being released every week, you may want to substitute this model with the latest and greatest. The best way to keep track of open source LLMs is to check the [Open-source LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
113
+ :::
114
+
115
+ ```{python}
116
+ import torch
117
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
118
+
119
+ model_name = 'HuggingFaceH4/zephyr-7b-beta'
120
+
121
+ bnb_config = BitsAndBytesConfig(
122
+ load_in_4bit=True,
123
+ bnb_4bit_use_double_quant=True,
124
+ bnb_4bit_quant_type="nf4",
125
+ bnb_4bit_compute_dtype=torch.bfloat16
126
+ )
127
+
128
+ model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
129
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
130
+ ```
131
+
132
+ ## Setup the LLM chain
133
+
134
+ Finally, we have all the pieces we need to set up the LLM chain.
135
+
136
+ First, create a text_generation pipeline using the loaded model and its tokenizer.
137
+
138
+ Next, create a prompt template - this should follow the format of the model, so if you substitute the model checkpoint, make sure to use the appropriate formatting.
139
+
140
+ ```{python}
141
+ from langchain.llms import HuggingFacePipeline
142
+ from langchain.prompts import PromptTemplate
143
+ from transformers import pipeline
144
+ from langchain_core.output_parsers import StrOutputParser
145
+
146
+ text_generation_pipeline = pipeline(
147
+ model=model, # <1>
148
+ tokenizer=tokenizer, # <2>
149
+ task="text-generation", # <3>
150
+ temperature=0.2, # <4>
151
+ do_sample=True, # <5>
152
+ repetition_penalty=1.1, # <6>
153
+ return_full_text=True, # <7>
154
+ max_new_tokens=400, # <8>
155
+ )
156
+
157
+ llm = HuggingFacePipeline(pipeline=text_generation_pipeline)
158
+
159
+ prompt_template = """
160
+ <|system|>
161
+ Answer the question based on your knowledge. Use the following context to help:
162
+
163
+ {context}
164
+
165
+ </s>
166
+ <|user|>
167
+ {question}
168
+ </s>
169
+ <|assistant|>
170
+
171
+ """
172
+
173
+ prompt = PromptTemplate(
174
+ input_variables=["context", "question"],
175
+ template=prompt_template,
176
+ )
177
+
178
+ llm_chain = prompt | llm | StrOutputParser()
179
+ ```
180
+
181
+ 1. The pre-trained model for text generation.
182
+ 2. Tokenizer to preprocess input text and postprocess generated output.
183
+ 3. Specifies the task as text generation.
184
+ 4. Controls the randomness in the output generation. Lower values make the output more deterministic.
185
+ 5. Enables sampling to introduce randomness in the output generation.
186
+ 6. Penalizes repetition in the output to encourage diversity.
187
+ 7. Returns the full generated text including the input prompt.
188
+ 8. Limits the maximum number of new tokens generated.
189
+
190
+ Note: _You can also use `tokenizer.apply_chat_template` to convert a list of messages (as dicts: `{'role': 'user', 'content': '(...)'}`) into a string with the appropriate chat format._
191
+
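+ For instance, a rough sketch of that approach, reusing the `tokenizer` loaded above (zephyr-7b-beta ships a chat template, so this should produce a prompt in the same chat format as the manual template; the `chat_prompt_template` name is ours, purely illustrative):
+ 
+ ```{python}
+ messages = [
+     {"role": "system", "content": "Answer the question based on your knowledge. Use the following context to help:\n\n{context}"},
+     {"role": "user", "content": "{question}"},
+ ]
+ 
+ # Render the messages into a single prompt string using the model's own chat template
+ chat_prompt_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ ```
+ 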
192
+
193
+ Finally, we need to combine the `llm_chain` with the retriever to create a RAG chain. We pass the original question through to the final generation step, as well as the retrieved context docs:
194
+
195
+ ```{python}
196
+ from langchain_core.runnables import RunnablePassthrough
197
+
198
+ retriever = db.as_retriever()
199
+
200
+ rag_chain = (
201
+ {"context": retriever, "question": RunnablePassthrough()}
202
+ | llm_chain
203
+ )
204
+ ```
205
+
206
+ ## Compare the results
207
+
208
+ Let's see the difference RAG makes in generating answers to the library-specific questions.
209
+
210
+ ```{python}
211
+ question = "How do you combine multiple adapters?"
212
+ ```
213
+
214
+ First, let's see what kind of answer we can get with just the model itself, no context added:
215
+
216
+ ```{python}
217
+ #| colab: {base_uri: 'https://localhost:8080/', height: 125}
218
+ llm_chain.invoke({"context":"", "question": question})
219
+ ```
220
+
221
+ As you can see, the model interpreted the question as one about physical computer adapters, while in the context of PEFT, "adapters" refer to LoRA adapters.
222
+ Let's see if adding context from GitHub issues helps the model give a more relevant answer:
223
+
224
+ ```{python}
225
+ #| colab: {base_uri: 'https://localhost:8080/', height: 125}
226
+ rag_chain.invoke(question)
227
+ ```
228
+
229
+ As we can see, the added context, really helps the exact same model, provide a much more relevant and informed answer to the library-specific question.
230
+
231
+ Notably, combining multiple adapters for inference has been added to the library, and one can find this information in the documentation, so for the next iteration of this RAG it may be worth including documentation embeddings.
232
+
src/notebooks/single_gpu.ipynb ADDED
@@ -0,0 +1,1129 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {
6
+ "id": "FNdZ-kD0l78P"
7
+ },
8
+ "source": [
9
+ "---\n",
10
+ "title: Single GPU Fine-tuning\n",
11
+ "---\n",
12
+ "\n",
13
+ "# Fine-tuning a Code LLM on Custom Code on a single GPU\n",
14
+ "\n",
15
+ "_Authored by: [Maria Khalusova](https://github.com/MKhalusova)_\n",
16
+ "\n",
17
+ "Publicly available code LLMs such as Codex, StarCoder, and Code Llama are great at generating code that adheres to general programming principles and syntax, but they may not align with an organization's internal conventions, or be aware of proprietary libraries.\n",
18
+ "\n",
19
+ "In this notebook, we'll see show how you can fine-tune a code LLM on private code bases to enhance its contextual awareness and improve a model's usefulness to your organization's needs. Since the code LLMs are quite large, fine-tuning them in a traditional manner can be resource-draining. Worry not! We will show how you can optimize fine-tuning to fit on a single GPU.\n",
20
+ "\n",
21
+ "\n",
22
+ "## Dataset\n",
23
+ "\n",
24
+ "For this example, we picked the top 10 Hugging Face public repositories on GitHub. We have excluded non-code files from the data, such as images, audio files, presentations, and so on. For Jupyter notebooks, we've kept only cells containing code. The resulting code is stored as a dataset that you can find on the Hugging Face Hub under [`smangrul/hf-stack-v1`](https://huggingface.co/datasets/smangrul/hf-stack-v1). It contains repo id, file path, and file content.\n",
25
+ "\n",
26
+ "\n",
27
+ "## Model\n",
28
+ "\n",
29
+ "We'll finetune [`bigcode/starcoderbase-1b`](https://huggingface.co/bigcode/starcoderbase-1b), which is a 1B parameter model trained on 80+ programming languages. This is a gated model, so if you plan to run this notebook with this exact model, you'll need to gain access to it on the model's page. Log in to your Hugging Face account to do so:"
30
+ ]
31
+ },
32
+ {
33
+ "cell_type": "code",
34
+ "execution_count": null,
35
+ "metadata": {
36
+ "id": "bPlCJYDK6vrF"
37
+ },
38
+ "outputs": [],
39
+ "source": [
40
+ "from huggingface_hub import notebook_login\n",
41
+ "\n",
42
+ "notebook_login()"
43
+ ]
44
+ },
45
+ {
46
+ "cell_type": "markdown",
47
+ "metadata": {
48
+ "id": "WMVe_c8q43Qo"
49
+ },
50
+ "source": [
51
+ "To get started, let's install all the necessary libraries. As you can see, in addition to `transformers` and `datasets`, we'll be using `peft`, `bitsandbytes`, and `flash-attn` to optimize the training.\n",
52
+ "\n",
53
+ "By employing parameter-efficient training techniques, we can run this notebook on a single A100 High-RAM GPU."
54
+ ]
55
+ },
56
+ {
57
+ "cell_type": "code",
58
+ "execution_count": null,
59
+ "metadata": {
60
+ "id": "Fp7i8WMCjKJG"
61
+ },
62
+ "outputs": [],
63
+ "source": [
64
+ "!pip install -q transformers datasets peft bitsandbytes flash-attn"
65
+ ]
66
+ },
67
+ {
68
+ "cell_type": "markdown",
69
+ "metadata": {
70
+ "id": "16EdABzt3_Ig"
71
+ },
72
+ "source": [
73
+ "Let's define some variables now. Feel free to play with these."
74
+ ]
75
+ },
76
+ {
77
+ "cell_type": "code",
78
+ "execution_count": null,
79
+ "metadata": {
80
+ "id": "hru3G-CLmqis"
81
+ },
82
+ "outputs": [],
83
+ "source": [
84
+ "MODEL=\"bigcode/starcoderbase-1b\" # Model checkpoint on the Hugging Face Hub\n",
85
+ "DATASET=\"smangrul/hf-stack-v1\" # Dataset on the Hugging Face Hub\n",
86
+ "DATA_COLUMN=\"content\" # Column name containing the code content\n",
87
+ "\n",
88
+ "SEQ_LENGTH=2048 # Sequence length\n",
89
+ "\n",
90
+ "# Training arguments\n",
91
+ "MAX_STEPS=2000 # max_steps\n",
92
+ "BATCH_SIZE=16 # batch_size\n",
93
+ "GR_ACC_STEPS=1 # gradient_accumulation_steps\n",
94
+ "LR=5e-4 # learning_rate\n",
95
+ "LR_SCHEDULER_TYPE=\"cosine\" # lr_scheduler_type\n",
96
+ "WEIGHT_DECAY=0.01 # weight_decay\n",
97
+ "NUM_WARMUP_STEPS=30 # num_warmup_steps\n",
98
+ "EVAL_FREQ=100 # eval_freq\n",
99
+ "SAVE_FREQ=100 # save_freq\n",
100
+ "LOG_FREQ=25 # log_freq\n",
101
+ "OUTPUT_DIR=\"peft-starcoder-lora-a100\" # output_dir\n",
102
+ "BF16=True # bf16\n",
103
+ "FP16=False # no_fp16\n",
104
+ "\n",
105
+ "# FIM trasformations arguments\n",
106
+ "FIM_RATE=0.5 # fim_rate\n",
107
+ "FIM_SPM_RATE=0.5 # fim_spm_rate\n",
108
+ "\n",
109
+ "# LORA\n",
110
+ "LORA_R=8 # lora_r\n",
111
+ "LORA_ALPHA=32 # lora_alpha\n",
112
+ "LORA_DROPOUT=0.0 # lora_dropout\n",
113
+ "LORA_TARGET_MODULES=\"c_proj,c_attn,q_attn,c_fc,c_proj\" # lora_target_modules\n",
114
+ "\n",
115
+ "# bitsandbytes config\n",
116
+ "USE_NESTED_QUANT=True # use_nested_quant\n",
117
+ "BNB_4BIT_COMPUTE_DTYPE=\"bfloat16\"# bnb_4bit_compute_dtype\n",
118
+ "\n",
119
+ "SEED=0"
120
+ ]
121
+ },
122
+ {
123
+ "cell_type": "code",
124
+ "execution_count": null,
125
+ "metadata": {
126
+ "id": "FyZSXTbJrcnC"
127
+ },
128
+ "outputs": [],
129
+ "source": [
130
+ "from transformers import (\n",
131
+ " AutoModelForCausalLM,\n",
132
+ " AutoTokenizer,\n",
133
+ " Trainer,\n",
134
+ " TrainingArguments,\n",
135
+ " logging,\n",
136
+ " set_seed,\n",
137
+ " BitsAndBytesConfig,\n",
138
+ ")\n",
139
+ "\n",
140
+ "set_seed(SEED)"
141
+ ]
142
+ },
143
+ {
144
+ "cell_type": "markdown",
145
+ "metadata": {
146
+ "id": "pO7F5L5AtKo1"
147
+ },
148
+ "source": [
149
+ "## Prepare the data"
150
+ ]
151
+ },
152
+ {
153
+ "cell_type": "markdown",
154
+ "metadata": {
155
+ "id": "1LmrIZqP0oUE"
156
+ },
157
+ "source": [
158
+ "Begin by loading the data. As the dataset is likely to be quite large, make sure to enable the streaming mode. Streaming allows us to load the data progressively as we iterate over the dataset instead of downloading the whole dataset at once.\n",
159
+ "\n",
160
+ "We'll reserve the first 4000 examples as the validation set, and everything else will be the training data."
161
+ ]
162
+ },
163
+ {
164
+ "cell_type": "code",
165
+ "execution_count": null,
166
+ "metadata": {
167
+ "id": "4oJZvZb-1J88"
168
+ },
169
+ "outputs": [],
170
+ "source": [
171
+ "from datasets import load_dataset\n",
172
+ "import torch\n",
173
+ "from tqdm import tqdm\n",
174
+ "\n",
175
+ "\n",
176
+ "dataset = load_dataset(\n",
177
+ " DATASET,\n",
178
+ " data_dir=\"data\",\n",
179
+ " split=\"train\",\n",
180
+ " streaming=True,\n",
181
+ ")\n",
182
+ "\n",
183
+ "valid_data = dataset.take(4000)\n",
184
+ "train_data = dataset.skip(4000)\n",
185
+ "train_data = train_data.shuffle(buffer_size=5000, seed=SEED)"
186
+ ]
187
+ },
188
+ {
189
+ "cell_type": "markdown",
190
+ "metadata": {
191
+ "id": "sLQ8t0LM2GR6"
192
+ },
193
+ "source": [
194
+ "At this step, the dataset still contains raw data with code of arbitraty length. For training, we need inputs of fixed length. Let's create an Iterable dataset that would return constant-length chunks of tokens from a stream of text files.\n",
195
+ "\n",
196
+ "First, let's estimate the average number of characters per token in the dataset, which will help us later estimate the number of tokens in the text buffer later. By default, we'll only take 400 examples (`nb_examples`) from the dataset. Using only a subset of the entire dataset will reduce computational cost while still providing a reasonable estimate of the overall character-to-token ratio."
197
+ ]
198
+ },
199
+ {
200
+ "cell_type": "code",
201
+ "execution_count": null,
202
+ "metadata": {
203
+ "colab": {
204
+ "base_uri": "https://localhost:8080/"
205
+ },
206
+ "id": "KCiAvydztNsu",
207
+ "outputId": "cabf7fd0-a922-4371-cbc6-60ee99ef7469"
208
+ },
209
+ "outputs": [
210
+ {
211
+ "name": "stderr",
212
+ "output_type": "stream",
213
+ "text": [
214
+ "100%|██████████| 400/400 [00:10<00:00, 39.87it/s] "
215
+ ]
216
+ },
217
+ {
218
+ "name": "stdout",
219
+ "output_type": "stream",
220
+ "text": [
221
+ "The character to token ratio of the dataset is: 2.43\n"
222
+ ]
223
+ },
224
+ {
225
+ "name": "stderr",
226
+ "output_type": "stream",
227
+ "text": [
228
+ "\n"
229
+ ]
230
+ }
231
+ ],
232
+ "source": [
233
+ "tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)\n",
234
+ "\n",
235
+ "def chars_token_ratio(dataset, tokenizer, data_column, nb_examples=400):\n",
236
+ " \"\"\"\n",
237
+ " Estimate the average number of characters per token in the dataset.\n",
238
+ " \"\"\"\n",
239
+ "\n",
240
+ " total_characters, total_tokens = 0, 0\n",
241
+ " for _, example in tqdm(zip(range(nb_examples), iter(dataset)), total=nb_examples):\n",
242
+ " total_characters += len(example[data_column])\n",
243
+ " total_tokens += len(tokenizer(example[data_column]).tokens())\n",
244
+ "\n",
245
+ " return total_characters / total_tokens\n",
246
+ "\n",
247
+ "\n",
248
+ "chars_per_token = chars_token_ratio(train_data, tokenizer, DATA_COLUMN)\n",
249
+ "print(f\"The character to token ratio of the dataset is: {chars_per_token:.2f}\")"
250
+ ]
251
+ },
252
+ {
253
+ "cell_type": "markdown",
254
+ "metadata": {
255
+ "id": "6F13VGobB3Ma"
256
+ },
257
+ "source": [
258
+ "The character-to-token ratio can also be used as an indicator of the quality of text tokenization. For instance, a character-to-token ratio of 1.0 would mean that each character is represented with a token, which is not very meaningful. This would indicate poor tokenization. In standard English text, one token is typically equivalent to approximately four characters, meaning the character-to-token ratio is around 4.0. We can expect a lower ratio in the code dataset, but generally speaking, a number between 2.0 and 3.5 can be considered good enough."
259
+ ]
260
+ },
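+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick illustration of these numbers, the next cell compares the character-to-token ratio of a plain English sentence with that of a short Python snippet, using the tokenizer loaded above. The example strings are arbitrary; expect the English ratio to land near 4 and the code ratio to land lower."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Quick illustrative check: the example strings below are arbitrary.\n",
+ "english_text = \"The quick brown fox jumps over the lazy dog near the river bank.\"\n",
+ "code_text = \"def add(a, b): return a + b  # add two numbers\"\n",
+ "\n",
+ "for name, text in [(\"english\", english_text), (\"code\", code_text)]:\n",
+ "    n_tokens = len(tokenizer(text).tokens())\n",
+ "    print(f\"{name}: {len(text) / n_tokens:.2f} characters per token\")"
+ ]
+ },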
261
+ {
262
+ "cell_type": "markdown",
263
+ "metadata": {
264
+ "id": "rcwYFRPpwxea"
265
+ },
266
+ "source": [
267
+ "**Optional FIM transformations**\n",
268
+ "\n",
269
+ "\n",
270
+ "Autoregressive language models typically generate sequences from left to right. By applying the FIM transformations, the model can also learn to infill text. Check out [\"Efficient Training of Language Models to Fill in the Middle\" paper](https://arxiv.org/pdf/2207.14255.pdf) to learn more about the technique.\n",
271
+ "We'll define the FIM transformations here and will use them when creating the Iterable Dataset. However, if you want to omit transformations, feel free to set `fim_rate` to 0."
272
+ ]
273
+ },
274
+ {
275
+ "cell_type": "code",
276
+ "execution_count": null,
277
+ "metadata": {
278
+ "id": "zmejYvEKw1E-"
279
+ },
280
+ "outputs": [],
281
+ "source": [
282
+ "import functools\n",
283
+ "import numpy as np\n",
284
+ "\n",
285
+ "\n",
286
+ "# Helper function to get token ids of the special tokens for prefix, suffix and middle for FIM transformations.\n",
287
+ "@functools.lru_cache(maxsize=None)\n",
288
+ "def get_fim_token_ids(tokenizer):\n",
289
+ " try:\n",
290
+ " FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX, FIM_PAD = tokenizer.special_tokens_map[\"additional_special_tokens\"][1:5]\n",
291
+ " suffix_tok_id, prefix_tok_id, middle_tok_id, pad_tok_id = (\n",
292
+ " tokenizer.vocab[tok] for tok in [FIM_SUFFIX, FIM_PREFIX, FIM_MIDDLE, FIM_PAD]\n",
293
+ " )\n",
294
+ " except KeyError:\n",
295
+ " suffix_tok_id, prefix_tok_id, middle_tok_id, pad_tok_id = None, None, None, None\n",
296
+ " return suffix_tok_id, prefix_tok_id, middle_tok_id, pad_tok_id\n",
297
+ "\n",
298
+ "\n",
299
+ "## Adapted from https://github.com/bigcode-project/Megatron-LM/blob/6c4bf908df8fd86b4977f54bf5b8bd4b521003d1/megatron/data/gpt_dataset.py\n",
300
+ "def permute(\n",
301
+ " sample,\n",
302
+ " np_rng,\n",
303
+ " suffix_tok_id,\n",
304
+ " prefix_tok_id,\n",
305
+ " middle_tok_id,\n",
306
+ " pad_tok_id,\n",
307
+ " fim_rate=0.5,\n",
308
+ " fim_spm_rate=0.5,\n",
309
+ " truncate_or_pad=False,\n",
310
+ "):\n",
311
+ " \"\"\"\n",
312
+ " Take in a sample (list of tokens) and perform a FIM transformation on it with a probability of fim_rate, using two FIM modes:\n",
313
+ " PSM and SPM (with a probability of fim_spm_rate).\n",
314
+ " \"\"\"\n",
315
+ "\n",
316
+ " # The if condition will trigger with the probability of fim_rate\n",
317
+ " # This means FIM transformations will apply to samples with a probability of fim_rate\n",
318
+ " if np_rng.binomial(1, fim_rate):\n",
319
+ "\n",
320
+ " # Split the sample into prefix, middle, and suffix, based on randomly generated indices stored in the boundaries list.\n",
321
+ " boundaries = list(np_rng.randint(low=0, high=len(sample) + 1, size=2))\n",
322
+ " boundaries.sort()\n",
323
+ "\n",
324
+ " prefix = np.array(sample[: boundaries[0]], dtype=np.int64)\n",
325
+ " middle = np.array(sample[boundaries[0] : boundaries[1]], dtype=np.int64)\n",
326
+ " suffix = np.array(sample[boundaries[1] :], dtype=np.int64)\n",
327
+ "\n",
328
+ " if truncate_or_pad:\n",
329
+ " # calculate the new total length of the sample, taking into account tokens indicating prefix, middle, and suffix\n",
330
+ " new_length = suffix.shape[0] + prefix.shape[0] + middle.shape[0] + 3\n",
331
+ " diff = new_length - len(sample)\n",
332
+ "\n",
333
+ " # trancate or pad if there's a difference in length between the new length and the original\n",
334
+ " if diff > 0:\n",
335
+ " if suffix.shape[0] <= diff:\n",
336
+ " return sample, np_rng\n",
337
+ " suffix = suffix[: suffix.shape[0] - diff]\n",
338
+ " elif diff < 0:\n",
339
+ " suffix = np.concatenate([suffix, np.full((-1 * diff), pad_tok_id)])\n",
340
+ "\n",
341
+ " # With the probability of fim_spm_rateapply SPM variant of FIM transformations\n",
342
+ " # SPM: suffix, prefix, middle\n",
343
+ " if np_rng.binomial(1, fim_spm_rate):\n",
344
+ " new_sample = np.concatenate(\n",
345
+ " [\n",
346
+ " [prefix_tok_id, suffix_tok_id],\n",
347
+ " suffix,\n",
348
+ " [middle_tok_id],\n",
349
+ " prefix,\n",
350
+ " middle,\n",
351
+ " ]\n",
352
+ " )\n",
353
+ " # Otherwise, apply the PSM variant of FIM transformations\n",
354
+ " # PSM: prefix, suffix, middle\n",
355
+ " else:\n",
356
+ "\n",
357
+ " new_sample = np.concatenate(\n",
358
+ " [\n",
359
+ " [prefix_tok_id],\n",
360
+ " prefix,\n",
361
+ " [suffix_tok_id],\n",
362
+ " suffix,\n",
363
+ " [middle_tok_id],\n",
364
+ " middle,\n",
365
+ " ]\n",
366
+ " )\n",
367
+ " else:\n",
368
+ " # don't apply FIM transformations\n",
369
+ " new_sample = sample\n",
370
+ "\n",
371
+ " return list(new_sample), np_rng\n"
372
+ ]
373
+ },
374
+ {
375
+ "cell_type": "markdown",
376
+ "metadata": {
377
+ "id": "AwW5FviD9xBH"
378
+ },
379
+ "source": [
380
+ "Let's define the `ConstantLengthDataset`, an Iterable dataset that will return constant-length chunks of tokens. To do so, we'll read a buffer of text from the original dataset until we hit the size limits and then apply tokenizer to convert the raw text into tokenized inputs. Optionally, we'll perform FIM transformations on some sequences (the proportion of sequences affected is controlled by `fim_rate`).\n",
381
+ "\n",
382
+ "Once defined, we can create instances of the `ConstantLengthDataset` from both training and validation data."
383
+ ]
384
+ },
385
+ {
386
+ "cell_type": "code",
387
+ "execution_count": null,
388
+ "metadata": {
389
+ "id": "AgDW-692wzOl"
390
+ },
391
+ "outputs": [],
392
+ "source": [
393
+ "from torch.utils.data import IterableDataset\n",
394
+ "from torch.utils.data.dataloader import DataLoader\n",
395
+ "import random\n",
396
+ "\n",
397
+ "# Create an Iterable dataset that returns constant-length chunks of tokens from a stream of text files.\n",
398
+ "\n",
399
+ "class ConstantLengthDataset(IterableDataset):\n",
400
+ " \"\"\"\n",
401
+ " Iterable dataset that returns constant length chunks of tokens from stream of text files.\n",
402
+ " Args:\n",
403
+ " tokenizer (Tokenizer): The processor used for proccessing the data.\n",
404
+ " dataset (dataset.Dataset): Dataset with text files.\n",
405
+ " infinite (bool): If True the iterator is reset after dataset reaches end else stops.\n",
406
+ " seq_length (int): Length of token sequences to return.\n",
407
+ " num_of_sequences (int): Number of token sequences to keep in buffer.\n",
408
+ " chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.\n",
409
+ " fim_rate (float): Rate (0.0 to 1.0) that sample will be permuted with FIM.\n",
410
+ " fim_spm_rate (float): Rate (0.0 to 1.0) of FIM permuations that will use SPM.\n",
411
+ " seed (int): Seed for random number generator.\n",
412
+ " \"\"\"\n",
413
+ "\n",
414
+ " def __init__(\n",
415
+ " self,\n",
416
+ " tokenizer,\n",
417
+ " dataset,\n",
418
+ " infinite=False,\n",
419
+ " seq_length=1024,\n",
420
+ " num_of_sequences=1024,\n",
421
+ " chars_per_token=3.6,\n",
422
+ " content_field=\"content\",\n",
423
+ " fim_rate=0.5,\n",
424
+ " fim_spm_rate=0.5,\n",
425
+ " seed=0,\n",
426
+ " ):\n",
427
+ " self.tokenizer = tokenizer\n",
428
+ " self.concat_token_id = tokenizer.eos_token_id\n",
429
+ " self.dataset = dataset\n",
430
+ " self.seq_length = seq_length\n",
431
+ " self.infinite = infinite\n",
432
+ " self.current_size = 0\n",
433
+ " self.max_buffer_size = seq_length * chars_per_token * num_of_sequences\n",
434
+ " self.content_field = content_field\n",
435
+ " self.fim_rate = fim_rate\n",
436
+ " self.fim_spm_rate = fim_spm_rate\n",
437
+ " self.seed = seed\n",
438
+ "\n",
439
+ " (\n",
440
+ " self.suffix_tok_id,\n",
441
+ " self.prefix_tok_id,\n",
442
+ " self.middle_tok_id,\n",
443
+ " self.pad_tok_id,\n",
444
+ " ) = get_fim_token_ids(self.tokenizer)\n",
445
+ " if not self.suffix_tok_id and self.fim_rate > 0:\n",
446
+ " print(\"FIM is not supported by tokenizer, disabling FIM\")\n",
447
+ " self.fim_rate = 0\n",
448
+ "\n",
449
+ " def __iter__(self):\n",
450
+ " iterator = iter(self.dataset)\n",
451
+ " more_examples = True\n",
452
+ " np_rng = np.random.RandomState(seed=self.seed)\n",
453
+ " while more_examples:\n",
454
+ " buffer, buffer_len = [], 0\n",
455
+ " while True:\n",
456
+ " if buffer_len >= self.max_buffer_size:\n",
457
+ " break\n",
458
+ " try:\n",
459
+ " buffer.append(next(iterator)[self.content_field])\n",
460
+ " buffer_len += len(buffer[-1])\n",
461
+ " except StopIteration:\n",
462
+ " if self.infinite:\n",
463
+ " iterator = iter(self.dataset)\n",
464
+ " else:\n",
465
+ " more_examples = False\n",
466
+ " break\n",
467
+ " tokenized_inputs = self.tokenizer(buffer, truncation=False)[\"input_ids\"]\n",
468
+ " all_token_ids = []\n",
469
+ "\n",
470
+ " for tokenized_input in tokenized_inputs:\n",
471
+ " # optionally do FIM permutations\n",
472
+ " if self.fim_rate > 0:\n",
473
+ " tokenized_input, np_rng = permute(\n",
474
+ " tokenized_input,\n",
475
+ " np_rng,\n",
476
+ " self.suffix_tok_id,\n",
477
+ " self.prefix_tok_id,\n",
478
+ " self.middle_tok_id,\n",
479
+ " self.pad_tok_id,\n",
480
+ " fim_rate=self.fim_rate,\n",
481
+ " fim_spm_rate=self.fim_spm_rate,\n",
482
+ " truncate_or_pad=False,\n",
483
+ " )\n",
484
+ "\n",
485
+ " all_token_ids.extend(tokenized_input + [self.concat_token_id])\n",
486
+ " examples = []\n",
487
+ " for i in range(0, len(all_token_ids), self.seq_length):\n",
488
+ " input_ids = all_token_ids[i : i + self.seq_length]\n",
489
+ " if len(input_ids) == self.seq_length:\n",
490
+ " examples.append(input_ids)\n",
491
+ " random.shuffle(examples)\n",
492
+ " for example in examples:\n",
493
+ " self.current_size += 1\n",
494
+ " yield {\n",
495
+ " \"input_ids\": torch.LongTensor(example),\n",
496
+ " \"labels\": torch.LongTensor(example),\n",
497
+ " }\n",
498
+ "\n",
499
+ "\n",
500
+ "train_dataset = ConstantLengthDataset(\n",
501
+ " tokenizer,\n",
502
+ " train_data,\n",
503
+ " infinite=True,\n",
504
+ " seq_length=SEQ_LENGTH,\n",
505
+ " chars_per_token=chars_per_token,\n",
506
+ " content_field=DATA_COLUMN,\n",
507
+ " fim_rate=FIM_RATE,\n",
508
+ " fim_spm_rate=FIM_SPM_RATE,\n",
509
+ " seed=SEED,\n",
510
+ ")\n",
511
+ "eval_dataset = ConstantLengthDataset(\n",
512
+ " tokenizer,\n",
513
+ " valid_data,\n",
514
+ " infinite=False,\n",
515
+ " seq_length=SEQ_LENGTH,\n",
516
+ " chars_per_token=chars_per_token,\n",
517
+ " content_field=DATA_COLUMN,\n",
518
+ " fim_rate=FIM_RATE,\n",
519
+ " fim_spm_rate=FIM_SPM_RATE,\n",
520
+ " seed=SEED,\n",
521
+ ")"
522
+ ]
523
+ },
524
+ {
525
+ "cell_type": "markdown",
526
+ "metadata": {
527
+ "id": "rxev1sk6tRW9"
528
+ },
529
+ "source": [
530
+ "## Prepare the model"
531
+ ]
532
+ },
533
+ {
534
+ "cell_type": "markdown",
535
+ "metadata": {
536
+ "id": "UCtWV-U42Eq_"
537
+ },
538
+ "source": [
539
+ "Now that the data is prepared, it's time to load the model! We're going to load the quantized version of the model.\n",
540
+ "\n",
541
+ "This will allow us to reduce memory usage, as quantization represents data with fewer bits. We'll use the `bitsandbytes` library to quantize the model, as it has a nice integration with `transformers`. All we need to do is define a `bitsandbytes` config, and then use it when loading the model.\n",
542
+ "\n",
543
+ "There are different variants of 4bit quantization, but generally, we recommend using NF4 quantization for better performance (`bnb_4bit_quant_type=\"nf4\"`).\n",
544
+ "\n",
545
+ "The `bnb_4bit_use_double_quant` option adds a second quantization after the first one to save an additional 0.4 bits per parameter.\n",
546
+ "\n",
547
+ "To learn more about quantization, check out the [\"Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA\" blog post](https://huggingface.co/blog/4bit-transformers-bitsandbytes).\n",
548
+ "\n",
549
+ "Once defined, pass the config to the `from_pretrained` method to load the quantized version of the model."
550
+ ]
551
+ },
552
+ {
553
+ "cell_type": "code",
554
+ "execution_count": null,
555
+ "metadata": {
556
+ "id": "XuwoX6U2DUvK"
557
+ },
558
+ "outputs": [],
559
+ "source": [
560
+ "from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training\n",
561
+ "from peft.tuners.lora import LoraLayer\n",
562
+ "\n",
563
+ "load_in_8bit = False\n",
564
+ "\n",
565
+ "# 4-bit quantization\n",
566
+ "compute_dtype = getattr(torch, BNB_4BIT_COMPUTE_DTYPE)\n",
567
+ "\n",
568
+ "bnb_config = BitsAndBytesConfig(\n",
569
+ " load_in_4bit=True,\n",
570
+ " bnb_4bit_quant_type=\"nf4\",\n",
571
+ " bnb_4bit_compute_dtype=compute_dtype,\n",
572
+ " bnb_4bit_use_double_quant=USE_NESTED_QUANT,\n",
573
+ ")\n",
574
+ "\n",
575
+ "device_map = {\"\": 0}\n",
576
+ "\n",
577
+ "model = AutoModelForCausalLM.from_pretrained(\n",
578
+ " MODEL,\n",
579
+ " load_in_8bit=load_in_8bit,\n",
580
+ " quantization_config=bnb_config,\n",
581
+ " device_map=device_map,\n",
582
+ " use_cache=False, # We will be using gradient checkpointing\n",
583
+ " trust_remote_code=True,\n",
584
+ " use_flash_attention_2=True,\n",
585
+ ")\n"
586
+ ]
587
+ },
588
+ {
589
+ "cell_type": "markdown",
590
+ "metadata": {
591
+ "id": "bO9e2FV8D8ZF"
592
+ },
593
+ "source": [
594
+ "When using a quantized model for training, you need to call the `prepare_model_for_kbit_training()` function to preprocess the quantized model for training."
595
+ ]
596
+ },
597
+ {
598
+ "cell_type": "code",
599
+ "execution_count": null,
600
+ "metadata": {
601
+ "id": "Qb_eB4xzEDBk"
602
+ },
603
+ "outputs": [],
604
+ "source": [
605
+ "model = prepare_model_for_kbit_training(model)"
606
+ ]
607
+ },
608
+ {
609
+ "cell_type": "markdown",
610
+ "metadata": {
611
+ "id": "lmnLjPZpDVtg"
612
+ },
613
+ "source": [
614
+ "Now that the quantized model is ready, we can set up a LoRA configuration. LoRA makes fine-tuning more efficient by drastically reducing the number of trainable parameters.\n",
615
+ "\n",
616
+ "To train a model using LoRA technique, we need to wrap the base model as a `PeftModel`. This involves definign LoRA configuration with `LoraConfig`, and wrapping the original model with `get_peft_model()` using the `LoraConfig`.\n",
617
+ "\n",
618
+ "To learn more about LoRA and its parameters, refer to [PEFT documentation](https://huggingface.co/docs/peft/conceptual_guides/lora)."
619
+ ]
620
+ },
621
+ {
622
+ "cell_type": "code",
623
+ "execution_count": null,
624
+ "metadata": {
625
+ "colab": {
626
+ "base_uri": "https://localhost:8080/"
627
+ },
628
+ "id": "_pAUU2FR2Gey",
629
+ "outputId": "63328c2b-e693-49b1-ce0a-3ca8722f852a"
630
+ },
631
+ "outputs": [
632
+ {
633
+ "name": "stdout",
634
+ "output_type": "stream",
635
+ "text": [
636
+ "trainable params: 5,554,176 || all params: 1,142,761,472 || trainable%: 0.4860310866343243\n"
637
+ ]
638
+ }
639
+ ],
640
+ "source": [
641
+ "# Set up lora\n",
642
+ "peft_config = LoraConfig(\n",
643
+ " lora_alpha=LORA_ALPHA,\n",
644
+ " lora_dropout=LORA_DROPOUT,\n",
645
+ " r=LORA_R,\n",
646
+ " bias=\"none\",\n",
647
+ " task_type=\"CAUSAL_LM\",\n",
648
+ " target_modules=LORA_TARGET_MODULES.split(\",\"),\n",
649
+ ")\n",
650
+ "\n",
651
+ "model = get_peft_model(model, peft_config)\n",
652
+ "model.print_trainable_parameters()"
653
+ ]
654
+ },
655
+ {
656
+ "cell_type": "markdown",
657
+ "metadata": {
658
+ "id": "tHe7AElXzXVV"
659
+ },
660
+ "source": [
661
+ "As you can see, by applying LoRA technique we will now need to train less than 1% of the parameters."
662
+ ]
663
+ },
664
+ {
665
+ "cell_type": "markdown",
666
+ "metadata": {
667
+ "id": "T_CqVydc40IM"
668
+ },
669
+ "source": [
670
+ "## Train the model"
671
+ ]
672
+ },
673
+ {
674
+ "cell_type": "markdown",
675
+ "metadata": {
676
+ "id": "Q_iN2khjrbD3"
677
+ },
678
+ "source": [
679
+ "Now that we have prepared the data, and optimized the model, we are ready to bring everything together to start the training.\n",
680
+ "\n",
681
+ "To instantiate a `Trainer`, you need to define the training configuration. The most important is the `TrainingArguments`, which is a class that contains all the attributes to configure the training.\n",
682
+ "\n",
683
+ "These are similar to any other kind of model training you may run, so we won't go into detail here."
684
+ ]
685
+ },
686
+ {
687
+ "cell_type": "code",
688
+ "execution_count": null,
689
+ "metadata": {
690
+ "id": "65QHS8l1tKQe"
691
+ },
692
+ "outputs": [],
693
+ "source": [
694
+ "train_data.start_iteration = 0\n",
695
+ "\n",
696
+ "\n",
697
+ "training_args = TrainingArguments(\n",
698
+ " output_dir=f\"Your_HF_username/{OUTPUT_DIR}\",\n",
699
+ " dataloader_drop_last=True,\n",
700
+ " evaluation_strategy=\"steps\",\n",
701
+ " save_strategy=\"steps\",\n",
702
+ " max_steps=MAX_STEPS,\n",
703
+ " eval_steps=EVAL_FREQ,\n",
704
+ " save_steps=SAVE_FREQ,\n",
705
+ " logging_steps=LOG_FREQ,\n",
706
+ " per_device_train_batch_size=BATCH_SIZE,\n",
707
+ " per_device_eval_batch_size=BATCH_SIZE,\n",
708
+ " learning_rate=LR,\n",
709
+ " lr_scheduler_type=LR_SCHEDULER_TYPE,\n",
710
+ " warmup_steps=NUM_WARMUP_STEPS,\n",
711
+ " gradient_accumulation_steps=GR_ACC_STEPS,\n",
712
+ " gradient_checkpointing=True,\n",
713
+ " fp16=FP16,\n",
714
+ " bf16=BF16,\n",
715
+ " weight_decay=WEIGHT_DECAY,\n",
716
+ " push_to_hub=True,\n",
717
+ " include_tokens_per_second=True,\n",
718
+ ")\n"
719
+ ]
720
+ },
721
+ {
722
+ "cell_type": "markdown",
723
+ "metadata": {
724
+ "id": "kB_fLRex09ut"
725
+ },
726
+ "source": [
727
+ "As a final step, instantiate the `Trainer` and call the `train` method. "
728
+ ]
729
+ },
730
+ {
731
+ "cell_type": "code",
732
+ "execution_count": null,
733
+ "metadata": {
734
+ "colab": {
735
+ "base_uri": "https://localhost:8080/",
736
+ "height": 1000
737
+ },
738
+ "id": "rS3nVwhUC69O",
739
+ "outputId": "61a5bdb2-b7d0-4aed-8290-4bf20c2ccd38"
740
+ },
741
+ "outputs": [
742
+ {
743
+ "name": "stdout",
744
+ "output_type": "stream",
745
+ "text": [
746
+ "Training...\n"
747
+ ]
748
+ },
749
+ {
750
+ "data": {
751
+ "text/html": [
752
+ "\n",
753
+ " <div>\n",
754
+ " \n",
755
+ " <progress value='2000' max='2000' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
756
+ " [2000/2000 4:16:10, Epoch 1/9223372036854775807]\n",
757
+ " </div>\n",
758
+ " <table border=\"1\" class=\"dataframe\">\n",
759
+ " <thead>\n",
760
+ " <tr style=\"text-align: left;\">\n",
761
+ " <th>Step</th>\n",
762
+ " <th>Training Loss</th>\n",
763
+ " <th>Validation Loss</th>\n",
764
+ " </tr>\n",
765
+ " </thead>\n",
766
+ " <tbody>\n",
767
+ " <tr>\n",
768
+ " <td>100</td>\n",
769
+ " <td>5.524600</td>\n",
770
+ " <td>7.456872</td>\n",
771
+ " </tr>\n",
772
+ " <tr>\n",
773
+ " <td>200</td>\n",
774
+ " <td>5.617800</td>\n",
775
+ " <td>7.262190</td>\n",
776
+ " </tr>\n",
777
+ " <tr>\n",
778
+ " <td>300</td>\n",
779
+ " <td>5.129100</td>\n",
780
+ " <td>6.410039</td>\n",
781
+ " </tr>\n",
782
+ " <tr>\n",
783
+ " <td>400</td>\n",
784
+ " <td>5.052200</td>\n",
785
+ " <td>6.306774</td>\n",
786
+ " </tr>\n",
787
+ " <tr>\n",
788
+ " <td>500</td>\n",
789
+ " <td>5.202900</td>\n",
790
+ " <td>6.117062</td>\n",
791
+ " </tr>\n",
792
+ " <tr>\n",
793
+ " <td>600</td>\n",
794
+ " <td>4.654100</td>\n",
795
+ " <td>6.018349</td>\n",
796
+ " </tr>\n",
797
+ " <tr>\n",
798
+ " <td>700</td>\n",
799
+ " <td>5.100200</td>\n",
800
+ " <td>6.000355</td>\n",
801
+ " </tr>\n",
802
+ " <tr>\n",
803
+ " <td>800</td>\n",
804
+ " <td>5.049800</td>\n",
805
+ " <td>5.889457</td>\n",
806
+ " </tr>\n",
807
+ " <tr>\n",
808
+ " <td>900</td>\n",
809
+ " <td>4.541200</td>\n",
810
+ " <td>5.813823</td>\n",
811
+ " </tr>\n",
812
+ " <tr>\n",
813
+ " <td>1000</td>\n",
814
+ " <td>5.000700</td>\n",
815
+ " <td>5.834208</td>\n",
816
+ " </tr>\n",
817
+ " <tr>\n",
818
+ " <td>1100</td>\n",
819
+ " <td>5.026500</td>\n",
820
+ " <td>5.781939</td>\n",
821
+ " </tr>\n",
822
+ " <tr>\n",
823
+ " <td>1200</td>\n",
824
+ " <td>4.411800</td>\n",
825
+ " <td>5.720596</td>\n",
826
+ " </tr>\n",
827
+ " <tr>\n",
828
+ " <td>1300</td>\n",
829
+ " <td>4.782500</td>\n",
830
+ " <td>5.736376</td>\n",
831
+ " </tr>\n",
832
+ " <tr>\n",
833
+ " <td>1400</td>\n",
834
+ " <td>4.980200</td>\n",
835
+ " <td>5.712276</td>\n",
836
+ " </tr>\n",
837
+ " <tr>\n",
838
+ " <td>1500</td>\n",
839
+ " <td>4.368700</td>\n",
840
+ " <td>5.689637</td>\n",
841
+ " </tr>\n",
842
+ " <tr>\n",
843
+ " <td>1600</td>\n",
844
+ " <td>4.884700</td>\n",
845
+ " <td>5.675920</td>\n",
846
+ " </tr>\n",
847
+ " <tr>\n",
848
+ " <td>1700</td>\n",
849
+ " <td>4.914400</td>\n",
850
+ " <td>5.662421</td>\n",
851
+ " </tr>\n",
852
+ " <tr>\n",
853
+ " <td>1800</td>\n",
854
+ " <td>4.248700</td>\n",
855
+ " <td>5.660122</td>\n",
856
+ " </tr>\n",
857
+ " <tr>\n",
858
+ " <td>1900</td>\n",
859
+ " <td>4.798400</td>\n",
860
+ " <td>5.664026</td>\n",
861
+ " </tr>\n",
862
+ " <tr>\n",
863
+ " <td>2000</td>\n",
864
+ " <td>4.704200</td>\n",
865
+ " <td>5.655665</td>\n",
866
+ " </tr>\n",
867
+ " </tbody>\n",
868
+ "</table><p>"
869
+ ],
870
+ "text/plain": [
871
+ "<IPython.core.display.HTML object>"
872
+ ]
873
+ },
874
+ "metadata": {},
875
+ "output_type": "display_data"
876
+ },
877
+ {
878
+ "data": {
879
+ "text/plain": [
880
+ "TrainOutput(global_step=2000, training_loss=4.885598585128784, metrics={'train_runtime': 15380.3075, 'train_samples_per_second': 2.081, 'train_steps_per_second': 0.13, 'train_tokens_per_second': 4261.033, 'total_flos': 4.0317260660736e+17, 'train_loss': 4.885598585128784, 'epoch': 1.0})"
881
+ ]
882
+ },
883
+ "execution_count": 19,
884
+ "metadata": {},
885
+ "output_type": "execute_result"
886
+ }
887
+ ],
888
+ "source": [
889
+ "trainer = Trainer(\n",
890
+ " model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset\n",
891
+ ")\n",
892
+ "\n",
893
+ "print(\"Training...\")\n",
894
+ "trainer.train()\n"
895
+ ]
896
+ },
897
+ {
898
+ "cell_type": "markdown",
899
+ "metadata": {
900
+ "id": "aAERlCnt1PEW"
901
+ },
902
+ "source": [
903
+ "Finally, you can push the fine-tuned model to your Hub repository to share with your team."
904
+ ]
905
+ },
906
+ {
907
+ "cell_type": "code",
908
+ "execution_count": null,
909
+ "metadata": {
910
+ "id": "1h7_AUTTDwE1"
911
+ },
912
+ "outputs": [],
913
+ "source": [
914
+ "trainer.push_to_hub()"
915
+ ]
916
+ },
917
+ {
918
+ "cell_type": "markdown",
919
+ "metadata": {
920
+ "id": "KBVH7uFOM_UF"
921
+ },
922
+ "source": [
923
+ "## Inference\n",
924
+ "\n",
925
+ "Once the model is uploaded to Hub, we can use it for inference. To do so we first initialize the original base model and its tokenizer. Next, we need to merge the fine-duned weights with the base model."
926
+ ]
927
+ },
928
+ {
929
+ "cell_type": "code",
930
+ "execution_count": null,
931
+ "metadata": {
932
+ "id": "jtL37piINBFe"
933
+ },
934
+ "outputs": [],
935
+ "source": [
936
+ "from peft import PeftModel\n",
937
+ "import torch\n",
938
+ "\n",
939
+ "# load the original model first\n",
940
+ "tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)\n",
941
+ "base_model = AutoModelForCausalLM.from_pretrained(\n",
942
+ " MODEL,\n",
943
+ " quantization_config=None,\n",
944
+ " device_map=None,\n",
945
+ " trust_remote_code=True,\n",
946
+ " torch_dtype=torch.bfloat16,\n",
947
+ ").cuda()\n",
948
+ "\n",
949
+ "# merge fine-tuned weights with the base model\n",
950
+ "peft_model_id = f\"Your_HF_username/{OUTPUT_DIR}\"\n",
951
+ "model = PeftModel.from_pretrained(base_model, peft_model_id)\n",
952
+ "model.merge_and_unload()"
953
+ ]
954
+ },
955
+ {
956
+ "cell_type": "markdown",
957
+ "metadata": {
958
+ "id": "3USQ2suvDi9M"
959
+ },
960
+ "source": [
961
+ "Now we can use the merged model for inference. For convenience, we'll define a `get_code_completion` - feel free to experiment with text generation parameters!"
962
+ ]
963
+ },
964
+ {
965
+ "cell_type": "code",
966
+ "execution_count": null,
967
+ "metadata": {
968
+ "id": "RoTGpNbjDeWI"
969
+ },
970
+ "outputs": [],
971
+ "source": [
972
+ "def get_code_completion(prefix, suffix):\n",
973
+ " text = prompt = f\"\"\"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>\"\"\"\n",
974
+ " model.eval()\n",
975
+ " outputs = model.generate(\n",
976
+ " input_ids=tokenizer(text, return_tensors=\"pt\").input_ids.cuda(),\n",
977
+ " max_new_tokens=128,\n",
978
+ " temperature=0.2,\n",
979
+ " top_k=50,\n",
980
+ " top_p=0.95,\n",
981
+ " do_sample=True,\n",
982
+ " repetition_penalty=1.0,\n",
983
+ " )\n",
984
+ " return tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]"
985
+ ]
986
+ },
987
+ {
988
+ "cell_type": "markdown",
989
+ "metadata": {
990
+ "id": "0kMJiGDfDrBf"
991
+ },
992
+ "source": [
993
+ "Now all we need to do to get code completion is call the `get_code_complete` function and pass the first few lines that we want to be completed as a prefix, and an empty string as a suffix."
994
+ ]
995
+ },
996
+ {
997
+ "cell_type": "code",
998
+ "execution_count": null,
999
+ "metadata": {
1000
+ "colab": {
1001
+ "base_uri": "https://localhost:8080/"
1002
+ },
1003
+ "id": "nXlco2_-YcvM",
1004
+ "outputId": "41c411ad-b7dc-4277-f975-c173888234bb"
1005
+ },
1006
+ "outputs": [
1007
+ {
1008
+ "name": "stdout",
1009
+ "output_type": "stream",
1010
+ "text": [
1011
+ "from peft import LoraConfig, TaskType, get_peft_model\n",
1012
+ "from transformers import AutoModelForCausalLM\n",
1013
+ "peft_config = LoraConfig(\n",
1014
+ " task_type=TaskType.CAUSAL_LM,\n",
1015
+ " r=8,\n",
1016
+ " lora_alpha=32,\n",
1017
+ " target_modules=[\"q_proj\", \"v_proj\"],\n",
1018
+ " lora_dropout=0.1,\n",
1019
+ " bias=\"none\",\n",
1020
+ " modules_to_save=[\"q_proj\", \"v_proj\"],\n",
1021
+ " inference_mode=False,\n",
1022
+ ")\n",
1023
+ "model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\n",
1024
+ "model = get_peft_model(model, peft_config)\n",
1025
+ "model.print_trainable_parameters()\n"
1026
+ ]
1027
+ }
1028
+ ],
1029
+ "source": [
1030
+ "prefix = \"\"\"from peft import LoraConfig, TaskType, get_peft_model\n",
1031
+ "from transformers import AutoModelForCausalLM\n",
1032
+ "peft_config = LoraConfig(\n",
1033
+ "\"\"\"\n",
1034
+ "suffix =\"\"\"\"\"\"\n",
1035
+ "\n",
1036
+ "print(get_code_completion(prefix, suffix))"
1037
+ ]
1038
+ },
1039
+ {
1040
+ "cell_type": "markdown",
1041
+ "metadata": {
1042
+ "id": "Ql2563kGlnmu"
1043
+ },
1044
+ "source": [
1045
+ "As someone who has just used the PEFT library earlier in this notebook, you can see that the generated result for creating a `LoraConfig` is rather good!\n",
1046
+ "\n",
1047
+ "If you go back to the cell where we instantiate the model for inference, and comment out the lines where we merge the fine-tuned weights, you can see what the original model would've generated for the exact same prefix:"
1048
+ ]
1049
+ },
1050
+ {
1051
+ "cell_type": "code",
1052
+ "execution_count": null,
1053
+ "metadata": {
1054
+ "colab": {
1055
+ "base_uri": "https://localhost:8080/"
1056
+ },
1057
+ "id": "29xxp1eHTgJ9",
1058
+ "outputId": "c6d597a2-01da-4d25-a32f-3a551212c5b4"
1059
+ },
1060
+ "outputs": [
1061
+ {
1062
+ "name": "stdout",
1063
+ "output_type": "stream",
1064
+ "text": [
1065
+ "from peft import LoraConfig, TaskType, get_peft_model\n",
1066
+ "from transformers import AutoModelForCausalLM\n",
1067
+ "peft_config = LoraConfig(\n",
1068
+ " model_name_or_path=\"facebook/wav2vec2-base-960h\",\n",
1069
+ " num_labels=1,\n",
1070
+ " num_features=1,\n",
1071
+ " num_hidden_layers=1,\n",
1072
+ " num_attention_heads=1,\n",
1073
+ " num_hidden_layers_per_attention_head=1,\n",
1074
+ " num_attention_heads_per_hidden_layer=1,\n",
1075
+ " hidden_size=1024,\n",
1076
+ " hidden_dropout_prob=0.1,\n",
1077
+ " hidden_act=\"gelu\",\n",
1078
+ " hidden_act_dropout_prob=0.1,\n",
1079
+ " hidden\n"
1080
+ ]
1081
+ }
1082
+ ],
1083
+ "source": [
1084
+ "prefix = \"\"\"from peft import LoraConfig, TaskType, get_peft_model\n",
1085
+ "from transformers import AutoModelForCausalLM\n",
1086
+ "peft_config = LoraConfig(\n",
1087
+ "\"\"\"\n",
1088
+ "suffix =\"\"\"\"\"\"\n",
1089
+ "\n",
1090
+ "print(get_code_completion(prefix, suffix))"
1091
+ ]
1092
+ },
1093
+ {
1094
+ "cell_type": "markdown",
1095
+ "metadata": {
1096
+ "id": "Pwy2ZC7U8Ema"
1097
+ },
1098
+ "source": [
1099
+ "While it is Python syntax, you can see that the original model has no understanding of what a `LoraConfig` should be doing."
1100
+ ]
1101
+ },
1102
+ {
1103
+ "cell_type": "markdown",
1104
+ "metadata": {
1105
+ "id": "CATYE8pp2drQ"
1106
+ },
1107
+ "source": [
1108
+ "To learn how this kind of fine-tuning compares to full fine-tuning, and how to use a model like this as your copilot in VS Code via Inference Endpoints, or locally, check out the [\"Personal Copilot: Train Your Own Coding Assistant\" blog post](https://huggingface.co/blog/personal-copilot). This notebook complements the original blog post.\n"
1109
+ ]
1110
+ }
1111
+ ],
1112
+ "metadata": {
1113
+ "accelerator": "GPU",
1114
+ "colab": {
1115
+ "gpuType": "A100",
1116
+ "machine_shape": "hm",
1117
+ "provenance": []
1118
+ },
1119
+ "kernelspec": {
1120
+ "display_name": "Python 3",
1121
+ "name": "python3"
1122
+ },
1123
+ "language_info": {
1124
+ "name": "python"
1125
+ }
1126
+ },
1127
+ "nbformat": 4,
1128
+ "nbformat_minor": 0
1129
+ }
src/styles.css ADDED
@@ -0,0 +1 @@
 
 
1
+ /* css styles */