{ "cells": [ { "cell_type": "markdown", "id": "8ec2fef2", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Introduction to Large Language Models\n", "* **Created by:** Eric Martinez\n", "* **For:** Software Engineering 2\n", "* **At:** University of Texas Rio-Grande Valley" ] }, { "cell_type": "markdown", "id": "60bddee7", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Overview of LLMs and their capabilities" ] }, { "cell_type": "markdown", "id": "0f2f6448", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "An LLM is a machine learning model designed to understand and generate human-like text. They are trained on vast amounts of text data and can perform a wide range of tasks, such as translation, summarization, and question-answering.\n", "\n", "Some capabilities of LLMs include natural language understanding, question answering, instruction-following, text and code generation, sentiment analysis, and more." ] }, { "cell_type": "markdown", "id": "51ff20a2", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "4b4ccc93", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* What is it: ML model trained to understand and generate human-like text.\n" ] }, { "cell_type": "markdown", "id": "0ddf4ce0", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Capabilities: Natural Language Understanding, Q&A, text/code generation, etc." ] }, { "cell_type": "markdown", "id": "6b1ceda6", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## LLM Components" ] }, { "cell_type": "markdown", "id": "0228b8de", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "LLMs are trained to predict the next word in a sentence, given the context of the previous words. This task is known as language modeling.\n", "\n", "Jeremy Howard, along with Sebastian Ruder, developed the ULMFiT (Universal Language Model Fine-tuning) approach, which leverages transfer learning for NLP tasks that contributed towards the current state-of-the-art.\n", "\n", "ULMFiT was introduced during a free online course called Fast.AI, where Jeremy Howard demonstrated its effectiveness in various NLP tasks. The approach gained significant attention and contributed to the development of more advanced LLMs.\n", "\n", "Transformers are a type of neural network architecture that uses self-attention mechanisms to process input data in parallel, rather than sequentially. This allows for faster training and improved performance on long-range dependencies in text.\n", "\n", "Transformers have led to breakthroughs in NLP, such as the development of BERT, GPT, and other state-of-the-art models.\n", "\n", "Before transformers, NLP capabilities were limited by the inability to effectively capture long-range dependencies and the reliance on recurrent neural networks (RNNs) and convolutional neural networks (CNNs).\n", "\n", "Transfer learning is the process of using a pre-trained model as a starting point and fine-tuning it for a specific task. 
, { "cell_type": "markdown", "id": "1af28557", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "f5048a1a", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Task: Next-word prediction" ] }, { "cell_type": "markdown", "id": "511e8868", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Breakthrough: Fine-tuning a Pretrained Model (ULMFiT)" ] }, { "cell_type": "markdown", "id": "b432aa38", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Transformer Architecture: Used in most state-of-the-art models such as BERT, GPT, LLaMA" ] }, { "cell_type": "markdown", "id": "b23fc05a", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Transfer Learning: 'Fine-tuning' improves performance, reduces training time, and is key to techniques like RLHF" ] }, { "cell_type": "markdown", "id": "e0470859", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* RLHF: Key technique for improving output quality and aligning models with human values" ] }, { "cell_type": "markdown", "id": "6defd4dd", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## LLMs vs Other ML Models" ] }, { "cell_type": "markdown", "id": "40d26a40", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "LLMs are specifically designed for natural language processing tasks, whereas other machine learning models may be designed for tasks such as image recognition or reinforcement learning.\n", "\n", "Large-scale LLMs like OpenAI's GPT models have significantly more parameters and are trained on much larger datasets, resulting in more powerful and versatile NLP capabilities than traditional NLP approaches.\n", "\n", "Advanced Reading:\n", "* [Language Models are Few-Shot Learners (GPT-3)](https://arxiv.org/pdf/2005.14165.pdf)" ] }, { "cell_type": "markdown", "id": "7d70af8c", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "8a24f57b", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* NLP Focus" ] }, { "cell_type": "markdown", "id": "d7917944", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Larger Model (Parameters)" ] }, { "cell_type": "markdown", "id": "b7ef42a8", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Larger Datasets" ] }, { "cell_type": "markdown", "id": "1e65278a", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## LLM Advancements" ] }, { "cell_type": "markdown", "id": "8f56ceff", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "OpenAI's latest GPT models have billions of parameters and are trained on massive datasets, making them some of the most powerful NLP models to date.\n", "\n", "Access to these powerful GPT models is now available through APIs, which has democratized access to high-quality NLP tools and enabled a wide range of applications." ] }
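, { "cell_type": "markdown", "id": "b2d58e30", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "As a concrete illustration of API access, the sketch below calls a hosted chat model. It assumes the `openai` Python package (v1 or later) is installed and an `OPENAI_API_KEY` environment variable is set; the model name and messages are illustrative placeholders. The system message also gives a first taste of steering output with a prompt, a topic covered later in these slides." ] }, { "cell_type": "code", "execution_count": null, "id": "b2d58e31", "metadata": { "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "# Minimal sketch: calling a hosted GPT model through an API.\n", "# Assumes `pip install openai` (v1+) and an OPENAI_API_KEY environment variable.\n", "from openai import OpenAI\n", "\n", "client = OpenAI()  # reads OPENAI_API_KEY from the environment\n", "\n", "response = client.chat.completions.create(\n", "    model=\"gpt-4o-mini\",  # illustrative model name; any available chat model works\n", "    messages=[\n", "        # The system message steers tone and behavior (a simple form of prompting).\n", "        {\"role\": \"system\", \"content\": \"You are a concise tutor for software engineers.\"},\n", "        {\"role\": \"user\", \"content\": \"Explain transfer learning in two sentences.\"},\n", "    ],\n", ")\n", "print(response.choices[0].message.content)" ] }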
applications." ] }, { "cell_type": "markdown", "id": "43922b38", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "6b626a71", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* GPT models" ] }, { "cell_type": "markdown", "id": "132094b3", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* API Access" ] }, { "cell_type": "markdown", "id": "da9fd574", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Wide Applications" ] }, { "cell_type": "markdown", "id": "06c5d488", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Use cases" ] }, { "cell_type": "markdown", "id": "51c6c497", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* AI assistants, Chatbots" ] }, { "cell_type": "markdown", "id": "e21f0e4f", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Programming Assistance" ] }, { "cell_type": "markdown", "id": "c5dc8f7f", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Healthcare" ] }, { "cell_type": "markdown", "id": "9d63a81a", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Education" ] }, { "cell_type": "markdown", "id": "67484971", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Interfacing with Data: Analytics, Search, Recommendation" ] }, { "cell_type": "markdown", "id": "712ea3fc", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Sales / Marketing / Ads" ] }, { "cell_type": "markdown", "id": "50be5fc0", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Limitations & Challenges" ] }, { "cell_type": "markdown", "id": "b0414a80", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "Current LLMs are susceptible to hallucination. Hallucination refers to instances where the model generates text that appears coherent and plausible but is not grounded in reality or factual information. Hallucination can lead to misinformation, slander, and other harmful consequences.\n", "\n", "LLMs can inherit biases from the data they are trained on, which can lead to biased outputs and potentially harmful consequences in downstream applications.\n", "\n", "Ethical concerns surrounding LLMs include the potential for misuse, such as generating fake news or other malicious content, as well as the potential to exacerbate existing societal issues.\n", "\n", "Training, fine-tuning, and inference with LLMs can be computationally expensive, requiring powerful hardware and potentially limiting their accessibility and scalability." 
] }, { "cell_type": "markdown", "id": "67e9554c", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "4cc30f2b", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Hallucination" ] }, { "cell_type": "markdown", "id": "66fa69c9", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Biases" ] }, { "cell_type": "markdown", "id": "74156832", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Ethical concerns" ] }, { "cell_type": "markdown", "id": "3dc318d8", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Computational requirements" ] }, { "cell_type": "markdown", "id": "32a17b1a", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Steering LLMs" ] }, { "cell_type": "markdown", "id": "fa44968a", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "Prompting involves carefully crafting input text to guide the LLM's output, which can help achieve desired results and mitigate potential issues.\n", "\n", "Training your own LLM allows for greater control over the model's behavior and output, but requires significant computational resources and expertise.\n", "\n", "Fine-tuning involves adjusting an existing LLM to better suit a specific task or domain, which can help improve performance and steer the model's output." ] }, { "cell_type": "markdown", "id": "d37085c1", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "f62b8788", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Prompting" ] }, { "cell_type": "markdown", "id": "0a4fb0fd", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Training" ] }, { "cell_type": "markdown", "id": "d0083066", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Fine-tuning" ] }, { "cell_type": "markdown", "id": "346d386f", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Alignment and Improvement" ] }, { "cell_type": "markdown", "id": "c06e5f0c", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Alignment & Improvement: What is alignment?" ] }, { "cell_type": "markdown", "id": "374bcd31", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "998c6055", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Definition: Alignment refers to the process of ensuring that an LLM's behavior and output align with human values, intentions, and expectations." ] }, { "cell_type": "markdown", "id": "db3f66be", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Importance: Ensures that LLMs are useful, safe, and do not produce harmful or unintended consequences." ] }, { "cell_type": "markdown", "id": "675964b5", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Challenges: Alignment is challenging due to the diverse range of morals, ethics, and sensibilities across different countries, regions, and demographics." 
] }, { "cell_type": "markdown", "id": "973ccd98", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Alignment & Improvement: Model Quality" ] }, { "cell_type": "markdown", "id": "269783ab", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "ca7f6466", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Definition: Quality output in terms of LLMs refers to text that is coherent, relevant, accurate, and adheres to the desired task, values, and intentions." ] }, { "cell_type": "markdown", "id": "0f8bdb07", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Reinforcement Learning from Human Feedback (RLHF): a technique used to align models and improve their quality by incorporating human feedback into the training process." ] }, { "cell_type": "markdown", "id": "d211fcfd", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Challenges of Human Feedback: Varying morals, ethics, and sensibilities of human raters." ] }, { "cell_type": "markdown", "id": "4d3744ba", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Things to Consider if using RLHF: Carefully selecting training data, incorporating diverse perspectives, and iteratively refining the model." ] }, { "cell_type": "markdown", "id": "fc90716a", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Alignment & Improvement: Evaluation Metrics" ] }, { "cell_type": "markdown", "id": "2a28178a", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "Evaluation metrics are quantitative measures used to assess the performance of LLMs on specific tasks or objectives. Common evaluation metrics for LLMs include perplexity, BLEU score, ROUGE score, F1 score, and accuracy, among others.\n", "\n", "Metric Definitions:\n", "* Perplexity measures how well an LLM predicts the next word in a sequence, with lower perplexity indicating better performance.\n", "* BLEU score is used to evaluate the quality of machine-generated translations by comparing them to human-generated reference translations.\n", "* ROUGE score is used to evaluate the quality of text summarization by comparing the generated summary to a reference summary.\n", "* F1 score is a measure of a model's accuracy on a classification task, considering both precision and recall.\n", "* Accuracy is the proportion of correct predictions made by the model out of the total number of predictions.\n", "\n", "Evaluation metrics provide a quantitative way to measure the performance of LLMs, allowing developers to identify areas for improvement and track progress over time. By comparing the performance of different models or training configurations, developers can identify the most effective approaches and optimize their models accordingly. Evaluation metrics can also be used to guide the fine-tuning process by providing feedback on the model's performance on specific tasks or domains.\n", "\n", "In the context of reinforcement learning from human feedback (RLHF), evaluation metrics can be used to quantify the alignment of the model with human values and intentions, guiding the iterative refinement process. It is important to note that evaluation metrics should be chosen carefully, as they may not always capture the full range of desired qualities in LLM outputs. Developers should consider using a combination of metrics and human evaluation to ensure a comprehensive assessment of model performance." ] }
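, { "cell_type": "markdown", "id": "e5a92b10", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "To ground a few of these metrics, the toy cell below computes accuracy, precision, recall, and F1 from a handful of made-up classification predictions, and perplexity from made-up per-token probabilities. All numbers are fabricated purely for illustration; real evaluations would use held-out datasets and established libraries." ] }, { "cell_type": "code", "execution_count": null, "id": "e5a92b11", "metadata": { "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "import math\n", "\n", "# Toy example: accuracy and F1 on a small, made-up binary classification task.\n", "y_true = [1, 0, 1, 1, 0, 1, 0, 0]\n", "y_pred = [1, 0, 0, 1, 0, 1, 1, 0]\n", "\n", "tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)\n", "fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)\n", "fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)\n", "\n", "accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)\n", "precision = tp / (tp + fp)\n", "recall = tp / (tp + fn)\n", "f1 = 2 * precision * recall / (precision + recall)\n", "print(f\"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}\")\n", "\n", "# Toy example: perplexity = exp(average negative log-likelihood) of the\n", "# probabilities a language model assigned to each observed token (made up here).\n", "token_probs = [0.25, 0.10, 0.60, 0.05]\n", "nll = -sum(math.log(p) for p in token_probs) / len(token_probs)\n", "print(f\"perplexity={math.exp(nll):.2f} (lower is better)\")" ] }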
, { "cell_type": "markdown", "id": "d46a8b7d", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "116fced1", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Definition: quantitative measures used to assess the performance of LLMs on specific tasks or objectives." ] }, { "cell_type": "markdown", "id": "ff6f1cb2", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Common metrics: Perplexity, BLEU, ROUGE, F1, Accuracy, and others" ] }, { "cell_type": "markdown", "id": "5d2d5f8b", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Role of Evaluation Metrics: track improvement, compare, guide fine-tuning, quantify alignment" ] }, { "cell_type": "markdown", "id": "33f8e063", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Pitfalls of Evaluation Metrics: they may not actually represent or capture human alignment or values; they should be used in combination with human evaluation" ] }, { "cell_type": "markdown", "id": "2338336f", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "Advanced Reading:\n", "* [Training language models to follow instructions with human feedback](https://arxiv.org/pdf/2203.02155.pdf)\n", "* [Alignment of Language Agents](https://arxiv.org/pdf/2103.14659.pdf)\n", "* [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/pdf/2204.05862.pdf)" ] }, { "cell_type": "markdown", "id": "f0033365", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Open-Source vs Closed-Source LLMs" ] }, { "cell_type": "markdown", "id": "b4173a3d", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Open-Source LLMs" ] }, { "cell_type": "markdown", "id": "e348c298", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "885a3c33", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Definition: models whose code, architecture, and weights are publicly available, allowing anyone to use, modify, and contribute to their development." ] }, { "cell_type": "markdown", "id": "c1fc8e0c", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Pros: increased transparency, collaboration, and accessibility." ] }, { "cell_type": "markdown", "id": "8e188f94", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Cons: potential misuse and difficulty in controlling the distribution of powerful models." ] }, { "cell_type": "markdown", "id": "a1d5c6e6", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Societal Risks: potential for misuse, rogue agents, the spread of harmful content, and the exacerbation of existing biases and inequalities." ] }, { "cell_type": "markdown", "id": "a6617268", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Closed-Source LLMs" ] }, { "cell_type": "markdown", "id": "1ef87775", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "240cb8c4", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Definition: models whose code, architecture, and weights are proprietary and not publicly available."
] }, { "cell_type": "markdown", "id": "36fd54dd", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Pros: greater control over distribution and usage, as well as the potential for higher-quality models due to focused development efforts." ] }, { "cell_type": "markdown", "id": "27b9e65f", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Cons: cost, minimal insight into the architecture and training process, minimal customization, etc." ] }, { "cell_type": "markdown", "id": "436c7647", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Societal Risks: potential for monopolistic control, reduced innovation, and limited access to powerful models." ] }, { "cell_type": "markdown", "id": "ea26615e", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Ethical Considerations as LLM Engineers" ] }, { "cell_type": "markdown", "id": "5666aa28", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "**Key Points:**" ] }, { "cell_type": "markdown", "id": "5cb124cf", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Awareness and care in handling: misinformation, harmful output, biased output" ] }, { "cell_type": "markdown", "id": "e8ff59bf", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Awareness of the implications of automation solutions for the job market and economy" ] }, { "cell_type": "markdown", "id": "7fdb64a1", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Awareness and care in handling: security, prompt injection, and rogue agents" ] }, { "cell_type": "markdown", "id": "eb560488", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Consider benefits and risks, include diverse perspectives" ] }, { "cell_type": "markdown", "id": "77e07c1f", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "* Engineer solutions that proactively address morality, ethics, and safety" ] } ], "metadata": { "celltoolbar": "Raw Cell Format", "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.8" } }, "nbformat": 4, "nbformat_minor": 5 }