Speaking of ChatGPT, you could ask yourself now why it's not called ChatLLM. As it turns out, language modeling is not the end of the story - in fact it's just the beginning. So what does the GPT in ChatGPT stand for?
We have actually just learned what the G stands for, namely "generative" - meaning that the model was trained on the language-generation objective we have just discussed. But what about the P and the T?
We'll gloss over the T here, which stands for "transformer" - not the one from the movies, but simply the type of neural network architecture being used. This shouldn't really bother us here, but if you are curious about its main strength: the transformer architecture works so well because it can focus its attention on the parts of the input sequence that are most relevant at any time. You could argue that this is similar to how humans work. We, too, need to focus our attention on what's most relevant to the task and ignore the rest.
Now to the P, which stands for "pre-training". We discuss next why we suddenly start speaking about pre-training and not just training any longer.
The reason is that Large Language Models like ChatGPT are actually trained in phases.
Pre-training
The first stage is pre-training, which is exactly what we've gone through just now. This stage requires massive amounts of data to learn to predict the next word. In that phase, the model learns not only to master the grammar and syntax of language, but it also acquires a great deal of knowledge about the world, and even some other emerging abilities that we will speak about later.
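To make that objective concrete, here is a minimal Python sketch of how a sentence could be turned into self-supervised training examples. Splitting on whole words is a simplification of my own; real LLMs operate on subword tokens.

```python
# Turn raw text into (context -> next word) training pairs.
# Real LLMs use subword tokens rather than whole words; this is a simplification.
text = "The quick brown fox jumps over the lazy dog"
tokens = text.split()

training_pairs = []
for i in range(1, len(tokens)):
    context = tokens[:i]   # everything seen so far
    target = tokens[i]     # the word the model must learn to predict
    training_pairs.append((context, target))

for context, target in training_pairs[:3]:
    print(context, "->", target)
# ['The'] -> quick
# ['The', 'quick'] -> brown
# ['The', 'quick', 'brown'] -> fox
```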
But now I have a couple of questions for you: First, what might be the problem with this kind of pre-training? Well, there are certainly a few, but the one I am trying to point to here has to do with what the LLM has really learned.
Namely, it has learned mainly to ramble on about a topic. It may even be doing an incredibly good job, but what it doesn't do is respond well to the kind of inputs you would generally want to give an AI, such as a question or an instruction. The problem is that this model has not learned to be, and so is not behaving as, an assistant.
For example, if you ask a pre-trained LLM "What is your first name?" it may respond with "What is your last name?" simply because this is the kind of data it has seen during pre-training - in blank forms, for example. It's only trying to complete the input sequence.
It doesn't do well with following instructions simply because this kind of language structure, i.e., instruction followed by a response, is not very commonly seen in the training data. Maybe Quora or StackOverflow would be the closest representation of this sort of structure.
At this stage, we say that the LLM is not aligned with human intentions. Alignment is an important topic for LLMs, and we'll learn how we can fix this to a large extent, because as it turns out, those pre-trained LLMs are actually quite steerable. So even though initially they don't respond well to instructions, they can be taught to do so.
Instruction fine-tuning and RLHF
This is where instruction tuning comes in. We take the pre-trained LLM with its current abilities and do essentially what we did before - i.e., learn to predict one word at a time - but now we do this using only high-quality instruction and response pairs as our training data.
That way, the model un-learns to simply be a text completer and learns to become a helpful assistant that follows instructions and responds in a way that is aligned with the user's intention. The size of this instruction dataset is typically a lot smaller than the pre-training set. This is because the high-quality instruction-response pairs are much more expensive to create as they are typically sourced from humans. This is very different from the inexpensive self-supervised labels we used in pre-training. This is why this stage is also called supervised instruction fine-tuning.
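As a rough illustration, an instruction-response pair might be flattened into a single training sequence like this. The template below is invented for illustration; every model family uses its own format.

```python
# Hypothetical instruction-tuning example: the pair is formatted into one text
# sequence, and the model is still trained to predict it one token at a time.
example = {
    "instruction": "Explain why the sky is blue in one sentence.",
    "response": "Sunlight is scattered by air molecules, and blue light is scattered the most.",
}

training_text = (
    "### Instruction:\n"
    f"{example['instruction']}\n"
    "### Response:\n"
    f"{example['response']}"
)
print(training_text)
```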
There is also a third stage that some LLMs like ChatGPT go through, which is reinforcement learning from human feedback. We won't go into details here, but the purpose is similar to instruction fine-tuning. RLHF also helps alignment and ensures that the LLM's output reflects human values and preferences. There is some early research that indicates that this stage is critical for reaching or surpassing human-level performance. In fact, combining the fields of reinforcement learning and language modeling is proving especially promising and is likely to lead to some massive improvements over the LLMs we currently have.
So now let's test our understanding on some common use cases.
First, why can an LLM perform summarization of a longer piece of text?
To understand why, we need to think about the training data. As it so happens, people often write summaries - on the internet, in research papers, books, and more. As a result, an LLM trained on that data learns how to summarize too. It learns to attend to the main points and compress them into a short text.
Note that when a summary is generated, the full text is part of the LLM's input sequence. This is similar to, say, a research paper whose conclusion appears right after the full text.
As a result, that skill has probably been learned during pre-training already, although surely instruction fine-tuning helped improve that skill even further. We can assume that this phase included some summarization examples too.
Second, why can an LLM answer common knowledge questions?
As mentioned, the ability to act as an assistant and respond appropriately is due to instruction fine-tuning and RLHF. But all the knowledge to answer questions itself was already acquired during pre-training.
Of course, that now raises another big question: What if the LLM doesn't know the answer? Unfortunately, it may just make one up in that case. To understand why, we need to think about the training data again, and the training objective.
You might have heard about the term "hallucination" in the context of LLMs, which refers to the phenomenon of LLMs making up facts when they shouldn't.
Why does that happen? The LLM learns only to generate text, not factually true text. Nothing in its training gives the model any indicator of the truth or reliability of the training data. However, that is not even the main issue here; the bigger problem is that text on the internet and in books generally sounds confident, so the LLM learns to sound that way too, even when it is wrong. As a result, an LLM gives little indication of its own uncertainty.
That being said, this is an active area of research, from which we can expect that LLMs will be less prone to hallucinations over time. For example, during instruction tuning we can try and teach the LLM to abstain from hallucinating to some extent, but only time will tell whether we can fully solve this issue.
You may be surprised that we can actually try to solve this problem here together right now. We have the knowledge we need to figure out a solution that at least partially helps and is already used widely today.
Suppose that you ask the LLM the following question: Who is the current president of Colombia? There's a good chance an LLM will respond with the wrong name. This could happen for two reasons:
* The first is what we have already brought up: The LLM may just hallucinate and simply respond with a wrong or even fake name.
* The second one I will mention only in passing: LLMs are trained only on data up to a certain cut-off date, and that can be as early as last year. Because of that, the LLM cannot even know the current president with certainty, because things could have changed since the data was created.
So how can we solve both these problems? The answer lies in providing the model some relevant context. The rationale here is that everything that's in the LLM's input sequence is readily available for it to process, while any implicit knowledge it has acquired in pre-training is more difficult and precarious for it to retrieve.
Suppose we were to include the Wikipedia article on Colombia's political history as context for the LLM. In that case it would be much more likely to answer correctly because it can simply extract the name from the context.
In the image above you can see what a typical prompt for an LLM with additional context may look like.
This process is called grounding the LLM in the context, or in the real world if you like, rather than allowing it to generate freely.
And that's exactly how Bing Chat and other search-based LLMs work. They first extract relevant context from the web using a search engine and then pass all that information to the LLM, alongside the user's initial question. See the illustration above for a visual of how this is accomplished.
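A rough sketch of that pipeline in Python might look like the following. The search_web function and the retrieved excerpt are placeholders standing in for a real search engine, not an actual API.

```python
# Hypothetical retrieval-augmented prompt. search_web() stands in for whatever
# search engine or document store is used to fetch relevant context.
def search_web(query: str) -> str:
    # Placeholder: a real system would call a search engine here
    # and return the most relevant passages.
    return "...relevant excerpt from an article on Colombia's political history..."

question = "Who is the current president of Colombia?"
context = search_web(question)

prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}\n"
    "Answer:"
)
# `prompt` is then sent to the LLM, which can extract the answer from the context.
```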
We've now reached a point where you pretty much understand the main mechanisms of state-of-the-art LLMs.
You may be thinking "this is actually not that magical" because all that is happening is the predicting of words, one at a time. It's pure statistics, after all. Or is it?
Let's back up a bit. The magical part of all this is how remarkably well it works. In fact, even the researchers at OpenAI were surprised at how far this sort of language modeling can go. One of the key drivers in the last few years has simply been the massive scaling up of neural networks and data sets, which has caused performance to increase along with them. For example, GPT-4, reportedly a model with more than one trillion parameters in total, can pass the bar exam or AP Biology with a score in the top 10 percent of test takers.
Surprisingly, those large LLMs even show certain emerging abilities, i.e., abilities to solve tasks and to do things that they were not explicitly trained to do.
In this last part of the article, we'll discuss some of these emerging abilities and I'll show you some tricks for how you can use them to solve problems.
A ubiquitous emerging ability is zero-shot prompting: just as the name suggests, LLMs can perform entirely new tasks that they haven't encountered in training. All it takes is some instructions on how to solve the task.
To illustrate this ability with a silly example, you can ask an LLM to translate a sentence from German to English while responding only with words that start with "f".
For instance, given that instruction, an LLM translated "Die Katze schläft gerne in der Box" as "Feline friend finds fluffy fortress", which is a pretty cool translation, I think.
For more complex tasks, you may quickly realize that zero-shot prompting often requires very detailed instructions, and even then, performance is often far from perfect.
To make another connection to human intelligence, if someone tells you to perform a new task, you would probably ask for some examples or demonstrations of how the task is performed. LLMs can benefit from the same.
As an example, let's say you want a model to translate different currency amounts into a common format. You could describe what you want in detail or just give a brief instruction and some example demonstrations. The image above shows a sample task.
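A few-shot prompt along these lines might look like the following sketch; the example amounts are invented for illustration.

```python
# Hypothetical few-shot prompt: a brief instruction plus a few demonstrations,
# ending with the unsolved example the model should complete.
prompt = """Convert each amount into the format $X.XX.

Coffee: 3.50 USD -> $3.50
Burger: 12 dollars -> $12.00
Steak: 24.99 USD ->"""
# The LLM, as a text completer, should continue with " $24.99".
```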
Using this prompt, the model should do well on the last example, which is "Steak: 24.99 USD", and respond with $24.99.
Note how we simply left out the solution to the last example. Remember that an LLM is still a text completer at heart, so keep the prompt structure consistent; that way you almost force the model to respond with just what you want, as in the example above.
To summarize, a general tip is to provide some examples if the LLM is struggling with a task in a zero-shot manner. You will find that this often helps the LLM understand the task, typically making its performance better and more reliable.
Another interesting ability of LLMs is also reminiscent of human intelligence. It is especially useful if the task is more complex and requires multiple steps of reasoning to solve.
Let's say I ask you "Who won the World Cup in the year before Lionel Messi was born?" What would you do? You would probably solve this step by step by writing down any intermediate solutions needed in order to arrive at the correct answer. And that's exactly what LLMs can do too.
It has been found that simply telling an LLM to "think step by step" can increase its performance substantially in many tasks.
Why does this work? We already have everything we need to answer that. The problem is that this kind of unusual composite knowledge is probably not directly in the LLM's internal memory. However, all the individual facts might be, like Messi's birthday and the winners of the various World Cups.
Allowing the LLM to build up to the final answer helps because it gives the model time to think out loud - a working memory so to say - and to solve the simpler sub-problems before giving the final answer.
The key here is to remember that everything to the left of a to-be-generated word is context that the model can rely on. So, as shown in the image above, by the time the model says "Argentina", Messi's birthday and the year of the World Cup we inquired about are already in the LLM's working memory, which makes it easier to answer correctly.
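Put together, a step-by-step answer to the World Cup question might unfold like this. The reasoning trace is illustrative; real model outputs will vary in wording.

```python
# Illustrative chain-of-thought exchange; actual model output will differ.
prompt = (
    "Who won the World Cup in the year before Lionel Messi was born? "
    "Let's think step by step."
)

illustrative_answer = """\
Lionel Messi was born in 1987.
The year before that is 1986.
The 1986 World Cup was won by Argentina.
So the answer is Argentina."""
```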
Conclusion
Before I wrap things up, I want to answer a question I asked earlier in the article. Is the LLM really just predicting the next word, or is there more to it? Some researchers argue for the latter, saying that to become so good at next-word prediction in any context, the LLM must actually have acquired a compressed understanding of the world internally. Others argue the opposite: that the model has simply learned to memorize and copy patterns seen during training, with no actual understanding of language, the world, or anything else.
There is probably no clear right or wrong between those two sides at this point; it may just be a different way of looking at the same thing. Clearly these LLMs are proving to be very useful and show impressive knowledge and reasoning capabilities, and maybe even show some sparks of general intelligence. But whether or to what extent that resembles human intelligence is still to be determined, and so is how much further language modeling can improve the state of the art.
Large language models are machine learning models focused on creating content that mimics human language capabilities. LLMs are called "large" because they are trained on vast amounts of text data and contain billions or even trillions of parameters.
These models, powered by trillions of words and extensive computational resources, exhibit remarkable language understanding, reasoning, and problem-solving abilities. Foundation models, or base models, vary in size and complexity, with their capabilities expanding alongside the number of parameters.
LLMs leverage massive training data, deep neural network architectures, and transformers to understand and generate human-like text, making them powerful tools for a wide range of applications in natural language processing, such as summarization, code generation, and chatbots.
LLMs are trained using self-supervised learning (often described as unsupervised learning). The training involves feeding the model examples of text and teaching it to predict the next word in a sequence based on the preceding words. During training, the model adjusts its internal parameters to minimize the difference between its predictions and the actual next words in the training data. This process requires a deep neural network architecture and substantial computational resources, especially as models become larger and are trained on more data.
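As a minimal sketch of that parameter adjustment, here is a single training step in PyTorch on a deliberately tiny toy model (the architecture and numbers are invented for illustration; real LLMs are vastly larger and condition on the whole preceding context):

```python
import torch
import torch.nn as nn

# Toy next-token predictor: embed the current token, map it to a distribution
# over a small vocabulary.
vocab_size, embed_dim = 100, 16
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Linear(embed_dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One (current token -> next token) training pair, given as token ids.
current_token = torch.tensor([42])
next_token = torch.tensor([7])

logits = model(current_token)        # predicted scores for every word in the vocabulary
loss = loss_fn(logits, next_token)   # how far the prediction is from the actual next word
loss.backward()                      # compute gradients of the loss
optimizer.step()                     # adjust parameters to reduce the difference
```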
Once trained, LLMs can take a prompt (the input to the model), make inferences, and produce text as output: they can answer questions, summarize or expand information, translate languages, generate code, and even work through mathematical problems.
Interacting with LLMs requires crafting prompts that the model uses to generate text outputs, known as completions. When given a prompt or a question, the model uses what it has learned to generate a coherent response relevant to the input. The output quality depends on both the training data and the specific prompt given to the model.
As foundation models (pre-trained LLMs) are scaled from hundreds of millions to billions or even hundreds of billions of parameters, there's a notable increase in their apparent understanding of language. This deeper understanding improves their ability to process information, reason, and tackle complex tasks. Interestingly, while larger models excel across a broad range of tasks due to their vast capabilities, smaller models have shown that they can be fine-tuned to achieve exceptional performance in specific, focused tasks.
LLMs can also be fine-tuned on specific datasets or for tasks. This process involves additional training on a smaller, specialized dataset, allowing the model to perform better on tasks like medical diagnosis, legal analysis, or customer service. This fine-tuning process optimizes these models for specialized applications, demonstrating the versatility and potential of AI models. The ability to balance between the generalist approach of large models and the specialist skills of smaller, fine-tuned models underscores the adaptability and wide-ranging potential of foundation models in the field of artificial intelligence.
While LLMs do not learn from new data after their initial training phase, they can be designed to incorporate user feedback and adjustments to their prompts to improve interactions over time.
Transformer
The Transformer, introduced in the paper "Attention Is All You Need" by Vaswani et al., is the key architecture behind LLMs. The Transformer's unique structure, built around self-attention mechanisms, allows the model to weigh the importance of different words in a sentence, regardless of their positional distance from each other.
Transformer architecture has fundamentally changed the landscape of natural language processing by significantly improving understanding of context and generating coherent, contextually relevant text. Transformers are particularly good at handling sequences of data, like sentences, because they can pay attention to all parts of the input data simultaneously, allowing them to effectively capture context and relationships between words.
Transformer Basics
The Transformer architecture, through self-attention, effectively handles long-range dependencies in text, making it superior for tasks that require a deep understanding of context. This is a departure from earlier models that processed text sequentially, which struggled with long sentences due to limitations like vanishing gradients and difficulty in capturing distant word relationships. Because of their efficiency and effectiveness in capturing complex linguistic patterns, transformers have become the foundation for most modern NLP tasks, including text generation, translation, summarization, and more.
Architecture
Embeddings: Input words from sentences are converted into vectors using embeddings. This process captures the semantic meaning of each word in a high-dimensional space.
Encoder and Decoder: The original Transformer model consists of encoders and decoders. Encoders process the input text, while decoders generate output text. Models like GPT use only the decoder stack to generate text.
Self-Attention Mechanism: This is a key feature of the Transformer. It allows the model to focus on different parts of the input sentence as it processes each word. The model calculates a score for each word in a sentence that signifies how much focus to place on other parts of the sentence when predicting the next word. This mechanism enables the model to generate coherent and contextually relevant text.
Positional Encoding: Since the Transformer doesn't inherently process sequential data in order, it uses positional encodings to understand the order of words in a sentence. This information is added to the input embeddings to give the model sequence context.
Multi-Head Attention: This component splits the attention mechanism into multiple "heads," allowing the model to simultaneously focus on different parts of the sentence for a more comprehensive understanding.
Feed-Forward Neural Networks: Each layer in the Transformer architecture contains a fully connected feed-forward network that applies the same operation to each position separately and identically. This network is responsible for transforming the representation at each position into a new space.
Layer Normalization and Residual Connections: These components help stabilize the learning process and allow for deeper models by preventing the vanishing gradient problem.
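Putting the components above together, a single Transformer layer might look roughly like this PyTorch sketch. It is a simplified illustration using torch.nn.MultiheadAttention; real implementations add dropout, causal masking, and other details.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One simplified Transformer layer: self-attention + feed-forward,
    each wrapped with a residual connection and layer normalization."""

    def __init__(self, embed_dim: int = 64, num_heads: int = 4, ff_dim: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(embed_dim, ff_dim),
                                nn.ReLU(),
                                nn.Linear(ff_dim, embed_dim))
        self.norm1 = nn.LayerNorm(embed_dim)
        self.norm2 = nn.LayerNorm(embed_dim)

    def forward(self, x):                  # x: (batch, seq_len, embed_dim)
        attn_out, _ = self.attn(x, x, x)   # multi-head self-attention over the sequence
        x = self.norm1(x + attn_out)       # residual connection + layer norm
        x = self.norm2(x + self.ff(x))     # position-wise feed-forward, residual + norm
        return x

block = TransformerBlock()
tokens = torch.randn(1, 10, 64)            # a batch with 10 embedded tokens
print(block(tokens).shape)                 # torch.Size([1, 10, 64])
```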
Attention Mechanics
The Transformer architecture and its attention mechanism significantly advance natural language processing. They enable models to process words in relation to all other words in a sentence rather than sequentially, which enhances their ability to understand context and generate more coherent text.
Self-Attention Mechanism
Self-attention, the core of the Transformer, enables the model to weigh the importance of other words when understanding each word in a sentence.
1. Input Representation: Words are first converted into vectors using embeddings. Positional encodings are added to these vectors to give the model information about the position of each word in the sentence.
2. Attention Scores: The model calculates attention scores for each word relative to every word in the sentence. These scores determine how much focus the model should place on other parts of the sentence when processing this word. The scores are calculated as the dot product of the current word's query vector with the key vectors of all words, then scaled, typically by the square root of the key vectors' dimension.
3. Softmax Layer: The attention scores are passed through a softmax layer, which turns them into probabilities that sum up to 1. This step ensures that the scores are normalized and can be interpreted as the model's confidence in focusing on specific parts of the input.
4. Weighted Sum: Each word's output vector is computed as a weighted sum of its value vectors, with the weights being the softmax scores. This step essentially combines the information from other parts of the sentence, weighted by their relevance to the current word.
5. Multi-Head Attention: Instead of performing this process once, the Transformer does it multiple times in parallel, with each "head" focusing on different parts of the sentence. The outputs from all heads are then concatenated and linearly transformed into the expected dimension. This allows the model to simultaneously capture different types of relationships between words.
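The steps above can be written out compactly. Here is a minimal numpy sketch of single-head scaled dot-product self-attention on random toy values; multi-head attention simply runs several of these in parallel and concatenates the results.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

seq_len, d_model, d_k = 4, 8, 8                 # 4 words, toy dimensions
rng = np.random.default_rng(0)

X = rng.normal(size=(seq_len, d_model))         # step 1: embedded input words
W_q = rng.normal(size=(d_model, d_k))           # learned projection matrices
W_k = rng.normal(size=(d_model, d_k))
W_v = rng.normal(size=(d_model, d_k))

Q, K, V = X @ W_q, X @ W_k, X @ W_v             # queries, keys, values

scores = Q @ K.T / np.sqrt(d_k)                 # step 2: scaled dot products
weights = softmax(scores, axis=-1)              # step 3: normalize to probabilities
output = weights @ V                            # step 4: weighted sum of value vectors

print(weights.shape, output.shape)              # (4, 4) (4, 8)
```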
The Transformer architecture comprises two main components: the encoder and the decoder, although only the decoder is used in models designed for text generation, like GPT.
Encoder and Decoder
Encoder: The encoder processes the input text. It consists of a stack of identical layers, each containing two sub-layers: a multi-head self-attention mechanism and a position-wise fully connected feed-forward network. Residual connections around each of these sub-layers, followed by layer normalization, help in stabilizing the learning process.
Decoder: The decoder generates the output text based on the encoder's output and previous decoder outputs. It also contains a stack of identical layers, but with an additional multi-head attention layer that focuses on the encoder's output. This structure allows the decoder to focus on relevant parts of the input text, facilitating tasks like translation where alignment between input and output is crucial.
Fine-tuning
Fine-tuning is a mechanism for further training a machine learning model, in particular a large language model, on a smaller, task-specific dataset. Fine-tuning adapts the general capabilities of the model to perform better on tasks or understand content within a particular domain or context.
Pre-training: Initially, LLMs are pre-trained with vast amounts of text data. This phase involves learning general language patterns, grammar, and knowledge from a wide range of sources, allowing the model to understand and generate human-like text. The pre-trained LLM is also known as a foundation model.
Selecting a Fine-tune Dataset: After pre-training, the model is further trained on a smaller, specialized dataset. This dataset is closely related to the specific task or domain the model will be used for, such as legal documents, medical texts, or customer service interactions.
Fine-tuning Process: During fine-tuning, the model's weights are adjusted using the specialized dataset. Although the model has already learned general language capabilities, this step helps it to learn the nuances, vocabulary, and patterns specific to the target domain or task. The learning rate during fine-tuning is typically lower than in pre-training, to avoid overwriting the general knowledge the model has acquired.
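A heavily simplified fine-tuning sketch with the Hugging Face transformers library might look like the following. The model name, data file, and hyperparameters are placeholders chosen for illustration; treat it as an outline rather than a recipe.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Placeholder names: swap in the base model and domain dataset you actually use.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A small, specialized corpus with one text example per line (e.g., legal or medical text).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="finetuned-model",
    learning_rate=2e-5,          # lower than in pre-training, to preserve general knowledge
    num_train_epochs=3,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```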