Dataset columns: URL · Headline · Authors · Publication Date · Article Text
https://huggingface.co/blog/ethical-charter-multimodal
Putting ethical principles at the core of the research lifecycle
Lucile Saulnier, Siddharth Karamcheti, Hugo Laurençon, Leo Tronchon, Thomas Wang, Victor Sanh, Amanpreet Singh, Giada Pistilli, Sasha Luccioni, Yacine Jernite, Margaret Mitchell, Douwe Kiela
May 19, 2022
Ethical charter - Multimodal project

Purpose of the ethical charter
It has been well documented that machine learning research and applications can potentially lead to "data privacy issues, algorithmic biases, automation risks and malicious uses" (NeurIPS 2021 ethics guidelines). The purpose of this short document is to formalize the ethical principles that we (the multimodal learning group at Hugging Face) adopt for the project we are pursuing. By defining these ethical principles at the beginning of the project, we make them core to our machine learning lifecycle.
By being transparent about the decisions we're making in the project, who is working on which aspects of the system, and how the team can be contacted, we hope to receive feedback early enough in the process to make meaningful changes, and to ground discussions about choices in an awareness of the goals we aim to achieve and the values we hope to incorporate.
This document is the result of discussions led by the multimodal learning group at Hugging Face (composed of machine learning researchers and engineers), with the contributions of multiple experts in ethics operationalization, data governance, and personal privacy.

Limitations of this ethical charter
This document is a work in progress and reflects a state of reflection as of May 2022. There is no consensus nor official definition of "ethical AI", and our considerations are very likely to change over time. In case of updates, we will reflect changes directly in this document while providing the rationale for changes and tracking the history of updates through GitHub. This document is not intended to be a source of truth about best practices for ethical AI. We believe that even though it is imperfect, thinking about the impact of our research, the potential harms we foresee, and the strategies we can take to mitigate these harms is a step in the right direction for the machine learning community. Throughout the project, we will document how we operationalize the values described in this document, along with the advantages and limitations we observe in the context of the project.

Content policy
Studying current state-of-the-art multimodal systems, we foresee several misuses of the technologies we aim at as part of this project. We provide guidelines on some of the use cases we ultimately want to prevent:
- Promotion of content and activities which are detrimental in nature, such as violence, harassment, bullying, harm, hate, and all forms of discrimination. Prejudice targeted at specific identity subpopulations based on gender, race, age, ability status, LGBTQA+ orientation, religion, education, socioeconomic status, and other sensitive categories (such as sexism/misogyny, casteism, racism, ableism, transphobia, homophobia).
- Violation of regulations, privacy, copyrights, human rights, cultural rights, fundamental rights, laws, and any other form of binding documents.
- Generating personally identifiable information.
- Generating false information without any accountability and/or with the purpose of harming and triggering others.
- Incautious usage of the model in high-risk domains - such as medical, legal, finance, and immigration - that can fundamentally damage people's lives.

Values for the project
- Be transparent: We are transparent and open about the intent, sources of data, tools, and decisions. By being transparent, we expose the weak points of our work to the community and thus are responsible and can be held accountable.
- Share open and reproducible work: Openness touches on two aspects: the processes and the results. We believe it is good research practice to share precise descriptions of the data, tools, and experimental conditions. Research artifacts, including tools and model checkpoints, must be accessible - for use within the intended scope - to all without discrimination (e.g., religion, ethnicity, sexual orientation, gender, political orientation, age, ability). We define accessibility as ensuring that our research can be easily explained to an audience beyond the machine learning research community.
- Be fair: We define fairness as the equal treatment of all human beings. Being fair implies monitoring and mitigating unwanted biases that are based on characteristics such as race, gender, disabilities, and sexual orientation. To limit negative outcomes as much as possible, especially outcomes that impact marginalized and vulnerable groups, reviews of unfair biases - such as racism for predictive policing algorithms - should be conducted on both the data and the model outputs.
- Be self-critical: We are aware of our imperfections and we should constantly look out for ways to better operationalize ethical values and other responsible AI decisions. For instance, this includes better strategies for curating and filtering training data. We should not overclaim or entertain spurious discourses and hype.
- Give credit: We should respect and acknowledge people's work through proper licensing and credit attribution.

We note that some of these values can sometimes be in conflict (for instance being fair and sharing open and reproducible work, or respecting individuals' privacy and sharing datasets), and emphasize the need to consider the risks and benefits of our decisions on a case-by-case basis.
https://huggingface.co/blog/deep-rl-q-part1
An Introduction to Q-Learning Part 1
Thomas Simonini
May 18, 2022
Unit 2, part 1 of the Deep Reinforcement Learning Class with Hugging Face 🤗
⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introduction
This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus here.
In the first chapter of this class, we learned about Reinforcement Learning (RL), the RL process, and the different methods to solve an RL problem. We also trained our first lander agent to land correctly on the Moon 🌕 and uploaded it to the Hugging Face Hub.
So today, we're going to dive deeper into one of the Reinforcement Learning methods, value-based methods, and study our first RL algorithm: Q-Learning.
We'll also implement our first RL agent from scratch, a Q-Learning agent, and train it in two environments:
- Frozen-Lake-v1 (non-slippery version): where our agent will need to go from the starting state (S) to the goal state (G) by walking only on frozen tiles (F) and avoiding holes (H).
- An autonomous taxi that will need to learn to navigate a city to transport its passengers from point A to point B.
This unit is divided into 2 parts:
In the first part, we'll learn about value-based methods and the difference between Monte Carlo and Temporal Difference Learning.
In the second part, we'll study our first RL algorithm: Q-Learning, and implement our first RL agent.
This unit is fundamental if you want to be able to work on Deep Q-Learning (unit 3): the first Deep RL algorithm that was able to play Atari games and beat the human level on some of them (Breakout, Space Invaders…).
So let's get started!
- What is RL? A short recap
- The two types of value-based methods
  - The State-Value function
  - The Action-Value function
  - The Bellman Equation: simplify our value estimation
- Monte Carlo vs Temporal Difference Learning
  - Monte Carlo: learning at the end of the episode
  - Temporal Difference Learning: learning at each step

What is RL? A short recap
In RL, we build an agent that can make smart decisions. For instance, an agent that learns to play a video game, or a trading agent that learns to maximize its benefits by making smart decisions on what stocks to buy and when to sell.
To make intelligent decisions, our agent will learn from the environment by interacting with it through trial and error and receiving rewards (positive or negative) as unique feedback.
Its goal is to maximize its expected cumulative reward (because of the reward hypothesis).
The agent's decision-making process is called the policy π: given a state, a policy will output an action or a probability distribution over actions.
That is, given an observation of the environment, a policy will provide an action (or a probability for each action) that the agent should take.
Our goal is to find an optimal policy π*, i.e., a policy that leads to the best expected cumulative reward.
To find this optimal policy (hence solving the RL problem), there are two main types of RL methods:
- Policy-based methods: Train the policy directly to learn which action to take given a state.
- Value-based methods: Train a value function to learn which state is more valuable and use this value function to take the action that leads to it.
In this chapter, we'll dive deeper into value-based methods.

The two types of value-based methods
In value-based methods, we learn a value function that maps a state to the expected value of being at that state.
The value of a state is the expected discounted return the agent can get if it starts at that state and then acts according to our policy.
If you forgot what discounting is, you can read this section.
But what does it mean to act according to our policy? After all, we don't have a policy in value-based methods, since we train a value function and not a policy.
Remember that the goal of an RL agent is to have an optimal policy π. To find it, we learned that there are two different methods:
- Policy-based methods: Directly train the policy to select what action to take given a state (or a probability distribution over actions at that state). In this case, we don't have a value function. The policy takes a state as input and outputs what action to take at that state (deterministic policy). Consequently, we don't define the behavior of our policy by hand; training defines it.
- Value-based methods: Indirectly, by training a value function that outputs the value of a state or a state-action pair. Given this value function, our policy will take an action.
But because we didn't train our policy, we need to specify its behavior. For instance, if we want a policy that, given the value function, takes the actions that always lead to the biggest reward, we create a Greedy Policy: given a state, our action-value function (that we train) outputs the value of each action at that state, and our greedy policy (that we defined) selects the action with the biggest state-action pair value.
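To make this concrete, here is a minimal sketch (not from the original article) of a greedy policy sitting on top of an action-value table; the Q-values, states, and actions below are hypothetical placeholders, not learned values:

```python
import numpy as np

# Hypothetical action-value table Q[state, action], e.g. 4 states x 2 actions.
# In the course this table will be learned with Q-Learning; the numbers here
# are made up just to show how a greedy policy uses such a table.
Q = np.array([
    [0.1, 0.5],
    [0.0, 0.2],
    [0.7, 0.3],
    [0.4, 0.4],
])

def greedy_policy(Q, state):
    # The greedy policy is not trained: it simply picks the action
    # with the biggest state-action value for the given state.
    return int(np.argmax(Q[state]))

print(greedy_policy(Q, state=2))  # -> 0, because Q[2, 0] = 0.7 is the largest value
```

The Epsilon-Greedy policy mentioned just below would, with a small probability epsilon, pick a random action instead of this argmax; otherwise it behaves the same way.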
Consequently, whatever method you use to solve your problem, you will have a policy. In the case of value-based methods, you don't train it: your policy is just a simple function that you specify (for instance, the greedy policy), and this policy uses the values given by the value function to select its actions.
So the difference is:
- In policy-based methods, the optimal policy is found by training the policy directly.
- In value-based methods, finding an optimal value function leads to having an optimal policy.
In fact, most of the time, in value-based methods, you'll use an Epsilon-Greedy Policy that handles the exploration/exploitation trade-off; we'll talk about it when we talk about Q-Learning in the second part of this unit.
So, we have two types of value-based functions:

The State-Value function
We write the state-value function under a policy π like this:
V_π(s) = E_π[G_t | S_t = s]
For each state, the state-value function outputs the expected return if the agent starts at that state and then follows the policy forever after (for all future timesteps, if you prefer).
If we take the state with value -7: it's the expected return starting at that state and taking actions according to our policy (the greedy policy), so right, right, right, down, down, right, right.

The Action-Value function
In the action-value function, for each state-action pair, the action-value function outputs the expected return if the agent starts in that state, takes that action, and then follows the policy forever after.
The value of taking action a in state s under a policy π is:
Q_π(s, a) = E_π[G_t | S_t = s, A_t = a]
We see that the difference is:
- With the state-value function, we calculate the value of a state S_t.
- With the action-value function, we calculate the value of the state-action pair (S_t, A_t), hence the value of taking that action at that state.
Note: We didn't fill all the state-action pairs for the example of the action-value function.
In either case, whatever value function we choose (state-value or action-value function), the value is the expected return.
However, this implies that to calculate EACH value of a state or a state-action pair, we need to sum all the rewards an agent can get if it starts at that state. This can be a tedious process, and that's where the Bellman equation comes to help us.

The Bellman Equation: simplify our value estimation
The Bellman equation simplifies our state-value or state-action-value calculation.
With what we have learned so far, we know that if we calculate V(S_t) (the value of a state), we need to calculate the return starting at that state and then follow the policy forever after. (The policy we use in the following example is a greedy policy, and for simplification, we don't discount the reward.)
So to calculate V(S_t), we need to sum the expected rewards.
Hence, to calculate the value of State 1: we sum the rewards the agent gets if it starts in that state and then follows the greedy policy (taking the actions that lead to the best state values) for all the time steps.
Then, to calculate V(S_{t+1}), we need to calculate the return starting at that state S_{t+1}. To calculate the value of State 2: we sum the rewards the agent gets if it starts in that state and then follows the policy for all the time steps.
So you see, that's a pretty tedious process if you need to do it for each state value or state-action value.
Instead of calculating the expected return for each state or each state-action pair, we can use the Bellman equation.
The Bellman equation is a recursive equation that works like this: instead of starting from the beginning for each state and calculating the return, we can consider the value of any state as:
the immediate reward R_{t+1} + the discounted value of the state that follows (gamma * V(S_{t+1})).
(For simplification, here we don't discount, so gamma = 1.)
If we go back to our example, the value of State 1 is the expected cumulative return if we start at that state. To calculate the value of State 1: we sum the rewards the agent gets if it starts in State 1 and then follows the policy for all the time steps.
This is equivalent to: V(S_t) = immediate reward R_{t+1} + discounted value of the next state, gamma * V(S_{t+1}).
Likewise, V(S_{t+1}) = immediate reward R_{t+2} + discounted value of the next state, gamma * V(S_{t+2}). And so on.
To recap, the idea of the Bellman equation is that instead of calculating each value as the sum of the expected return, which is a long process, we calculate the value as the sum of the immediate reward and the discounted value of the state that follows.
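As a quick illustration, here is a minimal sketch (not from the original article) checking, on a made-up chain of rewards, that the Bellman recursion gives the same values as summing the rewards from each state onward; the reward numbers are hypothetical and gamma = 1 as above:

```python
# Hypothetical rewards received after leaving states S0, S1, S2, S3 (then the episode ends).
rewards = [1, 0, 2, 1]
gamma = 1  # no discounting, as in the article's example

# Naive way: for each state, re-sum all the rewards from that state to the end.
v_naive = [sum(rewards[i:]) for i in range(len(rewards))]

# Bellman way: V(S_t) = R_{t+1} + gamma * V(S_{t+1}), computed backwards,
# reusing the value of the next state instead of re-summing everything.
v_bellman = [0.0] * (len(rewards) + 1)  # the value after the terminal state is 0
for t in reversed(range(len(rewards))):
    v_bellman[t] = rewards[t] + gamma * v_bellman[t + 1]

print(v_naive)         # [4, 3, 3, 1]
print(v_bellman[:-1])  # [4.0, 3.0, 3.0, 1.0] -> same values, obtained recursively
```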
Monte Carlo vs Temporal Difference Learning
The last thing we need to talk about before diving into Q-Learning is the two ways of learning.
Remember that an RL agent learns by interacting with its environment. The idea is that, given the experience gathered and the rewards it gets, the agent updates its value function or its policy.
Monte Carlo and Temporal Difference Learning are two different strategies for training our value function or our policy function. Both of them use experience to solve the RL problem.
On one hand, Monte Carlo uses an entire episode of experience before learning. On the other hand, Temporal Difference uses only a single step (S_t, A_t, R_{t+1}, S_{t+1}) to learn.
We'll explain both of them using a value-based method example.

Monte Carlo: learning at the end of the episode
Monte Carlo waits until the end of the episode, calculates G_t (the return), and uses it as a target for updating V(S_t). So it requires a complete episode of interaction before updating our value function.
If we take an example:
- We always start the episode at the same starting point.
- The agent takes actions using the policy. For instance, using an Epsilon-Greedy Strategy, a policy that alternates between exploration (random actions) and exploitation.
- We get the reward and the next state.
- We terminate the episode if the cat eats the mouse or if the mouse moves more than 10 steps.
- At the end of the episode, we have a list of States, Actions, Rewards, and Next States.
- The agent sums the total rewards G_t (to see how well it did).
- It then updates V(S_t) based on the formula.
- Then it starts a new game with this new knowledge.
By running more and more episodes, the agent will learn to play better and better.
For instance, if we train a state-value function using Monte Carlo:
- We just started to train our value function, so it returns a value of 0 for each state.
- Our learning rate (lr) is 0.1 and our discount rate is 1 (= no discount).
- Our mouse explores the environment and takes random actions.
- The mouse made more than 10 steps, so the episode ends.
- We have a list of states, actions, rewards, and next states, and we need to calculate the return G_t = R_{t+1} + R_{t+2} + R_{t+3} + … (for simplicity, we don't discount the rewards).
  G_t = 1 + 0 + 0 + 0 + 0 + 0 + 1 + 1 + 0 + 0
  G_t = 3
- We can now update V(S_0):
  New V(S_0) = V(S_0) + lr * [G_t - V(S_0)]
  New V(S_0) = 0 + 0.1 * [3 - 0]
  New V(S_0) = 0.3
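Here is a minimal sketch (not from the original article) of this Monte Carlo update, reproducing the numbers from the worked example above:

```python
# Rewards collected during the episode in the worked example above.
episode_rewards = [1, 0, 0, 0, 0, 0, 1, 1, 0, 0]
lr = 0.1      # learning rate
gamma = 1.0   # no discounting, as in the example

def monte_carlo_update(value, rewards, lr, gamma):
    # Compute the (discounted) return G_t over the whole episode...
    g = sum(r * gamma**i for i, r in enumerate(rewards))
    # ...and move the current estimate a little toward that return.
    return value + lr * (g - value)

v_s0 = 0.0  # the value function is initialized to 0 for every state
v_s0 = monte_carlo_update(v_s0, episode_rewards, lr, gamma)
print(round(v_s0, 2))  # 0.3, matching New V(S_0) = 0 + 0.1 * (3 - 0)
```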
Temporal Difference Learning: learning at each step
Temporal Difference, on the other hand, waits for only one interaction (one step) S_{t+1} to form a TD target and updates V(S_t) using R_{t+1} and gamma * V(S_{t+1}).
The idea with TD is to update V(S_t) at each step.
But because we didn't play an entire episode, we don't have G_t (the expected return). Instead, we estimate G_t by adding R_{t+1} and the discounted value of the next state.
This is called bootstrapping, because TD bases its update in part on an existing estimate V(S_{t+1}) and not on a complete sample G_t.
This method is called TD(0) or one-step TD (it updates the value function after any individual step).
If we take the same example:
- We just started to train our value function, so it returns a value of 0 for each state.
- Our learning rate (lr) is 0.1, and our discount rate is 1 (no discount).
- Our mouse explores the environment and takes a random action: going to the left.
- It gets a reward R_{t+1} = 1, since it eats a piece of cheese.
- We can now update V(S_0):
  New V(S_0) = V(S_0) + lr * [R_1 + gamma * V(S_1) - V(S_0)]
  New V(S_0) = 0 + 0.1 * [1 + 1 * 0 - 0]
  New V(S_0) = 0.1
So we just updated our value function for State 0. Now we continue to interact with this environment with our updated value function.
If we summarize:
- With Monte Carlo, we update the value function from a complete episode, and so we use the actual, accurate discounted return of this episode.
- With TD Learning, we update the value function from a single step, so we replace G_t, which we don't have, with an estimated return called the TD target.
So now, before diving into Q-Learning, let's summarize what we just learned:
We have two types of value-based functions:
- State-value function: outputs the expected return if the agent starts at a given state and acts according to the policy forever after.
- Action-value function: outputs the expected return if the agent starts in a given state, takes a given action at that state, and then acts according to the policy forever after.
In value-based methods, we define the policy by hand because we don't train it; we train a value function. The idea is that if we have an optimal value function, we will have an optimal policy.
There are two types of methods for learning this value function:
- With the Monte Carlo method, we update the value function from a complete episode, and so we use the actual, accurate discounted return of this episode.
- With the TD Learning method, we update the value function from a step, so we replace G_t, which we don't have, with an estimated return called the TD target.
So that's all for today. Congrats on finishing this first part of the chapter! There was a lot of information.
It's normal if you still feel confused by all these elements. This was the same for me and for everyone who has studied RL.
Take time to really grasp the material before continuing. And since the best way to learn and avoid the illusion of competence is to test yourself, we wrote a quiz to help you find where you need to reinforce your study. Check your knowledge here 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit2/quiz1.md
In the second part, we'll study our first RL algorithm: Q-Learning, and implement our first RL agent in two environments:
- Frozen-Lake-v1 (non-slippery version): where our agent will need to go from the starting state (S) to the goal state (G) by walking only on frozen tiles (F) and avoiding holes (H).
- An autonomous taxi that will need to learn to navigate a city to transport its passengers from point A to point B.
And don't forget to share with your friends who want to learn 🤗!
Finally, we want to improve and update the course iteratively with your feedback. If you have some, please fill out this form 👉 https://forms.gle/3HgA7bEHwAmmLfwh9
Keep learning, stay awesome,
https://huggingface.co/blog/sasha-luccioni-interview
Machine Learning Experts - Sasha Luccioni
Britney Muller
May 17, 2022
🤗 Welcome to Machine Learning Experts - Sasha Luccioni 🚀 If you're interested in learning how ML Experts, like Sasha, can help accelerate your ML roadmap visit: hf.co/support.Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller and today’s guest is Sasha Luccioni. Sasha is a Research Scientist at Hugging Face where she works on the ethical and societal impacts of Machine Learning models and datasets. Sasha is also a co-chair of the Carbon Footprint WG of the Big Science Workshop, on the Board of WiML, and a founding member of the Climate Change AI (CCAI) organization which catalyzes impactful work applying machine learning to the climate crisis. You’ll hear Sasha talk about how she measures the carbon footprint of an email, how she helped a local soup kitchen leverage the power of ML, and how meaning and creativity fuel her work.Very excited to introduce this brilliant episode to you! Here’s my conversation with Sasha Luccioni:Note: Transcription has been slightly modified/reformatted to deliver the highest-quality reading experience. Thank you so much for joining us today, we are so excited to have you on! Sasha: I'm really excited to be here. Diving right in, can you speak to your background and what led you to Hugging Face? Sasha: Yeah, I mean if we go all the way back, I started studying linguistics. I was super into languages and both of my parents were mathematicians. But I thought, I don't want to do math, I want to do language. I started doing NLP, natural language processing, during my undergrad and got super into it. My Ph.D. was in computer science, but I maintained a linguistic angle. I started out in humanities and then got into computer science. Then after my Ph.D., I spent a couple of years working in applied AI research. My last job was in finance, and then one day I decided that I wanted to do good and socially positive AI research, so I quit my job. I decided that no amount of money was worth working on AI for AI's sake, I wanted to do more. So I spent a couple of years working with Yoshua Bengio, meanwhile working on AI for good projects, AI for climate change projects, and then I was looking for my next role. I wanted to be in a place that I trusted was doing the right things and going in the right direction. When I met Thom and Clem, I knew that Hugging Face was a place for me and that it would be exactly what I was looking for. Love that you wanted to something that felt meaningful! Sasha: Yeah, when I hear people on Sunday evening being like “Monday's tomorrow…” I'm like “Tomorrow's Monday! That's great!” And it's not that I'm a workaholic, I definitely do other stuff, and have a family and everything, but I'm literally excited to go to work to do really cool stuff. Think that's important. I know people can live without it, but I can't. What are you most excited about that you're working on now? Sasha: I think the Big Science project is definitely super inspiring. For the last couple of years, I've been seeing these large language models, and I was always like, but how do they work? And where's the code, where's their data, and what's going on in there? How are they developed and who was involved? It was all like a black box thing, and I'm so happy that we're finally making it a glass box. And there are so many people involved and so many really interesting perspectives. 
And I'm chairing the carbon footprint working group, so we're working on different aspects of environmental impacts, above and beyond just counting CO2 emissions, like the manufacturing costs. At some point, we even considered how much CO2 an email generates, things like that, so we're definitely thinking of different perspectives. Also about the data, I'm involved in a lot of the data working groups at Big Science, and it's really interesting because typically it's been like we're gonna get the most data we can, stuff it in a language model and it's gonna be great. And it's gonna learn all this stuff, but what's actually in there, there's so much weird stuff on the internet, and things that you don't necessarily want your model to be seeing. So we're really looking into mindfulness, data curation, and multilingualism as well to make sure that it's not just a hundred percent English or 99% English. So it's such a great initiative, and it makes me excited to be involved. Love the idea of evaluating the carbon footprint of an email!? Sasha: Yeah, people did it, depending on the attachment or not, but it was just because we found this article of, I think it was a theoretical physics project and they did that, they did everything. They did video calls, travel commutes, emails, and the actual experiments as well. And they made this pie chart and it was cool because there were 37 categories in the pie chart, and we really wanted to do that. But I don't know if we want to go into that level of detail, but we were going to do a survey and ask participants on average, how many hours did they spend working on Big Science or training language models and things like that. So we didn't want just the number of GPU hours for training the model, but also people's implication and participation in the project. Can you speak a little bit more about the environmental impact of AI? Sasha: Yeah, it's a topic I got involved in three years ago now. The first article that came out was by Emma Strubell and her colleagues and they essentially trained a large language model with hyperparameter tuning. So essentially looking at all the different configurations, and the figure they got was that the AI model emitted as much carbon as five cars in their lifetimes. Which includes gas and everything, like the average kind of consumption. And with my colleagues we were like, well that doesn't sound right, it can't be all models, right? And so we really went off the deep end into figuring out what has an impact on emissions, and how we can measure emissions. So first we just created this online calculator where someone could enter what hardware they use, how long they trained for, and where, either their location or a cloud computing instance. And then it would give them an estimate of the carbon that they emitted. Essentially that was our first attempt, a calculator, and then we helped create a package called CodeCarbon which actually does that in real-time. So it's gonna run in parallel to whatever you're doing training a model and then at the end spit out an estimate of the carbon emissions.
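(Editor's note: for readers curious what that looks like in practice, here is a minimal sketch of how the codecarbon package is typically used; the project name and the training function are placeholders, and the exact API may differ from the version discussed here.)

```python
# pip install codecarbon
from codecarbon import EmissionsTracker

# The tracker runs alongside your training code and periodically samples
# hardware power usage together with the local grid's carbon intensity.
tracker = EmissionsTracker(project_name="my-training-run")  # name is a placeholder
tracker.start()

try:
    train_model()  # placeholder for whatever training loop you are running
finally:
    emissions = tracker.stop()  # estimated kg of CO2-equivalent emitted

print(f"Estimated emissions: {emissions:.4f} kg CO2eq")
```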
Lately we've been going further and further. I just had an article that I was a co-author on that got accepted, about how to proactively reduce emissions. For example, by anticipating times when servers are not as used as other times, doing either time delaying or picking the right region, because if you train in, I don't know, Australia, it's gonna be a coal-based grid, and so it's gonna be highly polluting. Whereas in Quebec or Montreal, where I'm based, it's a hundred percent hydroelectricity. So just by making that choice, you can reduce your emissions by around a hundredfold. And so just small things like that, above and beyond estimating, we also want people to start reducing their emissions. It's the next step. It's never crossed my mind that geographically where you compute has a different emissions cost. Sasha: Oh yeah, and I'm so into energy grids now. Every time I go somewhere I'm like, so what's the energy coming from? How are you generating it? And so it's really interesting, there are a lot of historical factors and a lot of cultural factors. For example, France is mostly nuclear energy, and Canada has a lot of hydroelectric energy. Some places have a lot of wind or tidal, and so it's really interesting just to understand, when you turn on a lamp, where that electricity is coming from and at what cost to the environment. Because when I was growing up, I would always turn off the lights and unplug whatever, but not anything more than that. It was just good best practices. You turn off the light when you're not in a room, but after that, you can really go deeper: depending on where you live, your energy's coming from different sources. And there is more or less pollution, but we just don't see it, we don't see how energy is produced, we just see the light and we're like oh, this is my lamp. So it's really important to start thinking about that. It's so easy not to think about that stuff, which I could see being a barrier for machine learning engineers who might not have that general awareness. Sasha: Yeah, exactly. And I mean usually, it's just by habit, right? I think there's a default option when you're using cloud instances, often it's like the closest one to you or the one with the most GPUs available or whatever. There's a default option, and people are like okay, fine, whatever, and click the default. It's this nudge theory aspect. I did a master's in cognitive science, and just by changing the default option, you can change people's behavior to an incredible degree. So whether you put apples or chocolate bars near the cash register, or small stuff like that. And so if the default option all of a sudden was the low-carbon one, we could save so many emissions just because people are just like okay, fine, I'm gonna train a model in Montreal, I don't care. It doesn't matter, as long as you have access to the hardware you need, you don't care where it is. But in the long run, it really adds up. What are some of the ways that machine learning teams and engineers could be a bit more proactive in aspects like that? Sasha: So I've noticed that a lot of people are really environmentally conscious. Like they'll bike to work or they'll eat less meat and things like that. They'll have this kind of environmental awareness, but then disassociate it from their work because we're not aware of our impact as machine learning researchers or engineers on the environment. And without sharing it necessarily, just starting to measure, for example, carbon emissions. And starting to look at what instances you're picking, if you have a choice.
For example, I know that Google Cloud and AWS have started putting low carbon as a little tag so you can pick it because the information is there. And starting to make these little steps, and connecting the dots between environment and tech. These are dots that are not often connected because tech is so like the cloud, it's nice to be distributed, and you don't really see it. And so by grounding it more, you see the impact it can have on the environment. That's a great point. And I've listened to a couple talks and podcasts of yours, where you've mentioned how machine learning can be used to help offset the environmental impact of models. Sasha: Yeah, we wrote a paper a couple of years ago that was a cool experience. It's almost a hundred pages, it's called Tackling Climate Change with Machine Learning. And there are like 25 authors, but there are all these different sections ranging from electricity to city planning to transportation to forestry and agriculture. We essentially have these chapters of the paper where we talk about the problems that exist. For example, renewable energy is variable in a lot of cases. So if you have solar panels, they won't produce energy at night. That's kind of like a given. And then wind power is dependent on the wind. And so a big challenge in implementing renewable energy is that you have to respond to the demand. You need to be able to give people power at night, even if you're on solar energy. And so typically you have either diesel generators or this backup system that often cancels out the environmental effect, like the emissions that you're saving, but what machine learning can do, you're essentially predicting how much energy will be needed. So based on previous days, based on the temperature, based on events that happen, you can start being like okay, well we're gonna be predicting half an hour out or an hour out or 6 hours or 24 hours. And you can start having different horizons and doing time series prediction.Then instead of powering up a diesel generator which is cool because you can just power them up, and in a couple of seconds they're up and running. What you can also do is have batteries, but batteries you need to start charging them ahead of time. So say you're six hours out, you start charging your batteries, knowing that either there's a cloud coming or that night's gonna fall, so you need that energy stored ahead. And so there are things that you could do that are proactive that can make a huge difference. And then machine learning is good at that, it’s good at predicting the future, it’s good at finding the right features, and things like that. So that's one of the go-to examples. Another one is remote sensing. So we have a lot of satellite data about the planet and see either deforestation or tracking wildfires. In a lot of cases, you can detect wildfires automatically based on satellite imagery and deploy people right away. Because they're often in remote places that you don't necessarily have people living in. And so there are all these different cases in which machine learning could be super useful. We have the data, we have the need, and so this paper is all about how to get involved and whatever you're good at, whatever you like doing, and how to apply machine learning and use it in the fight against climate change. For people listening that are interested in this effort, but perhaps work at an organization where it's not prioritized, what tips do you have to help incentivize teams to prioritize environmental impact? 
Sasha: So it's always a question of cost and benefit or time, you know, the time that you need to put in. And sometimes people just don't know that there are different tools that exist or approaches. And so if people are interested or even curious to learn about it. I think that's the first up because even when I first started thinking of what I can do, I didn't know that all these things existed. People have been working on this for like a fairly long time using different data science techniques. For example, we created a website called climatechange.ai, and we have interactive summaries that you can read about how climate change can help and detect methane or whatever. And I think that just by sprinkling this knowledge can help trigger some interesting thought processes or discussions. I've participated in several round tables at companies that are not traditionally climate change-oriented but are starting to think about it. And they're like okay well we put a composting bin in the kitchen, or we did this and we did that. So then from the tech side, what can we do? It's really interesting because there are a lot of low-hanging fruits that you just need to learn about. And then it's like oh well, I can do that, I can by default use this cloud computing instance and that's not gonna cost me anything. And you need to change a parameter somewhere. What are some of the more common mistakes you see machine learning engineers or teams make when it comes to implementing these improvements? Sasha: Actually, machine learning people or AI people, in general, have this stereotype from other communities that we think AI's gonna solve everything. We just arrived and we're like oh, we're gonna do AI. And it's gonna solve all your problems no matter what you guys have been doing for 50 years, AI's gonna do it. And I haven't seen that attitude that much, but we know what AI can do, we know what machine learning can do, and we have a certain kind of worldview. It's like when you have a hammer, everything's a nail, so it’s kind of something like that. And I participated in a couple of hackathons and just like in general, people want to make stuff or do stuff to fight climate change. It's often like oh, this sounds like a great thing AI can do, and we're gonna do it without thinking of how it's gonna be used or how it's gonna be useful or how it's gonna be. Because it's like yeah, sure, AI can do all this stuff, but then at the end of the day, someone's gonna use it.For example, if you create something for scanning satellite imagery and detecting wildfire, the information that your model outputs has to be interpretable. Or you need to add that little extra step of sending a new email or whatever it is. Otherwise, we train a model, it's great, it's super accurate, but then at the end of the day, nobody's gonna use it just because it's missing a tiny little connection to the real world or the way that people will use it. And that's not sexy, people are like yeah, whatever, I don't even know how to write a script that sends an email. I don't either. But still, just doing that little extra step, that's so much less technologically complex than what you've done so far. Just adding that little thing will make a big difference and it can be in terms of UI, or it can be in terms of creating an app. 
It's like the machine learning stuff that's actually crucial for your project to be used.And I've participated in organizing workshops where people submit ideas that are super great on paper that have great accuracy rates, but then they just stagnate in paper form or article form because you still need to have that next step. I remember this one presentation of a machine learning algorithm that could reduce flight emissions of airplanes by 3 to 7% by calculating the wind speed, etc. Of course, that person should have done a startup or a product or pitched this to Boeing or whatever, otherwise it was just a paper that they published in this workshop that I was organizing, and then that was it. And scientists or engineers don't necessarily have those skills necessary to go see an airplane manufacturer with this thing, but it's frustrating. And at the end of the day, to see these great ideas, this great tech that just fizzles. So sad. That's such a great story though and how there are opportunities like that. Sasha: Yeah, and I think scientists, so often, don't necessarily want to make money, they just want to solve problems often. And so you don't necessarily even need to start a startup, you could just talk to someone or pitch this to someone, but you have to get out of your comfort zone. And the academic conferences you go to, you need to go to a networking event in the aviation industry and that's scary, right? And so there are often these barriers between disciplines that I find very sad. I actually like going to a business or random industry networking event because this is where connections can get made, that can make the biggest changes. It's not in the industry-specific conferences because everyone's talking about the same technical style that of course, they're making progress and making innovations. But then if you're the only machine learning expert in a room full of aviation experts, you can do so much. You can spark all these little sparks, and after you're gonna have people reducing emissions of flights. That's powerful. Wondering if you could add some more context as to why finding meaning in your work is so important? Sasha: Yeah, there's this concept that my mom read about in some magazine ages ago when I was a kid. It's called Ikigai, and it's a Japanese concept, it's like how to find the reason or the meaning of life. It's kind of how to find your place in the universe. And it was like you need to find something that has these four elements. Like what you love doing, what you're good at, what the world needs and then what can be a career. I was always like this is my career, but she was always like no because even if you love doing this, but you can't get paid for it, then it's a hard life as well. And so she always asked me this when I was picking my courses at university or even my degree, she'll always be like okay, well is that aligned with things you love and things you're good at? And some things she'd be like yeah, but you're not good at that though. I mean you could really want to do this, but maybe this is not what you're good at.So I think that it's always been my driving factor in my career. And I feel that it helps feel that you're useful and you're like a positive force in the world. For example, when I was working at Morgan Stanley, I felt that there were interesting problems like I was doing really well, no questions asked, the salary was amazing. 
No complaints there, but it was missing this "what the world needs" aspect, which was kind of like this itch I couldn't scratch essentially. But given this framing, this ikigai, I was like oh, that's what's missing in my life. And so I think that for people in general, not only in machine learning, it's good to think about not only what you're good at, but also what you love doing, what motivates you, why you would get out of bed in the morning, and of course having this aspect of what the world needs. And it doesn't have to be like solving world hunger, it can be on a much smaller scale or on a much more conceptual scale. For example, what I feel like we're doing at Hugging Face is really that machine learning needs more open source code, more model sharing, but not because it's gonna solve any one particular problem, because it can contribute to a spectrum of problems. Anything from reproducibility to compatibility to product, but the world needs this to some extent. And so I think that really helped me converge on Hugging Face, as being maybe the world doesn't necessarily need better social networks, because a lot of people are doing AI research in the context of social media or these big tech companies. Maybe the world doesn't necessarily need that, maybe not right now, maybe what the world needs is something different. And so this kind of four-part framing really helped me find meaning in my career and my life in general, trying to find all these four elements. What other examples or applications do you see potential and meaning in for AI and machine learning? Sasha: I think that an often overlooked aspect is accessibility, and I guess democratization, like making AI easier for non-specialists. Because can you imagine if, I don't know, anyone like a journalist or a doctor or any profession you can think of could easily train or use an AI model. Because I feel like yeah, for sure we do AI in medicine and healthcare, but it's from a very AI machine learning angle. But if we had more doctors who were empowered to create more tools, or any profession like a baker… I actually have a friend who has a bakery here in Montreal and he was like yeah, well can AI help me make better bread? And I was like probably, yeah. I'm sure that if you do some kind of experimentation, and he's like oh, I can install a camera in my oven. And I was like oh yeah, you could do that I guess. I mean we were talking about it and you know, actually, bread is pretty fickle, you need the right humidity, and it actually takes a lot of experimentation and a lot of know-how from ‘boulangers’ [‘bakers’]. And the same thing for croissants, his croissants are so good and he's like yeah, well you need to really know the right butter, etc. And he was like I want to make an AI model that will help bake bread. And I was like I don't even know how to help you start, like where do you start doing that? So accessibility is such an important part. For example, the internet has become so accessible nowadays. Anyone can navigate it, and initially it was a lot less so, so I think that AI still has some road to travel in order to become a more accessible and democratic tool. And you've talked before about the power of data and how it's not talked about enough. Sasha: Yeah, four or five years ago, I went to Costa Rica with my husband on a trip. We were just looking on a map and then I found this research center that was at the edge of the world. It was like being in the middle of nowhere.
We had to take a car on a dirt road, then a first boat then a second boat to get there. And they're in the middle of the jungle and they essentially study the jungle and they have all these camera traps that are automatically activated, that are all over the jungle. And then every couple of days they have to hike from camera to camera to switch out the SD cards. And then they take these SD cards back to the station and then they have a laptop and they have to go through every picture it took. And of course, there are a lot of false positives because of wind or whatever, like an animal moving really fast, so there's literally maybe like 5% of actual good images. And I was like why aren't they using it to track biodiversity? And they'd no, we saw a Jaguar on blah, blah, blah at this location because they have a bunch of them.Then they would try to track if a Jaguar or another animal got killed, if it had babies, or if it looked injured; like all of these different things. And then I was like, I'm sure a part of that could be automated, at least the filtering process of taking out the images that are essentially not useful, but they had graduate students or whatever doing it. But still, there are so many examples like this domain in all areas. And just having these little tools, I'm not saying that because I think we're not there yet, completely replacing scientists in this kind of task, but just small components that are annoying and time-consuming, then machine learning can help bridge that gap. Wow. That is so interesting! Sasha: It's actually really, camera trap data is a really huge part of tracking biodiversity. It's used for birds and other animals. It's used in a lot of cases and actually, there's been Kaggle competitions for the last couple of years around camera trap data. And essentially during the year, they have camera traps in different places like Kenya has a bunch and Tanzania as well. And then at the end of the year, they have this big Kaggle competition of recognizing different species of animals. Then after that they deployed the models, and then they update them every year. So it's picking up, but there's just a lot of data, as you said. So each ecosystem is unique and so you need a model that's gonna be trained on exactly. You can't take a model from Kenya and make it work in Costa Rica, that's not gonna work. You need data, you need experts to train the model, and so there are a lot of elements that need to converge in order for you to be able to do this. Kind of like AutoTrain, Hugging Face has one, but even simpler where biodiversity researchers in Costa Rica could be like these are my images, help me figure out which ones are good quality and the types of animals that are on them. And they could just drag and drop the images like a web UI or something. And then they had this model that's like, here are the 12 images of Jaguars, this one is injured, this one has a baby, etc. Do you have insights for teams that are trying to solve for things like this with machine learning, but just lack the necessary data? Sasha: Yeah, I guess another anecdote, I have a lot of these anecdotes, but at some point we wanted to organize an AI for social good hackathon here in Montreal like three or three or four years ago. And then we were gonna contact all these NGOs, like soup kitchens, homeless shelters in Montreal. And we started going to these places and then we're like okay, where's your data? 
And they're like, “What data?” I'm like, “Well don't you keep track of how many people you have in your homeless shelter or if they come back,” and they're like “No.” And then they're like, “But on the other hand, we have these problems of either people disappearing and we don't know where they are or people staying for a long time. And then at a certain point we're supposed to not let them stand.” They had a lot of issues, for example, in the food kitchen, they had a lot of wasted food because they had trouble predicting how many people would arrive. And sometimes they're like yeah, we noticed that in October, usually there are fewer people, but we don't really have any data to support that.So we completely canceled the hackathon, then instead we did, I think we call them data literacy or digital literacy workshops. So essentially we went to these places if they were interested and we gave one or two-hour workshops about how to use a spreadsheet and figure out what they wanted to track. Because sometimes they didn't even know what kind of things they wanted to save or wanted to really have a trace of. So we did a couple of them in some places like we would come back every couple of months and check in. And then a year later we had a couple, especially a food kitchen, we actually managed to make a connection between them, and I don't remember what the company name was anymore, but they essentially did this supply chain management software thing. And so the kitchen was actually able to implement a system where they would track like we got 10 pounds of tomatoes, this many people showed up today, and this is the waste of food we have. Then a year later we were able to do a hackathon to help them reduce food waste.So that was really cool because we really saw a year and some before they had no trace of anything, they just had intuitions, which were useful, but weren't formal. And then a year later we were able to get data and integrate it into their app, and then they would have a thing saying be careful, your tomatoes are gonna go bad soon because it's been three days since you had them. Or in cases where it's like pasta, it would be six months or a year, and so we implemented a system that would actually give alerts to them. And it was super simple in terms of technology, there was not even much AI in there, but just something that would help them keep track of different categories of food. And so it was a really interesting experience because I realized that yeah, you can come in and be like we're gonna help you do whatever, but if you don't have much data, what are you gonna do? Exactly, that's so interesting. That's so amazing that you were able to jump in there and provide that first step; the educational piece of that puzzle to get them set up on something like that. Sasha: Yeah, it's been a while since I organized any hackathons. But I think these community involvement events are really important because they help people learn stuff like we learn that you can't just like barge in and use AI, digital literacy is so much more important and they just never really put the effort into collecting the data, even if they needed it. Or they didn't know what could be done and things like that. So taking this effort or five steps back and helping improve tech skills, generally speaking, is a really useful contribution that people don't really realize is an option, I guess. What industries are you most excited to see machine learning be applied to? Sasha: Climate change! 
Yeah, the environment is kind of my number one. Education has always been something that I've really been interested in and I've kind of always been waiting. I did my Ph.D. in education and AI, like how AI can be used in education. I keep waiting for it to finally hit a certain peak, but I guess there are a lot of contextual elements and stuff like that, but I think AI, machine learning, and education can be used in so many different ways. For example, what I was working on during my Ph.D. was how to help pick activities, like learning activities and exercises that are best suited for learners. Instead of giving all kids or adults or whatever the same exercise to help them focus on their weak knowledge points, weak skills, and focusing on those. So instead of like a one size fits all approach. And not replacing the teacher, but tutoring more, like okay, you learn a concept in school, and help you work on it. And you have someone figure this one out really fast and they don't need those exercises, but someone else could need more time to practice. And I think that there is so much that can be done, but I still don't see it really being used, but I think it's potentially really impactful. All right, so we're going to dive into rapid-fire questions. If you could go back and do one thing differently at the start of your machine learning career, what would it be? Sasha: I would spend more time focusing on math. So as I said, my parents are mathematicians and they would always give me extra math exercises. And they would always be like math is universal, math, math, math. So when you get force-fed things in your childhood, you don't necessarily appreciate them later, and so I was like no, languages. And so for a good part of my university studies, I was like no math, only humanities. And so I feel like if I had been a bit more open from the beginning and realized the potential of math, even in linguistics or a lot of things, I think I would've come to where I'm at much faster than spending three years being like no math, no math. I remember in grade 12, my final year of high school, my parents made me sign up for a math competition, like an Olympiad and I won it. Then I remember I had a medal and I put it on my mom and I'm like “Now leave me alone, I'm not gonna do any more math in my life.” And she was like “Yeah, yeah.” And then after that, when I was picking my Ph.D. program, she's like “Oh I see there are math classes, eh? because you're doing machine learning, eh?”, and I was like “No,” but yeah, I should have gotten over my initial distaste for math a lot quicker. That's so funny, and it’s interesting to hear that because I often hear people say you need to know less and less math, the more advanced some of these ML libraries and programs get. Sasha: Definitely, but I think having a good base, I'm not saying you have to be a super genius, but having this intuition. Like when I was working with Yoshua for example, he's a total math genius and just the facility of interpreting results or understanding behaviors of a machine learning model just because math is so second nature. Whereas for me I have to be like, okay, so I'm gonna write this equation with the loss function. I'm gonna try to understand the consequences, etc., and it's a bit less automatic, but it's a skill that you can develop. It's not necessarily theoretical, it could also be experimental knowledge. But just having this really solid math background helps you get there quicker, you couldn't really skip a few steps. 
That was brilliant. And you can ask your parents for help? Sasha: No, I refuse to ask my parents for help, no way. Plus since they're like theoretical mathematicians, they think machine learning is just for people who aren't good at math and who are lazy or whatever. And so depending on whatever area you're in, there's pure mathematicians, theoretical mathematics, applied mathematicians, there's like statisticians, and there are all these different camps. And so I remember my little brother also was thinking of going to machine learning, and my dad was like no, stay in theoretical math, that's where all the geniuses are. He was like “No, machine learning is where math goes to die,” and I was like “Dad, I’m here!” And he was like “Well I'd rather your brother stayed in something more refined,” and I was like “That's not fair.” So yeah, there are a lot of empirical aspects in machine learning, and a lot of trial and error, like you're tuning hyperparameters and you don't really know why. And so I think formal mathematicians, unless there's like a formula, they don't think ML is real or legit. So besides maybe a mathematical foundation, what advice would you give to someone looking to get into machine learning? Sasha: I think getting your hands dirty and starting out with I don't know, Jupyter Notebooks or coding exercises, things like that. Especially if you do have specific angles or problems you want to get into or just ideas in general, and so starting to try. I remember I did a summer school in machine learning when I was at the beginning of my Ph.D., I think. And then it was really interesting, but then all these examples were so disconnected. I don't remember what the data was, like cats versus dogs, I don't know, but like, why am I gonna use that? And then they're like part of the exercise was to find something that you want to use, like a classifier essentially to do. Then I remember I got pictures of flowers or something, and I got super into it. I was like yeah, see, it confuses this flower and that flower because they're kind of similar. I understand I need more images, and I got super into it and that's when it clicked in my head, it's not only this super abstract classification. Or like oh yeah, I remember we were using this data app called MNIST which is super popular because it's like handwritten digits and they're really small, and the network goes fast. So people use it a lot in the beginning of machine learning courses. And I was like who cares, I don't want to classify digits, like whatever, right? And then when they let us pick our own images, all of a sudden it gets a lot more personal, interesting, and captivating. So I think that if people are stuck in a rut, they can really focus on things that interest them. For example, get some climate change data and start playing around with it and it really makes the process more pleasant. I love that, find something that you're interested in. Sasha: Exactly. And one of my favorite projects I worked on was classifying butterflies. We trained neural networks to classify butterflies based on pictures people took and it was so much fun. You learn so much, and then you're also solving a problem that you understand how it's gonna be used, and so it was such a great thing to be involved in. And I wish that everyone had found this kind of interest in the work they do because you really feel like you're making a difference, and it's cool, it's fun and it's interesting, and you want to do more. 
For example, this project was done in partnership with the Montreal insectarium, which is a museum for insects. And I kept in touch with a lot of these people and then they just renovated the insectarium and they're opening it after like three years of renovation this weekend. They also invited me and my family to the opening, and I'm so excited to go there. You could actually handle insects, they’re going to have stick bugs, and they're gonna have a big greenhouse where there are butterflies everywhere. And in that greenhouse, I mean you have to install the app, but you can take pictures of butterflies, then it uses our AI network to identify them. And I'm so excited to go there to use the app and to see my kids using it and to see this whole thing. Because of the old version, they would give you this little pamphlet with pictures of butterflies and you have to go find them. I just can't wait to see the difference between that static representation and this actual app that you could use to take pictures of butterflies. Oh my gosh. And how cool to see something that you created being used like that. Sasha: Exactly. And even if it's not like fighting climate change, I think it can make a big difference in helping people appreciate nature and biodiversity and taking things from something that's so abstract and two-dimensional to something that you can really get involved in and take pictures of. I think that makes a huge difference in terms of our perception and our connection. It helps you make a connection between yourself and nature, for example. So should people be afraid of AI taking over the world? Sasha: I think that we're really far from it. I guess it depends on what you mean by taking over the world, but I think that we should be a lot more mindful of what's going on right now. Instead of thinking to the future and being like oh terminator, whatever, and to kind of be aware of how AI's being used in our phones and our lives, and to be more cognizant of that. Technology or events in general, we have more influence on them than we think by using Alexa, for example, we're giving agency, we're giving not only material or funds to this technology. And we can also participate in it, for example, oh well I'm gonna opt out of my data being used for whatever if I am using this technology. Or I'm gonna read the fine print and figure out what it is that AI is doing in this case, and being more involved in general. So I think that people are really seeing AI as a very distant potential mega threat, but it's actually a current threat, but on a different scale. It's like a different perception. It's like instead of thinking of this AGI or whatever, start thinking about the small things in our lives that AI is being used for, and then engage with them. And then there's gonna be less chance that AGI is gonna take over the world if you make the more mindful choices about data sharing, about consent, about using technology in certain ways. Like if you find out that your police force in your city is using facial recognition technology, you can speak up about that. That's part of your rights as a citizen in many places. And so it's by engaging yourself, you can have an influence on the future by engaging in the present. What are you interested in right now? It could be anything, a movie, a recipe, a podcast, etc.? Sasha: So during the pandemic, or the lockdowns and stuff like that, I got super into plants. I bought so many plants and now we're preparing a garden with my children. 
So this is the first time I've done this, we've planted seeds like tomatoes, peppers, and cucumbers. I usually just buy them at the grocery store when they're already ready, but this time around I was like, no, I want to teach my kids. But I also want to learn what the whole process is. And so we planted them maybe 10 days ago and they're starting to grow. And we're watering them every day, and I think that this is also part of this process of learning more about nature and the conditions that can help plants thrive and stuff like that. So last summer we already built a square, essentially, that we fill in with dirt, but this year we're trying to make it better. I want to have several levels and stuff like that, so I'm really looking forward to learning more about growing your own food. That is so cool. I feel like that's such a grounding activity. Sasha: Yeah, and it's like the polar opposite of what I do. It's great not doing something on my computer, but just going outside and having dirty fingernails. I remember being like who would want to do gardening, it’s so boring, and now I'm super into gardening. I can't wait for the weekend to go gardening. Yeah, that's great. There's something so rewarding about creating something that you can see, touch, feel, and smell as opposed to pushing pixels. Sasha: Exactly, sometimes you spend a whole day grappling with this program that has bugs in it and it's not working. You're so frustrated, and then you go outside and you're like, but I have cherry tomatoes, it's all good. What are some of your favorite machine learning papers? Sasha: My favorites currently are papers by Abeba Birhane, who's a researcher in AI ethics. It's like a completely different way of looking at things. So for example, she wrote a paper that just got accepted to FAccT, which is the fairness, accountability, and transparency conference in AI. It was about values and how the way we do machine learning research is actually driven by the things that we value and the things that, for example, if I value a network that has high accuracy, for example, performance, I might be less willing to focus on efficiency. So for example, I'll train a model for a long time, just because I want it to be really accurate. Or like if I want to have something new, like this novelty value, I'm not gonna read the literature and see what people have been doing for whatever 10 years, I'm gonna be like I'm gonna reinvent this. So she and her co-authors write this really interesting paper about the connection between values that are theoretical, like kind of metaphysical, and the way that they're instantiated in machine learning. And I found that it was really interesting because typically we don't see it that way. Typically it's like oh, well we have to establish state-of-the-art, we have to establish accuracy and do this and that, and then like cite related work, but it's like a checkbox, you just have to do it. And then they think a lot more in-depth about why we're doing this, and then what are some alternative ways of doing things. For example, making a trade-off between efficiency and accuracy, like if you have a model that's slightly less accurate, but that's a lot more efficient and trains faster, that could be a good way of democratizing AI because people need less computational resources to train a model. And so there are all these different connections that they make that I find really cool. Wow, we'll definitely be linking to that paper as well, so people can check that out. Yeah, very cool. 
Anything else you'd like to share? Maybe things you're working on or that you would like people to know about? Sasha: Yeah, something I'm working on outside of Big Science is on evaluation and how we evaluate models. Well, kind of related to what Abeba talks about in her paper, but even from just a pure machine learning perspective, what are the different ways that we can evaluate models and compare them on different aspects, I guess. Not only accuracy but efficiency and carbon emissions and things like that. So there's a project that started a month or so ago on how to evaluate in a way that's not only performance-driven, but takes into account different aspects essentially. And I think that this has been a really overlooked aspect of machine learning, like people typically, once again, just check off like oh, you have to evaluate this and that and that, and then submit the paper. There are also these interesting trade-offs that we could be doing and things that we could be measuring that we're not. For example, if you have a data set and you have an average accuracy, is the accuracy the same in different subsets of the data set? Like are there, for example, patterns that you can pick up on that will help you improve your model, but also make it fairer? I guess the typical example is like image recognition, does it do the same in different… Well, the famous Gender Shades paper showed the algorithm did better on white men than African American women, but you could do that about anything. Not only gender and race, but you could do that for image color or types of objects or angles. Like is it good for images from above or images from street level? There are all these different ways of analyzing accuracy or performance that we haven't really looked at because it's typically more time-consuming. And so we want to make tools to help people delve deeper into the results and understand their models better. Where can people find you online? Sasha: I'm on Twitter @SashaMTL, and that's about it. I have a website, I don't update it enough, but Twitter I think is the best. Perfect. We can link to that too. Sasha, thank you so much for joining me today, this has been so insightful and amazing. I really appreciate it. Sasha: Thanks, Britney. Thank you for listening to Machine Learning Experts! If you or someone you know is interested in direct access to leading ML experts like Sasha who are ready to help accelerate your ML project, go to hf.co/support to learn more. ❤️
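To make Sasha's point about looking beyond a single average accuracy concrete, here is a minimal sketch of disaggregated evaluation; the DataFrame and the subgroup column are invented for illustration and are not from the interview:

import pandas as pd

# hypothetical evaluation table: one row per example, with a model prediction,
# the true label, and a subgroup attribute (e.g. camera angle, lighting, demographic group)
df = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 1],
    "label":      [1, 0, 0, 1, 0, 0],
    "subgroup":   ["above", "above", "street", "street", "street", "street"],
})

overall_accuracy = (df["prediction"] == df["label"]).mean()
per_group_accuracy = (
    df.assign(correct=df["prediction"] == df["label"])
      .groupby("subgroup")["correct"]
      .mean()
)

print(f"overall accuracy: {overall_accuracy:.2f}")
print(per_group_accuracy)  # reveals gaps that the single average hides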
https://huggingface.co/blog/fellowship
Announcing the Hugging Face Fellowship Program
Merve Noyan, Omar Espejel
May 17, 2022
The Fellowship is a network of exceptional people from different backgrounds who contribute to the Machine Learning open-source ecosystem 🚀. The goal of the program is to empower key contributors to enable them to scale their impact while inspiring others to contribute as well. How the Fellowship works 🙌🏻 This is Hugging Face supporting the amazing work of contributors! Being a Fellow works differently for everyone. The key question here is:❓ What would contributors need to have more impact? How can Hugging Face support them so they can do that project they have always wanted to do?Fellows of all backgrounds are welcome! The progress of Machine Learning depends on grassroots contributions. Each person has a unique set of skills and knowledge that can be used to democratize the field in a variety of ways. Each Fellow achieves impact differently and that is perfect 🌈. Hugging Face supports them to continue creating and sharing the way that fits their needs the best. What are the benefits of being part of the Fellowship? 🤩 The benefits will be based on the interests of each individual. Some examples of how Hugging Face supports Fellows:💾 Computing and resources🎁 Merchandise and assets.✨ Official recognition from Hugging Face. How to become a Fellow Fellows are currently nominated by members of the Hugging Face team or by another Fellow. How can prospects get noticed? The main criterion is that they have contributed to the democratization of open-source Machine Learning.How? In the ways that they prefer. Here are some examples of the first Fellows:María Grandury - Created the largest Spanish-speaking NLP community and organized a Hackathon that achieved 23 Spaces, 23 datasets, and 33 models that advanced the SOTA for Spanish (see the Organization in the Hub). 👩🏼‍🎤Manuel Romero - Contributed over 300 models to the Hugging Face Hub. He has trained multiple SOTA models in Spanish. 🤴🏻Aritra Roy Gosthipathy: Contributed new architectures for TensorFlow to the Transformers library, improved Keras tooling, and helped create the Keras working group (for example, see his Vision Transformers tutorial). 🦹🏻 Vaibhav Srivastav - Advocacy in the field of speech. He has led the ML4Audio working group (see the recordings) and paper discussion sessions. 🦹🏻Bram Vanroy - Helped many contributors and the Hugging Face team from the beginning. He has reported several issues and merged pull requests in the Transformers library since September 2019. 🦸🏼 Christopher Akiki - Contributed to sprints, workshops, Big Science, and cool demos! Check out some of his recent projects like his TF-coder and the income stats explorer. 🦹🏻‍♀️Ceyda Çınarel - Contributed to many successful Hugging Face and Spaces models in various sprints. Check out her ButterflyGAN Space or search for reaction GIFs with CLIP. 👸🏻Additionally, there are strategic areas where Hugging Face is looking for open-source contributions. These areas will be added and updated frequently on the Fellowship Doc with specific projects. Prospects should not hesitate to write in the #looking-for-collaborators channel in the Hugging Face Discord if they want to undertake a project in these areas, support or be considered as a Fellow. Additionally, refer to the Where and how can I contribute? question below.If you are currently a student, consider applying to the Student Ambassador Program. The application deadline is June 13, 2022.Hugging Face is actively working to build a culture that values ​​diversity, equity, and inclusion. 
Hugging Face intentionally creates a community where people feel respected and supported, regardless of who they are or where they come from. This is critical to building the future of open Machine Learning. The Fellowship will not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Frequently Asked Questions I am just starting to contribute. Can I be a fellow?Fellows are nominated based on their open-source and community contributions. If you want to participate in the Fellowship, the best way to start is to begin contributing! If you are a student, the Student Ambassador Program might be more suitable for you (the application deadline is June 13, 2022).Where and how can I contribute?It depends on your interests. Here are some ideas of areas where you can contribute, but you should work on things that get you excited!Share exciting models with the community through the Hub. These can be for Computer Vision, Reinforcement Learning, and any other ML domain!Create tutorials and projects using different open-source libraries—for example, Stable-Baselines 3, fastai, or Keras.Organize local sprints to promote open source Machine Learning in different languages or niches. For example, the Somos NLP Hackathon focused on Spanish speakers. The HugGAN sprint focused on generative models.Translate the Hugging Face Course, the Transformers documentation or the Educational Toolkit.Doc with specific projects where contributions would be valuable. The Hugging Face team will frequently update the doc with new projects.Please share in the #looking-for-contributors channel on the Hugging Face Discord if you want to work on a particular project.Will I be an employee of Hugging Face?No, the Fellowship does not mean you are an employee of Hugging Face. However, feel free to mention in any forum, including LinkedIn, that you are a Hugging Face Fellow. Hugging Face is growing and this could be a good path for a bigger relationship in the future 😎. Check the Hugging Face job board for updated opportunities. Will I receive benefits during the Fellowship?Yes, the benefits will depend on the particular needs and projects that each Fellow wants to undertake.Is there a deadline?No. Admission to the program is ongoing and contingent on the nomination of a current Fellow or member of the Hugging Face team. Please note that being nominated may not be enough to be admitted as a Fellow.
https://huggingface.co/blog/gradio-blocks
Gradio 3.0 is Out!
Abubakar Abid
May 16, 2022
Machine learning demos are an increasingly vital part of releasing a model. Demos allow anyone — not just ML engineers — to try out a model in the browser, give feedback on predictions, and build trust in the model if it performs well. More than 600,000 ML demos have been built with the Gradio library since its first version in 2019, and today, we are thrilled to announce Gradio 3.0: a ground-up redesign of the Gradio library 🥳
What's New in Gradio 3.0?
🔥 A complete redesign of the frontend, based on the feedback we're hearing from Gradio users:
We've switched to modern technologies (like Svelte) to build the Gradio frontend. We're seeing much smaller payloads and much faster page loads as a result! We've also embraced a much cleaner design that will allow Gradio demos to fit in visually in more settings (such as being embedded in blog posts).
We've revamped our existing components, like Dataframe, to be more user-friendly (try dragging-and-dropping a CSV file into a Dataframe) as well as added new components, such as the Gallery, to allow you to build the right UI for your model.
We've added a TabbedInterface class which allows you to group together related demos as multiple tabs in one web app.
Check out all the components you can use on our (redesigned) docs 🤗!
🔥 We've created a new low-level language called Gradio Blocks that lets you build complex custom web apps, right in Python:
Why did we create Blocks? Gradio demos are very easy to build, but what if you want more control over the layout of your demo, or more flexibility on how the data flows? For example, you might want to:
Change the layout of your demo instead of just having all of the inputs on the left and outputs on the right
Have multi-step interfaces, in which the output of one model becomes the input to the next model, or have more flexible data flows in general
Change a component's properties (for example, the choices in a Dropdown) or its visibility based on user input
The low-level Blocks API allows you to do all of this, right in Python. Here's an example of a Blocks demo that creates two simple demos and uses tabs to group them together:

import numpy as np
import gradio as gr

def flip_text(x):
    return x[::-1]

def flip_image(x):
    return np.fliplr(x)

with gr.Blocks() as demo:
    gr.Markdown("Flip text or image files using this demo.")
    with gr.Tabs():
        with gr.TabItem("Flip Text"):
            text_input = gr.Textbox()
            text_output = gr.Textbox()
            # this demo runs whenever the input textbox changes
            text_input.change(flip_text, inputs=text_input, outputs=text_output)
        with gr.TabItem("Flip Image"):
            with gr.Row():
                image_input = gr.Image()
                image_output = gr.Image()
            button = gr.Button("Flip")
            # this demo runs whenever the button is clicked
            button.click(flip_image, inputs=image_input, outputs=image_output)

demo.launch()

Once you run launch(), the demo will appear. For a step-by-step introduction to Blocks, check out the dedicated Blocks Guide.
The Gradio Blocks Party
We're very excited about Gradio Blocks -- and we'd love for you to try it out -- so we are organizing a competition, the Gradio Blocks Party (😉), to see who can build the best demos with Blocks. By building these demos, we can make state-of-the-art machine learning accessible, not just to engineers, but to anyone who can use an Internet browser! Even if you've never used Gradio before, this is the perfect time to start, because the Blocks Party is running until the end of May. 
We'll be giving out 🤗 merch and other prizes at the end of the Party for demos built using Blocks. Learn more about the Blocks Party here: https://huggingface.co/spaces/Gradio-Blocks/README
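To make the multi-step data flow idea mentioned above concrete, here is a minimal Blocks sketch, assuming only the Blocks, Textbox, and Button APIs shown in the post; the two functions are toy placeholders rather than real models:

import gradio as gr

def shout(text):
    # placeholder "model" for step 1: uppercase the input
    return text.upper()

def reverse(text):
    # placeholder "model" for step 2: reverse the output of step 1
    return text[::-1]

with gr.Blocks() as demo:
    inp = gr.Textbox(label="Input")
    step1_out = gr.Textbox(label="Step 1 output")
    step2_out = gr.Textbox(label="Step 2 output")
    step1_btn = gr.Button("Run step 1")
    step2_btn = gr.Button("Run step 2")
    # the output component of step 1 is reused as the input component of step 2
    step1_btn.click(shout, inputs=inp, outputs=step1_out)
    step2_btn.click(reverse, inputs=step1_out, outputs=step2_out)

demo.launch()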
https://huggingface.co/blog/ml-director-insights-2
Director of Machine Learning Insights [Part 2: SaaS Edition]
Britney Muller
May 13, 2022
If you or your team are interested in building ML solutions faster visit hf.co/support today!👋 Welcome to Part 2 of our Director of Machine Learning Insights [Series]. Check out Part 1 here.Directors of Machine Learning have a unique seat at the AI table spanning the perspective of various roles and responsibilities. Their rich knowledge of ML frameworks, engineering, architecture, real-world applications and problem-solving provides deep insights into the current state of ML. For example, one director will note how using new transformers speech technology decreased their team’s error rate by 30% and how simple thinking can help save a lot of computational power.Ever wonder what directors at Salesforce or ZoomInfo currently think about the state of Machine Learning? What their biggest challenges are? And what they're most excited about? Well, you're about to find out!In this second SaaS focused installment, you’ll hear from a deep learning for healthcare textbook author who also founded a non-profit for mentoring ML talent, a chess fanatic cybersecurity expert, an entrepreneur whose business was inspired by Barbie’s need to monitor brand reputation after a lead recall, and a seasoned patent and academic paper author who enjoys watching his 4 kids make the same mistakes as his ML models. 🚀 Let’s meet some top Machine Learning Directors in SaaS and hear what they have to say about Machine Learning:Omar RahmanBackground: Omar leads a team of Machine Learning and Data Engineers in leveraging ML for defensive security purposes as part of the Cybersecurity team. Previously, Omar has led data science and machine learning engineering teams at Adobe and SAP focusing on bringing intelligent capabilities to marketing cloud and procurement applications. Omar holds a Master’s degree in Electrical Engineering from Arizona State University. Fun Fact: Omar loves to play chess and volunteers his free time to guide and mentor graduate students in AI.Salesforce: World's #1 customer relationship management software.1. How has ML made a positive impact on SaaS?ML has benefited SaaS offerings in many ways. a. Improving automation within applications: For example, a service ticket router using NLP (Natural Language Processing) to understand the context of the service request and routing it to the appropriate team within the organization.b. Reduction in code complexity: Rules-based systems tend to get unwieldy as new rules are added, thereby increasing maintenance costs. For example, An ML-based language translation system is more accurate and robust with much fewer lines of code as compared to previous rules-based systems.c. Better forecasting results in cost savings. Being able to forecast more accurately helps in reducing backorders in the supply chain as well as cost savings due to a reduction in storage costs.2. What are the biggest ML challenges within SaaS?a. Productizing ML applications require a lot more than having a model. Being able to leverage the model for serving results, detecting and adapting to changes in statistics of data, etc. creates significant overhead in deploying and maintaining ML systems.b. In most large organizations, data is often siloed and not well maintained resulting in significant time spent in consolidating data, pre-processing, data cleaning activities, etc., thereby resulting in a significant amount of time and effort needed to create ML-based applications.3. 
What’s a common mistake you see people make trying to integrate ML into SaaS?Not focussing enough on the business context and the problem being solved, rather trying to use the latest and greatest algorithms and newly open-sourced libraries. A lot can be achieved by simple traditional ML techniques.4. What excites you most about the future of ML?Generalized artificial intelligence capabilities, if built and managed well, have the capability to transform humanity in more ways than one can imagine. My hope is that we will see great progress in the areas of healthcare and transportation. We already see the benefits of AI in radiology resulting in significant savings in manpower thereby enabling humans to focus on more complex tasks. Self-driving cars and trucks are already transforming the transportation sector.Cao (Danica) XiaoBackground: Cao (Danica) Xiao is the Senior Director and Head of Data Science and Machine Learning at Amplitude. Her team focuses on developing and deploying self-serving machine learning models and products based on multi-sourced user data to solve critical business challenges regarding digital production analytics and optimization. Besides, she is a passionate machine learning researcher with over 95+ papers published in leading CS venues. She is also a technology leader with extensive experience in machine learning roadmap creation, team building, and mentoring.Prior to Amplitude, Cao (Danica) was the Global Head of Machine Learning in the Analytics Center of Excellence of IQVIA. Before that, she was a research staff member at IBM Research and research lead at MIT-IBM Watson AI Lab. She got her Ph.D. degree in machine learning from the University of Washington, Seattle. Recently, she also co-authored a textbook on deep learning for healthcare and founded a non-profit organization for mentoring machine learning talents.Fun Fact: Cao is a cat-lover and is a mom to two cats: one Singapura girl and one British shorthair boy.Amplitude: A cloud-based product-analytics platform that helps customers build better products.1. How has ML made a positive impact on SaaS?ML plays a game-changing role in turning massive noisy machine-generated or user-generated data into answers to all kinds of business questions including personalization, prediction, recommendation, etc. It impacts a wide spectrum of industry verticals via SaaS.2. What are the biggest ML challenges within SaaS?Lack of data for ML model training that covers a broader range of industry use cases. While being a general solution for all industry verticals, still need to figure out how to handle the vertical-specific needs arising from business, or domain shift issue that affects ML model quality.3. What’s a common mistake you see people make trying to integrate ML into a SaaS product?Not giving users the flexibility to incorporate their business knowledge or other human factors that are critical to business success. For example, for a self-serve product recommendation, it would be great if users could control the diversity of recommended products.4. What excites you most about the future of ML?ML has seen tremendous success. It also evolves rapidly to address the current limitations (e.g., lack of data, domain shift, incorporation of domain knowledge).More ML technologies will be applied to solve business or customer needs. 
For example, interpretable ML for users to understand and trust the ML model outputs; counterfactual prediction for users to estimate the alternative outcome should they make a different business decision. Raphael Cohen Background: Raphael has a Ph.D. in the field of understanding health records and genetics, has authored 20 academic papers and has 8 patents. Raphael is also a leader in Data Science and Research with a background in NLP, Speech, healthcare, sales, customer journeys, and IT. Fun Fact: Raphael has 4 kids and enjoys seeing them learn and make the same mistakes as some of his ML models. ZoomInfo: Intelligent sales and marketing technology backed by the world's most comprehensive business database. 1. How has ML made a positive impact on SaaS? Machine Learning has facilitated the transcription of conversational data to help people unlock new insights and understandings. People can now easily view the things they talked about, summarized goals, takeaways, who spoke the most, who asked the best questions, what the next steps are, and more. This is incredibly useful for many interactions like email and video conferencing (which are more common now than ever). With Chorus.ai we transcribe conversations as they are being recorded in real-time. We use an algorithm called Wav2Vec2 to do this. 🤗 Hugging Face recently released their own Wav2Vec2 version created for training that we derived a lot of value from. This new generation of transformers speech technology is incredibly powerful; it has decreased our error rate by 30%. Once we transcribe a conversation we can look into the content - this is where NLP comes in and we rely heavily on Hugging Face Transformers to allow us to detect around 20 categories of topics inside recordings and emails; for example, are we talking about pricing, signing a contract, next steps, all of these topics are sent through email or discussed and it’s easy to now extract that info without having to go back through all of your conversations. This helps make people much better at their jobs. 2. What are the biggest ML challenges within SaaS? The biggest challenge is understanding when to make use of ML. What problems can we solve with ML and which shouldn’t we? A lot of times we have a breakthrough with an ML model but a computationally lighter heuristic model is better suited to solve the problem we have. This is where a strong AI strategy comes into play: understand how you want your final product to work and at what efficiency. We also have the question of how to get the ML models you’ve built into production with a low environmental/computational footprint. Everyone is struggling with this; how to keep models in production in an efficient way without burning too many resources. A great example of this was when we moved to the Wav2Vec2 framework, which required us to break down our conversational audio into 15sec segments that get fed into this huge model. During this, we discovered that we were feeding the model a lot of segments that were pure silence. This is common when someone doesn’t show up or one person is waiting for another to join a meeting. Just by adding another very light model to tell us when not to send the silent segments into this big complicated ML model, we are able to save a lot of computational power/energy. This is an example of where engineers can think of other easier ways to speed up and save on model production. There’s an opportunity for more engineers to be savvier and better optimize models without burning too many resources. 
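To illustrate the kind of lightweight silence filter Raphael describes, here is a minimal sketch of the general idea (an energy threshold applied to each 15-second segment); this is an assumption-based illustration, not Chorus.ai's actual implementation, and the threshold value is an arbitrary placeholder:

import numpy as np

def is_mostly_silence(segment: np.ndarray, threshold: float = 1e-3) -> bool:
    # root-mean-square energy of the raw waveform; very low energy means (near) silence
    rms = np.sqrt(np.mean(np.square(segment.astype(np.float32))))
    return rms < threshold

def transcribe_segments(segments, speech_model):
    # only send non-silent segments to the expensive speech model
    transcripts = []
    for segment in segments:
        transcripts.append("" if is_mostly_silence(segment) else speech_model(segment))
    return transcripts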
3. What’s a common mistake you see people make trying to integrate ML into SaaS? Is my solution the smartest solution? Is there a better way to break this down and solve it more efficiently? When we started identifying speakers we went directly with an ML method and this wasn’t as accurate as the video conference provider data. Since then we learned that the best way to do this is to start with the metadata of who speaks from the conference provider and then overlay that with a smart embedding model. We lost precious time during this learning curve. We wouldn’t have used this large ML solution if we had stopped to understand that there are other data sources we should invest in that would help us accelerate more efficiently. Think outside the box and don’t just take something someone built and think I have an idea of how to make this better. Where can we be smarter by understanding the problem better? 4. What excites you most about the future of ML? I think we are in the middle of another revolution. For us, seeing our error rates drop by 30% with our Wav2Vec2 model was amazing. We had been working for years only getting 1% drops each time and then within 3 months' time we saw such a huge improvement and we know that’s only the beginning. In academia, bigger and smarter things are happening. These pre-trained models are allowing us to do things we could never imagine before. This is very exciting! We are also seeing a lot of tech from NLP entering other domains like speech and vision and being able to power them. Another thing I’m really excited about is generative models! We recently worked with a company called Bria.ai and they use these amazing GANs to create images. So you take a stock photo and you can turn it into a different photo by saying “remove glasses”, “add glasses” or “add hair” and it does so perfectly. The idea is that we can use this to generate data. We can take images of people from meetings not smiling and we can make them smile in order to build a data set for smile detection. This will be transformative. You can take 1 image and turn it into 100 images. This will also apply to speech generation which could be a powerful application within the service industry. Any final thoughts? It’s challenging to put models into production. I believe data science teams need engineering embedded with them. Engineers should be part of the AI team. This will be an important structural pivot in the future. Martin Ostrovsky Background: Martin is passionate about AI, ML, and NLP and is responsible for guiding the strategy and success of all Repustate products by leading the cross-functional team responsible for developing and improving them. He sets the strategy, roadmap, and feature definition for Repustate’s Global Text Analytics API, Sentiment Analysis, Deep Search, and Named Entity Recognition solutions. He has a Bachelor's degree in Computer Science from York University and earned his Master of Business Administration from the Schulich School of Business. Fun Fact: The first application of ML I used was for Barbie toys. My professor at the Schulich School of Business mentioned that Barbie needed to monitor their brand reputation due to a recall of the toys over concerns of excessive lead in them. Hiring people to manually go through each social post and online article seemed just so inefficient and ineffective to me. So I proposed to create a machine learning algorithm that would monitor what people thought of them from across all social media and online channels. The algorithm worked seamlessly. 
And that’s how I decided to name my company, Repustate - the “state” of your “repu”tation. 🤖Repustate: A leading provider of text analytics services for enterprise companies. 1. Favorite ML business application?My favorite ML application is cybersecurity.Cybersecurity remains the most critical part for any company (government or non-government) with regard to data. Machine Learning helps identify cyber threats, fight cyber-crime, including cyberbullying, and allows for a faster response to security breaches. ML algorithms quickly analyze the most likely vulnerabilities and potential malware and spyware applications based on user data. They can spot distortion in endpoint entry patterns and identify it as a potential data breach.2. What is your biggest ML challenge?The biggest ML challenge is audio to text transcription in the Arabic Language. There are quite a few systems that can decipher Arabic but they lack accuracy. Arabic is the official language of 26 countries and has 247 million native speakers and 29 million non-native speakers. It is a complex language with a rich vocabulary and many dialects.The sentiment mining tool needs to read data directly in Arabic if you want accurate insights from Arabic text because otherwise nuances are lost in translations. Translating text to English or any other language can completely change the meaning of words in Arabic, including even the root word. That’s why the algorithm needs to be trained on Arabic datasets and use a dedicated Arabic part-of-speech tagger. Because of these challenges, most companies fail to provide accurate Arabic audio to text translation to date.3. What’s a common mistake you see people make trying to integrate ML?The most common mistake that companies make while trying to integrate ML is insufficient data in their training datasets. Most ML models cannot distinguish between good data and insufficient data. Therefore, training datasets are considered relevant and used as a precedent to determine the results in most cases. This challenge isn’t limited to small- or medium-sized businesses; large enterprises have the same challenge.No matter what the ML processes are, companies need to ensure that the training datasets are reliable and exhaustive for their desired outcome by incorporating a human element into the early stages of machine learning.However, companies can create the required foundation for successful machine learning projects with a thorough review of accurate, comprehensive, and constant training data. 4. Where do you see ML having the biggest impact in the next 5-10 years?In the next 5-10 years, ML will have the biggest impact on transforming the healthcare sector.Networked hospitals and connected care:With predictive care, command centers are all set to analyze clinical and location data to monitor supply and demand across healthcare networks in real-time. With ML, healthcare professionals will be able to spot high-risk patients more quickly and efficiently, thus removing bottlenecks in the system. You can check the spread of contractible diseases faster, take better measures to manage epidemics, identify at-risk patients more accurately, especially for genetic diseases, and more.Better staff and patient experiences:Predictive healthcare networks are expected to reduce wait times, improve staff workflows, and take on the ever-growing administrative burden. By learning from every patient, diagnosis, and procedure, ML is expected to create experiences that adapt to hospital staff as well as the patient. 
This improves health outcomes and reduces clinician shortages and burnout while enabling the system to be financially sustainable.🤗 Thank you for joining us in this second installment of ML Director Insights. Stay tuned for more insights from ML Directors in Finance, Healthcare and e-Commerce. Big thanks to Omar Rahman, Cao (Danica) Xiao, Raphael Cohen, and Martin Ostrovsky for their brilliant insights and participation in this piece. We look forward to watching each of your continued successes and will be cheering you on each step of the way. 🎉 If you or your team are interested in accelerating your ML roadmap with Hugging Face Experts please visit hf.co/support to learn more.
https://huggingface.co/blog/ambassadors
Student Ambassador Program’s call for applications is open!
Violette Lepercq
May 13, 2022
Student Ambassador Program’s call for applications is open!
https://huggingface.co/blog/optimum-inference
Accelerated Inference with Optimum and Transformers Pipelines
Philipp Schmid
May 10, 2022
Inference has landed in Optimum with support for Hugging Face Transformers pipelines, including text-generation using ONNX Runtime. The adoption of BERT and Transformers continues to grow. Transformer-based models are now not only achieving state-of-the-art performance in Natural Language Processing but also for Computer Vision, Speech, and Time-Series. 💬 🖼 🎤 ⏳ Companies are now moving from the experimentation and research phase to the production phase in order to use Transformer models for large-scale workloads. But by default BERT and its friends are relatively slow, big, and complex models compared to traditional Machine Learning algorithms. To solve this challenge, we created Optimum – an extension of Hugging Face Transformers to accelerate the training and inference of Transformer models like BERT. In this blog post, you'll learn:
1. What is Optimum? An ELI5
2. New Optimum inference and pipeline features
3. End-to-End tutorial on accelerating RoBERTa for Question-Answering including quantization and optimization
4. Current Limitations
5. Optimum Inference FAQ
6. What’s next?
Let's get started! 🚀
1. What is Optimum? An ELI5
Hugging Face Optimum is an open-source library and an extension of Hugging Face Transformers that provides a unified API of performance optimization tools to achieve maximum efficiency to train and run models on accelerated hardware, including toolkits for optimized performance on Graphcore IPU and Habana Gaudi. Optimum can be used for accelerated training, quantization, graph optimization, and now inference as well with support for transformers pipelines.
2. New Optimum inference and pipeline features
With the release of Optimum 1.2, we are adding support for inference and transformers pipelines. This allows Optimum users to leverage the same API they are used to from transformers with the power of accelerated runtimes, like ONNX Runtime.
Switching from Transformers to Optimum Inference
The Optimum Inference models are API compatible with Hugging Face Transformers models. This means you can just replace your AutoModelForXxx class with the corresponding ORTModelForXxx class in Optimum. For example, this is how you can use a question answering model in Optimum:

from transformers import AutoTokenizer, pipeline
-from transformers import AutoModelForQuestionAnswering
+from optimum.onnxruntime import ORTModelForQuestionAnswering

-model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2") # pytorch checkpoint
+model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2") # onnx checkpoint

tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
optimum_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

question = "What's my name?"
context = "My name is Philipp and I live in Nuremberg."
pred = optimum_qa(question, context)

In the first release, we added support for ONNX Runtime but there is more to come! These new ORTModelForXxx classes can now be used with the transformers pipelines. They are also fully integrated into the Hugging Face Hub to push and pull optimized checkpoints from the community. In addition to this, you can use the ORTQuantizer and ORTOptimizer to first quantize and optimize your model and then run inference on it. Check out the End-to-End tutorial on accelerating RoBERTa for question-answering including quantization and optimization for more details. 
3. End-to-End tutorial on accelerating RoBERTa for Question-Answering including quantization and optimization
In this End-to-End tutorial on accelerating RoBERTa for question-answering, you will learn how to:
Install Optimum for ONNX Runtime
Convert a Hugging Face Transformers model to ONNX for inference
Use the ORTOptimizer to optimize the model
Use the ORTQuantizer to apply dynamic quantization
Run accelerated inference using Transformers pipelines
Evaluate the performance and speed
Let’s get started 🚀 This tutorial was created and run on an m5.xlarge AWS EC2 instance.
3.1 Install Optimum for ONNX Runtime
Our first step is to install Optimum with the onnxruntime utilities.

pip install "optimum[onnxruntime]==1.2.0"

This will install all required packages for us including transformers, torch, and onnxruntime. If you are going to use a GPU you can install optimum with pip install optimum[onnxruntime-gpu].
3.2 Convert a Hugging Face Transformers model to ONNX for inference
Before we can start optimizing we need to convert our vanilla transformers model to the onnx format. To do this we will use the new ORTModelForQuestionAnswering class, calling the from_pretrained() method with the from_transformers attribute. The model we are using is deepset/roberta-base-squad2, a RoBERTa model fine-tuned on the SQuAD2 dataset achieving an F1 score of 82.91, with question-answering as the feature (task).

from pathlib import Path
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForQuestionAnswering

model_id = "deepset/roberta-base-squad2"
onnx_path = Path("onnx")
task = "question-answering"

# load vanilla transformers and convert to onnx
model = ORTModelForQuestionAnswering.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# save onnx checkpoint and tokenizer
model.save_pretrained(onnx_path)
tokenizer.save_pretrained(onnx_path)

# test the model using the transformers pipeline, with handle_impossible_answer for squad_v2
optimum_qa = pipeline(task, model=model, tokenizer=tokenizer, handle_impossible_answer=True)
prediction = optimum_qa(question="What's my name?", context="My name is Philipp and I live in Nuremberg.")
print(prediction)
# {'score': 0.9041663408279419, 'start': 11, 'end': 18, 'answer': 'Philipp'}

We successfully converted our vanilla transformers model to onnx and used it with the transformers pipelines to run a first prediction. Now let's optimize it. 🏎 If you want to learn more about exporting transformers models, check out the documentation: Export 🤗 Transformers Models.
3.3 Use the ORTOptimizer to optimize the model
After we saved our onnx checkpoint to onnx/ we can now use the ORTOptimizer to apply graph optimizations such as operator fusion and constant folding to accelerate latency and inference.

from optimum.onnxruntime import ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig

# create ORTOptimizer and define optimization configuration
optimizer = ORTOptimizer.from_pretrained(model_id, feature=task)
optimization_config = OptimizationConfig(optimization_level=99)  # enable all optimizations

# apply the optimization configuration to the model
optimizer.export(
    onnx_model_path=onnx_path / "model.onnx",
    onnx_optimized_model_output_path=onnx_path / "model-optimized.onnx",
    optimization_config=optimization_config,
)

To test performance we can use the ORTModelForQuestionAnswering class again and provide an additional file_name parameter to load our optimized model. 
(This also works for models available on the Hub.)

from optimum.onnxruntime import ORTModelForQuestionAnswering

# load optimized model
opt_model = ORTModelForQuestionAnswering.from_pretrained(onnx_path, file_name="model-optimized.onnx")

# test the optimized model using the transformers pipeline
opt_optimum_qa = pipeline(task, model=opt_model, tokenizer=tokenizer, handle_impossible_answer=True)
prediction = opt_optimum_qa(question="What's my name?", context="My name is Philipp and I live in Nuremberg.")
print(prediction)
# {'score': 0.9041663408279419, 'start': 11, 'end': 18, 'answer': 'Philipp'}

We will evaluate the performance changes in detail in step 3.6 Evaluate the performance and speed.
3.4 Use the ORTQuantizer to apply dynamic quantization
After we have optimized our model we can accelerate it even more by quantizing it using the ORTQuantizer. The ORTQuantizer can be used to apply dynamic quantization to decrease the size of the model and accelerate latency and inference. We use the avx512_vnni configuration since the instance is powered by an Intel Cascade Lake CPU supporting AVX512.

from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# create ORTQuantizer and define quantization configuration
quantizer = ORTQuantizer.from_pretrained(model_id, feature=task)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=True)

# apply the quantization configuration to the model
quantizer.export(
    onnx_model_path=onnx_path / "model-optimized.onnx",
    onnx_quantized_model_output_path=onnx_path / "model-quantized.onnx",
    quantization_config=qconfig,
)

We can now compare the model file sizes.

import os

# get model file size
size = os.path.getsize(onnx_path / "model.onnx")/(1024*1024)
print(f"Vanilla Onnx Model file size: {size:.2f} MB")
size = os.path.getsize(onnx_path / "model-quantized.onnx")/(1024*1024)
print(f"Quantized Onnx Model file size: {size:.2f} MB")
# Vanilla Onnx Model file size: 473.31 MB
# Quantized Onnx Model file size: 291.77 MB

We decreased the size of our model by almost 40%, from 473MB to 291MB. To run inference we can use the ORTModelForQuestionAnswering class again and provide an additional file_name parameter to load our quantized model. (This also works for models available on the Hub.)

# load quantized model
quantized_model = ORTModelForQuestionAnswering.from_pretrained(onnx_path, file_name="model-quantized.onnx")

# test the quantized model using the transformers pipeline
quantized_optimum_qa = pipeline(task, model=quantized_model, tokenizer=tokenizer, handle_impossible_answer=True)
prediction = quantized_optimum_qa(question="What's my name?", context="My name is Philipp and I live in Nuremberg.")
print(prediction)
# {'score': 0.9246969819068909, 'start': 11, 'end': 18, 'answer': 'Philipp'}

Nice! The model predicted the same answer.
3.5 Run accelerated inference using Transformers pipelines
Optimum has built-in support for transformers pipelines. This allows us to leverage the same API that we know from using PyTorch and TensorFlow models. We have already used this feature in steps 3.2, 3.3 & 3.4 to test our converted and optimized models. At the time of writing this, we are supporting ONNX Runtime with more to come in the future. 
An example of how to use the transformers pipelines can be found below.

from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained(onnx_path)
model = ORTModelForQuestionAnswering.from_pretrained(onnx_path)

optimum_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
prediction = optimum_qa(question="What's my name?", context="My name is Philipp and I live in Nuremberg.")
print(prediction)
# {'score': 0.9041663408279419, 'start': 11, 'end': 18, 'answer': 'Philipp'}

In addition to this we added a pipelines API to Optimum to guarantee more safety for your accelerated models, meaning that if you try to use optimum.pipelines with an unsupported model or task you will see an error. You can use optimum.pipelines as a replacement for transformers.pipelines.

from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForQuestionAnswering
from optimum.pipelines import pipeline

tokenizer = AutoTokenizer.from_pretrained(onnx_path)
model = ORTModelForQuestionAnswering.from_pretrained(onnx_path)

optimum_qa = pipeline("question-answering", model=model, tokenizer=tokenizer, handle_impossible_answer=True)
prediction = optimum_qa(question="What's my name?", context="My name is Philipp and I live in Nuremberg.")
print(prediction)
# {'score': 0.9041663408279419, 'start': 11, 'end': 18, 'answer': 'Philipp'}

3.6 Evaluate the performance and speed
During this End-to-End tutorial on accelerating RoBERTa for Question-Answering including quantization and optimization, we created 3 different models: a vanilla converted model, an optimized model, and a quantized model. As the last step of the tutorial, we want to take a detailed look at the performance and accuracy of our models. Applying optimization techniques like graph optimizations or quantization not only impacts performance (latency), it might also have an impact on the accuracy of the model. So accelerating your model comes with a trade-off. Let's evaluate our models. Our transformers model deepset/roberta-base-squad2 was fine-tuned on the SQuAD2 dataset. This will be the dataset we use to evaluate our models.

from datasets import load_metric, load_dataset

metric = load_metric("squad_v2")
dataset = load_dataset("squad_v2")["validation"]
print(f"length of dataset {len(dataset)}")
# length of dataset 11873

We can now leverage the map function of datasets to iterate over the validation set of SQuAD2 and run prediction for each data point. 
Therefore we write an evaluate helper method which uses our pipelines and applies some transformations to work with the squad_v2 metric. This can take quite a while (1.5h).

def evaluate(example):
    default = optimum_qa(question=example["question"], context=example["context"])
    optimized = opt_optimum_qa(question=example["question"], context=example["context"])
    quantized = quantized_optimum_qa(question=example["question"], context=example["context"])
    return {
        'reference': {'id': example['id'], 'answers': example['answers']},
        'default': {'id': example['id'], 'prediction_text': default['answer'], 'no_answer_probability': 0.},
        'optimized': {'id': example['id'], 'prediction_text': optimized['answer'], 'no_answer_probability': 0.},
        'quantized': {'id': example['id'], 'prediction_text': quantized['answer'], 'no_answer_probability': 0.},
    }

result = dataset.map(evaluate)
# COMMENT IN to run evaluation on a 2000-example subset of the dataset
# result = dataset.shuffle().select(range(2000)).map(evaluate)

Now let's compare the results.

default_acc = metric.compute(predictions=result["default"], references=result["reference"])
optimized = metric.compute(predictions=result["optimized"], references=result["reference"])
quantized = metric.compute(predictions=result["quantized"], references=result["reference"])

print(f"vanilla model: exact={default_acc['exact']}% f1={default_acc['f1']}%")
print(f"optimized model: exact={optimized['exact']}% f1={optimized['f1']}%")
print(f"quantized model: exact={quantized['exact']}% f1={quantized['f1']}%")
# vanilla model: exact=79.07858165585783% f1=82.14970024570314%
# optimized model: exact=79.07858165585783% f1=82.14970024570314%
# quantized model: exact=78.75010528088941% f1=81.82526107204629%

Our optimized & quantized model achieved an exact match of 78.75% and an F1 score of 81.83%, which is 99.61% of the original accuracy. Achieving 99% of the original model's accuracy is very good, especially since we used dynamic quantization. Okay, let's test the performance (latency) of our optimized and quantized model. But first, let's extend our context and question to a more meaningful sequence length of 128.

context = "Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value."
question = "As what is Philipp working?"

To keep it simple, we are going to use a Python loop and calculate the avg/mean latency for our vanilla model and for the optimized and quantized model.

from time import perf_counter
import numpy as np

def measure_latency(pipe):
    latencies = []
    # warm up
    for _ in range(10):
        _ = pipe(question=question, context=context)
    # timed run
    for _ in range(100):
        start_time = perf_counter()
        _ = pipe(question=question, context=context)
        latency = perf_counter() - start_time
        latencies.append(latency)
    # compute run statistics
    time_avg_ms = 1000 * np.mean(latencies)
    time_std_ms = 1000 * np.std(latencies)
    return f"Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f}"

print(f"Vanilla model {measure_latency(optimum_qa)}")
print(f"Optimized & Quantized model {measure_latency(quantized_optimum_qa)}")
# Vanilla model Average latency (ms) - 117.61 +\- 8.48
# Optimized & Quantized model Average latency (ms) - 64.94 +\- 3.65

We managed to accelerate our model latency from 117.61ms to 64.94ms, or roughly 2x, while keeping 99.61% of the accuracy. Something we should keep in mind is that we used a mid-performant CPU instance with 2 physical cores. By switching to a GPU or a more performant CPU instance, e.g. an Ice Lake-powered one, you can decrease the latency down to a few milliseconds.
4. Current Limitations
We just started supporting inference in https://github.com/huggingface/optimum so we would like to share the current limitations as well. All of these limitations are on the roadmap and will be resolved in the near future.
Remote models > 2GB: Currently, only models smaller than 2GB can be loaded from the Hugging Face Hub. We are working on adding support for models > 2GB / multi-file models.
Seq2Seq tasks/models: We don’t have support for seq2seq tasks, like summarization, and models like T5, mostly due to the limitation of the single model support. But we are actively working to solve it, to provide you with the same experience you are familiar with in transformers.
Past key values: Generation models like GPT-2 use something called past key values, which are precomputed key-value pairs of the attention blocks that can be used to speed up decoding. Currently the ORTModelForCausalLM is not using past key values.
No cache: Currently when loading an optimized model (*.onnx), it will not be cached locally.
5. Optimum Inference FAQ
Which tasks are supported? You can find a list of all supported tasks in the documentation. Currently supported pipeline tasks are feature-extraction, text-classification, token-classification, question-answering, zero-shot-classification, and text-generation.
Which models are supported? Any model that can be exported with transformers.onnx and has a supported task can be used; this includes among others BERT, ALBERT, GPT2, RoBERTa, XLM-RoBERTa, and DistilBERT.
Which runtimes are supported? Currently, ONNX Runtime is supported. We are working on adding more in the future. 
Let us know if you are interested in a specific runtime.How can I use Optimum with Transformers?You can find an example and instructions in our documentation.How can I use GPUs?To be able to use GPUs, you simply need to install optimum[onnxruntime-gpu], which will install the required GPU providers and use them by default.How can I use a quantized and optimized model with pipelines?You can load the optimized or quantized model using the new ORTModelForXXX classes with the from_pretrained method. You can learn more about it in our documentation.6. What’s next?What’s next for Optimum, you ask? A lot of things. We are focused on making Optimum the reference open-source toolkit to work with transformers for acceleration & optimization. To achieve this, we will solve the current limitations, improve the documentation, create more content and examples, and push the limits for accelerating and optimizing transformers.Beyond the current limitations, some important features on the Optimum roadmap are:Support for speech models (Wav2vec2) and speech tasks (automatic speech recognition)Support for vision models (ViT) and vision tasks (image classification)Improve performance by adding support for OrtValue and IOBindingEasier ways to evaluate accelerated modelsAdd support for other runtimes and providers like TensorRT and AWS NeuronThanks for reading! If you are as excited as I am about accelerating Transformers, making them efficient, and scaling them to billions of requests, you should apply: we are hiring. 🚀If you have any questions, feel free to contact me through GitHub or on the forum. You can also connect with me on Twitter or LinkedIn.
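As a complement to the pipelines answer in the FAQ above, here is a minimal, hedged sketch of loading a checkpoint with an ORTModelForXXX class and plugging it into a transformers pipeline; the model id is only an example, and the export flag has changed name across optimum releases (from_transformers in early versions, export later), so check the documentation for your installed version.

from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForQuestionAnswering

# Export a vanilla transformers checkpoint to ONNX on the fly (example model id).
model = ORTModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2", from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

# The ONNX-backed model drops into a regular transformers pipeline.
onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(onnx_qa(question="What does Optimum accelerate?", context="Optimum accelerates transformer inference with ONNX Runtime."))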
https://huggingface.co/blog/series-c
We Raised $100 Million for Open & Collaborative Machine Learning 🚀
Hugging Face
May 9, 2022
We Raised $100 Million for Open & Collaborative Machine Learning 🚀
https://huggingface.co/blog/fastai
Welcome fastai to the Hugging Face Hub
Omar Espejel
May 6, 2022
Making neural nets uncool again... and sharing themFew have done as much as the fast.ai ecosystem to make Deep Learning accessible. Our mission at Hugging Face is to democratize good Machine Learning. Let's make exclusivity in access to Machine Learning, including pre-trained models, a thing of the past and let's push this amazing field even further.fastai is an open-source Deep Learning library that leverages PyTorch and Python to provide high-level components to train fast and accurate neural networks with state-of-the-art outputs on text, vision, and tabular data. However, fast.ai, the company, is more than just a library; it has grown into a thriving ecosystem of open source contributors and people learning about neural networks. As some examples, check out their book and courses. Join the fast.ai Discord and forums. It is a guarantee that you will learn by being part of their community!Because of all this, and more (the writer of this post started his journey thanks to the fast.ai course), we are proud to announce that fastai practitioners can now share and upload models to Hugging Face Hub with a single line of Python.👉 In this post, we will introduce the integration between fastai and the Hub. Additionally, you can open this tutorial as a Colab notebook.We want to thank the fast.ai community, notably Jeremy Howard, Wayde Gilliam, and Zach Mueller for their feedback 🤗. This blog is heavily inspired by the Hugging Face Hub section in the fastai docs.Why share to the Hub?The Hub is a central platform where anyone can share and explore models, datasets, and ML demos. It has the most extensive collection of Open Source models, datasets, and demos.Sharing on the Hub amplifies the impact of your fastai models by making them available for others to download and explore. You can also use transfer learning with fastai models; load someone else's model as the basis for your task.Anyone can access all the fastai models in the Hub by filtering the hf.co/models webpage by the fastai library, as in the image below.In addition to free model hosting and exposure to the broader community, the Hub has built-in version control based on git (git-lfs, for large files) and model cards for discoverability and reproducibility. For more information on navigating the Hub, see this introduction.Joining Hugging Face and installationTo share models in the Hub, you will need to have a user. Create it on the Hugging Face website.The huggingface_hub library is a lightweight Python client with utility functions to interact with the Hugging Face Hub. To push fastai models to the hub, you need to have some libraries pre-installed (fastai>=2.4, fastcore>=1.3.27 and toml). You can install them automatically by specifying ["fastai"] when installing huggingface_hub, and your environment is good to go:pip install huggingface_hub["fastai"]Creating a fastai LearnerHere we train the first model in the fastbook to identify cats 🐱. We fully recommended reading the entire fastbook.# Training of 6 lines in chapter 1 of the fastbook.from fastai.vision.all import *path = untar_data(URLs.PETS)/'images'def is_cat(x): return x[0].isupper()dls = ImageDataLoaders.from_name_func(path, get_image_files(path), valid_pct=0.2, seed=42,label_func=is_cat, item_tfms=Resize(224))learn = vision_learner(dls, resnet34, metrics=error_rate)learn.fine_tune(1)Sharing a Learner to the HubA Learner is a fastai object that bundles a model, data loaders, and a loss function. 
We will use the words Learner and Model interchangeably throughout this post.First, log in to the Hugging Face Hub. You will need to create a write token in your Account Settings. Then there are three options to log in:Type huggingface-cli login in your terminal and enter your token.If in a python notebook, you can use notebook_login.from huggingface_hub import notebook_loginnotebook_login()Use the token argument of the push_to_hub_fastai function.You can input push_to_hub_fastai with the Learner you want to upload and the repository id for the Hub in the format of "namespace/repo_name". The namespace can be an individual account or an organization you have write access to (for example, 'fastai/stanza-de'). For more details, refer to the Hub Client documentation.from huggingface_hub import push_to_hub_fastai# repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME"repo_id = "espejelomar/identify-my-cat"push_to_hub_fastai(learner=learn, repo_id=repo_id)The Learner is now in the Hub in the repo named espejelomar/identify-my-cat. An automatic model card is created with some links and next steps. When uploading a fastai Learner (or any other model) to the Hub, it is helpful to edit its model card (image below) so that others better understand your work (refer to the Hugging Face documentation).if you want to learn more about push_to_hub_fastai go to the Hub Client Documentation. There are some cool arguments you might be interested in 👀. Remember, your model is a Git repository with all the advantages that this entails: version control, commits, branches...Loading a Learner from the Hugging Face HubLoading a model from the Hub is even simpler. We will load our Learner, "espejelomar/identify-my-cat", and test it with a cat image (🦮?). This code is adapted fromthe first chapter of the fastbook.First, upload an image of a cat (or possibly a dog?). The Colab notebook with this tutorial uses ipywidgets to interactively upload a cat image (or not?). Here we will use this cute cat 🐅:Now let's load the Learner we just shared in the Hub and test it.from huggingface_hub import from_pretrained_fastai# repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME"repo_id = "espejelomar/identify-my-cat"learner = from_pretrained_fastai(repo_id)It works 👇!_,_,probs = learner.predict(img)print(f"Probability it's a cat: {100*probs[1].item():.2f}%")Probability it's a cat: 100.00%The Hub Client documentation includes addtional details on from_pretrained_fastai.Blurr to mix fastai and Hugging Face Transformers (and share them)![Blurr is] a library designed for fastai developers who want to train and deploy Hugging Face transformers - Blurr Docs.We will:Train a blurr Learner with the high-level Blurr API. 
It will load the distilbert-base-uncased model from the Hugging Face Hub and prepare a sequence classification model.Share it to the Hub with the namespace fastai/blurr_IMDB_distilbert_classification using push_to_hub_fastai.Load it with from_pretrained_fastai and try it with learner_blurr.predict().Collaboration and open-source are fantastic!First, install blurr and train the Learner.git clone https://github.com/ohmeow/blurr.gitcd blurrpip install -e ".[dev]"import torchimport transformersfrom fastai.text.all import *from blurr.text.data.all import *from blurr.text.modeling.all import *path = untar_data(URLs.IMDB_SAMPLE)model_path = Path("models")imdb_df = pd.read_csv(path / "texts.csv")learn_blurr = BlearnerForSequenceClassification.from_data(imdb_df, "distilbert-base-uncased", dl_kwargs={"bs": 4})learn_blurr.fit_one_cycle(1, lr_max=1e-3)Use push_to_hub_fastai to share with the Hub.from huggingface_hub import push_to_hub_fastai# repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME"repo_id = "fastai/blurr_IMDB_distilbert_classification"push_to_hub_fastai(learn_blurr, repo_id)Use from_pretrained_fastai to load a blurr model from the Hub.from huggingface_hub import from_pretrained_fastai# repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME"repo_id = "fastai/blurr_IMDB_distilbert_classification"learner_blurr = from_pretrained_fastai(repo_id)Try it with a couple sentences and review their sentiment (negative or positive) with learner_blurr.predict().sentences = ["This integration is amazing!","I hate this was not available before."]probs = learner_blurr.predict(sentences)print(f"Probability that sentence '{sentences[0]}' is negative is: {100*probs[0]['probs'][0]:.2f}%")print(f"Probability that sentence '{sentences[1]}' is negative is: {100*probs[1]['probs'][0]:.2f}%")Again, it works!Probability that sentence 'This integration is amazing!' is negative is: 29.46%Probability that sentence 'I hate this was not available before.' is negative is: 70.04%What's next?Take the fast.ai course (a new version is coming soon), follow Jeremy Howard and fast.ai on Twitter for updates, and start sharing your fastai models on the Hub 🤗. Or load one of the models that are already in the Hub.📧 Feel free to contact us via the Hugging Face Discord and share if you have an idea for a project. We would love to hear your feedback 💖.Would you like to integrate your library to the Hub?This integration is made possible by the huggingface_hub library. If you want to add your library to the Hub, we have a guide for you! Or simply tag someone from the Hugging Face team.A shout out to the Hugging Face team for all the work on this integration, in particular @osanseviero 🦙.Thank you fastlearners and hugging learners 🤗.
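As a small programmatic complement to the Hub-filtering tip mentioned above, here is a hedged sketch that lists a few fastai models from the Hub with huggingface_hub; attribute and argument names may differ slightly between library versions.

from huggingface_hub import HfApi

api = HfApi()
# Fetch models tagged with the fastai library and print the first few ids.
fastai_models = list(api.list_models(filter="fastai"))
for model in fastai_models[:5]:
    print(model.modelId)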
https://huggingface.co/blog/deep-rl-intro
An Introduction to Deep Reinforcement Learning
Thomas Simonini, Omar Sanseviero
May 4, 2022
Chapter 1 of the Deep Reinforcement Learning Class with Hugging Face 🤗⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introductionThis article is part of the Deep Reinforcement Learning Class, a free course from beginner to expert. Check the syllabus here.Welcome to the most fascinating topic in Artificial Intelligence: Deep Reinforcement Learning.Deep RL is a type of Machine Learning where an agent learns how to behave in an environment by performing actions and seeing the results.Since 2013 and the Deep Q-Learning paper, we’ve seen a lot of breakthroughs. From OpenAI Five, which beat some of the best Dota2 players in the world, to the Dexterity project, we live in an exciting moment in Deep RL research.OpenAI Five, an AI that beat some of the best Dota2 players in the worldMoreover, since 2018, you now have access to so many amazing environments and libraries to build your agents.That’s why this is the best moment to start learning, and with this course you’re in the right place.Yes, because this article is the first unit of the Deep Reinforcement Learning Class, a free class from beginner to expert where you’ll learn the theory and practice using famous Deep RL libraries such as Stable Baselines3, RL Baselines3 Zoo and RLlib.In this free course, you will:📖 Study Deep Reinforcement Learning in theory and practice.🧑‍💻 Learn to use famous Deep RL libraries such as Stable Baselines3, RL Baselines3 Zoo, and RLlib.🤖 Train agents in unique environments such as SnowballFight, Huggy the Doggo 🐶, and classical ones such as Space Invaders and PyBullet.💾 Publish your trained agents to the Hub in one line of code, and also download powerful agents from the community.🏆 Participate in challenges where you will evaluate your agents against other teams.🖌️🎨 Learn to share your environments made with Unity and Godot.So in this first unit, you’ll learn the foundations of Deep Reinforcement Learning. And then, you'll train your first lander agent to land correctly on the Moon 🌕 and upload it to the Hugging Face Hub, a free, open platform where people can share ML models, datasets and demos.It’s essential to master these elements before diving into implementing Deep Reinforcement Learning agents. The goal of this chapter is to give you solid foundations.If you prefer, you can watch the 📹 video version of this chapter:So let’s get started! 
🚀What is Reinforcement Learning?The big pictureA formal definitionThe Reinforcement Learning FrameworkThe RL ProcessThe reward hypothesis: the central idea of Reinforcement LearningMarkov PropertyObservations/States SpaceAction SpaceRewards and the discountingType of tasksExploration/ Exploitation tradeoffThe two main approaches for solving RL problemsThe Policy π: the agent’s brainPolicy-Based MethodsValue-based methodsThe “Deep” in Reinforcement LearningWhat is Reinforcement Learning?To understand Reinforcement Learning, let’s start with the big picture.The big pictureThe idea behind Reinforcement Learning is that an agent (an AI) will learn from the environment by interacting with it (through trial and error) and receiving rewards (negative or positive) as feedback for performing actions.Learning from interaction with the environment comes from our natural experiences.For instance, imagine putting your little brother in front of a video game he has never played, a controller in his hands, and leaving him alone.Your brother will interact with the environment (the video game) by pressing the right button (action). He gets a coin: that’s a +1 reward. It’s positive, so he just understood that in this game he must get the coins.But then he presses right again and touches an enemy: he just died, a -1 reward.By interacting with his environment through trial and error, your little brother understood that he needed to get coins in this environment but avoid the enemies.Without any supervision, the child will get better and better at playing the game.That’s how humans and animals learn, through interaction. Reinforcement Learning is just a computational approach of learning from action.A formal definitionIf we now take a formal definition:Reinforcement learning is a framework for solving control tasks (also called decision problems) by building agents that learn from the environment by interacting with it through trial and error and receiving rewards (positive or negative) as unique feedback.⇒ But how does Reinforcement Learning work?The Reinforcement Learning FrameworkThe RL ProcessThe RL Process: a loop of state, action, reward and next stateSource: Reinforcement Learning: An Introduction, Richard Sutton and Andrew G. BartoTo understand the RL process, let’s imagine an agent learning to play a platform game:Our Agent receives state S0 from the Environment — we receive the first frame of our game (Environment).Based on that state S0, the Agent takes action A0 — our Agent will move to the right.The Environment goes to a new state S1 — new frame.The Environment gives some reward R1 to the Agent — we’re not dead (Positive Reward +1).This RL loop outputs a sequence of state, action, reward and next state.The agent's goal is to maximize its cumulative reward, called the expected return.The reward hypothesis: the central idea of Reinforcement Learning⇒ Why is the goal of the agent to maximize the expected return?Because RL is based on the reward hypothesis, which is that all goals can be described as the maximization of the expected return (expected cumulative reward).That’s why in Reinforcement Learning, to have the best behavior, we need to maximize the expected cumulative reward.Markov PropertyIn papers, you’ll see that the RL process is called the Markov Decision Process (MDP).We’ll talk again about the Markov Property in the following units. 
But if you need to remember one thing about it today, it is that the Markov Property implies that our agent needs only the current state to decide what action to take, and not the history of all the states and actions it took before.Observations/States SpaceObservations/States are the information our agent gets from the environment. In the case of a video game, it can be a frame (a screenshot). In the case of a trading agent, it can be the value of a certain stock, etc.There is a differentiation to make between observation and state:State s: a complete description of the state of the world (there is no hidden information), in a fully observed environment.With a chess game, we are in a fully observed environment, since we have access to the whole chess board information.Observation o: a partial description of the state, in a partially observed environment.In Super Mario Bros, we are in a partially observed environment: we receive an observation since we only see the part of the level close to the player.In reality, we use the term state in this course, but we will make the distinction in implementations.To recap: Action Space The Action space is the set of all possible actions in an environment.The actions can come from a discrete or continuous space:Discrete space: the number of possible actions is finite.In Super Mario Bros, we have a finite set of actions since we have only 4 directions and jump.Continuous space: the number of possible actions is infinite.A self-driving car agent has an infinite number of possible actions since it can turn left 20°, 21.1°, 21.2°, honk, turn right 20°…To recap:Taking this information into consideration is crucial because it will have importance when choosing the RL algorithm in the future.Rewards and the discountingThe reward is fundamental in RL because it’s the only feedback for the agent. Thanks to it, our agent knows if the action taken was good or not.The cumulative reward at each time step t can be written as: the cumulative reward equals the sum of all rewards of the sequence.Which is equivalent to: cumulative reward = r_{t+1} + r_{t+2} + ..., i.e. the term r_{t+k+1} with k=0 gives r_{t+1}, k=1 gives r_{t+2}, and so on.However, in reality, we can’t just add them like that. The rewards that come sooner (at the beginning of the game) are more likely to happen since they are more predictable than the long-term future reward.Let’s say your agent is this tiny mouse that can move one tile each time step, and your opponent is the cat (that can move too). Your goal is to eat the maximum amount of cheese before being eaten by the cat.As we can see in the diagram, it’s more probable to eat the cheese near us than the cheese close to the cat (the closer we are to the cat, the more dangerous it is).Consequently, the reward near the cat, even if it is bigger (more cheese), will be more discounted since we’re not really sure we’ll be able to eat it.To discount the rewards, we proceed like this:We define a discount rate called gamma. It must be between 0 and 1. 
Most of the time between 0.99 and 0.95.The larger the gamma, the smaller the discount. This means our agent cares more about the long-term reward.On the other hand, the smaller the gamma, the bigger the discount. This means our agent cares more about the short term reward (the nearest cheese).2. Then, each reward will be discounted by gamma to the exponent of the time step. As the time step increases, the cat gets closer to us, so the future reward is less and less likely to happen.Our discounted cumulative expected rewards is: Type of tasks A task is an instance of a Reinforcement Learning problem. We can have two types of tasks: episodic and continuing. Episodic task In this case, we have a starting point and an ending point (a terminal state). This creates an episode: a list of States, Actions, Rewards, and new States.For instance, think about Super Mario Bros: an episode begin at the launch of a new Mario Level and ending when you’re killed or you reached the end of the level.Beginning of a new episode. Continuing tasks These are tasks that continue forever (no terminal state). In this case, the agent must learn how to choose the best actions and simultaneously interact with the environment.For instance, an agent that does automated stock trading. For this task, there is no starting point and terminal state. The agent keeps running until we decide to stop them.Exploration/ Exploitation tradeoffFinally, before looking at the different methods to solve Reinforcement Learning problems, we must cover one more very important topic: the exploration/exploitation trade-off.Exploration is exploring the environment by trying random actions in order to find more information about the environment.Exploitation is exploiting known information to maximize the reward.Remember, the goal of our RL agent is to maximize the expected cumulative reward. However, we can fall into a common trap.Let’s take an example:In this game, our mouse can have an infinite amount of small cheese (+1 each). But at the top of the maze, there is a gigantic sum of cheese (+1000).However, if we only focus on exploitation, our agent will never reach the gigantic sum of cheese. Instead, it will only exploit the nearest source of rewards, even if this source is small (exploitation).But if our agent does a little bit of exploration, it can discover the big reward (the pile of big cheese).This is what we call the exploration/exploitation trade-off. We need to balance how much we explore the environment and how much we exploit what we know about the environment.Therefore, we must define a rule that helps to handle this trade-off. We’ll see in future chapters different ways to handle it.If it’s still confusing, think of a real problem: the choice of a restaurant:Source: Berkley AI CourseExploitation: You go every day to the same one that you know is good and take the risk to miss another better restaurant.Exploration: Try restaurants you never went to before, with the risk of having a bad experience but the probable opportunity of a fantastic experience.To recap:The two main approaches for solving RL problems⇒ Now that we learned the RL framework, how do we solve the RL problem?In other terms, how to build an RL agent that can select the actions that maximize its expected cumulative reward?The Policy π: the agent’s brainThe Policy π is the brain of our Agent, it’s the function that tell us what action to take given the state we are. 
So it defines the agent’s behavior at a given time.Think of the policy as the brain of our agent, the function that tells us the action to take given a state.This Policy is the function we want to learn. Our goal is to find the optimal policy π*, the policy that maximizes expected return when the agent acts according to it. We find this π* through training.There are two approaches to train our agent to find this optimal policy π*:Directly, by teaching the agent to learn which action to take given the state it is in: Policy-Based Methods.Indirectly, by teaching the agent to learn which state is more valuable and then take the action that leads to the more valuable states: Value-Based Methods.Policy-Based MethodsIn Policy-Based Methods, we learn a policy function directly.This function will map each state to the best corresponding action at that state, or to a probability distribution over the set of possible actions at that state.As we can see here, the policy (deterministic) directly indicates the action to take for each step.We have two types of policy:Deterministic: a policy at a given state will always return the same action.action = policy(state)Stochastic: outputs a probability distribution over actions.policy(actions | state) = probability distribution over the set of actions given the current stateGiven an initial state, our stochastic policy will output probability distributions over the possible actions at that state.If we recap:Value-based methodsIn Value-based methods, instead of training a policy function, we train a value function that maps a state to the expected value of being at that state.The value of a state is the expected discounted return the agent can get if it starts in that state and then acts according to our policy.“Act according to our policy” just means that our policy is “going to the state with the highest value”.Here we see that our value function defines a value for each possible state.Thanks to our value function, at each step our policy will select the state with the biggest value defined by the value function: -7, then -6, then -5 (and so on) to attain the goal.If we recap:The “Deep” in Reinforcement Learning⇒ What we've talked about so far is Reinforcement Learning. But where does the "Deep" come into play?Deep Reinforcement Learning introduces deep neural networks to solve Reinforcement Learning problems — hence the name “deep”.For instance, in the next article, we’ll work on Q-Learning (classic Reinforcement Learning) and then Deep Q-Learning; both are value-based RL algorithms.You’ll see the difference is that in the first approach, we use a traditional algorithm to create a Q table that helps us find what action to take for each state.In the second approach, we will use a Neural Network (to approximate the Q value).Schema inspired by the Q-learning notebook by UdacityIf you are not familiar with Deep Learning, you should definitely watch the fastai Practical Deep Learning for Coders course (free).That was a lot of information; if we summarize:Reinforcement Learning is a computational approach of learning from action. 
We build an agent that learns from the environment by interacting with it through trial and error and receiving rewards (negative or positive) as feedback.The goal of any RL agent is to maximize its expected cumulative reward (also called expected return) because RL is based on the reward hypothesis, which is that all goals can be described as the maximization of the expected cumulative reward.The RL process is a loop that outputs a sequence of state, action, reward and next state.To calculate the expected cumulative reward (expected return), we discount the rewards: the rewards that come sooner (at the beginning of the game) are more probable to happen since they are more predictable than the long term future reward.To solve an RL problem, you want to find an optimal policy, the policy is the “brain” of your AI that will tell us what action to take given a state. The optimal one is the one who gives you the actions that max the expected return.There are two ways to find your optimal policy:By training your policy directly: policy-based methods.By training a value function that tells us the expected return the agent will get at each state and use this function to define our policy: value-based methods.Finally, we speak about Deep RL because we introduces deep neural networks to estimate the action to take (policy-based) or to estimate the value of a state (value-based) hence the name “deep.”Now that you've studied the bases of Reinforcement Learning, you’re ready to train your first lander agent to land correctly on the Moon 🌕 and share it with the community through the Hub 🔥Start the tutorial here 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit1/unit1.ipynbAnd since the best way to learn and avoid the illusion of competence is to test yourself. We wrote a quiz to help you find where you need to reinforce your study. Check your knowledge here 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit1/quiz.mdCongrats on finishing this chapter! That was the biggest one, and there was a lot of information. And congrats on finishing the tutorial. You’ve just trained your first Deep RL agent and shared it on the Hub 🥳.That’s normal if you still feel confused with all these elements. This was the same for me and for all people who studied RL.Take time to really grasp the material before continuing. It’s important to master these elements and having a solid foundations before entering the fun part.We published additional readings in the syllabus if you want to go deeper 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit1/README.mdNaturally, during the course, we’re going to use and explain these terms again, but it’s better to understand them before diving into the next chapters.In the next chapter, we’re going to learn about Q-Learning and dive deeper into the value-based methods.And don't forget to share with your friends who want to learn 🤗 !Finally, we want to improve and update the course iteratively with your feedback. If you have some, please fill this form 👉 https://forms.gle/3HgA7bEHwAmmLfwh9 Keep learning, stay awesome,
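As a tiny, purely illustrative companion to the discounting discussion in this unit (the reward values below are made up), this snippet shows how a discounted return is computed with a discount rate gamma:

# Hypothetical rewards collected at successive time steps.
rewards = [1.0, 0.0, 2.0, 5.0]
gamma = 0.95  # discount rate, between 0 and 1

# Each reward is weighted by gamma raised to its time step index.
discounted_return = sum(reward * gamma**t for t, reward in enumerate(rewards))
print(f"Discounted return: {discounted_return:.3f}")  # approximately 7.092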
https://huggingface.co/blog/pytorch-fsdp
Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel
Sourab Mangrulkar, Sylvain Gugger
May 2, 2022
In this post we will look at how we can leverage Accelerate Library for training large models which enables users to leverage the latest features of PyTorch FullyShardedDataParallel (FSDP).Motivation 🤗With the ever increasing scale, size and parameters of the Machine Learning (ML) models, ML practitioners are finding it difficult to train or even load such large models on their hardware. On one hand, it has been found that large models learn quickly (data and compute efficient) and are significantly more performant when compared to smaller models [1]; on the other hand, it becomes prohibitive to train such models on most of the available hardware. Distributed training is the key to enable training such large ML models. There have been major recent advances in the field of Distributed Training at Scale. Few the most notable advances are given below:Data Parallelism using ZeRO - Zero Redundancy Optimizer [2]Stage 1: Shards optimizer states across data parallel workers/GPUsStage 2: Shards optimizer states + gradients across data parallel workers/GPUsStage 3: Shards optimizer states + gradients + model parameters across data parallel workers/GPUsCPU Offload: Offloads the gradients + optimizer states to CPU building on top of ZERO Stage 2 [3]Tensor Parallelism [4]: Form of model parallelism wherein sharding parameters of individual layers with huge number of parameters across accelerators/GPUs is done in a clever manner to achieve parallel computation while avoiding expensive communication synchronization overheads. Pipeline Parallelism [5]: Form of model parallelism wherein different layers of the model are put across different accelerators/GPUs and pipelining is employed to keep all the accelerators running simultaneously. Here, for instance, the second accelerator/GPU computes on the first micro-batch while the first accelerator/GPU computes on the second micro-batch. 3D parallelism [3]: Employs Data Parallelism using ZERO + Tensor Parallelism + Pipeline Parallelism to train humongous models in the order of 100s of Billions of parameters. For instance, BigScience 176B parameters Language Model employ this [6].In this post we will look at Data Parallelism using ZeRO and more specifically the latest PyTorch feature FullyShardedDataParallel (FSDP). DeepSpeed and FairScale have implemented the core ideas of the ZERO paper. These have already been integrated in transformers Trainer and accompanied by great blog Fit More and Train Faster With ZeRO via DeepSpeed and FairScale [10]. PyTorch recently upstreamed the Fairscale FSDP into PyTorch Distributed with additional optimizations.Accelerate 🚀: Leverage PyTorch FSDP without any code changesWe will look at the task of Causal Language Modelling using GPT-2 Large (762M) and XL (1.5B) model variants.Below is the code for pre-training GPT-2 model. It is similar to the official causal language modeling example here with the addition of 2 arguments n_train (2000) and n_val (500) to prevent preprocessing/training on entire data in order to perform quick proof of concept benchmarks.run_clm_no_trainer.pySample FSDP config after running the command accelerate config:compute_environment: LOCAL_MACHINEdeepspeed_config: {}distributed_type: FSDPfsdp_config:min_num_params: 2000offload_params: falsesharding_strategy: 1machine_rank: 0main_process_ip: nullmain_process_port: nullmain_training_function: mainmixed_precision: 'no'num_machines: 1num_processes: 2use_cpu: falseMulti-GPU FSDPHere, we experiment on the Single-Node Multi-GPU setting. 
We compare the performance of Distributed Data Parallel (DDP) and FSDP in various configurations. First, GPT-2 Large(762M) model is used wherein DDP works with certain batch sizes without throwing Out Of Memory (OOM) errors. Next, GPT-2 XL (1.5B) model is used wherein DDP fails with OOM error even on batch size of 1. We observe that FSDP enables larger batch sizes for GPT-2 Large model and it enables training the GPT-2 XL model with decent batch size unlike DDP.Hardware setup: 2X24GB NVIDIA Titan RTX GPUs. Command for training GPT-2 Large Model (762M parameters):export BS=#`try with different batch sizes till you don't get OOM error,#i.e., start with larger batch size and go on decreasing till it fits on GPU`time accelerate launch run_clm_no_trainer.py \--model_name_or_path gpt2-large \--dataset_name wikitext \--dataset_config_name wikitext-2-raw-v1 \--per_device_train_batch_size $BS --per_device_eval_batch_size $BS --num_train_epochs 1 --block_size 12Sample FSDP Run:MethodBatch Size Max ($BS)Approx Train Time (minutes)NotesDDP (Distributed Data Parallel)715DDP + FP1678FSDP with SHARD_GRAD_OP1111FSDP with min_num_params = 1M + FULL_SHARD1512FSDP with min_num_params = 2K + FULL_SHARD1513FSDP with min_num_params = 1M + FULL_SHARD + Offload to CPU2023FSDP with min_num_params = 2K + FULL_SHARD + Offload to CPU2224Table 1: Benchmarking FSDP on GPT-2 Large (762M) modelWith respect to DDP, from Table 1 we can observe that FSDP enables larger batch sizes, up to 2X-3X without and with CPU offload setting, respectively. In terms of train time, DDP with mixed precision is the fastest followed by FSDP using ZERO Stage 2 and Stage 3, respectively. As the task of causal language modelling always has fixed context sequence length (--block_size), the train time speedup with FSDP wasn’t that great. For applications with dynamic batching, FSDP which enables larger batch sizes will likely have considerable speed up in terms of train time. FSDP mixed precision support currently has few issues with transformer. Once this is supported, the training time speed up will further improve considerably.CPU Offloading to enable training humongous models that won’t fit the GPU memoryCommand for training GPT-2 XL Model (1.5B parameters):export BS=#`try with different batch sizes till you don't get OOM error,#i.e., start with larger batch size and go on decreasing till it fits on GPU`time accelerate launch run_clm_no_trainer.py \--model_name_or_path gpt2-xl \--dataset_name wikitext \--dataset_config_name wikitext-2-raw-v1 \--per_device_train_batch_size $BS --per_device_eval_batch_size $BS --num_train_epochs 1 --block_size 12MethodBatch Size Max ($BS)Num GPUsApprox Train Time (Hours)NotesDDP11NAOOM Error RuntimeError: CUDA out of memory. Tried to allocate 40.00 MiB (GPU 0; 23.65 GiB total capacity; 22.27 GiB already allocated; 20.31 MiB free; 22.76 GiB reserved in total by PyTorch)DDP12NAOOM Error RuntimeError: CUDA out of memory. Tried to allocate 40.00 MiB (GPU 0; 23.65 GiB total capacity; 22.27 GiB already allocated; 20.31 MiB free; 22.76 GiB reserved in total by PyTorch)DDP + FP1611NAOOM Error RuntimeError: CUDA out of memory. 
Tried to allocate 40.00 MiB (GPU 0; 23.65 GiB total capacity; 22.27 GiB already allocated; 20.31 MiB free; 22.76 GiB reserved in total by PyTorch)FSDP with min_num_params = 2K520.6FSDP with min_num_params = 2K + Offload to CPU1013FSDP with min_num_params = 2K + Offload to CPU1421.16Table 2: Benchmarking FSDP on GPT-2 XL (1.5B) modelFrom Table 2, we can observe that DDP (w and w/o fp16) isn’t even able to run with batch size of 1 and results in CUDA OOM error. FSDP with Zero-Stage 3 is able to be run on 2 GPUs with batch size of 5 (effective batch size =10 (5 X 2)). FSDP with CPU offload can further increase the max batch size to 14 per GPU when using 2 GPUs. FSDP with CPU offload enables training GPT-2 1.5B model on a single GPU with a batch size of 10. This enables ML practitioners with minimal compute resources to train such large models, thereby democratizing large model training.Capabilities and limitations of the FSDP IntegrationLet’s dive into the current support that Accelerate provides for FSDP integration and the known limitations.Required PyTorch version for FSDP support: PyTorch Nightly (or 1.12.0 if you read this after it has been released) as the model saving with FSDP activated is only available with recent fixes.Configuration through CLI:Sharding Strategy: [1] FULL_SHARD, [2] SHARD_GRAD_OPMin Num Params: FSDP's minimum number of parameters for Default Auto Wrapping.Offload Params: Decides Whether to offload parameters and gradients to CPU.For more control, users can leverage the FullyShardedDataParallelPlugin wherein they can specify auto_wrap_policy, backward_prefetch and ignored_modules.After creating an instance of this class, users can pass it when creating the Accelerator object.For more information on these options, please refer to the PyTorch FullyShardedDataParallel code.Next, we will see the importance of the min_num_params config. Below is an excerpt from [8] detailing the importance of FSDP Auto Wrap Policy.(Source: link)When using the default_auto_wrap_policy, a layer is wrapped in FSDP module if the number of parameters in that layer is more than the min_num_params . The code for finetuning BERT-Large (330M) model on the GLUE MRPC task is the official complete NLP example outlining how to properly use FSDP feature with the addition of utilities for tracking peak memory usage.fsdp_with_peak_mem_tracking.pyWe leverage the tracking functionality support in Accelerate to log the train and evaluation peak memory usage along with evaluation metrics. Below is the snapshot of the plots from wandb run. We can observe that the DDP takes twice as much memory as FSDP with auto wrap. FSDP without auto wrap takes more memory than FSDP with auto wrap but considerably less than that of DDP. FSDP with auto wrap with min_num_params=2k takes marginally less memory when compared to setting with min_num_params=1M. This highlights the importance of the FSDP Auto Wrap Policy and users should play around with the min_num_params to find the setting which considerably saves memory and isn’t resulting in lot of communication overhead. PyTorch team is working on auto tuning tool for this config as mentioned in [8].Few caveats to be aware ofPyTorch FSDP auto wraps sub-modules, flattens the parameters and shards the parameters in place. Due to this, any optimizer created before model wrapping gets broken and occupies more memory. Hence, it is highly recommended and efficient to prepare model before creating optimizer. 
Accelerate will automatically wrap the model and create an optimizer for you in case of single model with a warning message.FSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizerHowever, below is the recommended way to prepare model and optimizer while using FSDP:model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)+ model = accelerator.prepare(model)optimizer = torch.optim.AdamW(params=model.parameters(), lr=lr)- model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(model,- optimizer, train_dataloader, eval_dataloader, lr_scheduler- )+ optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(+ optimizer, train_dataloader, eval_dataloader, lr_scheduler+ )In case of a single model, if you have created optimizer with multiple parameter groups and called prepare with them together, then the parameter groups will be lost and the following warning is displayed:FSDP Warning: When using FSDP, several parameter groups will be conflated into a single one due to nested module wrapping and parameter flattening.This is because parameter groups created before wrapping will have no meaning post wrapping due parameter flattening of nested FSDP modules into 1D arrays (which can consume many layers). For instance, below are the named parameters of FSDP model on GPU 0 (When using 2 GPUs. Around 55M (110M/2) params in 1D arrays as this will have the 1st shard of the parameters). Here, if one has applied no weight decay for [bias, LayerNorm.weight] named parameters of unwrapped BERT-Base model, it can’t be applied to the below FSDP wrapped model as there are no named parameters with either of those strings and the parameters of those layers are concatenated with parameters of various other layers. More details mentioned in this issue (The original model parameters' .grads are not set, meaning that they cannot be optimized separately (which is why we cannot support multiple parameter groups)).```{'_fsdp_wrapped_module.flat_param': torch.Size([494209]),'_fsdp_wrapped_module._fpw_module.bert.embeddings.word_embeddings._fsdp_wrapped_module.flat_param': torch.Size([11720448]),'_fsdp_wrapped_module._fpw_module.bert.encoder._fsdp_wrapped_module.flat_param': torch.Size([42527232])}```In case of multiple models, it is necessary to prepare the models before creating optimizers else it will throw an error.Mixed precision is currently not supported with FSDP as we wait for PyTorch to fix support for it.How it works 📝(Source: link)The above workflow gives an overview of what happens behind the scenes when FSDP is activated. Let's first understand how DDP works and how FSDP improves it. In DDP, each worker/accelerator/GPU has a replica of the entire model parameters, gradients and optimizer states. Each worker gets a different batch of data, it goes through the forwards pass, a loss is computed followed by the backward pass to generate gradients. Now, an all-reduce operation is performed wherein each worker gets the gradients from the remaining workers and averaging is done. In this way, each worker now has the same global gradients which are used by the optimizer to update the model parameters. We can see that having full replicas consume a lot of redundant memory on each GPU, which limits the batch size as well as the size of the models. 
FSDP precisely addresses this by sharding the optimizer states, gradients and model parameters across the data parallel workers. It further facilitates CPU offloading of all those tensors, thereby enabling loading large models which won't fit the available GPU memory. Similar to DDP, each worker gets a different batch of data. During the forward pass, if the CPU offload is enabled, the parameters of the local shard are first copied to the GPU/accelerator. Then, each worker performs all-gather operation for a given FSDP wrapped module/layer(s) to all get the needed parameters, computation is performed followed by releasing/emptying the parameter shards of other workers. This continues for all the FSDP modules. The loss gets computed after the forward pass and during the backward pass, again an all-gather operation is performed to get all the needed parameters for a given FSDP module, computation is performed to get local gradients followed by releasing the shards of other workers. Now, the local gradients are averaged and sharded to each relevant workers using reduce-scatter operation. This allows each worker to update the parameters of its local shard. If CPU offload is activated, the gradients are passed to CPU for updating parameters directly on CPU. Please refer [7, 8, 9] for all the in-depth details on the workings of the PyTorch FSDP and the extensive experimentation carried out using this feature.IssuesIf you encounter any issues with the integration part of PyTorch FSDP, please open an Issue in accelerate.But if you have problems with PyTorch FSDP configuration, and deployment - you need to ask the experts in their domains, therefore, please, open a PyTorch Issue instead.References[1] Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers[2] ZeRO: Memory Optimizations Toward Training Trillion Parameter Models[3] DeepSpeed: Extreme-scale model training for everyone - Microsoft Research[4] Megatron-LM: Training Multi-Billion Parameter Language Models UsingModel Parallelism[5] Introducing GPipe, an Open Source Library for Efficiently Training Large-scale Neural Network Models[6] Which hardware do you need to train a 176B parameters model?[7] Introducing PyTorch Fully Sharded Data Parallel (FSDP) API | PyTorch[8] Getting Started with Fully Sharded Data Parallel(FSDP) — PyTorch Tutorials 1.11.0+cu102 documentation[9] Training a 1 Trillion Parameter Model With PyTorch Fully Sharded Data Parallel on AWS | by PyTorch | PyTorch | Mar, 2022 | Medium[10] Fit More and Train Faster With ZeRO via DeepSpeed and FairScale
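To complement the configuration discussion above, here is a hedged sketch of passing an FSDP plugin to the Accelerator in code rather than through accelerate config; the constructor arguments noted in the comments are the ones mentioned in the post, and everything else is left at its defaults.

from accelerate import Accelerator, FullyShardedDataParallelPlugin

# auto_wrap_policy, backward_prefetch and ignored_modules can be passed here
# for finer control, as described in the capabilities section above.
fsdp_plugin = FullyShardedDataParallelPlugin()

accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
# Remember to prepare the model before creating the optimizer, as recommended above:
# model = accelerator.prepare(model)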
https://huggingface.co/blog/opinion-classification-with-kili
Opinion Classification with Kili and HuggingFace AutoTrain
Alper
April 28, 2022
Introduction Understanding your users’ needs is crucial in any user-related business. But it also requires a lot of hard work and analysis, which is quite expensive. Why not leverage Machine Learning then? With much less coding by using Auto ML.In this article, we will leverage HuggingFace AutoTrain and Kili to build an active learning pipeline for text classification. Kili is a platform that empowers a data-centric approach to Machine Learning through quality training data creation. It provides collaborative data annotation tools and APIs that enable quick iterations between reliable dataset building and model training. Active learning is a process in which you add labeled data to the data set and then retrain a model iteratively. Therefore, it is endless and requires humans to label the data. As a concrete example use case for this article, we will build our pipeline by using user reviews of Medium from the Google Play Store. After that, we are going to categorize the reviews with the pipeline we built. Finally, we will apply sentiment analysis to the classified reviews. Then we will analyze the results, understanding the users’ needs and satisfaction will be much easier. AutoTrain with HuggingFace Automated Machine Learning is a term for automating a Machine Learning pipeline. It also includes data cleaning, model selection, and hyper-parameter optimization too. We can use 🤗 transformers for automated hyper-parameter searching. Hyper-parameter optimization is a difficult and time-consuming process.While we can build our pipeline ourselves by using transformers and other powerful APIs, it is also possible to fully automate this with AutoTrain. AutoTrain is built on many powerful APIs like transformers, datasets and inference-api.Cleaning the data, model selection, and hyper-parameter optimization steps are all fully automated in AutoTrain. One can fully utilize this framework to build production-ready SOTA transformer models for a specific task. Currently, AutoTrain supports binary and multi-label text classification, token classification, extractive question answering, text summarization, and text scoring. It also supports many languages like English, German, French, Spanish, Finnish, Swedish, Hindi, Dutch, and more. If your language is not supported by AutoTrain, it is also possible to use custom models with custom tokenizers. Kili Kili is an end-to-end AI training platform for data-centric businesses. Kili provides optimized labeling features and quality management tools to manage your data. You can quickly annotate the image, video, text, pdf, and voice data while controlling the quality of the dataset. It also has powerful APIs for GraphQL and Python which eases data management a lot. It is available either online or on-premise and it enables modern Machine Learning technics either on computer vision or on NLP and OCR. It supports text classification, named entity recognition (NER), relation extraction, and more NLP/OCR tasks. It also supports computer vision tasks like object detection, image transcription, video classification, semantic segmentation, and many more! Kili is a commercial tool but you can also create a free developer account to try Kili’s tools. You can learn more from the pricing page. Project We will work on an example of review classification, along with sentiment analysis, to get insights about a mobile application.We have extracted around 40 thousand reviews of Medium from the Google Play Store. We will annotate the review texts in this dataset step by step. 
And then we’re going to build a pipeline for review classification. In the modeling, the first model will be prepared with AutoTrain. Then we will also build a model without using AutoTrain. All the code and the dataset can be found on the GitHub repository of the project. Dataset Let’s start by taking a look at the raw dataset,There are 10 columns and 40130 samples in this dataset. The only column we need is content which is the review of the user. Before starting, we need to define some categories.We have defined 4 categories, Subscription: Since medium has a subscription option, anything related to users' opinions about subscription features should belong here.Content: Medium is a sharing platform, there are lots of writings from poetry to advanced artificial intelligence research. Users’ opinions about a variety of topics, the quality of the content should belong here.Interface: Thoughts about UI, searching articles, recommendation engine, and anything related to the interface should belong here. This also includes payment-related issues.User Experience: The user’s general thoughts and opinions about the application. Which should be generally abstract without indicating another category.For the labeling part, we need to create a project in Kili’s platform at first. We can use either the web interface of the platform or APIs. Let's see both.From the web interface:From the project list page, we create a multi-class text classification project.After that, on the project’s page, you can add your data by clicking the Add assets button. Currently, you can add at most 25000 samples, but you can extend this limit if you contact the Kili sales team.After we create our project, we need to add jobs. We can prepare a labeling interface from the Settings pageAlthough we have defined 4 categories, it is inevitable to come across reviews that should have multiple categories or completely weird ones. I will add two more labels (which are not to use in modeling) to catch these cases too.In our example, we added two more labels (Other, Multi-label). We also added a named entity recognition (NER) job just to specify how we decided on a label while labeling. The final interface is shown belowAs you can see from the menu at the left, it is also possible to drop a link that describes your labels on the Instructions page. We can also add other members to our project from Members or add quality measures from the Quality management pages. More information can be found in the documentation.Now, let’s create our project with Python API:At first, we need to import needed libraries(notebooks/kili_project_management.ipynb)import os#we will process the data (which is a csv file)import pandas as pd#API clientfrom kili.client import Kili#Why not use pretty progress bars?from tqdm import tqdmfrom dotenv import load_dotenvload_dotenv()In order to access the platform, we need to authenticate our clientAPI_KEY = os.getenv('KILI_API_KEY')# initialize and authenticate the Kili clientkili = Kili(api_key = API_KEY)Now we can start to prepare our interface, the interface is just a dictionary in Python. We will define our jobs, then fill the labels up. 
Since all labels also could have children labels, we will pass labels as dictionaries too.labels = ['User experience', 'Subscription', 'Content', 'Other', 'Multi label']entity_dict = { 'User experience': '#cc4125', 'Subscription': '#4543e6', 'Content': '#3edeb6',}project_name = 'User review dataset for topic classification'project_description = "Medium's app reviews fetched from google play store for topic classification"interface = { 'jobs': { 'JOB_0': { 'mlTask': 'CLASSIFICATION', 'instruction': 'Labels', 'required': 1, 'content': {"categories": {},"input": "radio", }, }, 'JOB_1': { 'mlTask': "NAMED_ENTITIES_RECOGNITION", 'instruction': 'Entities', 'required': 1, 'content': {'categories': {},"input": "radio" }, }, }}# fill the interface json with jobsfor label in labels: # converts labels to uppercase and replaces whitespaces with underscores (_) # ex. User experience -> USER_EXPERIENCE # this is the preferred way to fill the interface label_upper = label.strip().upper().replace(' ', '_') # content_dict_0 = interface['jobs']['JOB_0']['content'] categories_0 = content_dict_0['categories'] category = {'name': label, 'children': []} categories_0[label_upper] = categoryfor label, color in entity_dict.items(): label_upper = label.strip().upper().replace(' ', '_') content_dict_1 = interface['jobs']['JOB_1']['content'] categories_1 = content_dict_1['categories'] category = {'name': label, 'children': [], 'color': color} categories_1[label_upper] = category# now we can create our project# this method returns the created project’s idproject_id = kili.create_project(json_interface=interface, input_type='TEXT', title=project_name, description=project_description)['id']We are ready to upload our data to the project. The append_many_to_dataset method can be used to import the data into the platform. By using the Python API, we can import the data by batch of 100 maximum. Here is a simple function to upload the data:def import_dataframe(project_id:str, dataset:pd.DataFrame, text_data_column:str, external_id_column:str, subset_size:int=100) -> bool: """ Arguments: Inputs - project_id (str): specifies the project to load the data, this is also returned when we create our project - dataset (pandas DataFrame): Dataset that has proper columns for id and text inputs - text_data_column (str): specifies which column has the text input data - external_id_column (str): specifies which column has the ids - subset_size (int): specifies the number of samples to import at a time. Cannot be higher than 100 Outputs: None Returns: True or False regards to process succession """ assert subset_size <= 100, "Kili only allows to upload 100 assets at most at a time onto the app" L = len(dataset) # set 25000 as an upload limit, can be changed if L>25000: print('Kili Projects currently supports maximum 25000 samples as default. Importing first 25000 samples...') L=25000 i = 0 while i+subset_size < L: subset = dataset.iloc[i:i+subset_size] externalIds = subset[external_id_column].astype(str).to_list() contents = subset[text_data_column].astype(str).to_list() kili.append_many_to_dataset(project_id=project_id, content_array=contents, external_id_array=externalIds) i += subset_size return TrueIt simply imports the given dataset DataFrame to a project specified by project_id.We can see the arguments from docstring, we just need to pass our dataset along with the corresponding column names. We’ll just use the sample indices we get when we load the data. 
And voilà, uploading the data is done!
dataset_path = '../data/processed/lowercase_cleaned_dataset.csv'
df = pd.read_csv(dataset_path).reset_index()  # reset index to get the indices

import_dataframe(project_id, df, 'content', 'index')
Using the Python API wasn't difficult; the helper methods covered most of the difficulties. We also used another script to check the new samples when we updated the dataset. Sometimes the model performance drops after a dataset update. This is due to simple mistakes like mislabeling and introducing bias into the dataset. The script simply authenticates and then moves the samples that differ between two given dataset versions to To Review. We can change the properties of a sample through the update_properties_in_assets method:
(scripts/move_diff_to_review.py)
# Set up the Kili client and arguments
from kili.client import Kili
from dotenv import load_dotenv
import os
import argparse
import pandas as pd

load_dotenv()

parser = argparse.ArgumentParser()
parser.add_argument('--first', required=True, type=str, help='Path to first dataframe')
parser.add_argument('--second', required=True, type=str, help='Path to second dataframe')
args = vars(parser.parse_args())

# set the kili connection up
API_KEY = os.getenv('KILI_API_KEY')
kili = Kili(API_KEY)

# read dataframes
df1 = pd.read_csv(args['first'])
df2 = pd.read_csv(args['second'])

# concatenating the two should give us duplicates of the common elements,
# then we can drop the duplicated elements without keeping any duplicates
# to get the elements that differ across the two dataframes
diff_df = pd.concat((df1, df2)).drop_duplicates(keep=False)
diff_ids = diff_df['id'].to_list()

# The changes should be given as an array that
# contains the change for every single sample.
# That's why ['TO_REVIEW'] * len(diff_ids) is passed to the status_array argument
kili.update_properties_in_assets(diff_ids,
                                 status_array=['TO_REVIEW'] * len(diff_ids))

print('SET %d ENTRIES TO BE REVIEWED!' % len(diff_df))
Labeling
Now that we have the source data uploaded, the platform has a built-in labeling interface which is pretty easy to use. The available keyboard shortcuts helped while annotating the data, so we used the interface without breaking a sweat; the shortcuts are defined automatically and simplify the labeling. We can see the shortcuts by clicking the keyboard icon at the upper-right part of the interface, and they are also shown by underlined characters in the labeling interface at the right. Some samples were very weird, so we decided to skip them while labeling. In general, the process was way easier thanks to Kili's built-in platform.
Exporting the Labeled Data
The labeled data can be exported with ease using the Python API.
The script below exports the labeled and reviewed samples into a dataframe, then saves it with a given name as a CSV file.
(scripts/prepare_dataset.py)
import argparse
import os

import pandas as pd
from dotenv import load_dotenv
from kili.client import Kili

load_dotenv()

parser = argparse.ArgumentParser()
parser.add_argument('--output_name', required=True, type=str, default='dataset.csv')
parser.add_argument('--remove', required=False, type=str)
args = vars(parser.parse_args())

API_KEY = os.getenv('KILI_API_KEY')
dataset_path = '../data/processed/lowercase_cleaned_dataset.csv'
output_path = os.path.join('../data/processed', args['output_name'])

def extract_labels(labels_dict):
    response = labels_dict[-1]  # pick the latest version of the sample
    label_job_dict = response['jsonResponse']['JOB_0']
    categories = label_job_dict['categories']
    # all samples have a label, we can just pick it by its index
    label = categories[0]['name']
    return label

kili = Kili(API_KEY)
print('Authenticated!')

# query will return a list that contains matched elements (projects in this case)
# since we have only one project with this name, we can just pick the first index
project = kili.projects(
    search_query='User review dataset for topic classification')[0]
project_id = project['id']

# we can customize the returned fields
# the fields below are pretty much enough,
# labels.jsonResponse carries the labeling data
returned_fields = [
    'id',
    'externalId',
    'labels.jsonResponse',
    'skipped',
    'status'
]

# I read the raw dataset too in order to match the samples with externalId
dataset = pd.read_csv(dataset_path)

# we can fetch the data as a dataframe
df = kili.assets(project_id=project_id,
                 status_in=['LABELED', 'REVIEWED'],
                 fields=returned_fields,
                 format='pandas')
print('Got the samples!')

# we will drop the skipped samples
df_ns = df[~df['skipped']].copy()

# extract the labeled samples
df_ns.loc[:, 'label'] = df_ns['labels'].apply(extract_labels)

# The externalId column is returned as string, let's convert it to integer
# to use as indices
df_ns.loc[:, 'content'] = dataset.loc[df_ns.externalId.astype(int), 'content']

# we can drop the `labels` column now
df_ns = df_ns.drop(columns=['labels'])

# we'll remove the multi-labeled samples
df_ns = df_ns[df_ns['label'] != 'MULTI_LABEL'].copy()

# also remove the samples with the label specified in the remove argument if it's given
if args['remove']:
    df_ns = df_ns.drop(index=df_ns[df_ns['label'] == args['remove']].index)

print('DATA FETCHING DONE')
print('DATASET HAS %d SAMPLES' % (len(df_ns)))
print('SAVING THE PROCESSED DATASET TO: %s' % os.path.abspath(output_path))
df_ns.to_csv(output_path, index=False)
print('DONE!')
Nice! We now have the labeled data as a CSV file. Let's create a dataset repository on Hugging Face and upload the data there! It's really simple: just click your profile picture and select the New Dataset option. Then enter the repository name, pick a license if you want, and it's done! Now we can upload the dataset from Add file in the Files and versions tab. The dataset viewer is automatically available after you upload the data, so we can easily check the samples! It is also possible to upload the dataset to Hugging Face's dataset hub by using the datasets package.
Modeling
Let's use active learning: we iteratively label and fine-tune the model. In each iteration, we label 50 samples in the dataset. The number of samples is shown below:
Let's try out AutoTrain first:
First, open AutoTrain.
Create a project.
We can select the dataset repository we created before or upload the dataset again.
Then we need to choose the split type; I'll leave it as Auto.
Train the models.
AutoTrain will try different models, select the best ones, and then perform hyper-parameter optimization automatically. The dataset is also processed automatically. The price depends entirely on your use case: it can be as low as $10, or it can be much more expensive. The training is done after around 20 minutes, and the results are pretty good! The best model's accuracy is almost 89%. Now we can use this model to perform the analysis; it only took about 30 minutes to set up the whole thing.
Modeling without AutoTrain
We will use Ray Tune and Hugging Face's Trainer API to search hyper-parameters and fine-tune a pre-trained deep learning model. We have selected a RoBERTa-base sentiment classification model, which is trained on tweets, for fine-tuning. We've fine-tuned the model on Google Colaboratory, and the notebook can be found in the notebooks folder of the GitHub repository. Ray Tune is a popular library for hyper-parameter optimization which comes with many SOTA algorithms out of the box. It is also possible to use Optuna and SigOpt. We used the Async Successive Halving Algorithm (ASHA) as the scheduler and HyperOpt as the search algorithm, which is pretty much a standard starting point; you can use different schedulers and search algorithms.
What will we do?
Import the necessary libraries (a dozen of them) and prepare a dataset class
Define needed functions and methods to process the data
Load the pre-trained model and tokenizer
Run the hyper-parameter search
Use the best results for evaluation
Let's start with importing the necessary libraries! (All the code is in notebooks/modeling.ipynb and in the Google Colaboratory notebook.)
# general data science/utilization/visualization imports
import json
import os
import random

# progress bar
from tqdm import tqdm

# data manipulation / reading
import numpy as np
import pandas as pd

# visualization
import plotly.express as px
import matplotlib.pyplot as plt

# pre-defined evaluation metrics
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

# torch imports
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset, random_split

# huggingface imports
import transformers
from datasets import load_metric
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# ray tune imports for hyperparameter optimization
from ray.tune.schedulers import ASHAScheduler, PopulationBasedTraining
from ray.tune.suggest.hyperopt import HyperOptSearch
We will set a seed for the libraries we use, for reproducibility:
def seed_all(seed):
    torch.manual_seed(seed)
    random.seed(seed)
    np.random.seed(seed)

SEED = 42
seed_all(SEED)
Now let's define our dataset class!
class TextClassificationDataset(Dataset):
    def __init__(self, dataframe):
        self.labels = dataframe.label.to_list()
        self.inputs = dataframe.content.to_list()
        self.labels_to_idx = {k: v for k, v in labels_dict.items()}  # copy the labels_dict dictionary

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, idx):
        if type(idx) == torch.Tensor:
            idx = list(idx)

        input_data = self.inputs[idx]
        target = self.labels[idx]
        target = self.labels_to_idx[target]

        return {'text': input_data, 'label': target}
We can download the model easily by specifying the Hugging Face Hub repository. We also need to import the tokenizer for the specified model. We have to provide a function to initialize the model during hyper-parameter optimization.
The model will be defined there. The metric to optimize is accuracy; we want this value to be as high as possible. Because of that, we need to load the metric, then define a function to get the predictions and calculate the preferred metric.
model_name = 'cardiffnlp/twitter-roberta-base-sentiment'

# we will perform the search to optimize the model accuracy,
# we need to specify and load the accuracy metric as a first step
metric = load_metric("accuracy")

# since we already entered a model name, we can load the tokenizer
# we can also load the model but i'll describe it in the model_init function.
tokenizer = AutoTokenizer.from_pretrained(model_name)

def model_init():
    """
    Hyperparameter optimization is performed by newly initialized models,
    therefore we will need to initialize the model again for every single search run.
    This function initializes and returns the pre-trained model selected with `model_name`
    """
    return AutoModelForSequenceClassification.from_pretrained(model_name,
                                                              num_labels=4,
                                                              return_dict=True,
                                                              ignore_mismatched_sizes=True)

# the function to calculate accuracy
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # just pick the indices that have the maximum values
    return metric.compute(predictions=predictions, references=labels)
After defining the metric calculation and the model initialization function, we can load the data:
file_name = "dataset-11.csv"
dataset_path = os.path.join('data/processed', file_name)
dataset = pd.read_csv(dataset_path)
I also defined two dictionaries for mapping labels to indices and indices to labels.
idx_to_label = dict(enumerate(dataset.label.unique()))
labels_dict = {v: k for k, v in idx_to_label.items()}
Now we can define the search algorithm and the scheduler for the hyper-parameter search.
scheduler = ASHAScheduler(metric='objective', mode='max')
search_algorithm = HyperOptSearch(metric='objective', mode='max', random_state_seed=SEED)

# number of runs for parameter searching
n_trials = 40
We also need to tokenize the text data before passing it to the model, and we can easily do this by using the loaded tokenizer. Ray Tune works in a black-box setting, so I used the tokenizer as a default argument as a workaround. Otherwise, an error about the tokenizer definition would arise.
def tokenize(sample, tokenizer=tokenizer):
    tokenized_sample = tokenizer(sample['text'], padding=True, truncation=True)
    tokenized_sample['label'] = sample['label']
    return tokenized_sample
Another utility function that returns stratified and tokenized Torch dataset splits:
def prepare_datasets(dataset_df, test_size=.2, val_size=.2):
    train_set, test_set = train_test_split(dataset_df, test_size=test_size,
                                           stratify=dataset_df.label, random_state=SEED)
    train_set, val_set = train_test_split(train_set, test_size=val_size,
                                          stratify=train_set.label, random_state=SEED)

    # shuffle the dataframes beforehand
    train_set = train_set.sample(frac=1, random_state=SEED)
    val_set = val_set.sample(frac=1, random_state=SEED)
    test_set = test_set.sample(frac=1, random_state=SEED)

    # convert dataframes to torch datasets
    train_dataset = TextClassificationDataset(train_set)
    val_dataset = TextClassificationDataset(val_set)
    test_dataset = TextClassificationDataset(test_set)

    # tokenize the datasets
    tokenized_train_set = train_dataset.map(tokenize)
    tokenized_val_set = val_dataset.map(tokenize)
    tokenized_test_set = test_dataset.map(tokenize)

    # finally return the processed sets
    return tokenized_train_set, tokenized_val_set, tokenized_test_set
Now we can perform the search!
Let's start by processing the data:
tokenized_train_set, tokenized_val_set, tokenized_test_set = prepare_datasets(dataset)

training_args = TrainingArguments(
    'trial_results',
    evaluation_strategy="steps",
    disable_tqdm=True,
    skip_memory_metrics=True,
)

trainer = Trainer(
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=tokenized_train_set,
    eval_dataset=tokenized_val_set,
    model_init=model_init,
    compute_metrics=compute_metrics
)

best_run = trainer.hyperparameter_search(
    direction="maximize",
    n_trials=n_trials,
    backend="ray",
    search_alg=search_algorithm,
    scheduler=scheduler
)
We performed the search with 20 and 40 trials respectively; the results are shown below.
The weighted average of F1, Recall, and Precision scores for 20 runs.
The weighted average of F1, Recall, and Precision scores for 40 runs.
The performance spiked at the third dataset version: at some point in the data labeling, I had mistakenly introduced too much bias into the dataset. As we can see, the performance becomes more reasonable later on as the sample variance increases. The final model is saved on Google Drive and can be downloaded from here; it is also possible to download it via the download_models.py script.
Final Analysis
We can use the fine-tuned model to conduct the final analysis now. All we have to do is load the data, process it, and get the prediction results from the model. Then we can use a pre-trained model for sentiment analysis and hopefully get insights. We used Google Colab for the inference (here) and then exported the results to result.csv. It can be found under results in the GitHub repository. We then analyzed the results in another Google Colaboratory notebook for an interactive experience, so you can also explore them easily and interactively. Let's check the results now!
We can see that the given scores are highly positive. In general, the application is liked by the users. This also matches the sentiment analysis: most of the reviews are positive, and the fewest reviews are classified as negative. As we can see from above, the model's predictions are understandable. Positive scores are dominantly higher than the others, just like the sentiment analysis graph shows. As for the categories defined before, it seems that the model predicts most of the reviews to be about users' experiences (excluding experiences related to other categories). We can also see the sentiment predictions over the defined categories below.
We won't do a detailed analysis of the reviews; a basic understanding of potential problems will suffice. Therefore, it is enough to draw simple conclusions from the final data: It is understandable that most of the reviews about the subscription are negative; paid content is generally not welcomed in mobile applications. There are many negative reviews about the interface. This may be a clue for further analysis: maybe there is a misconception about features, or a feature doesn't work as users thought. People have generally liked the articles, and most of them had good experiences. Important note about the plot: we haven't filtered the reviews by application version. When we look at the results of the latest version (4.5), it seems that the interface of the application confuses the users or has annoying bugs.
Conclusion
Now we can use the pre-trained model to try to understand the potential shortcomings of the mobile application. Then it would be easier to analyze a specific feature.
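For reference, the inference step described in the final analysis boils down to something like the sketch below; the full code lives in the linked Colab notebook, and the local checkpoint path used here is hypothetical (for example, wherever download_models.py places the model):
from transformers import pipeline

# load the fine-tuned topic classifier (hypothetical local path)
topic_classifier = pipeline('text-classification',
                            model='models/final_model',
                            tokenizer='models/final_model')

# classify a couple of raw reviews into the categories defined earlier
print(topic_classifier('I love the stories, but the subscription price is way too high.'))
print(topic_classifier('The app keeps crashing whenever I search for an article.'))
A pretrained sentiment analysis checkpoint from the Hub can be applied to the same texts in exactly the same way to obtain the sentiment side of the analysis.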
We used Hugging Face's powerful APIs and AutoTrain along with Kili's easy-to-use interface in this example. The modeling with AutoTrain took just 30 minutes: it chose the models and trained them for our use case. AutoTrain is definitely much more efficient, since I spent more time developing the model by myself. All the code, datasets, and scripts can be found on GitHub. You can also try the AutoTrain model. While we can consider this a valid starting point, we should collect more data and try to build better pipelines. Better pipelines would result in more efficient improvements.
https://huggingface.co/blog/ml-director-insights
Director of Machine Learning Insights [Part 1]
Britney Muller
April 27, 2022
Few seats at the Machine Learning table span technical skills, problem solving, and business acumen the way Directors of Machine Learning do. Directors of Machine Learning and/or Data Science are often expected to design ML systems, have deep knowledge of mathematics, familiarity with ML frameworks, rich data architecture understanding, experience applying ML to real-world applications, and solid communication skills, and are often expected to keep on top of industry developments. A tall order! For these reasons, we've tapped into this unique group of ML Directors for a series of articles highlighting their thoughts on current ML insights and industry trends ranging from Healthcare to Finance, eCommerce, SaaS, Research, Media, and more. For example, one Director will note how using ML to reduce empty deadheading truck driving (which occurs ~20% of the time) down to just 19% would cut carbon emissions by the equivalent of ~100,000 Americans. Note: this is back-of-the-napkin math, but it was done by an ex-rocket scientist, so we'll take it. In this first installment, you'll hear from a researcher (who's using ground penetrating radar to detect buried landmines), an ex-rocket scientist, a Dzongkha-fluent amateur gamer (Kuzu = Hello!), an ex-van-living scientist, a high-performance Data Science team coach who's still very hands-on, and a data practitioner who values relationships, family, dogs, and pizza. All of them are currently Directors of Machine Learning with rich field insights. 🚀 Let's meet some top Machine Learning Directors and hear what they have to say about Machine Learning's impact on their respective industries: Archi Mitra. Background: Bringing balance to the promise of ML for business. People over Process. Strategy over Hope. AI Ethics over AI Profits. Brown New Yorker. Fun Fact: I can speak Dzongkha (google it!) and am a supporter of Youth for Seva. Buzzfeed: An American Internet media, news and entertainment company with a focus on digital media. 1. How has ML made a positive impact on Media? Privacy-first personalization for customers: Every user is unique and while their long-term interests are stable, their short-term interests are stochastic. They expect their relationship with the Media to reflect this. The combination of advancement in hardware acceleration and Deep Learning for recommendations has unlocked the ability to start deciphering this nuance and serve users with the right content at the right time at the right touchpoint. Assistive tools for makers: Makers are the limited assets in media, and preserving their creative bandwidth by ML-driven human-in-the-loop assistive tools has seen an outsized impact. Something as simple as automatically suggesting an appropriate title, image, video, and/or product that can go along with the content they are creating unlocks a collaborative machine-human flywheel. Tightened testing: In a capital-intensive media venture, there is a need to shorten the time between collecting information on what resonates with users and immediately acting on it. With a wide variety of Bayesian techniques and advancements in reinforcement learning, we have been able to drastically reduce not only the time but the cost associated with it. 2. What are the biggest ML challenges within Media? Privacy, editorial voice, and equitable coverage: Media is a key pillar in the democratic world now more than ever. ML needs to respect that and operate within constraints that are not strictly considered table stakes in any other domain or industry.
Finding a balance between editorially curated content & programming vs ML driven recommendations continues to be a challenge. Another unique challenge to BuzzFeed is we believe that the internet should be free which means we don't track our users like others can.3. What’s a common mistake you see people make trying to integrate ML into Media?Ignoring “the makers” of media: Media is prevalent because it houses a voice that has a deep influence on people. The editors, content creators, writers & makers are the larynx of that voice and the business and building ML that enables them, extends their impact and works in harmony with them is the key ingredient to success.4. What excites you most about the future of ML?Hopefully, small data-driven general-purpose multi-modal multi-task real-time ML systems that create step-function improvements in drug discovery, high precision surgery, climate control systems & immersive metaverse experiences. Realistically, more accessible, low-effort meta-learning techniques for highly accurate text and image generation.Li TanBackground: Li is an AI/ML veteran with 15+ years of experience leading high-profile Data Science teams within industry leaders like Johnson & Johnson, Microsoft, and Amazon.Fun Fact: Li continues to be curious, is always learning, and enjoys hands-on programming.Johnson & Johnson: A Multinational corporation that develops medical devices, pharmaceuticals, and consumer packaged goods.1. How has ML made a positive impact on Pharmaceuticals?AI/ML applications have exploded in the pharmaceuticals space the past few years and are making many long-term positive impacts. Pharmaceuticals and healthcare have many use cases that can leverage AI/ML. Applications range from research, and real-world evidence, to smart manufacturing and quality assurance. The technologies used are also very broad: NLP/NLU, CV, AIIoT, Reinforcement Learning, etc. even things like AlphaFold.2. What are the biggest ML challenges within Pharmaceuticals?The biggest ML challenge within pharma and healthcare is how to ensure equality and diversity in AI applications. For example, how to make sure the training set has good representations of all ethnic groups. Due to the nature of healthcare and pharma, this problem can have a much greater impact compared to applications in some other fields. 3. What’s a common mistake you see people make trying to integrate ML into Pharmaceuticals?Wouldn’t say this is necessarily a mistake, but I see many people gravitate toward extreme perspectives when it comes to AI applications in healthcare; either too conservative or too aggressive. Some people are resistant due to high regulatory requirements. We had to qualify many of our AI applications with strict GxP validation. It may require a fair amount of work, but we believe the effort is worthwhile. On the opposite end of the spectrum, there are many people who think AI/Deep Learning models can outperform humans in many applications and run completely autonomously. As practitioners, we know that currently, neither is true.ML models can be incredibly valuable but still make mistakes. So I recommend a more progressive approach. The key is to have a framework that can leverage the power of AI while having goalkeepers in place. FDA has taken actions to regulate how AI/ML should be used in software as a medical device and I believe that’s a positive step forward for our industry.4. 
What excites you most about the future of ML?The intersections between AI/ML and other hard sciences and technologies. I’m excited to see what’s to come.Alina ZareBackground: Alina Zare teaches and conducts research in the area of machine learning and artificial intelligence as a Professor in the Electrical and Computer Engineering Department at the University of Florida and Director of the Machine Learning and Sensing Lab. Dr. Zare’s research has focused primarily on developing new machine learning algorithms to automatically understand and process data and imagery. Her research work has included automated plant root phenotyping, sub-pixel hyperspectral image analysis, target detection, and underwater scene understanding using synthetic aperture sonar, LIDAR data analysis, Ground Penetrating Radar analysis, and buried landmine and explosive hazard detection.Fun Fact: Alina is a rower. She joined the crew team in high school, rowed throughout college and grad school, was head coach of the University of Missouri team while she was an assistant professor, and then rowed as a masters rower when she joined the faculty at UF.Machine Learning & Sensing Laboratory: A University of Florida laboratory that develops machine learning methods for autonomously analyzing and understanding sensor data.1. How has ML made a positive impact on ScienceML has made a positive impact in a number of ways from helping to automate tedious and/or slow tasks or providing new ways to examine and look at various questions. One example from my work in ML for plant science is that we have developed ML approaches to automate plant root segmentation and characterization in imagery. This task was previously a bottleneck for plant scientists looking at root imagery. By automating this step through ML we can conduct these analyses at a much higher throughput and begin to use this data to investigate plant biology research questions at scale.2. What are the biggest ML challenges within Scientific research?There are many challenges. One example is when using ML for Science research, we have to think carefully through the data collection and curation protocols. In some cases, the protocols we used for non-ML analysis are not appropriate or effective. The quality of the data and how representative it is of what we expect to see in the application can make a huge impact on the performance, reliability, and trustworthiness of an ML-based system.3. What’s a common mistake you see people make trying to integrate ML into Science?Related to the question above, one common mistake is misinterpreting results or performance to be a function of just the ML system and not also considering the data collection, curation, calibration, and normalization protocols.4. What excites you most about the future of ML?There are a lot of really exciting directions. A lot of my research currently is in spaces where we have a huge amount of prior knowledge and empirically derived models. For example, I have ongoing work using ML for forest ecology research. The forestry community has a rich body of prior knowledge and current purely data-driven ML systems are not leveraging. I think hybrid methods that seamlessly blend prior knowledge with ML approaches will be an interesting and exciting path forward.An example may be understanding how likely two species are to co-occur in an area. Or what species distribution we could expect given certain environmental conditions. 
These could potentially be used w/ data-driven methods to make predictions in changing conditions.Nathan CahillBackground: Nathan is a passionate machine learning leader with 7 years of experience in research and development, and three years experience creating business value by shipping ML models to prod. He specializes in finding and strategically prioritizing the business' biggest pain points: unlocking the power of data earlier on in the growth curve.Fun Fact: Before getting into transportation and logistics I was engineering rockets at Northrop Grumman. #RocketScienceXpress Technologies: A digital freight matching technology to connect Shippers, Brokers and Carriers to bring efficiency and automation to the Transportation Industry.1. How has ML made a positive impact on Logistics/Transportation?The transportation industry is incredibly fragmented. The top players in the game have less than 1% market share. As a result, there exist inefficiencies that can be solved by digital solutions. For example, when you see a semi-truck driving on the road, there is currently a 20% chance that the truck is driving with nothing in the back. Yes, 20% of the miles a tractor-trailer drives are from the last drop off of their previous load to their next pickup. The chances are that there is another truck driving empty (or "deadheading") in the other direction. With machine learning and optimization this deadhead percent can be reduced significantly, and just taking that number from 20% to 19% percent would cut the equivalent carbon emissions of 100,000 Americans. Note: the carbon emissions of 100k Americans were my own back of the napkin math.2. What are the biggest ML challenges within Logistics?The big challenge within logistics is due to the fact that the industry is so fragmented: there is no shared pool of data that would allow technology solutions to "see" the big picture. For example a large fraction of brokerage loads, maybe a majority, costs are negotiated on a load by load basis making them highly volatile. This makes pricing a very difficult problem to solve. If the industry became more transparent and shared data more freely, so much more would become possible.3. What’s a common mistake you see people make trying to integrate ML into Logistics?I think that the most common mistake I see is people doing ML and Data Science in a vacuum. Most ML applications within logistics will significantly change the dynamics of the problem if they are being used so it's important to develop models iteratively with the business and make sure that performance in reality matches what you expect in training. An example of this is in pricing where if you underprice a lane slightly, your prices may be too competitive which will create an influx of freight on that lane. This, in turn, may cause costs to go up as the brokers struggle to find capacity for those loads, exacerbating the issue.4. What excites you the most about the future of ML?I think the thing that excites me most about ML is the opportunity to make people better at their jobs. As ML begins to be ubiquitous in business, it will be able to help speed up decisions and automate redundant work. This will accelerate the pace of innovation and create immense economic value. I can’t wait to see what problems we solve in the next 10 years aided by data science and ML!Nicolas BertagnolliBackground: Nic is a scientist and engineer working to improve human communication through machine learning. 
He’s spent the last decade applying ML/NLP to solve data problems in the medical space from uncovering novel patterns in cancer genomes to leveraging billions of clinical notes to reduce costs and improve outcomes. At BEN, Nic innovates intelligent technologies that scale human capabilities to reach people. See his CV, research, and Medium articles here.Fun Fact: Nic lived in a van and traveled around the western United States for three years before starting work at BEN.BEN: An entertainment AI company that places brands inside influencer, streaming, TV, and film content to connect brands with audiences in a way that advertisements cannot.1. How has ML made a positive impact on Marketing?In so many ways! It’s completely changing the landscape. Marketing is a field steeped in tradition based on gut feelings. In the past 20 years, there has been a move to more and more statistically informed marketing decisions but many brands are still relying on the gut instincts of their marketing departments. ML is revolutionizing this. With the ability to analyze data about which advertisements perform well we can make really informed decisions about how and who we market to. At BEN, ML has really helped us take the guesswork out of a lot of the process when dealing with influencer marketing. Data helps shine a light through the fog of bias and subjectivity so that we can make informed decisions. That’s just the obvious stuff! ML is also making it possible to make safer marketing decisions for brands. For example, it’s illegal to advertise alcohol to people under the age of 21. Using machine learning we can identify influencers whose audiences are mainly above 21. This scales our ability to help alcohol brands, and also brands who are worried about their image being associated with alcohol.2. What are the biggest ML challenges within Marketing?As with most things in Machine Learning the problems often aren’t really with the models themselves. With tools like Hugging Face, torch hub, etc. so many great and flexible models are available to work with. The real challenges have to do with collecting, cleaning, and managing the data. If we want to talk about the hard ML-y bits of the job, some of it comes down to the fact that there is a lot of noise in what people view and enjoy. Understanding things like virality are really really hard. Understanding what makes a creator/influencer successful over time is really hard. There is a lot of weird preference information buried in some pretty noisy difficult-to-acquire data. These problems come down to having really solid communication between data, ML, and business teams, and building models which augment and collaborate with humans instead of fully automating away their roles.3. What’s a common mistake you see people make trying to integrate ML into Marketing?I don’t think this is exclusive to marketing but prioritizing machine learning and data science over good infrastructure is a big problem I see often. Organizations hear about ML and want to get a piece of the pie so they hire some data scientists only to find out that they don’t have any infrastructure to service their new fancy pants models. A ton of the value of ML is in the infrastructure around the models and if you’ve got trained models but no infrastructure you’re hosed. One of the really nice things about BEN is we invested heavily in our data infrastructure and built the horse before the cart. 
Now Data Scientists can build models that get served to our end users quickly instead of having to figure out every step of that pipeline themselves. Invest in data engineering before hiring lots of ML folks.4. What excites you most about the future of ML?There is so much exciting stuff going on. I think the pace and democratization of the field is perhaps what I find most exciting. I remember almost 10 years ago writing my first seq2seq model for language translation. It was hundreds of lines of code, took forever to train and was pretty challenging. Now you can basically build a system to translate any language to any other language in under 100 lines of python code. It’s insane! This trend is most likely to continue and as the ML infrastructure gets better and better it will be easier and easier for people without deep domain expertise to deploy and serve models to other people. Much like in the beginning of the internet, software developers were few and far between and you needed a skilled team to set up a website. Then things like Django, Rails, etc. came out making website building easy but serving it was hard. We’re kind of at this place where building the models is easy but serving them reliably, monitoring them reliably, etc. is still challenging. I think in the next few years the barrier to entry is going to come WAY down here and basically, any high schooler could deploy a deep transformer to some cloud infrastructure and start serving useful results to the general population. This is really exciting because it means we’ll start to see more and more tangible innovation, much like the explosion of online services. So many cool things!Eric GolinkoBackground: Experienced data practitioner and team builder. I’ve worked in many industries across companies of different sizes. I’m a problem solver, by training a mathematician and computer scientist. But, above all, I value relationships, family, dogs, travel and pizza.Fun Fact: Eric adores nachos!E Source: Provides independent market intelligence, consulting, and predictive data science to utilities, major energy users, and other key players in the retail energy marketplace.1. How has ML made a positive impact on the Energy/Utility industry?Access to business insight. Provided a pre-requisite is great data. Utilities have many data relationships within their data portfolio from customers to devices, more specifically, this speaks to monthly billing amounts and enrollment in energy savings programs. Data like that could be stored in a relational database, whereas device or asset data we can think of as the pieces of machinery that make our grid. Bridging those types of data is non-trivial. In addition, third-party data spatial/gis and weather are extremely important. Through the lens of machine learning, we are able to find and explore features and outcomes that have a real impact.2. What are the biggest ML challenges within Utilities?There is a demystification that needs to happen. What machine learning can do and where it needs to be monitored or could fall short. The utility industry has established ways of operating, machine learning can be perceived as a disruptor. Because of this, departments can be slow to adopt any new technology or paradigm. However, if the practitioner is able to prove results, then results create traction and a larger appetite to adopt. Additional challenges are on-premise data and access to the cloud and infrastructure. It’s a gradual process and has a learning curve that requires patience.3. 
What's a common mistake you see people make trying to integrate ML into Utilities? Not unique to utilities, but moving too fast and neglecting good data quality and simple quality checks. Aside from this, machine learning is practiced among many groups in some direct or indirect way. A challenge is integrating best development practices across teams. This also means model tracking and being able to persist experiments and continuous discovery. 4. What excites you most about the future of ML? I've been doing this for over a decade, and I somehow still feel like a novice. I feel fortunate to have been part of teams where I'd be lucky to be called the average member. My feeling is that the next ten years and beyond will be more focused on data engineering to see an even larger number of use cases covered by machine learning. 🤗 Thank you for joining us in this first installment of ML Director Insights. Stay tuned for more insights from ML Directors in SaaS, Finance, and e-Commerce. Big thanks to Eric Golinko, Nicolas Bertagnolli, Nathan Cahill, Alina Zare, Li Tan, and Archi Mitra for their brilliant insights and participation in this piece. We look forward to watching each of your continued successes and will be cheering you on each step of the way. 🎉 Lastly, if you or your team are interested in accelerating your ML roadmap with Hugging Face Experts, please visit hf.co/support to learn more.
https://huggingface.co/blog/getting-started-habana
Getting Started with Transformers on Habana Gaudi
Julien Simon
April 26, 2022
A couple of weeks ago, we had the pleasure of announcing that Habana Labs and Hugging Face would partner to accelerate Transformer model training. Habana Gaudi accelerators deliver up to 40% better price performance for training machine learning models compared to the latest GPU-based Amazon EC2 instances. We are super excited to bring these price performance advantages to Transformers 🚀
In this hands-on post, I'll show you how to quickly set up a Habana Gaudi instance on Amazon Web Services, and then fine-tune a BERT model for text classification. As usual, all code is provided so that you may reuse it in your projects. Let's get started!
Setting up a Habana Gaudi instance on AWS
The simplest way to work with Habana Gaudi accelerators is to launch an Amazon EC2 DL1 instance. These instances are equipped with 8 Habana Gaudi processors that can easily be put to work thanks to the Habana Deep Learning Amazon Machine Image (AMI). This AMI comes preinstalled with the Habana SynapseAI® SDK and the tools required to run Gaudi-accelerated Docker containers. If you'd like to use other AMIs or containers, instructions are available in the Habana documentation.
Starting from the EC2 console in the us-east-1 region, I first click on Launch an instance and define a name for the instance ("habana-demo-julsimon"). Then, I search the Amazon Marketplace for Habana AMIs. I pick the Habana Deep Learning Base AMI (Ubuntu 20.04). Next, I pick the dl1.24xlarge instance size (the only size available). Then, I select the keypair that I'll use to connect to the instance with ssh. If you don't have a keypair, you can create one in place. As a next step, I make sure that the instance allows incoming ssh traffic. I do not restrict the source address for simplicity, but you should definitely do it in your account. By default, this AMI will start an instance with 8GB of Amazon EBS storage, which won't be enough here. I bump storage to 50GB. Next, I assign an Amazon IAM role to the instance. In real life, this role should have the minimum set of permissions required to run your training job, such as the ability to read data from one of your Amazon S3 buckets. This role is not needed here as the dataset will be downloaded from the Hugging Face hub. If you're not familiar with IAM, I highly recommend reading the Getting Started documentation. Then, I ask EC2 to provision my instance as a Spot Instance, a great way to reduce the $13.11 per hour cost. Finally, I launch the instance. A couple of minutes later, the instance is ready and I can connect to it with ssh. Windows users can do the same with PuTTY by following the documentation.
ssh -i ~/.ssh/julsimon-keypair.pem ubuntu@ec2-18-207-189-109.compute-1.amazonaws.com
On this instance, the last setup step is to pull the Habana container for PyTorch, which is the framework I'll use to fine-tune my model.
You can find information on other prebuilt containers and on how to build your own in the Habana documentation.
docker pull \
vault.habana.ai/gaudi-docker/1.5.0/ubuntu20.04/habanalabs/pytorch-installer-1.11.0:1.5.0-610
Once the image has been pulled to the instance, I run it in interactive mode.
docker run -it \
--runtime=habana \
-e HABANA_VISIBLE_DEVICES=all \
-e OMPI_MCA_btl_vader_single_copy_mechanism=none \
--cap-add=sys_nice \
--net=host \
--ipc=host vault.habana.ai/gaudi-docker/1.5.0/ubuntu20.04/habanalabs/pytorch-installer-1.11.0:1.5.0-610
I'm now ready to fine-tune my model.
Fine-tuning a text classification model on Habana Gaudi
I first clone the Optimum Habana repository inside the container I've just started.
git clone https://github.com/huggingface/optimum-habana.git
Then, I install the Optimum Habana package from source.
cd optimum-habana
pip install .
Then, I move to the subdirectory containing the text classification example and install the required Python packages.
cd examples/text-classification
pip install -r requirements.txt
I can now launch the training job, which downloads the bert-large-uncased-whole-word-masking model from the Hugging Face hub, and fine-tunes it on the MRPC task of the GLUE benchmark. Please note that I'm fetching the Habana Gaudi configuration for BERT from the Hugging Face hub, and you could also use your own. In addition, other popular models are supported, and you can find their configuration file in the Habana organization.
python run_glue.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--gaudi_config_name Habana/bert-large-uncased-whole-word-masking \
--task_name mrpc \
--do_train \
--do_eval \
--per_device_train_batch_size 32 \
--learning_rate 3e-5 \
--num_train_epochs 3 \
--max_seq_length 128 \
--use_habana \
--use_lazy_mode \
--output_dir ./output/mrpc/
After 2 minutes and 12 seconds, the job is complete and has achieved an excellent F1 score of 0.9181, which could certainly improve with more epochs.
***** train metrics *****
epoch = 3.0
train_loss = 0.371
train_runtime = 0:02:12.85
train_samples = 3668
train_samples_per_second = 82.824
train_steps_per_second = 2.597
***** eval metrics *****
epoch = 3.0
eval_accuracy = 0.8505
eval_combined_score = 0.8736
eval_f1 = 0.8968
eval_loss = 0.385
eval_runtime = 0:00:06.45
eval_samples = 408
eval_samples_per_second = 63.206
eval_steps_per_second = 7.901
Last but not least, I terminate the EC2 instance to avoid unnecessary charges. Looking at the Savings Summary in the EC2 console, I see that I saved 70% thanks to Spot Instances, paying only $3.93 per hour instead of $13.11. As you can see, the combination of Transformers, Habana Gaudi, and AWS instances is powerful, simple, and cost-effective. Give it a try and let us know what you think. We definitely welcome your questions and feedback on the Hugging Face Forum. Please reach out to Habana to learn more about training Hugging Face models on Gaudi processors.
https://huggingface.co/blog/education
Introducing Hugging Face for Education 🤗
Violette Lepercq
April 25, 2022
Given that machine learning will make up the overwhelming majority of software development and that non-technical people will be exposed to AI systems more and more, one of the main challenges of AI is adapting and enhancing employee skills. It is also becoming necessary to support teaching staff in proactively taking AI's ethical and critical issues into account. As an open-source company democratizing machine learning, Hugging Face believes it is essential to educate people from all backgrounds worldwide.We launched the ML demo.cratization tour in March 2022, where experts from Hugging Face taught hands-on classes on Building Machine Learning Collaboratively to more than 1000 students from 16 countries. Our new goal: to teach machine learning to 5 million people by the end of 2023.This blog post provides a high-level description of how we will reach our goals around education.🤗 Education for All🗣️ Our goal is to make the potential and limitations of machine learning understandable to everyone. We believe that doing so will help evolve the field in a direction where the application of these technologies will lead to net benefits for society as a whole. Some examples of our existing efforts:we describe in a very accessible way different uses of ML models (summarization, text generation, object detection…),we allow everyone to try out models directly in their browser through widgets in the model pages, hence lowering the need for technical skills to do so (example),we document and warn about harmful biases identified in systems (like GPT-2).we provide tools to create open-source ML apps that allow anyone to understand the potential of ML in one click.🤗 Education for Beginners🗣️ We want to lower the barrier to becoming a machine learning engineer by providing online courses, hands-on workshops, and other innovative techniques.We provide a free course about natural language processing (NLP) and more domains (soon) using free tools and libraries from the Hugging Face ecosystem. It’s completely free and without ads. The ultimate goal of this course is to learn how to apply Transformers to (almost) any machine learning problem!We provide a free course about Deep Reinforcement Learning. In this course, you can study Deep Reinforcement Learning in theory and practice, learn to use famous Deep RL libraries, train agents in unique environments, publish your trained agents in one line of code to the Hugging Face Hub, and more!We provide a free course on how to build interactive demos for your machine learning models. 
The ultimate goal of this course is to allow ML developers to easily present their work to a wide audience including non-technical teams or customers, researchers to more easily reproduce machine learning models and behavior, end users to more easily identify and debug failure points of models, and more! Experts at Hugging Face wrote a book on Transformers and their applications to a wide range of NLP tasks. Apart from those efforts, many team members are involved in other educational efforts such as:
Participating in meetups, conferences and workshops.
Creating podcasts, YouTube videos, and blog posts.
Organizing events in which free GPUs are provided for anyone to be able to train and share models and create demos for them.
🤗 Education for Instructors
🗣️ We want to empower educators with tools and offer collaborative spaces where students can build machine learning using open-source technologies and state-of-the-art machine learning models. We provide educators with free infrastructure and resources to quickly introduce real-world applications of ML to their students and make learning more fun and interesting. By creating a classroom for free from the hub, instructors can turn their classes into collaborative environments where students can learn and build ML-powered applications using free open-source technologies and state-of-the-art models. We've assembled a free toolkit translated into 8 languages that instructors of machine learning or Data Science can use to easily prepare labs, homework, or classes. The content is self-contained so that it can be easily incorporated into an existing curriculum. This content is free and uses well-known Open Source technologies (🤗 transformers, gradio, etc). Feel free to pick a tutorial and teach it!
1️⃣ A Tour through the Hugging Face Hub
2️⃣ Build and Host Machine Learning Demos with Gradio & Hugging Face
3️⃣ Getting Started with Transformers
We're organizing a dedicated, free workshop (June 6) on how to teach our educational resources in your machine learning and data science classes. Do not hesitate to register. We are currently doing a worldwide tour in collaboration with university instructors to teach more than 10000 students one of our core topics: How to build machine learning collaboratively? You can request someone on the Hugging Face team to run the session for your class via the ML demo.cratization tour initiative.
🤗 Education Events & News
09/08 [EVENT]: ML Demo.cratization tour in Argentina at 2pm (GMT-3). Link here
🔥 We are currently working on more content in the course, and more! Stay tuned!
https://huggingface.co/blog/supercharge-customer-service-with-machine-learning
Supercharged Customer Service with Machine Learning
Patrick von Platen
April 25, 2022
In this blog post, we will simulate a real-world customer service use case and use machine learning tools from the Hugging Face ecosystem to address it. We strongly recommend using this notebook as a template/example to solve your real-world use case.
Defining Task, Dataset & Model
Before jumping into the actual coding part, it's important to have a clear definition of the use case that you would like to automate or partly automate. A clear definition of the use case helps identify the most suitable task, dataset to use, and model to apply for your use case.
Defining your NLP task
Alright, let's dive into a hypothetical problem we wish to solve using natural language processing models. Let's assume we are selling a product and our customer support team receives thousands of messages including feedback, complaints, and questions which ideally should all be answered. Quickly, it becomes obvious that customer support is by no means able to reply to every message. Thus, we decide to only respond to the most unsatisfied customers and aim to answer 100% of those messages, as these are likely the most urgent compared to the other neutral and positive messages. Assuming that a) messages of very unsatisfied customers represent only a fraction of all messages and b) we can filter out unsatisfied messages in an automated way, customer support should be able to reach this goal. To filter out unsatisfied messages in an automated way, we plan on applying natural language processing technologies. The first step is to map our use case - filtering out unsatisfied messages - to a machine learning task. The tasks page on the Hugging Face Hub is a great place to get started to see which task best fits a given scenario. Each task has a detailed description and potential use cases. The task of finding messages of the most unsatisfied customers can be modeled as a text classification task: classify a message into one of the following 5 categories: very unsatisfied, unsatisfied, neutral, satisfied, or very satisfied.
Finding suitable datasets
Having decided on the task, next, we should find the data the model will be trained on. This is usually more important for the performance of your use case than picking the right model architecture. Keep in mind that a model is only as good as the data it has been trained on. Thus, we should be very careful when curating and/or selecting the dataset. Since we consider the hypothetical use case of filtering out unsatisfied messages, let's look into what datasets are available. For your real-world use case, it is very likely that you have internal data that best represents the actual data your NLP system is supposed to handle. Therefore, you should use such internal data to train your NLP system. It can nevertheless be helpful to also include publicly available data to improve the generalizability of your model. Let's take a look at all available Datasets on the Hugging Face Hub. On the left side, you can filter the datasets according to Task Categories as well as Tasks, which are more specific. Our use case corresponds to Text Classification -> Sentiment Analysis, so let's select these filters. We are left with ca. 80 datasets at the time of writing this notebook. Two aspects should be evaluated when picking a dataset:
Quality: Is the dataset of high quality? More specifically: Does the data correspond to the data you expect to deal with in your use case? Is the data diverse, unbiased, ...?
Size: How big is the dataset?
Usually, one can safely say the bigger the dataset, the better. It's quite tricky to evaluate efficiently whether a dataset is of high quality, and it's even more challenging to know whether and how the dataset is biased. An efficient and reasonable heuristic for high quality is to look at the download statistics: the more downloads, the more usage, and the higher the chance that the dataset is of high quality. The size is easy to evaluate as it can usually be quickly looked up. Let's take a look at the most downloaded datasets:
Glue
Amazon polarity
Tweet eval
Yelp review full
Amazon reviews multi
Now we can inspect those datasets in more detail by reading through the dataset card, which ideally should give all relevant and important information. In addition, the dataset viewer is an incredibly powerful tool to inspect whether the data suits your use case. Let's quickly go over the dataset cards of the datasets above:
GLUE is a collection of small datasets that primarily serve to compare new model architectures for researchers. The datasets are too small and don't correspond enough to our use case.
Amazon polarity is a huge and well-suited dataset for customer feedback since the data deals with customer reviews. However, it only has binary labels (positive/negative), whereas we are looking for more granularity in the sentiment classification.
Tweet eval uses different emojis as labels that cannot easily be mapped to a scale going from unsatisfied to satisfied.
Amazon reviews multi seems to be the most suitable dataset here. We have sentiment labels ranging from 1-5, corresponding to 1-5 stars on Amazon. These labels can be mapped to very unsatisfied, unsatisfied, neutral, satisfied, very satisfied. We have inspected some examples on the dataset viewer to verify that the reviews look very similar to actual customer feedback reviews, so this seems like a very good dataset. In addition, each review has a product_category label, so we could even go as far as to only use reviews of a product category corresponding to the one we are working in. The dataset is multi-lingual, but we are just interested in the English version for now.
Yelp review full looks like a very suitable dataset. It's large and contains product reviews and sentiment labels from 1 to 5. Sadly, the dataset viewer is not working here, and the dataset card is also relatively sparse, requiring some more time to inspect the dataset. At this point, we should read the paper, but given the time constraint of this blog post, we'll choose to go for Amazon reviews multi.
In conclusion, let's focus on the Amazon reviews multi dataset, considering all training examples.
As a final note, we recommend making use of the Hub's dataset functionality even when working with private datasets. The Hugging Face Hub, Transformers, and Datasets are flawlessly integrated, which makes it trivial to use them in combination when training models. In addition, the Hugging Face Hub offers:
A dataset viewer for every dataset
Easy demoing of every model using widgets
Private and Public models
Git version control for repositories
Highest security mechanisms
Finding a suitable model
Having decided on the task and the dataset that best describes our use case, we can now look into choosing a model to be used. Most likely, you will have to fine-tune a pretrained model for your own use case, but it is worth checking whether the hub already has suitable fine-tuned models.
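One way to run that check programmatically is with the huggingface_hub client; the sketch below is only an illustration and assumes that its list_models helper accepts these search and sort arguments:
from huggingface_hub import list_models

# list the most downloaded public text-classification models that mention the dataset name
for model in list_models(search="amazon_reviews_multi",
                         filter="text-classification",
                         sort="downloads",
                         direction=-1,
                         limit=5):
    print(model.modelId)
The same filters are also available in the Hub's web interface, which is what we use below.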
In this case, you might reach a higher performance by just continuing to fine-tune such a model on your dataset.Let's take a look at all models that have been fine-tuned on Amazon Reviews Multi. You can find the list of models on the bottom right corner - clicking on Browse models trained on this dataset you can see a list of all models fine-tuned on the dataset that are publicly available. Note that we are only interested in the English version of the dataset because our customer feedback will only be in English. Most of the most downloaded models are trained on the multi-lingual version of the dataset and those that don't seem to be multi-lingual have very little information or poor performance. At this point,it might be more sensible to fine-tune a purely pretrained model instead of using one of the already fine-tuned ones shown in the link above.Alright, the next step now is to find a suitable pretrained model to be used for fine-tuning. This is actually more difficult than it seems given the large amount of pretrained and fine-tuned models that are on the Hugging Face Hub. The best option is usually to simply try out a variety of different models to see which one performs best.We still haven't found the perfect way of comparing different model checkpoints to each other at Hugging Face, but we provide some resources that are worth looking into:The model summary gives a short overview of different model architectures.A task-specific search on the Hugging Face Hub, e.g. a search on text-classification models, shows you the most downloaded checkpoints which is also an indication of how well those checkpoints perform.However, both of the above resources are currently suboptimal. The model summary is not always kept up to date by the authors. The speed at which new model architectures are released and old model architectures become outdated makes it extremely difficult to have an up-to-date summary of all model architectures.Similarly, it doesn't necessarily mean that the most downloaded model checkpoint is the best one. E.g. bert-base-cased is amongst the most downloaded model checkpoints but is not the best performing checkpoint anymore.The best approach is to try out various model architectures, stay up to date with new model architectures by following experts in the field, and check well-known leaderboards.For text-classification, the important benchmarks to look at are GLUE and SuperGLUE. Both benchmarks evaluate pretrained models on a variety of text-classification tasks, such as grammatical correctness, natural language inference, Yes/No question answering, etc..., which are quite similar to our target task of sentiment analysis. Thus, it is reasonable to choose one of the leading models of these benchmarks for our task.At the time of writing this blog post, the best performing models are very large models containing more than 10 billion parameters most of which are not open-sourced, e.g. ST-MoE-32B, Turing NLR v5, orERNIE 3.0. One of the top-ranking models that is easily accessible is DeBERTa. Therefore, let's try out DeBERTa's newest base version - i.e. microsoft/deberta-v3-base.Training / Fine-tuning a model with 🤗 Transformers and 🤗 DatasetsIn this section, we will jump into the technical details of how tofine-tune a model end-to-end to be able to automatically filter out very unsatisfied customer feedback messages.Cool! 
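Before setting up the environment, here is a rough programmatic version of the two Hub searches discussed above: models fine-tuned on the dataset, and the most-downloaded text-classification checkpoints. It is a sketch only; the tag-style filter strings and the sort/limit arguments are assumptions about a recent huggingface_hub release.

```python
from huggingface_hub import HfApi

api = HfApi()

# Models that advertise the dataset in their tags, i.e. models fine-tuned on it.
finetuned = api.list_models(
    filter="dataset:amazon_reviews_multi", sort="downloads", direction=-1, limit=5
)
for model in finetuned:
    print("fine-tuned on the dataset:", model.modelId)

# Most-downloaded checkpoints carrying the text-classification pipeline tag.
popular = api.list_models(
    filter="text-classification", sort="downloads", direction=-1, limit=5
)
for model in popular:
    print("popular text-classification checkpoint:", model.modelId)
```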
Let's start by installing all necessary pip packages and setting up our code environment, then look into preprocessing the dataset, and finally start training the model.The following notebook can be run online in a google colab pro with the GPU runtime environment enabled.Install all necessary packagesTo begin with, let's install git-lfs so that we can automatically upload our trained checkpoints to the Hub during training.apt install git-lfsAlso, we install the 🤗 Transformers and 🤗 Datasets libraries to run this notebook. Since we will be using DeBERTa in this blog post, we also need to install the sentencepiece library for its tokenizer.pip install datasets transformers[sentencepiece]Next, let's login into our Hugging Face account so that models are uploaded correctly under your name tag.from huggingface_hub import notebook_loginnotebook_login()Output:Login successfulYour token has been saved to /root/.huggingface/tokenAuthenticated through git-credential store but this isn't the helper defined on your machine.You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the defaultgit config --global credential.helper storePreprocess the datasetBefore we can start training the model, we should bring the dataset in a formatthat is understandable by the model.Thankfully, the 🤗 Datasets library makes this extremely easy as you will see in the following cells.The load_dataset function loads the dataset, nicely arranges it into predefined attributes, such as review_body and stars, and finally saves the newly arranged data using the arrow format on disk.The arrow format allows for fast and memory-efficient data reading and writing.Let's load and prepare the English version of the amazon_reviews_multi dataset.from datasets import load_datasetamazon_review = load_dataset("amazon_reviews_multi", "en")Output:Downloading and preparing dataset amazon_reviews_multi/en (download: 82.11 MiB, generated: 58.69 MiB, post-processed: Unknown size, total: 140.79 MiB) to /root/.cache/huggingface/datasets/amazon_reviews_multi/en/1.0.0/724e94f4b0c6c405ce7e476a6c5ef4f87db30799ad49f765094cf9770e0f7609...Dataset amazon_reviews_multi downloaded and prepared to /root/.cache/huggingface/datasets/amazon_reviews_multi/en/1.0.0/724e94f4b0c6c405ce7e476a6c5ef4f87db30799ad49f765094cf9770e0f7609. Subsequent calls will reuse this data.Great, that was fast 🔥. Let's take a look at the structure of the dataset.print(amazon_review)Output:{.output .execute_result execution_count="5"}DatasetDict({train: Dataset({features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'],num_rows: 200000})validation: Dataset({features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'],num_rows: 5000})test: Dataset({features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'],num_rows: 5000})})We have 200,000 training examples as well as 5000 validation and test examples. This sounds reasonable for training! 
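For reference, here are the environment setup and data loading steps above gathered into a single copy-pasteable notebook cell. This is a restatement of the commands already shown; the leading "!" lines are notebook shell escapes (drop the "!" in a terminal).

```python
# Shell setup (run once per environment):
# !apt install git-lfs
# !pip install datasets transformers[sentencepiece]

from huggingface_hub import notebook_login
from datasets import load_dataset

# Log in so that checkpoints pushed during training land under your namespace.
notebook_login()

# Load the English subset; 🤗 Datasets caches it on disk in the arrow format.
amazon_review = load_dataset("amazon_reviews_multi", "en")
print(amazon_review)
```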
We're only really interested in the input being the "review_body" column and the target being the "stars" column. Let's check out a random example.

random_id = 34
print("Stars:", amazon_review["train"][random_id]["stars"])
print("Review:", amazon_review["train"][random_id]["review_body"])

Output:
Stars: 1
Review: This product caused severe burning of my skin. I have used other brands with no problems

The dataset is in a human-readable format, but now we need to transform it into a "machine-readable" format. Let's define the model repository which includes all utils necessary to preprocess and fine-tune the checkpoint we decided on.

model_repository = "microsoft/deberta-v3-base"

Next, we load the tokenizer of the model repository, which is DeBERTa's tokenizer.

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_repository)

As mentioned before, we will use the "review_body" as the model's input and "stars" as the model's target. Next, we make use of the tokenizer to transform the input into a sequence of token ids that can be understood by the model. The tokenizer does exactly this and can also help you limit your input data to a certain length to not run into a memory issue. Here, we limit the maximum length to 128 tokens, which in the case of DeBERTa corresponds to roughly 100 words, which in turn corresponds to ca. 5-7 sentences. Looking at the dataset viewer again, we can see that this covers pretty much all training examples. Important: This doesn't mean that our model cannot handle longer input sequences, it just means that we use a maximum length of 128 for training since it covers 99% of our training examples and we don't want to waste memory. Transformer models have been shown to be very good at generalizing to longer sequences after training. If you want to learn more about tokenization in general, please have a look at the Tokenizers docs. The labels are easy to transform as they already correspond to numbers in their raw form, i.e. the range from 1 to 5. Here we just shift the labels into the range 0 to 4 since indexes usually start at 0. Great, let's pour our thoughts into some code. We will define a preprocess_function that we'll apply to each data sample.

def preprocess_function(example):
    output_dict = tokenizer(example["review_body"], max_length=128, truncation=True)
    output_dict["labels"] = [e - 1 for e in example["stars"]]
    return output_dict

To apply this function to all data samples in our dataset, we use the map method of the amazon_review object we created earlier. This will apply the function on all the elements of all the splits in amazon_review, so our training, validation, and testing data will be preprocessed in one single command.
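Before running the map, here is a quick way to double-check the 128-token budget claimed above. This is a sketch that assumes the tokenizer and amazon_review objects from earlier; the exact numbers will vary, and the tokenizer may warn about the handful of reviews that exceed the model's maximum length, which is harmless here.

```python
import numpy as np

# Tokenize a random sample of 1,000 reviews without truncation and look at the lengths.
sample = amazon_review["train"].shuffle(seed=42).select(range(1000))
lengths = [len(tokenizer(text)["input_ids"]) for text in sample["review_body"]]

print("median length in tokens:", int(np.median(lengths)))
print("share of reviews within 128 tokens:", float(np.mean(np.array(lengths) <= 128)))
```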
We run the mapping function in batched=True mode to speed up the process and also remove all columns since we don't need them anymore for training.tokenized_datasets = amazon_review.map(preprocess_function, batched=True, remove_columns=amazon_review["train"].column_names)Let's take a look at the new structure.tokenized_datasetsOutput:DatasetDict({train: Dataset({features: ['input_ids', 'token_type_ids', 'attention_mask', 'labels'],num_rows: 200000})validation: Dataset({features: ['input_ids', 'token_type_ids', 'attention_mask', 'labels'],num_rows: 5000})test: Dataset({features: ['input_ids', 'token_type_ids', 'attention_mask', 'labels'],num_rows: 5000})})We can see that the outer layer of the structure stayed the same but the naming of the columns has changed.Let's take a look at the same random example we looked at previously only that it's preprocessed now.print("Input IDS:", tokenized_datasets["train"][random_id]["input_ids"])print("Labels:", tokenized_datasets["train"][random_id]["labels"])Output:Input IDS: [1, 329, 714, 2044, 3567, 5127, 265, 312, 1158, 260, 273, 286, 427, 340, 3006, 275, 363, 947, 2]Labels: 0Alright, the input text is transformed into a sequence of integers which can be transformed to word embeddings by the model, and the label index is simply shifted by -1.Fine-tune the modelHaving preprocessed the dataset, next we can fine-tune the model. We will make use of the popular Hugging Face Trainer which allows us to start training in just a couple of lines of code. The Trainer can be used for more or less all tasks in PyTorch and is extremely convenient by taking care of a lot of boilerplate code needed for training.Let's start by loading the model checkpoint using the convenient AutoModelForSequenceClassification. Since the checkpoint of the model repository is just a pretrained checkpoint we should define the size of the classification head by passing num_lables=5 (since we have 5 sentiment classes).from transformers import AutoModelForSequenceClassificationmodel = AutoModelForSequenceClassification.from_pretrained(model_repository, num_labels=5)Some weights of the model checkpoint at microsoft/deberta-v3-base were not used when initializing DebertaV2ForSequenceClassification: ['mask_predictions.classifier.bias', 'mask_predictions.LayerNorm.bias', 'mask_predictions.dense.weight', 'mask_predictions.dense.bias', 'mask_predictions.LayerNorm.weight', 'lm_predictions.lm_head.dense.bias', 'lm_predictions.lm_head.bias', 'lm_predictions.lm_head.LayerNorm.weight', 'lm_predictions.lm_head.dense.weight', 'lm_predictions.lm_head.LayerNorm.bias', 'mask_predictions.classifier.weight']- This IS expected if you are initializing DebertaV2ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).- This IS NOT expected if you are initializing DebertaV2ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).Some weights of DebertaV2ForSequenceClassification were not initialized from the model checkpoint at microsoft/deberta-v3-base and are newly initialized: ['pooler.dense.bias', 'classifier.weight', 'classifier.bias', 'pooler.dense.weight']You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.Next, we load a data collator. 
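Before moving on to the data collator, here are the mapping call and the model loading from this section gathered into one runnable block, restating the code above with comments added.

```python
from transformers import AutoModelForSequenceClassification

# Preprocess every split in one go and drop the original text columns.
tokenized_datasets = amazon_review.map(
    preprocess_function,
    batched=True,
    remove_columns=amazon_review["train"].column_names,
)

# Load the pretrained backbone with a freshly initialised 5-way classification head.
model = AutoModelForSequenceClassification.from_pretrained(model_repository, num_labels=5)
```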
A data collator is responsible for making sure each batch is correctly padded during training, which should happen dynamically since training samples are reshuffled before each epoch.

from transformers import DataCollatorWithPadding
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

During training, it is important to monitor the performance of the model on a held-out validation set. To do so, we should define a compute_metrics function and pass it to the Trainer, which is then called at each validation step during training. The simplest metric for the text classification task is accuracy, which simply states what percentage of the samples were correctly classified. However, using the accuracy metric might be problematic if the validation or test data is very unbalanced. Let's verify quickly that this is not the case by counting the occurrences of each label.

from collections import Counter
print("Validation:", Counter(tokenized_datasets["validation"]["labels"]))
print("Test:", Counter(tokenized_datasets["test"]["labels"]))

Output:
Validation: Counter({0: 1000, 1: 1000, 2: 1000, 3: 1000, 4: 1000})
Test: Counter({0: 1000, 1: 1000, 2: 1000, 3: 1000, 4: 1000})

The validation and test data sets are as balanced as they can be, so we can safely use accuracy here! Let's load the accuracy metric via the datasets library.

from datasets import load_metric
accuracy = load_metric("accuracy")

Next, we define the compute_metrics function, which is applied to the model's predicted outputs (an EvalPrediction object) and therefore has access to the model's predictions and the gold labels. We compute the predicted label class by taking the argmax of the model's prediction before passing it alongside the gold labels to the accuracy metric.

import numpy as np

def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_classes = np.argmax(pred_logits, axis=-1)
    labels = np.asarray(pred.label_ids)
    acc = accuracy.compute(predictions=pred_classes, references=labels)
    return {"accuracy": acc["accuracy"]}

Great, now all components required for training are ready and all that's left to do is to define the hyper-parameters of the Trainer. We need to make sure that the model checkpoints are uploaded to the Hugging Face Hub during training. By setting push_to_hub=True, this is done automatically at every save_steps via the convenient push_to_hub method. In addition, we define some standard hyper-parameters such as learning rate, warm-up steps and training epochs. We will log the loss every 500 steps and run evaluation every 5000 steps.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta_amazon_reviews_v1",
    num_train_epochs=2,
    learning_rate=2e-5,
    warmup_steps=200,
    logging_steps=500,
    save_steps=5000,
    eval_steps=5000,
    push_to_hub=True,
    evaluation_strategy="steps",
)

Putting it all together, we can finally instantiate the Trainer by passing all required components. We'll use the "validation" split as the held-out dataset during training.

from transformers import Trainer

trainer = Trainer(
    args=training_args,
    compute_metrics=compute_metrics,
    model=model,
    tokenizer=tokenizer,
    data_collator=data_collator,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
)

The trainer is ready to go 🚀 You can start training by calling trainer.train().

train_metrics = trainer.train().metrics
trainer.save_metrics("train", train_metrics)
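One practical aside before looking at the training logs: since this is a fairly long run on free-tier hardware, it is worth knowing that the Trainer can pick up where it left off from the checkpoints it saves every save_steps. A small sketch; resume_from_checkpoint accepts True (latest checkpoint in output_dir) or an explicit checkpoint path.

```python
# Resume an interrupted run from the most recent checkpoint in output_dir.
trainer.train(resume_from_checkpoint=True)
```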
Output:

***** Running training *****
Num examples = 200000
Num Epochs = 2
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 50000

Output:

| Step  | Training Loss | Validation Loss | Accuracy |
|-------|---------------|-----------------|----------|
| 5000  | 0.931200      | 0.979602        | 0.585600 |
| 10000 | 0.931600      | 0.933607        | 0.597400 |
| 15000 | 0.907600      | 0.917062        | 0.602600 |
| 20000 | 0.902400      | 0.919414        | 0.604600 |
| 25000 | 0.879400      | 0.910928        | 0.608400 |
| 30000 | 0.806700      | 0.933923        | 0.609200 |
| 35000 | 0.826800      | 0.907260        | 0.616200 |
| 40000 | 0.820500      | 0.904160        | 0.615800 |
| 45000 | 0.795000      | 0.918947        | 0.616800 |
| 50000 | 0.783600      | 0.907572        | 0.618400 |

Output:

***** Running Evaluation *****
Num examples = 5000
Batch size = 8
Saving model checkpoint to deberta_amazon_reviews_v1/checkpoint-50000
Configuration saved in deberta_amazon_reviews_v1/checkpoint-50000/config.json
Model weights saved in deberta_amazon_reviews_v1/checkpoint-50000/pytorch_model.bin
tokenizer config file saved in deberta_amazon_reviews_v1/checkpoint-50000/tokenizer_config.json
Special tokens file saved in deberta_amazon_reviews_v1/checkpoint-50000/special_tokens_map.json
added tokens file saved in deberta_amazon_reviews_v1/checkpoint-50000/added_tokens.json
Training completed. Do not forget to share your model on huggingface.co/models =)

Cool, we see that the model seems to learn something! Training loss and validation loss are going down and the accuracy also ends up being well over random chance (20%). Interestingly, we see an accuracy of around 58.6% after only 5000 steps which doesn't improve much anymore afterward. Choosing a bigger model or training for longer would have probably given better results here, but that's good enough for our hypothetical use case! Alright, finally let's upload the model checkpoint to the Hub.

trainer.push_to_hub()

Output:
Saving model checkpoint to deberta_amazon_reviews_v1
Configuration saved in deberta_amazon_reviews_v1/config.json
Model weights saved in deberta_amazon_reviews_v1/pytorch_model.bin
tokenizer config file saved in deberta_amazon_reviews_v1/tokenizer_config.json
Special tokens file saved in deberta_amazon_reviews_v1/special_tokens_map.json
added tokens file saved in deberta_amazon_reviews_v1/added_tokens.json
Several commits (2) will be pushed upstream.
The progress bars may be unreliable.

Evaluate / Analyse the model

Now that we have fine-tuned the model we need to be very careful about analyzing its performance. Note that canonical metrics, such as accuracy, are useful to get a general picture about your model's performance, but it might not be enough to evaluate how well the model performs on your actual use case. The better approach is to find a metric that best describes the actual use case of the model and measure exactly this metric during and after training. Let's dive into evaluating the model 🤿. The model has been uploaded to the Hub under deberta_v3_amazon_reviews after training, so in a first step, let's download it from there again.

from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("patrickvonplaten/deberta_v3_amazon_reviews")

The Trainer is not only an excellent class to train a model, but also to evaluate a model on a dataset.
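As a quick spot check before the full evaluation, the fine-tuned checkpoint can also be queried through a pipeline. This is a sketch: unless id2label was configured on the model, the predicted classes show up as generic LABEL_0 ... LABEL_4 names (0 corresponding to very unsatisfied), and the exact score will differ from run to run.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="patrickvonplaten/deberta_v3_amazon_reviews",
)

# Reuse the very unsatisfied review we inspected earlier.
print(classifier("This product caused severe burning of my skin."))
# Expected: something like [{'label': 'LABEL_0', 'score': ...}]
```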
Let's instantiate the trainer with the same instances and functions as before, but this time there is no need to pass a training dataset.trainer = Trainer(args=training_args,compute_metrics=compute_metrics,model=model,tokenizer=tokenizer,data_collator=data_collator,)We use the Trainer's predict function to evaluate the model on the test dataset on the same metric.prediction_metrics = trainer.predict(tokenized_datasets["test"]).metricsprediction_metricsOutput:***** Running Prediction *****Num examples = 5000Batch size = 8Output:{'test_accuracy': 0.608,'test_loss': 0.9637690186500549,'test_runtime': 21.9574,'test_samples_per_second': 227.714,'test_steps_per_second': 28.464}The results are very similar to performance on the validation dataset, which is usually a good sign as it shows that the model didn't overfit the test dataset.However, 60% accuracy is far from being perfect on a 5-class classification problem, but do we need very high accuracy for all classes?Since we are mostly concerned with very negative customer feedback, let's just focus on how well the model performs on classifying reviews of the most unsatisfied customers. We also decide to help the model a bit - all feedback classified as either very unsatisfied or unsatisfied will be handled by us - to catch close to 99% of the very unsatisfied messages. At the same time, we also measure how many unsatisfied messages we can answer this way and how much unnecessary work we do by answering messages of neutral, satisfied, and very satisfied customers.Great, let's write a new compute_metrics function.import numpy as npdef compute_metrics(pred):pred_logits = pred.predictionspred_classes = np.argmax(pred_logits, axis=-1)labels = np.asarray(pred.label_ids)# First let's compute % of very unsatisfied messages we can catchvery_unsatisfied_label_idx = (labels == 0)very_unsatisfied_pred = pred_classes[very_unsatisfied_label_idx]# Now both 0 and 1 labels are 0 labels the rest is > 0very_unsatisfied_pred = very_unsatisfied_pred * (very_unsatisfied_pred - 1)# Let's count how many labels are 0 -> that's the "very unsatisfied"-accuracytrue_positives = sum(very_unsatisfied_pred == 0) / len(very_unsatisfied_pred)# Second let's compute how many satisfied messages we unnecessarily reply tosatisfied_label_idx = (labels > 1)satisfied_pred = pred_classes[satisfied_label_idx]# how many predictions are labeled as unsatisfied over all satisfied messages?false_positives = sum(satisfied_pred <= 1) / len(satisfied_pred)return {"%_unsatisfied_replied": round(true_positives, 2), "%_satisfied_incorrectly_labels": round(false_positives, 2)}We again instantiate the Trainer to easily run the evaluation.trainer = Trainer(args=training_args,compute_metrics=compute_metrics,model=model,tokenizer=tokenizer,data_collator=data_collator,)And let's run the evaluation again with our new metric computation which is better suited for our use case.prediction_metrics = trainer.predict(tokenized_datasets["test"]).metricsprediction_metricsOutput:***** Running Prediction *****Num examples = 5000Batch size = 8Output:{'test_%_satisfied_incorrectly_labels': 0.11733333333333333,'test_%_unsatisfied_replied': 0.949,'test_loss': 0.9637690186500549,'test_runtime': 22.8964,'test_samples_per_second': 218.375,'test_steps_per_second': 27.297}Cool! This already paints a pretty nice picture. We catch around 95% of very unsatisfied customers automatically at a cost of wasting our efforts on 10% of satisfied messages.Let's do some quick math. 
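The back-of-the-envelope calculation that follows, written out as a tiny script; all numbers are the hypothetical ones used in this example, and the estimate is intentionally rough.

```python
# Rough estimate of the daily workload with the automatic filter in place.
daily_messages = 10_000
very_negative = 500            # expected very unsatisfied messages per day
catch_rate = 0.95              # share of very unsatisfied messages the model flags
false_positive_rate = 0.12     # share of other messages incorrectly flagged

to_review = very_negative + false_positive_rate * daily_messages   # ~1,700 messages to look at
replied = catch_rate * very_negative                                # ~475 messages actually answered
missed = very_negative - replied                                    # ~25 very unsatisfied messages missed
effort_reduction = 1 - to_review / daily_messages                   # ~83% less to read

print(int(to_review), int(replied), int(missed), f"{effort_reduction:.0%}")
```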
We receive daily around 10,000 messages for which we expect ca. 500 to be very negative. Instead of having to answer to all 10,000 messages, using this automatic filtering, we would only need to look into 500 + 0.12 * 10,000 = 1700 messages and only reply to 475 messages while incorrectly missing 5% of the messages. Pretty nice - a 83% reduction in human effort at missing only 5% of very unsatisfied customers!Obviously, the numbers don't represent the gained value of an actual use case, but we could come close to it with enough high-quality training data of your real-world example!Let's save the resultstrainer.save_metrics("prediction", prediction_metrics)and again upload everything on the Hub.trainer.push_to_hub()Output:Saving model checkpoint to deberta_amazon_reviews_v1Configuration saved in deberta_amazon_reviews_v1/config.jsonModel weights saved in deberta_amazon_reviews_v1/pytorch_model.bintokenizer config file saved in deberta_amazon_reviews_v1/tokenizer_config.jsonSpecial tokens file saved in deberta_amazon_reviews_v1/special_tokens_map.jsonadded tokens file saved in deberta_amazon_reviews_v1/added_tokens.jsonTo https://huggingface.co/patrickvonplaten/deberta_amazon_reviews_v1599b891..ad77e6d main -> mainDropping the following result as it does not have all the necessary fields:{'task': {'name': 'Text Classification', 'type': 'text-classification'}}To https://huggingface.co/patrickvonplaten/deberta_amazon_reviews_v1ad77e6d..13e5ddd main -> mainThe data is now saved here.That's it for today 😎. As a final step, it would also make a lot of sense to try the model out on actual real-world data. This can be done directly on the inference widget on the model card:It does seem to generalize quite well to real-world data 🔥OptimizationAs soon as you think the model's performance is good enough for production it's all about making the model as memory efficient and fast as possible.There are some obvious solutions to this like choosing the best suited accelerated hardware, e.g. better GPUs, making sure no gradients are computed during the forward pass, or lowering the precision, e.g. to float16.More advanced optimization methods include using open-source accelerator libraries such as ONNX Runtime, quantization, and inference servers like Triton.At Hugging Face, we have been working a lot to facilitate the optimization of models, especially with our open-source Optimum library. Optimum makes it extremely simple to optimize most 🤗 Transformers models.If you're looking for highly optimized solutions which don't require any technical knowledge, you might be interested in the Inference API, a plug & play solution to serve in production a wide variety of machine learning tasks, including sentiment analysis.Moreover, if you are searching for support for your custom use cases, Hugging Face's team of experts can help accelerate your ML projects! Our team answer questions and find solutions as needed in your machine learning journey from research to production. Visit hf.co/support to learn more and request a quote.
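To make the Optimum route mentioned above a bit more concrete, here is a minimal sketch of exporting the fine-tuned checkpoint to ONNX and serving it through ONNX Runtime. It assumes a recent optimum[onnxruntime] release where from_pretrained accepts export=True (older releases used from_transformers=True instead), so treat it as illustrative rather than canonical.

```python
# Requires: pip install optimum[onnxruntime]
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "patrickvonplaten/deberta_v3_amazon_reviews"

# Export the PyTorch checkpoint to ONNX and load it with ONNX Runtime.
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The ONNX model plugs into the usual pipeline API.
onnx_classifier = pipeline("text-classification", model=ort_model, tokenizer=tokenizer)
print(onnx_classifier("Packaging was damaged and the product stopped working after a day."))
```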
https://huggingface.co/blog/carbon-emissions-on-the-hub
CO2 Emissions and the 🤗 Hub: Leading the Charge
Sasha Luccioni, Zachary Mueller, Nate Raw
April 22, 2022
What are CO2 Emissions and why are they important?

Climate change is one of the greatest challenges that we are facing and reducing emissions of greenhouse gases such as carbon dioxide (CO2) is an important part of tackling this problem. Training and deploying machine learning models will emit CO2 due to the energy usage of the computing infrastructures that are used: from GPUs to storage, it all needs energy to function and emits CO2 in the process.

Pictured: Recent Transformer models and their carbon footprints

The amount of CO2 emitted depends on different factors such as runtime, hardware used, and carbon intensity of the energy source. Using the tools described below will help you both track and report your own emissions (which is important to improve the transparency of our field as a whole!) and choose models based on their carbon footprint.

How to calculate your own CO2 Emissions automatically with Transformers

Before we begin, if you do not have the latest version of the huggingface_hub library on your system, please run the following:

pip install huggingface_hub -U

How to find low-emission models using the Hugging Face Hub

With the model now uploaded to the Hub, how can you search for models on the Hub while trying to be eco-friendly? Well, the huggingface_hub library has a new special parameter to perform this search: emissions_thresholds. All you need to do is specify a minimum or maximum number of grams, and all models that fall within that range will be returned. For example, we can search for all models that emitted a maximum of 100 grams of CO2 during training:

from huggingface_hub import HfApi
api = HfApi()
models = api.list_models(emissions_thresholds=(None, 100), cardData=True)
len(models)
>>> 191

There were quite a few! This also helps to find smaller models, given they typically did not release as much carbon during training. We can look at one up close to see that it does fit our threshold:

model = models[0]
print(f'Model Name: {model.modelId}\nCO2 Emitted during training: {model.cardData["co2_eq_emissions"]}')
>>> Model Name: esiebomajeremiah/autonlp-email-classification-657119381
CO2 Emitted during training: 3.516233232503715

Similarly, we can search for a minimum value to find very large models that emitted a lot of CO2 during training:

models = api.list_models(emissions_thresholds=(500, None), cardData=True)
len(models)
>>> 10

Now let's see exactly how much CO2 one of these emitted:

model = models[0]
print(f'Model Name: {model.modelId}\nCO2 Emitted during training: {model.cardData["co2_eq_emissions"]}')
>>> Model Name: Maltehb/aelaectra-danish-electra-small-cased
CO2 Emitted during training: 4009.5

That's a lot of CO2! As you can see, in just a few lines of code we can quickly vet models we may want to use to make sure we're being environmentally cognizant!
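Building on the query above, one way to shortlist the greenest candidates is to sort the returned models by the figure reported in their card data. A small sketch; depending on the model card format, co2_eq_emissions can be a bare number (in grams) or a nested dictionary, hence the defensive helper.

```python
from huggingface_hub import HfApi

api = HfApi()
models = list(api.list_models(emissions_thresholds=(None, 100), cardData=True))

def reported_emissions(model):
    # co2_eq_emissions may be stored as a number or as a dict with an "emissions" key.
    value = model.cardData["co2_eq_emissions"]
    return value["emissions"] if isinstance(value, dict) else value

for model in sorted(models, key=reported_emissions)[:5]:
    print(f"{model.modelId}: {reported_emissions(model)} g CO2eq")
```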
How to Report Your Carbon Emissions with transformers

If you're using transformers, you can automatically track and report carbon emissions thanks to the codecarbon integration. If you've installed codecarbon on your machine, the Trainer object will automatically add the CodeCarbonCallback while training, which will store carbon emissions data for you as you train. So, if you run something like this...

from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

ds = load_dataset("imdb")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

small_train_dataset = ds["train"].shuffle(seed=42).select(range(1000)).map(tokenize_function, batched=True)
small_eval_dataset = ds["test"].shuffle(seed=42).select(range(1000)).map(tokenize_function, batched=True)

training_args = TrainingArguments(
    "codecarbon-text-classification",
    num_train_epochs=4,
    push_to_hub=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
)

trainer.train()

...you'll be left with a file within the codecarbon-text-classification directory called emissions.csv. This file will keep track of the carbon emissions across different training runs. Then, when you're ready, you can take the emissions from the run you used to train your final model and include that in its model card. 📝 An example of this data being included at the top of the model card is shown below:

For more references on the metadata format for co2_eq_emissions see the hub docs.

Further readings
Rolnick et al. (2019) - Tackling Climate Change with Machine Learning
Strubell et al. (2019) - Energy and Policy Considerations for Deep Learning in NLP
Schwartz et al. (2020) - Green AI
https://huggingface.co/blog/lewis-tunstall-interview
Machine Learning Experts - Lewis Tunstall
Britney Muller
April 13, 2022
🤗 Welcome to Machine Learning Experts - Lewis TunstallHey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller and today’s guest is Lewis Tunstall. Lewis is a Machine Learning Engineer at Hugging Face where he works on applying Transformers to automate business processes and solve MLOps challenges.Lewis has built ML applications for startups and enterprises in the domains of NLP, topological data analysis, and time series. You’ll hear Lewis talk about his new book, transformers, large scale model evaluation, how he’s helping ML engineers optimize for faster latency and higher throughput, and more.In a previous life, Lewis was a theoretical physicist and outside of work loves to play guitar, go trail running, and contribute to open-source projects.Very excited to introduce this fun and brilliant episode to you! Here’s my conversation with Lewis Tunstall:Note: Transcription has been slightly modified/reformatted to deliver the highest-quality reading experience.Welcome, Lewis! Thank you so much for taking time out of your busy schedule to chat with me today about your awesome work!Lewis: Thanks, Britney. It’s a pleasure to be here.Curious if you can do a brief self-introduction and highlight what brought you to Hugging Face?Lewis: What brought me to Hugging Face was transformers. In 2018, I was working with transformers at a startup in Switzerland. My first project was a question answering task where you input some text and train a model to try and find the answer to a question within that text.In those days the library was called: pytorch-pretrained-bert, it was a very focused code base with a couple of scripts and it was the first time I worked with transformers. I had no idea what was going on so I read the original ‘Attention Is All You Need’ paper but I couldn’t understand it. So I started looking around for other resources to learn from. In the process, Hugging Face exploded with their library growing into many architectures and I got really excited about contributing to open-source software. So around 2019, I had this kinda crazy idea to write a book about transformers because I felt there was an information gap that was missing. So I partnered up with my friend, Leandro (von Werra) and we sent Thom (Wolf) a cold email out of nowhere saying, “Hey we are going to write a book about transformers, are you interested?” and I was expecting no response. But to our great surprise, he responded “Yea, sure let’s have a chat.” and around 1.5 years later this is our book: NLP with Transformers.This collaboration set the seeds for Leandro and I to eventually join Hugging Face. And I've been here now for around nine months. That is incredible. How does it feel to have a copy of your book in your hands?Lewis: I have to say, I just became a parent about a year and a half ago and it feels kind of similar to my son being born. You're holding this thing that you created. It's quite an exciting feeling and so different to actually hold it (compared to reading a PDF). Confirms that it’s actually real and I didn't just dream about it.Exactly. Congratulations!Want to briefly read one endorsement that I love about this book;“_Complexity made simple. This is a rare and precious book about NLP, transformers, and the growing ecosystem around them, Hugging Face. 
Whether these are still buzzwords to you or you already have a solid grasp of it all, the authors will navigate you with humor, scientific rigor, and plenty of code examples into the deepest secrets of the coolest technology around. From “off-the-shelf pre-trained” to “from-scratch custom” models, and from performance to missing labels issues, the authors address practically every real-life struggle of an ML engineer and provide state-of-the-art solutions, making this book destined to dictate the standards in the field for years to come._” —Luca Perrozi Ph.D., Data Science and Machine Learning Associate Manager at Accenture.Checkout Natural Language Processing with Transformers.Can you talk about the work you've done with the transformers library?Lewis: One of the things that I experienced in my previous jobs before Hugging Face was there's this challenge in the industry when deploying these models into production; these models are really large in terms of the number of parameters and this adds a lot of complexity to the requirements you might have. So for example, if you're trying to build a chatbot you need this model to be very fast and responsive. And most of the time these models are a bit too slow if you just take an off-the-shelf model, train it, and then try to integrate it into your application. So what I've been working on for the last few months on the transformers library is providing the functionality to export these models into a format that lets you run them much more efficiently using tools that we have at Hugging Face, but also just general tools in the open-source ecosystem.In a way, the philosophy of the transformers library is like writing lots of code so that the users don't have to write that code.In this particular example, what we're talking about is something called the ONNX format. It's a special format that is used in industry where you can basically have a model that's written in PyTorch but you can then convert it to TensorFlow or you can run it on some very dedicated hardware.And if you actually look at what's needed to make this conversion happen in the transformers library, it's fairly gnarly. But we make it so that you only really have to run one line of code and the library will take care of you. So the idea is that this particular feature lets machine learning engineers or even data scientists take their model, convert it to this format, and then optimize it to get faster latency and higher throughput. That's very cool. Have there been, any standout applications of transformers?Lewis: I think there are a few. One is maybe emotional or personal, for example many of us when OpenAI released GPT-2, this very famous language model which can generate text.OpenAI actually provided in their blog posts some examples of the essays that this model had created. And one of them was really funny. One was an essay about why we shouldn't recycle or why recycling is bad.And the model wrote a compelling essay on why recycling was bad. Leandro and I were working at a startup at the time and I printed it out and stuck it right above the recycling bin in the office as a joke. And people were like, “Woah, who wrote this?” and I said, “An algorithm.”I think there's something sort of strangely human, right? Where if we see generated text we get more surprised when it looks like something I (or another human) might have written versus other applications that have been happening like classifying text or more conventional tasks.That's incredible. 
I remember when they released those examples for GPT-2, and one of my favorites (that almost gave me this sense of, whew, we're not quite there yet) were some of the more inaccurate mentions like “underwater fires”.Lewis: Exactly!Britney: But, then something had happened with an oil spill that next year, where there were actually fires underwater! And I immediately thought about that text and thought, maybe AI is onto something already that we're not quite aware of?You and other experts at Hugging Face have been working hard on the Hugging Face Course. How did that come about & where is it headed?Lewis: When I joined Hugging Face, Sylvian and Lysandre, two of the core maintainers of the transformers library, were developing a course to basically bridge the gap between people who are more like software engineers who are curious about natural language processing but specifically curious about the transformers revolution that's been happening. So I worked with them and others in the open-source team to create a free course called the Hugging Face Course. And this course is designed to really help people go from knowing kind of not so much about ML all the way through to having the ability to train models on many different tasks.And, we've released two parts of this course and planning to release the third part this year. I'm really excited about the next part that we're developing right now where we're going to explore different modalities where transformers are really powerful. Most of the time we think of transformers for NLP, but likely there's been this explosion where transformers are being used in things like audio or in computer vision and we're going to be looking at these in detail. What are some transformers applications that you're excited about?Lewis: So one that's kind of fun is in the course we had an event last year where we got people in the community to use the course material to build applications.And one of the participants in this event created a cover letter generator for jobs. So the idea is that when you apply for a job there's always this annoying thing you have to write a cover letter and it's always like a bit like you have to be witty. So this guy created a cover letter generator where you provide some information about yourself and then it generates it from that.And he actually used that to apply to Hugging Face.No way?!Lewis: He's joining the Big Science team as an intern. So. I mean this is a super cool thing, right? When you learn something and then use that thing to apply which I thought was pretty awesome. Where do you want to see more ML applications?Lewis: So I think personally, the area that I'm most excited about is the application of machine learning into natural sciences. And that's partly because of my background. I used to be a Physicist in a previous lifetime but I think what's also very exciting here is that in a lot of fields. For example, in physics or chemistry you already know what the say underlying laws are in terms of equations that you can write down but it turns out that many of the problems that you're interested in studying often require a simulation. Or they often require very hardcore supercomputers to understand and solve these equations. 
And one of the most exciting things to me is the combination of deep learning with the prior knowledge that scientists have gathered to make breakthroughs that weren't previously possible.And I think a great example is DeepMind’s Alpha Fold model for protein structure prediction where they were basically using a combination of transformers with some extra information to generate predictions of proteins that I think previously were taking on the order of months and now they can do them in days.So this accelerates the whole field in a really powerful way. And I can imagine these applications ultimately lead to hopefully a better future for humanity.How you see the world of model evaluation evolving?Lewis: That's a great question. So at Hugging Face, one of the things I've been working on has been trying to build the infrastructure and the tooling that enables what we call 'large-scale evaluation'. So you may know that the Hugging Face Hub has thousands of models and datasets. But if you're trying to navigate this space you might ask yourself, 'I'm interested in question answering and want to know what the top 10 models on this particular task are'.And at the moment, it's hard to find the answer to that, not just on the Hub, but in general in the space of machine learning this is quite hard. You often have to read papers and then you have to take those models and test them yourself manually and that's very slow and inefficient.So one thing that we've been working on is to develop a way that you can evaluate models and datasets directly through the Hub. We're still trying to experiment there with the direction. But I'm hoping that we have something cool to show later this year. And there's another side to this which is that a large part of the measuring progress in machine learning is through the use of benchmarks. These benchmarks are traditionally a set of datasets with some tasks but what's been maybe missing is that a lot of researchers speak to us and say, “Hey, I've got this cool idea for a benchmark, but I don't really want to implement all of the nitty-gritty infrastructure for the submissions, and the maintenance, and all those things.”And so we've been working with some really cool partners on hosting benchmarks on the Hub directly. So that then people in the research community can use the tooling that we have and then simplify the evaluation of these models. That is super interesting and powerful.Lewis: Maybe one thing to mention is that the whole evaluation question is a very subtle one. We know from previous benchmarks, such as SQuAD, a famous benchmark to measure how good models are at question answering, that many of these transformer models are good at taking shortcuts.Well, that's the aim but it turns out that many of these transformer models are really good at taking shortcuts. So, what they’re actually doing is they're getting a very high score on a benchmark which doesn't necessarily translate into the actual thing you were interested in which was answering questions.And you have all these subtle failure modes where the models will maybe provide completely wrong answers or they should not even answer at all. And so at the moment in the research community there's a very active and vigorous discussion about what role benchmarks play in the way we measure progress.But also, how do these benchmarks encode our values as a community? 
And one thing that I think Hugging Face can really offer the community here is the means to diversify the space of values because traditionally most of these research papers come from the U.S. which is a great country but it's a small slice of the human experience, right?What are some common mistakes machine learning engineers or teams make?Lewis: I can maybe tell you the ones that I've done.Probably a good representative of the rest of the things. So I think the biggest lesson I learned when I was starting out in the field is using baseline models when starting out. It’s a common problem that I did and then later saw other junior engineers doing is reaching for the fanciest state-of-the-art model.Although that may work, a lot of the time what happens is you introduce a lot of complexity into the problem and your state-of-the-art model may have a bug and you won't really know how to fix it because the model is so complex. It’s a very common pattern in industry and especially within NLP is that you can actually get quite far with regular expressions and linear models like logistic regression and these kinds of things will give you a good start. Then if you can build a better model then great, you should do that, but it's great to have a reference point. And then I think the second big lesson I’ve learned from building a lot of projects is that you can get a bit obsessed with the modeling part of the problem because that's the exciting bit when you're doing machine learning but there's this whole ecosystem. Especially if you work in a large company there'll be this whole ecosystem of services and things that are around your application. So the lesson there is you should really try to build something end to end that maybe doesn't even have any machine learning at all. But it's the scaffolding upon which you can build the rest of the system because you could spend all this time training an awesome mode, and then you go, oh, oops.It doesn't integrate with the requirements we have in our application. And then you've wasted all this time. That's a good one! Don't over-engineer. Something I always try to keep in mind.Lewis: Exactly. And it's a natural thing I think as humans especially if you're nerdy you really want to find the most interesting way to do something and most of the time simple is better.If you could go back and do one thing differently at the beginning of your career in machine learning, what would it be?Lewis: Oh, wow. That's a tough one. Hmm. So, the reason this is a really hard question to answer is that now that I’m working at Hugging Face, it's the most fulfilling type of work that I've really done in my whole life. And the question is if I changed something when I started out maybe I wouldn't be here, right? It's one of those things where it's a tricky one in that sense. I suppose one thing that maybe I would've done slightly differently is when I started out working as a data scientist you tend to develop the skills which are about mapping business problems to software problems or ultimately machine learning problems.And this is a really great skill to have. But what I later discovered is that my true driving passion is doing open source software development. So probably the thing I would have done differently would have been to start that much earlier. Because at the end of the day most open source is really driven by community members.So that would have been maybe a way to shortcut my path to doing this full-time. 
I love the idea of had you done something differently maybe you wouldn't be at Hugging Face.Lewis: It’s like the butterfly effect movie, right? You go back in time and then you don't have any legs or something.Totally. Don't want to mess with a good thing!Lewis: Exactly.Rapid Fire Questions:Best piece of advice for someone looking to get into AI/Machine Learning?Lewis: Just start. Just start coding. Just start contributing if you want to do open-source. You can always find reasons not to do it but you just have to get your hands dirty.What are some of the industries you're most excited to see machine learning applied?Lewis: As I mentioned before, I think the natural sciences is the area I’m most excited aboutThis is where I think that's most exciting. If we look at something, say at the industrial side, I guess some of the development of new drugs through machine learning is very exciting. Personally, I'd be really happy if there were advancements in robotics where I could finally have a robot to like fold my laundry because I really hate doing this and it would be nice if like there was an automated way of handling that. Should people be afraid of AI taking over the world?Lewis: Maybe. It’s a tough one because I think we have reasons to think that we may create systems that are quite dangerous in the sense that they could be used to cause a lot of harm. An analogy is perhaps with weapons you can use within the sports like archery and shooting, but you can also use them for war. One big risk is probably if we think about combining these techniques with the military perhaps this leads to some tricky situations.But, I'm not super worried about the Terminator. I'm more worried about, I don't know, a rogue agent on the financial stock market bankrupting the whole world. That's a good point.Lewis: Sorry, that's a bit dark.No, that was great. The next question is a follow-up on your folding laundry robot. When will AI-assisted robots be in homes everywhere?Lewis: Honest answer. I don't know. Everyone, I know who's working on robotics says this is still an extremely difficult task in the sense that robotics hasn't quite experienced the same kind of revolutions that NLP and deep learning have had. But on the other hand, you can see some pretty exciting developments in the last year, especially around the idea of being able to transfer knowledge from a simulation into the real world.I think there's hope that in my lifetime I will have a laundry-folding robot.What have you been interested in lately? It could be a movie, a recipe, a podcast, literally anything. And I'm just curious what that is and how someone interested in that might find it or get started.Lewis: It's a great question. So for me, I like podcasts in general. It’s my new way of reading books because I have a young baby so I'm just doing chores and listening at the same time. One podcast that really stands out recently is actually the DeepMind podcast produced by Hannah Fry who's a mathematician in the UK and she gives this beautiful journey through not just what Deep Mind does, but more generally, what deep learning and especially reinforcement learning does and how they're impacting the world. Listening to this podcast feels like you're listening to like a BBC documentary because you know the English has such great accents and you feel really inspired because a lot of the work that she discusses in this podcast has a strong overlap with what we do at Hugging Face. 
You see this much bigger picture of trying to pave the way for a better future.It resonated strongly. And I just love it because the explanations are super clear and you can share it with your family and your friends and say, “Hey, if you want to know what I'm doing? This can give you a rough idea.”It gives you a very interesting insight into the Deep Mind researchers and their backstory as well.I'm definitely going to give that a listen. [Update: It’s one of my new favorite podcasts. :) Thank you, Lewis!]What are some of your favorite Machine Learning papers?Lewis: Depends on how we measure this, but there's one paper that stands out to me, which is quite an old paper. It’s by the creator of random forests, Leo Breiman. Random forests is a very famous classic machine learning technique that's useful for tabular data that you see in industry and I had to teach random forests at university a year ago.And I was like, okay, I'll read this paper from the 2000s and see if I understand it. And it's a model of clarity. It's very short, and very clearly explains how the algorithm is implemented. You can basically just take this paper and implement the code very very easily. And that to me was a really nice example of how papers were written in medieval times. Whereas nowadays, most papers, have this formulaic approach of, okay, here's an introduction, here's a table with some numbers that get better, and here's like some random related work section. So, I think that's one that like stands out to me a lot.But another one that's a little bit more recent is a paper by DeepMind again on using machine learning techniques to prove fundamental theorems like algebraic topology, which is a special branch of abstract mathematics. And at one point in my life, I used to work on these related topics.So, to me, it's a very exciting, perspective of augmenting the knowledge that a mathematician would have in trying to narrow down the space of theorems that they might have to search for. I think this to me was surprising because a lot of the time I've been quite skeptical that machine learning will lead to this fundamental scientific insight beyond the obvious ones like making predictions.But this example showed that you can actually be quite creative and help mathematicians find new ideas. What is the meaning of life?Lewis: I think that the honest answer is, I don't know. And probably anyone who does tell you an answer probably is lying. That's a bit sarcastic. I dunno, I guess being a site scientist by training and especially a physicist, you develop this worldview that is very much that there isn't really some sort of deeper meaning to this.It's very much like the universe is quite random and I suppose the only thing you can take from that beyond being very sad is that you derive your own meaning, right? And most of the time this comes either from the work that you do or from the family or from your friends that you have.But I think when you find a way to derive your own meaning and discover what you do is actually interesting and meaningful that that's the best part. Life is very up and down, right? At least for me personally, the things that have always been very meaningful are generally in creating things. So, I used to be a musician, so that was a way of creating music for other people and there was great pleasure in doing that. And now I kind of, I guess, create code which is a form of creativity.Absolutely. I think that's beautiful, Lewis! 
Is there anything else you would like to share or mention before we sign off?Lewis: Maybe buy my book. It is so good!Lewis: [shows book featuring a parrot on the cover] Do you know the story about the parrot? I don't think so.Lewis: So when O’Reilly is telling you “We're going to get our illustrator now to design the cover,” it's a secret, right?They don't tell you what the logic is or you have no say in the matter. So, basically, the illustrator comes up with an idea and in one of the last chapters of the book we have a section where we basically train a GPT-2 like model on Python code, this was Thom's idea, and he decided to call it code parrot.I think the idea or the joke he had was that there's a lot of discussion in the community about this paper that Meg Mitchell and others worked on called, ‘Stochastic Parrots’. And the idea was that you have these very powerful language models which seem to exhibit human-like traits in their writing as we discussed earlier but deep down maybe they're just doing some sort of like parrot parenting thing.You know, if you talk to like a cockatoo it will swear at you or make jokes. That may not be a true measure of intelligence, right? So I think that the illustrator somehow maybe saw that and decided to put a parrot which I think is a perfect metaphor for the book.And the fact that there are transformers in it.Had no idea that that was the way O'Reilly's covers came about. They don't tell you and just pull context from the book and create something?Lewis: It seems like it. I mean, we don't really know the process. I'm just sort of guessing that maybe the illustrator was trying to get an idea and saw a few animals in the book. In one of the chapters we have a discussion about giraffes and zebras and stuff. But yeah I'm happy with the parrot cover.I love it. Well, it looks absolutely amazing. A lot of these types of books tend to be quite dry and technical and this one reads almost like a novel mixed with great applicable technical information, which is beautiful.Lewis: Thanks. Yeah, that’s one thing we realized afterward because it was the first time we were writing a book we thought we should be sort of serious, right? But if you sort of know me I'm like never really serious about anything. And in hindsight, we should have been even more silly in the book.I had to control my humor in various places but maybe there'll be a second edition one day and then we can just inject it with memes.Please do, I look forward to that!Lewis: In fact, there is one meme in the book. We tried to sneak this in past the Editor and have the DOGE dog inside the book and we use a special vision transformer to try and classify what this meme is.So glad you got that one in there. Well done! Look forward to many more in the next edition. Thank you so much for joining me today. I really appreciate it. Where can our listeners find you online?Lewis: I'm fairly active on Twitter. You can just find me my handle @_lewtun. LinkedIn is a strange place and I'm not really on there very much. And of course, there's Hugging Face, the Hugging Face Forums, and Discord.Perfect. Thank you so much, Lewis. And I'll chat with you soon!Lewis: See ya, Britney. Bye.Thank you for listening to Machine Learning Experts!
https://huggingface.co/blog/habana
Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training
Susan Lansing
April 12, 2022
Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training
https://huggingface.co/blog/transformers-design-philosophy
Don't Repeat Yourself*
Patrick von Platen
April 5, 2022
🤗 Transformers Design Philosophy"Don't repeat yourself", or DRY, is a well-known principle of software development. The principle originates from "The Pragmatic Programmer", one of the most widely read books on code design.The principle's simple message makes obvious sense: Don't rewrite logic that already exists somewhere else. This ensures the code remains in sync, making it easier to maintain and more robust. Any change to this logical pattern will uniformly affect all of its dependencies.At first glance, the design of Hugging Face's Transformers library couldn't be more contrary to the DRY principle. Code for the attention mechanism is more or less copied over 50 times into different model files. Sometimes code of the whole BERT model is copied into other model files. We often require new model contributions that are identical to existing models - apart from a small logical tweak - to copy all of the existing code. Why do we do this? Are we just too lazy or overwhelmed to centralize all logical pieces into one place?No, we are not lazy - it's a very conscious decision not to apply the DRY design principle to the Transformers library. Instead, we decided to adopt a different design principle which we like to call the single model file policy. The single model file policy states that all code necessary for the forward pass of a model is in one and only one file - called the model file. If a reader wants to understand how BERT works for inference, she should only have to look into BERT's modeling_bert.py file. We usually reject any attempt to abstract identical sub-components of different models into a new centralized place. We don't want to have an attention_layer.py that includes all possible attention mechanisms. Again, why do we do this?In short, the reasons are:
1. Transformers is built by and for the open-source community.
2. Our product is models and our customers are users reading or tweaking model code.
3. The field of machine learning evolves extremely fast.
4. Machine Learning models are static.
1. Built by and for the open-source community
Transformers is built to actively incentivize external contributions. A contribution is often either a bug fix or a new model contribution. If a bug is found in one of the model files, we want to make it as easy as possible for the finder to fix it. There is little that is more demotivating than fixing a bug only to see that it caused 100 failures of other models. Because each model's code is independent of all other models, it's fairly easy for someone who only understands the one model she is working with to fix it. Similarly, it's easier to add new modeling code and review the corresponding PR if only a single new model file is added. The contributor does not have to figure out how to add new functionality to a centralized attention mechanism without breaking existing models. The reviewer can easily verify that none of the existing models are broken.
2. Modeling code is our product
We assume that a significant number of users of the Transformers library not only read the documentation, but also look into the actual modeling code and potentially modify it. This hypothesis is backed by the Transformers library being forked over 10,000 times and the Transformers paper being cited over a thousand times.Therefore, it is of utmost importance that someone reading Transformers modeling code for the first time can easily understand and potentially adapt it.
Providing all the necessary logical components in order in a single modeling file helps a lot to achieve improved readability and adaptability. Additionally, we care a great deal about sensible variable/method naming and prefer expressive/readable code over character-efficient code.
3. Machine Learning is evolving at breakneck speed
Research in the field of machine learning, and especially neural networks, evolves extremely fast. A model that was state-of-the-art a year ago might be outdated today. We don't know which attention mechanism, position embedding, or architecture will be the best in a year. Therefore, we cannot define standard logical patterns that apply to all models. As an example, two years ago, one might have defined BERT's self-attention layer as the standard attention layer used by all Transformers models. Logically, a "standard" attention function could have been moved into a central attention.py file. But then came attention layers that added relative positional embeddings in each attention layer (T5), multiple different forms of chunked attention (Reformer, Longformer, BigBird), and separate attention mechanisms for position and word embeddings (DeBERTa), etc... Every time, we would have had to ask ourselves whether the "standard" attention function should be adapted or whether it would have been better to add a new attention function to attention.py. But then how do we name it? attention_with_positional_embd, reformer_attention, deberta_attention? It's dangerous to give logical components of machine learning models general names because the perception of what this component stands for might change or become outdated very quickly. E.g., does chunked attention correspond to GPTNeo's, Reformer's, or BigBird's chunked attention? Is the attention layer a self-attention layer, a cross-attention layer, or does it include both? However, if we name attention layers by their model's name, we should directly put the attention function in the corresponding modeling file.
4. Machine Learning models are static
The Transformers library is a unified and polished collection of machine learning models that different research teams have created. Every machine learning model is usually accompanied by a paper and its official GitHub repository. Once a machine learning model is published, it is rarely adapted or changed afterward.Instead, research teams tend to publish a new model built upon previous models but rarely make significant changes to already published code. This is an important realization when deciding on the design principles of the Transformers library.It means that once a model architecture has been added to Transformers, the fundamental components of the model don't change anymore. Bugs are often found and fixed, methods and variables might be renamed, and the output or input format of the model might be slightly changed, but the model's core components don't change anymore. Consequently, the need to apply global changes to all models in Transformers is significantly reduced, making it less important that every logical pattern only exists once, since it is rarely changed.A second realization is that models do not depend on each other in a bidirectional way. More recently published models might depend on existing models, but it's quite obvious that an existing model cannot logically depend on its successor. E.g., T5 is partly built upon BERT, and therefore T5's modeling code might logically depend on BERT's modeling code, but BERT cannot logically depend in any way on T5.
Thus, it would not be logically sound to refactor BERT's attention function to also work with T5's attention function - someone reading through BERT's attention layer should not have to know anything about T5. Again, this advocates against centralizing components such as the attention layer into modules that all models can access.On the other hand, the modeling code of a successor model can very well logically depend on its predecessor model. E.g., DeBERTa-v2's modeling code does logically depend to some extent on DeBERTa's modeling code. Maintainability is significantly improved by ensuring that the modeling code of DeBERTa-v2 stays in sync with DeBERTa's. Fixing a bug in DeBERTa should ideally also fix the same bug in DeBERTa-v2. How can we maintain the single model file policy while ensuring that successor models stay in sync with their predecessor model? Now, we explain why we put the asterisk (*) after "Repeat Yourself". We don't blindly copy-paste all existing modeling code, even if it looks this way. One of Transformers' core maintainers, Sylvain Gugger, found a great mechanism that respects both the single file policy and keeps maintenance costs in bounds. This mechanism, loosely called "the copying mechanism", allows us to mark logical components, such as an attention layer function, with a # Copied from <predecessor_model>.<function> statement, which enforces the marked code to be identical to the <function> of the <predecessor_model>. E.g., this line above DeBERTa-v2's class enforces the whole class to be identical to DeBERTa's class except for the prefix DeBERTav2 (a short illustrative sketch of such a marker is given at the end of this post).This way, the copying mechanism keeps modeling code very easy to understand while significantly reducing maintenance. If some code is changed in a function of a predecessor model that is referred to by a function of its successor model, there are tools in place that automatically correct the successor model's function.
Drawbacks
Clearly, there are also drawbacks to the single file policy, two of which we quickly want to mention here.A major goal of Transformers is to provide a unified API for both inference and training for all models so that a user can quickly switch between different models in her setup. However, ensuring a unified API across models is much more difficult if modeling files are not allowed to use abstracted logical patterns. We solve this problem by running a lot of tests (ca. 20,000 tests are run daily at the time of writing this blog post) to ensure that models follow a consistent API. In this case, the single file policy requires us to be very rigorous when reviewing model and test additions.Second, there is a lot of research on just a single component of a Machine Learning model. E.g., research teams investigate new forms of an attention mechanism that would apply to all existing pre-trained models, as has been done in Rethinking Attention with Performers. How should we incorporate such research into the Transformers library? It is indeed problematic. Should we change all existing models? This would go against points 3 and 4 above. Should we add 100+ new modeling files, each prefixed with Performer...? This seems absurd. In such a case, there is sadly no good solution, and we opt not to integrate the paper into Transformers.
If the paper had gotten much more traction and included strong pre-trained checkpoints, we would probably have added new modeling files for the most important models, making something like modeling_performer_bert.py available.
Conclusion
All in all, at 🤗 Hugging Face we are convinced that the single file policy is the right coding philosophy for Transformers.What do you think? If you have read this far, we would be more than interested in hearing your opinion!If you would like to leave a comment, please visit the corresponding forum post here.
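To make the copying mechanism described above a little more concrete, here is a minimal sketch of what such a marker can look like inside a successor model's modeling file. The module path, model names, and layer below are purely illustrative placeholders rather than code taken from the Transformers repository, and the "with OldModel->NewModel" rename suffix is shown only as one way a marker might express the prefix change; the real markers follow the # Copied from <predecessor_model>.<function> pattern described in this post.

```python
# Hypothetical excerpt from a successor model's modeling file (illustrative only).
import torch.nn as nn


# Copied from transformers.models.oldmodel.modeling_oldmodel.OldModelSelfOutput with OldModel->NewModel
class NewModelSelfOutput(nn.Module):
    """Identical to the predecessor's layer; only the class prefix differs."""

    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        # Residual connection followed by layer norm, mirroring the predecessor model.
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states
```

The point of the marker is that repository tooling can check that the marked code stays textually identical to its source (up to the renamed prefix) and can propagate a fix made in the predecessor into the successor, which is how the single model file policy stays maintainable in practice.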
https://huggingface.co/blog/decision-transformers
Introducing Decision Transformers on Hugging Face 🤗
Edward Beeching, Thomas Simonini
March 28, 2022
At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. Recently, we have integrated Deep RL frameworks such as Stable-Baselines3. And today we are happy to announce that we have integrated the Decision Transformer, an Offline Reinforcement Learning method, into the 🤗 transformers library and the Hugging Face Hub. We have some exciting plans for improving accessibility in the field of Deep RL and we are looking forward to sharing them with you over the coming weeks and months.
What is Offline Reinforcement Learning?
Introducing Decision Transformers
Using the Decision Transformer in 🤗 Transformers
Conclusion
What's next?
References
What is Offline Reinforcement Learning?
Deep Reinforcement Learning (RL) is a framework to build decision-making agents. These agents aim to learn optimal behavior (policy) by interacting with the environment through trial and error and receiving rewards as the only feedback.The agent’s goal is to maximize its cumulative reward, called the return. This is because RL is based on the reward hypothesis: all goals can be described as the maximization of the expected cumulative reward.Deep Reinforcement Learning agents learn with batches of experience. The question is, how do they collect it?
A comparison between Reinforcement Learning in an Online and Offline setting, figure taken from this post.
In online reinforcement learning, the agent gathers data directly: it collects a batch of experience by interacting with the environment. Then, it uses this experience immediately (or via some replay buffer) to learn from it (update its policy).But this implies that you either train your agent directly in the real world or have a simulator. If you don’t have one, you need to build it, which can be very complex (how do you reflect the complex reality of the real world in an environment?), expensive, and unsafe, since if the simulator has flaws, the agent will exploit them whenever they provide a competitive advantage. On the other hand, in offline reinforcement learning, the agent only uses data collected from other agents or human demonstrations. It does not interact with the environment.The process is as follows:
Create a dataset using one or more policies and/or human interactions.
Run offline RL on this dataset to learn a policy.
This method has one drawback: the counterfactual queries problem. What do we do if our agent decides to do something for which we don’t have the data? For instance, turning right at an intersection when we don’t have that trajectory in the data. Some solutions to this problem already exist, but if you want to know more about offline reinforcement learning, you can watch this video.
Introducing Decision Transformers
The Decision Transformer model was introduced in “Decision Transformer: Reinforcement Learning via Sequence Modeling” by Chen L. et al. It abstracts Reinforcement Learning as a conditional sequence modeling problem.The main idea is that instead of training a policy using RL methods, such as fitting a value function that tells us what action to take to maximize the return (cumulative reward), we use a sequence modeling algorithm (a Transformer) that, given a desired return, past states, and actions, will generate future actions to achieve this desired return.
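Since the conditioning signal for the Decision Transformer is the return-to-go (the sum of the rewards still to be collected from a given time-step onward), it can help to see how that quantity is computed for a recorded trajectory. The snippet below is a minimal sketch rather than code from the Decision Transformer release, and the reward values are made up for illustration.

```python
import numpy as np

def compute_returns_to_go(rewards: np.ndarray) -> np.ndarray:
    """Return-to-go at step t = sum of rewards from t to the end of the episode."""
    # Reverse cumulative sum: flip, accumulate, flip back.
    return np.flip(np.cumsum(np.flip(rewards)))

# Toy trajectory with made-up rewards.
rewards = np.array([1.0, 0.0, 2.0, 1.0])
print(compute_returns_to_go(rewards))  # [4. 3. 3. 1.]
```

At inference time, the model is instead given a desired target return-to-go, and that target is decremented by the observed rewards as the episode progresses, which is what the evaluation loop later in this post does.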
It’s an autoregressive model conditioned on the desired return, past states, and actions to generate future actions that achieve the desired return.This is a complete shift in the Reinforcement Learning paradigm, since we use generative trajectory modeling (modeling the joint distribution of the sequence of states, actions, and rewards) to replace conventional RL algorithms. It means that in Decision Transformers, we don’t maximize the return but rather generate a series of future actions that achieve the desired return.The process goes this way:
We feed the last K timesteps into the Decision Transformer with 3 inputs: return-to-go, state, and action.
The tokens are embedded either with a linear layer if the state is a vector, or with a CNN encoder if it consists of frames.
The inputs are processed by a GPT-2 model, which predicts future actions via autoregressive modeling.
Decision Transformer architecture. States, actions, and returns are fed into modality-specific linear embeddings and a positional episodic timestep encoding is added. Tokens are fed into a GPT architecture which predicts actions autoregressively using a causal self-attention mask. Figure from [1].
Using the Decision Transformer in 🤗 Transformers
The Decision Transformer model is now available as part of the 🤗 transformers library. In addition, we share nine pre-trained model checkpoints for continuous control tasks in the Gym environment.An “expert” Decision Transformer model, learned using offline RL in the Gym Walker2d environment.
Install the package
pip install git+https://github.com/huggingface/transformers
Loading the model
Using the Decision Transformer is relatively easy, but as it is an autoregressive model, some care has to be taken in order to prepare the model’s inputs at each time-step. We have prepared both a Python script and a Colab notebook that demonstrate how to use this model.Loading a pretrained Decision Transformer is simple in the 🤗 transformers library:
from transformers import DecisionTransformerModel

model_name = "edbeeching/decision-transformer-gym-hopper-expert"
model = DecisionTransformerModel.from_pretrained(model_name)
Creating the environment
We provide pretrained checkpoints for the Gym Hopper, Walker2D and Halfcheetah environments. Checkpoints for Atari environments will soon be available.
import gym

env = gym.make("Hopper-v3")
state_dim = env.observation_space.shape[0]  # state size
act_dim = env.action_space.shape[0]  # action size
Autoregressive prediction function
The model performs an autoregressive prediction; that is to say that predictions made at the current time-step t are sequentially conditioned on the outputs from previous time-steps.
This function is quite meaty, so we will aim to explain it in the comments.
import torch

# Function that gets an action from the model using autoregressive prediction
# with a window of the previous 20 timesteps.
def get_action(model, states, actions, rewards, returns_to_go, timesteps):
    # This implementation does not condition on past rewards
    states = states.reshape(1, -1, model.config.state_dim)
    actions = actions.reshape(1, -1, model.config.act_dim)
    returns_to_go = returns_to_go.reshape(1, -1, 1)
    timesteps = timesteps.reshape(1, -1)

    # The prediction is conditioned on up to 20 previous time-steps
    states = states[:, -model.config.max_length :]
    actions = actions[:, -model.config.max_length :]
    returns_to_go = returns_to_go[:, -model.config.max_length :]
    timesteps = timesteps[:, -model.config.max_length :]

    # Pad all tokens to the sequence length; this is required if we process batches
    padding = model.config.max_length - states.shape[1]
    attention_mask = torch.cat([torch.zeros(padding), torch.ones(states.shape[1])])
    attention_mask = attention_mask.to(dtype=torch.long).reshape(1, -1)
    states = torch.cat([torch.zeros((1, padding, state_dim)), states], dim=1).float()
    actions = torch.cat([torch.zeros((1, padding, act_dim)), actions], dim=1).float()
    returns_to_go = torch.cat([torch.zeros((1, padding, 1)), returns_to_go], dim=1).float()
    timesteps = torch.cat([torch.zeros((1, padding), dtype=torch.long), timesteps], dim=1)

    # Perform the prediction
    state_preds, action_preds, return_preds = model(
        states=states,
        actions=actions,
        rewards=rewards,
        returns_to_go=returns_to_go,
        timesteps=timesteps,
        attention_mask=attention_mask,
        return_dict=False,
    )
    return action_preds[0, -1]
Evaluating the model
In order to evaluate the model, we need some additional information: the mean and standard deviation of the states that were used during training. Fortunately, these are available on each checkpoint’s model card on the Hugging Face Hub! We also need a target return for the model. This is the power of return-conditioned Offline Reinforcement Learning: we can use the target return to control the performance of the policy. This could be really powerful in a multiplayer setting, where we would like to adjust the performance of an opponent bot to be at a suitable difficulty for the player. The authors show a great plot of this in their paper!
Sampled (evaluation) returns accumulated by Decision Transformer when conditioned on the specified target (desired) returns. Top: Atari. Bottom: D4RL medium-replay datasets.
Figure from [1].
import numpy as np
import torch

TARGET_RETURN = 3.6  # This was normalized during training
MAX_EPISODE_LENGTH = 1000
scale = 1000.0  # Reward scale used during training (assumed value; it must match the normalization applied to TARGET_RETURN)

state_mean = np.array([
    1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754, 2.6071432,
    0.02322114, -0.01626922, -0.06840388, -0.05183131, 0.04272673,
])
state_std = np.array([
    0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333, 0.5899924,
    1.5405099, 0.8152689, 2.0173461, 2.4107876, 5.8440027,
])
state_mean = torch.from_numpy(state_mean)
state_std = torch.from_numpy(state_std)

state = env.reset()
target_return = torch.tensor(TARGET_RETURN).float().reshape(1, 1)
states = torch.from_numpy(state).reshape(1, state_dim).float()
actions = torch.zeros((0, act_dim)).float()
rewards = torch.zeros(0).float()
timesteps = torch.tensor(0).reshape(1, 1).long()

# take steps in the environment
for t in range(MAX_EPISODE_LENGTH):
    # add zeros for actions as input for the current time-step
    actions = torch.cat([actions, torch.zeros((1, act_dim))], dim=0)
    rewards = torch.cat([rewards, torch.zeros(1)])

    # predicting the action to take
    action = get_action(
        model,
        (states - state_mean) / state_std,
        actions,
        rewards,
        target_return,
        timesteps,
    )
    actions[-1] = action
    action = action.detach().numpy()

    # interact with the environment based on this action
    state, reward, done, _ = env.step(action)

    cur_state = torch.from_numpy(state).reshape(1, state_dim)
    states = torch.cat([states, cur_state], dim=0)
    rewards[-1] = reward

    pred_return = target_return[0, -1] - (reward / scale)
    target_return = torch.cat([target_return, pred_return.reshape(1, 1)], dim=1)
    timesteps = torch.cat([timesteps, torch.ones((1, 1)).long() * (t + 1)], dim=1)

    if done:
        break
You will find a more detailed example, with the creation of videos of the agent, in our Colab notebook.
Conclusion
In addition to Decision Transformers, we want to support more use cases and tools from the Deep Reinforcement Learning community. Therefore, it would be great to hear your feedback on the Decision Transformer model, and more generally anything we can build with you that would be useful for RL. Feel free to reach out to us.
What’s next?
In the coming weeks and months, we plan on supporting other tools from the ecosystem:
Integrating RL-baselines3-zoo
Uploading RL-trained-agents models into the Hub: a big collection of pre-trained Reinforcement Learning agents using stable-baselines3
Integrating other Deep Reinforcement Learning libraries
Implementing Convolutional Decision Transformers for Atari
And more to come 🥳
The best way to keep in touch is to join our Discord server to exchange with us and with the community.
References
[1] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." Advances in Neural Information Processing Systems 34 (2021).
[2] Agarwal, Rishabh, Dale Schuurmans, and Mohammad Norouzi. "An optimistic perspective on offline reinforcement learning." International Conference on Machine Learning. PMLR, 2020.
Acknowledgements
We would like to thank the paper’s first authors, Kevin Lu and Lili Chen, for their constructive conversations.
https://huggingface.co/blog/meg-mitchell-interview
Machine Learning Experts - Margaret Mitchell
Britney Muller
March 23, 2022
Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller and today’s guest is none other than Margaret Mitchell (Meg for short). Meg founded & co-led Google’s Ethical AI Group, is a pioneer in the field of Machine Learning, has published over 50 papers, and is a leading researcher in Ethical AI.You’ll hear Meg talk about the moment she realized the importance of ethical AI (an incredible story!), how ML teams can be more aware of harmful data bias, and the power (and performance) benefits of inclusion and diversity in ML.Very excited to introduce this powerful episode to you! Here’s my conversation with Meg Mitchell:Transcription:Note: Transcription has been slightly modified/reformatted to deliver the highest-quality reading experience.Could you share a little bit about your background and what brought you to Hugging Face?Dr. Margaret Mitchell’s Background:Bachelor’s in Linguistics at Reed College - Worked on NLPWorked on assistive and augmentative technology after her Bachelor’s and also during her graduate studiesMaster’s in Computational Linguistics at the University of WashingtonPhD in Computer ScienceMeg: I did heavy statistical work as a postdoc at Johns Hopkins and then went to Microsoft Research where I continued doing vision to language generation that led to working on an app for people who are blind to navigate the world a bit easier called Seeing AI.After a few years at Microsoft, I left to work at Google to focus on big data problems inherent in deep learning. That’s where I started focusing on things like fairness, rigorous evaluation for different kinds of issues, and bias. While at Google, I founded and co-led the Ethical AI Team which focuses on inclusion and transparency.After four years at Google, I came over to Hugging Face where I was able to jump in and focus on coding.I’m helping to create protocols for ethical AI research, inclusive hiring, systems, and setting up a good culture here at Hugging Face.When did you recognize the importance of Ethical AI?Meg: This occurred when I was working at Microsoft while I was working on the assistance technology, Seeing AI. In general, I was working on generating language from images and I started to see was how lopsided data was. Data represents a subset of the world and it influences what a model will say.So I began to run into issues where white people would be described as ‘people’ and black people would be described as ‘black people’ as if white was a default and black was a marked characteristic. That was concerning to me.There was also an ah-ha moment when I was feeding my system a sequence of images, getting it to talk more about a story of what is happening. And I fed it some images of this massive blast where a lot of people worked, called the ‘Hebstad blast’. You could see that the person taking the picture was on the second or third story looking out on the blast. The blast was very close to this person. It was a very dire and intense moment and when I fed this to the system the system’s output was that “ this is awesome, this is a great view, this is beautiful’. And I thought.. this is a great view of this horrible scene but the important part here is that people may be dying. This is a massive destructive explosion. 
But the thing is, when you’re learning from images people don’t tend to take photos of terrible things, they take photos of sunsets, fireworks, etc., and a visual recognition model had learned on these images and believed that color in the sky was a positive, beautiful thing.At that moment, I realized that if a model with that sort of thinking had access to actions it would be just one hop away from a system that would blow up buildings because it thought it was beautiful.This was a moment for me when I realized I didn’t want to keep making these systems do better on benchmarks, I wanted to fundamentally shift how we were looking at these problems, how we were approaching data and analysis of data, how we were evaluating and all of the factors we were leaving out with these straightforward pipelines.So that really became my shift into ethical AI work.In what applications is data ethics most important?Meg: Human-centric technology that deals with people and identity (face recognition, pedestrian recognition). In NLP this would pertain more to the privacy of individuals, how individuals are talked about, and the biases models pick up with regards to descriptors used for people.How can ML teams be more aware of harmful bias?Meg: A primary issue is that these concepts haven't been taught and most teams simply aren’t aware. Another problem is the lack of a lexicon to contextualize and communicate what is going on.For example:This is what marginalization isThis is what a power differential isHere is what inclusion isHere is how stereotypes workHaving a better understanding of these pillars is really important.Another issue is the culture behind machine learning. It’s taken a bit of an ‘Alpha’ or ‘macho’ approach where the focus is on ‘beating’ the last numbers, making things ‘faster’, ‘bigger’, etc. There are lots of parallels that can be made to human anatomy.There’s also a very hostile competitiveness that comes out where you find that women are disproportionately treated as less than.Since women are often much more familiar with discrimination women are focusing a lot more on ethics, stereotypes, sexism, etc. within AI. This means it gets associated with women more and seen as less than which makes the culture a lot harder to penetrate.It’s generally assumed that I’m not technical. It’s something I have to prove over and over again. I’m called a linguist, an ethicist because these are things I care about and know about but that is treated as less-than. People say or think, “You don’t program, you don’t know about statistics, you are not as important,” and it’s often not until I start talking about things technically that people take me seriously which is unfortunate.There is a massive cultural barrier in ML.Lack of diversity and inclusion hurts everyoneMeg: Diversity is when you have a lot of races, ethnicities, genders, abilities, statuses at the table.Inclusion is when each person feels comfortable talking, they feel welcome.One of the best ways to be more inclusive is to not be exclusive. Feels fairly obvious but is often missed. People get left out of meetings because we don’t find them helpful or find them annoying or combative (which is a function of various biases). To be inclusive you need to not be exclusive so when scheduling a meeting pay attention to the demographic makeup of the people you’re inviting. If your meeting is all-male, that’s a problem. 
It’s incredibly valuable to become more aware and intentional about the demographic makeup of the people you’re including in an email. But you’ll notice in tech, a lot of meetings are all male, and if you bring it up that can be met with a lot of hostility. Air on the side of including people.We all have biases but there are tactics to break some of those patterns. When writing an email I’ll go through their gender and ethnicities to ensure I’m being inclusive. It’s a very conscious effort. That sort of thinking through demographics helps. However, mention this before someone sends an email or schedules a meeting. People tend to not respond as well when you mention these things after the fact.Diversity in AI - Isn’t there proof that having a more diverse set of people on an ML project results in better outcomes?Meg: Yes, since you have different perspectives you have a different distribution over options and thus, more options. One of the fundamental aspects of machine learning is that when you start training you can use a randomized starting point and what kind of distribution you want to sample from.Most engineers can agree that you don’t want to sample from one little piece of the distribution to have the best chance of finding a local optimum.You need to translate this approach to the people sitting at the table.Just how you want to have a Gaussian approach over different start states, so too do you want that at the table when you’re starting projects because it gives you this larger search space making it easier to attain a local optimum.Can you talk about Model Cards and how that project came to be?Meg: This project started at Google when I first started working on fairness and what a rigorous evaluation of fairness would look like.In order to do that you need to have an understanding of context and understanding of who would use it. This revolved around how to approach model biases and it wasn’t getting a lot of pick up. I was talking to Timnit Gebru who was at that time someone in the field with similar interest to me and she was talking about this idea of datasheets; a kind of documentation for data (based on her experience at Apple) doing engineering where you tend to have specifications of hardware. But we don’t have something similar for data and she was talking about how crazy that is. So Timnit had this idea of datasheets for datasets. It struck me that by having an ‘artifact’ people in tech who are motivated by launches would care a lot more about it. So if we say you have to produce this artifact and it will count as a launch suddenly people would be more incentivized to do it.The way we came up with the name was that a comparable word to ‘data sheet’ that could be used for models was card (plus it was shorter). Also decided to call it ‘model cards’ because the name was very generic and would have longevity over time.Timnit’s paper was called ‘Data Sheets for Datasets’. So we called ours ‘Model Cards for Model Reporting’ and once we had the published paper people started taking us more seriously. Couldn’t have done this without Timnit Gebru’s brilliance suggesting “You need an artifact, a standardized thing that people will want to produce.”Where are model cards headed?Meg: There’s a pretty big barrier to entry to do model cards in a way that is well informed by ethics. 
Partly because the people who need to fill this out are often engineers and developers who want to launch their model and don’t want to sit around thinking about documentation and ethics.Part of why I wanted to join Hugging Face is because it gave me an opportunity to standardize how these processes could be filled out and automated as much as possible. One thing I really like about Hugging Face is there is a focus on creating end-to-end machine learning processes that are as smooth as possible. Would love to do something like that with model cards where you could have something largely automatically generated as a function of different questions asked or even based on model specifications directly.We want to work towards having model cards as filled out as possible and interactive. Interactivity would allow you to see the difference in false-negative rate as you move the decision threshold. Normally with classification systems, you set some threshold at which you say yes or no, like .7, but in practice, you actually want to vary the decision threshold to trade off different errors. A static report of how well it works isn’t as informative as you want it to be because you want to know how well it works as different decision thresholds are chosen, and you could use that to decide what decision threshold to be used with your system. So we created a model card where you could interactively change the decision threshold and see how the numbers change. Moving towards that direction in further automation and interactivity is the way to go.Decision thresholds & model transparencyMeg: When Amazon first started putting out facial recognition and facial analysis technology it was found that the gender classification was disproportionately bad for black women and Amazon responded by saying “this was done using the wrong decision threshold”. And then one of the police agencies who had been using one of these systems had been asked what decision threshold they had been using and said, “Oh we’re not using a decision threshold,”. Which was like oh you really don’t understand how this works and are using this out of the box with default parameter settings?! That is a problem. So minimally having this documentary brings awareness to decisions around the various types of parameters.Machine learning models are so different from other things we put out into the public. Toys, medicine, and cars have all sorts of regulations to ensure products are safe and work as intended. We don’t have that in machine learning, partly because it’s new so the laws and regulations don’t exist yet. It’s a bit like the wild west, and that’s what we’re trying to change with model cards.What are you working on at Hugging Face?Working on a few different tools designed for engineers.Working on philosophical and social science research: Just did a deep dive into UDHR (Universal Declaration of Human Rights) and how those can be applied with AI. 
Trying to help bridge the gaps between AI, ML, law, and philosophy.Trying to develop some statistical methods that are helpful for testing systems as well as understanding datasets.We also recently put out a tool that shows how well a language maps to Zipfian distributions (how natural language tends to go) so you can test how well your model is matching with natural language that way.Working a lot on the culture stuff: spending a lot of time on hiring and what processes we should have in place to be more inclusive.Working on Big Science: a massive effort with people from all around the world, not just hugging face working on data governance (how can big data be used and examined without having it proliferate all over the world/being tracked with how it’s used).Occasionally I’ll do an interview or talk to a Senator, so it’s all over the place.Try to answer emails sometimes.Note: Everyone at Hugging Face wears several hats. :)Meg’s impact on AIMeg is featured in the book Genius Makers ‘The Mavericks who brought AI to Google, Facebook, and the World’. Cade Metz interviewed Meg for this while she was at Google.Meg’s pioneering research, systems, and work have played a pivotal role in the history of AI. (we are so lucky to have her at Hugging Face!)Rapid Fire Questions:Best piece of advice for someone looking to get into AI?Meg: Depends on who the person is. If they have marginalized characteristics I would give very different advice. For example, if it was a woman I would say, 'Don’t listen to your supervisors saying you aren’t good at this. Chances are you are just thinking about things differently than they are used to so have confidence in yourself.'If it’s someone with more majority characteristics I’d say, 'Forget about the pipeline problem, pay attention to the people around you and make sure that you hold them up so that the pipeline you’re in now becomes less of a problem.'Also, 'Evaluate your systems'.What industries are you most excited to see ML applied (or ML Ethics be applied)Meg: The health and assistive domains continue to be areas I care a lot about and see a ton of potential.Also want to see systems that help people understand their own biases. Lots of technology is being created to screen job candidates for job interviews but I feel that technology should really be focused on the interviewer and how they might be coming at the situation with different biases. Would love to have more technology that assists humans to be more inclusive instead of assisting humans to exclude people.You frequently include incredible examples of biased models in your Keynotes and interviews. One in particular that I love is the criminal detection model you've talked about that was using patterns of mouth angles to identify criminals (which you swiftly debunked).Meg: Yes, [the example is that] they were making this claim that there was this angle theta that was more indicative of criminals when it was a smaller angle. However, I was looking at the math and I realized that what they were talking about was a smile! Where you would have a wider angle for a smile vs a smaller angle associated with a straight face. They really missed the boat on what they were actually capturing there. Experimenter's bias: wanting to find things that aren’t there.Should people be afraid of AI taking over the world?Meg: There are a lot of things to be afraid of with AI. 
I like to see it as we have a distribution over different kinds of outcomes, some more positive than others, so there’s not one set one that we can know. There are a lot of different things where AI can be super helpful and more task-based over more generalized intelligence. You can see it going in another direction, similar to what I mentioned earlier about a model thinking something destructive is beautiful is one hop away from a system that is able to press a button to set off a missile. Don’t think people should be scared per se, but they should think about the best and worst-case scenarios and try to mitigate or stop the worst outcomes.I think the biggest thing right now is these systems can widen the divide between the haves and have nots. Further giving power to people who have power and further worsening things for people who don’t. The people designing these systems tend to be people with more power and wealth and they design things for their kinds of interest. I think that’s happening right now and something to think about in the future.Hopefully, we can focus on the things that are most beneficial and continue heading in that direction.Fav ML papers?Meg: Most recently I’ve really loved what Abeba Birhane has been doing on values that are encoded in machine learning. My own team at Google had been working on data genealogies, bringing critical analysis on how ML data is handled which they have a few papers on - for example, Data and its (dis)contents: A survey of dataset development and use in machine learning research. Really love that work and might be biased because it included my team and direct reports, I’m very proud of them but it really is fundamentally good work.Earlier papers that I’m interested in are more reflective of what I was doing at that time. Really love the work of Herbert Clark who was a psycholinguistics/communications person and he did a lot of work that is easily ported to computational models about how humans communicate. Really love his work and cite him a lot throughout my thesis.Anything else you would like to mention?Meg: One of the things I’m working on, that I think other people should be working on, is lowering the barrier of entry to AI for people with different academic backgrounds.We have a lot of people developing technology, which is great, but we don’t have a lot of people in a situation where they can really question the technology because there is often a bottleneck.For example, if you want to know about data directly you have to be able to log into a server and write a SQL query. So there is a bottleneck where engineers have to do it and I want to remove that barrier. How can we take things that are fundamentally technical code stuff and open it up so people can directly query the data without knowing how to program?We will be able to make better technology when we remove the barriers that require engineers to be in the middle.OutroBritney: Meg had a hard stop on the hour but I was able to ask her my last question offline: What’s something you’ve been interested in lately? Meg’s response: "How to propagate and grow plants in synthetic/controlled settings." Just when I thought she couldn’t get any cooler. 🤯I’ll leave you with a recent quote from Meg in a Science News article on Ethical AI:“The most pressing problem is the diversity and inclusion of who’s at the table from the start. 
All the other issues fall out from there.” -Meg Mitchell.Thank you for listening to Machine Learning Experts!Honorable mentions + links:Emily BenderEhud ReiterAbeba BirhaneSeeing AIData Sheets for DatasetsModel CardsModel Cards PaperAbeba BirhaneThe Values Encoded in Machine Learning ResearchData and its (dis)contents:Herbert ClarkFollow Meg Online:TwitterWebsiteLinkedIn
https://huggingface.co/blog/ai-residency
Announcing the 🤗 AI Research Residency Program 🎉 🎉 🎉
Douwe Kiela
March 22, 2022
The 🤗 Research Residency Program is a 9-month opportunity to launch or advance your career in machine learning research 🚀. The goal of the residency is to help you grow into an impactful AI researcher. Residents will work alongside Researchers from our Science Team. Together, you will pick a research problem and then develop new machine learning techniques to solve it in an open & collaborative way, with the hope of ultimately publishing your work and making it visible to a wide audience.Applicants from all backgrounds are welcome! Ideally, you have some research experience and are excited about our mission to democratize responsible machine learning. The progress of our field has the potential to exacerbate existing disparities in ways that disproportionately hurt the most marginalized people in society — including people of color, people from working-class backgrounds, women, and LGBTQ+ people. These communities must be centered in the work we do as a research community. So we strongly encourage proposals from people whose personal experience reflects these identities.. We encourage applications relating to AI that demonstrate a clear and positive societal impact. How to Apply Since the focus of your work will be on developing Machine Learning techniques, your application should show evidence of programming skills and of prerequisite courses, like calculus or linear algebra, or links to an open-source project that demonstrates programming and mathematical ability.More importantly, your application needs to present interest in effecting positive change through AI in any number of creative ways. This can stem from a topic that is of particular interest to you and your proposal would capture concrete ways in which machine learning can contribute. Thinking through the entire pipeline, from understanding where ML tools are needed to gathering data and deploying the resulting approach, can help make your project more impactful.We are actively working to build a culture that values diversity, equity, and inclusivity. We are intentionally building a workplace where people feel respected and supported—regardless of who you are or where you come from. We believe this is foundational to building a great company and community. Hugging Face is an equal opportunity employer and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.Submit your application here. FAQs Can I complete the program part-time?No. The Residency is only offered as a full-time position.I have been out of school for several years. Can I apply?Yes. We will consider applications from various backgrounds.Can I be enrolled as a student at a university or work for another employer during the residency?No, the residency can’t be completed simultaneously with any other obligations.Will I receive benefits during the Residency?Yes, residents are eligible for most benefits, including medical (depending on location).Will I be required to relocate for this residency?Absolutely not! We are a distributed team and you are welcome to work from wherever you are currently located.Is there a deadline?Applications close on April 3rd, 2022!
https://huggingface.co/blog/fine-tune-segformer
Fine-Tune a Semantic Segmentation Model with a Custom Dataset
Tobias Cornille, Niels Rogge
March 17, 2022
This guide shows how you can fine-tune Segformer, a state-of-the-art semantic segmentation model. Our goal is to build a model for a pizza delivery robot, so it can see where to drive and recognize obstacles 🍕🤖. We'll first label a set of sidewalk images on Segments.ai. Then we'll fine-tune a pre-trained SegFormer model by using 🤗 transformers, an open-source library that offers easy-to-use implementations of state-of-the-art models. Along the way, you'll learn how to work with the Hugging Face Hub, the largest open-source catalog of models and datasets.Semantic segmentation is the task of classifying each pixel in an image. You can see it as a more precise way of classifying an image. It has a wide range of use cases in fields such as medical imaging and autonomous driving. For example, for our pizza delivery robot, it is important to know exactly where the sidewalk is in an image, not just whether there is a sidewalk or not.Because semantic segmentation is a type of classification, the network architectures used for image classification and semantic segmentation are very similar. In 2014, a seminal paper by Long et al. used convolutional neural networks for semantic segmentation. More recently, Transformers have been used for image classification (e.g. ViT), and now they're also being used for semantic segmentation, pushing the state-of-the-art further.SegFormer is a model for semantic segmentation introduced by Xie et al. in 2021. It has a hierarchical Transformer encoder that doesn't use positional encodings (in contrast to ViT) and a simple multi-layer perceptron decoder. SegFormer achieves state-of-the-art performance on multiple common datasets. Let's see how our pizza delivery robot performs for sidewalk images.Let's get started by installing the necessary dependencies. Because we're going to push our dataset and model to the Hugging Face Hub, we need to install Git LFS and log in to Hugging Face.The installation of git-lfs might be different on your system. Note that Google Colab has Git LFS pre-installed.
pip install -q transformers datasets evaluate segments-ai
apt-get install git-lfs
git lfs install
huggingface-cli login
1. Create/choose a dataset
The first step in any ML project is assembling a good dataset. In order to train a semantic segmentation model, we need a dataset with semantic segmentation labels. We can either use an existing dataset from the Hugging Face Hub, such as ADE20k, or create our own dataset.For our pizza delivery robot, we could use an existing autonomous driving dataset such as CityScapes or BDD100K. However, these datasets were captured by cars driving on the road. Since our delivery robot will be driving on the sidewalk, there will be a mismatch between the images in these datasets and the data our robot will see in the real world. We don't want our delivery robot to get confused, so we'll create our own semantic segmentation dataset using images captured on sidewalks. We'll show how you can label the images we captured in the next steps. If you just want to use our finished, labeled dataset, you can skip the "Create your own dataset" section and continue from "Use a dataset from the Hub".
Create your own dataset
To create your semantic segmentation dataset, you'll need two things:
images covering the situations your model will encounter in the real world
segmentation labels, i.e. images where each pixel represents a class/category.
We went ahead and captured a thousand images of sidewalks in Belgium.
Collecting and labeling such a dataset can take a long time, so you can start with a smaller dataset and expand it if the model does not perform well enough.
Some examples of the raw images in the sidewalk dataset.
To obtain segmentation labels, we need to indicate the classes of all the regions/objects in these images. This can be a time-consuming endeavour, but using the right tools can speed up the task significantly. For labeling, we'll use Segments.ai, since it has smart labeling tools for image segmentation and an easy-to-use Python SDK.
Set up the labeling task on Segments.ai
First, create an account at https://segments.ai/join. Next, create a new dataset and upload your images. You can either do this from the web interface or via the Python SDK (see the notebook).
Label the images
Now that the raw data is loaded, go to segments.ai/home and open the newly created dataset. Click "Start labeling" and create segmentation masks. You can use the ML-powered superpixel and autosegment tools to label faster.Tip: when using the superpixel tool, scroll to change the superpixel size, and click and drag to select segments.
Push the result to the Hugging Face Hub
When you're done labeling, create a new dataset release containing the labeled data. You can either do this on the releases tab on Segments.ai, or programmatically through the SDK as shown in the notebook. Note that creating the release can take a few seconds. You can check the releases tab on Segments.ai to see if your release is still being created.Now, we'll convert the release to a Hugging Face dataset via the Segments.ai Python SDK. If you haven't set up the Segments Python client yet, follow the instructions in the "Set up the labeling task on Segments.ai" section of the notebook. Note that the conversion can take a while, depending on the size of your dataset.
from segments.huggingface import release2dataset

release = segments_client.get_release(dataset_identifier, release_name)
hf_dataset = release2dataset(release)
If we inspect the features of the new dataset, we can see the image column and the corresponding label. The label consists of two parts: a list of annotations and a segmentation bitmap. The annotations correspond to the different objects in the image. For each object, the annotation contains an id and a category_id. The segmentation bitmap is an image where each pixel contains the id of the object at that pixel. More information can be found in the relevant docs.For semantic segmentation, we need a semantic bitmap that contains a category_id for each pixel. We'll use the get_semantic_bitmap function from the Segments.ai SDK to convert the bitmaps to semantic bitmaps. To apply this function to all the rows in our dataset, we'll use dataset.map.
from segments.utils import get_semantic_bitmap

def convert_segmentation_bitmap(example):
    return {
        "label.segmentation_bitmap": get_semantic_bitmap(
            example["label.segmentation_bitmap"],
            example["label.annotations"],
            id_increment=0,
        )
    }

semantic_dataset = hf_dataset.map(
    convert_segmentation_bitmap,
)
You can also rewrite the convert_segmentation_bitmap function to use batches and pass batched=True to dataset.map. This will significantly speed up the mapping, but you might need to tweak the batch_size to ensure the process doesn't run out of memory.The SegFormer model we're going to fine-tune later expects specific names for the features. For convenience, we'll match this format now.
Thus, we'll rename the image feature to pixel_values and the label.segmentation_bitmap to label, and discard the other features.
semantic_dataset = semantic_dataset.rename_column('image', 'pixel_values')
semantic_dataset = semantic_dataset.rename_column('label.segmentation_bitmap', 'label')
semantic_dataset = semantic_dataset.remove_columns(['name', 'uuid', 'status', 'label.annotations'])
We can now push the transformed dataset to the Hugging Face Hub. That way, your team and the Hugging Face community can make use of it. In the next section, we'll see how you can load the dataset from the Hub.
hf_dataset_identifier = f"{hf_username}/{dataset_name}"
semantic_dataset.push_to_hub(hf_dataset_identifier)
Use a dataset from the Hub
If you don't want to create your own dataset, but found a suitable dataset for your use case on the Hugging Face Hub, you can define the identifier here. For example, you can use the full labeled sidewalk dataset. Note that you can check out the examples directly in your browser.
hf_dataset_identifier = "segments/sidewalk-semantic"
2. Load and prepare the Hugging Face dataset for training
Now that we've created a new dataset and pushed it to the Hugging Face Hub, we can load the dataset in a single line.
from datasets import load_dataset

ds = load_dataset(hf_dataset_identifier)
Let's shuffle the dataset and split it into a train and test set.
ds = ds.shuffle(seed=1)
ds = ds["train"].train_test_split(test_size=0.2)
train_ds = ds["train"]
test_ds = ds["test"]
We'll extract the number of labels and the human-readable ids, so we can configure the segmentation model correctly later on.
import json
from huggingface_hub import hf_hub_download

repo_id = f"datasets/{hf_dataset_identifier}"
filename = "id2label.json"
id2label = json.load(open(hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset"), "r"))
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
num_labels = len(id2label)
Image processor & data augmentation
A SegFormer model expects the input to be of a certain shape. To transform our training data to match the expected shape, we can use SegformerImageProcessor. We could use the ds.map function to apply the image processor to the whole training dataset in advance, but this can take up a lot of disk space. Instead, we'll use a transform, which will only prepare a batch of data when that data is actually used (on-the-fly). This way, we can start training without waiting for further data preprocessing.In our transform, we'll also define some data augmentations to make our model more resilient to different lighting conditions. We'll use the ColorJitter function from torchvision to randomly change the brightness, contrast, saturation, and hue of the images in the batch.
from torchvision.transforms import ColorJitter
from transformers import SegformerImageProcessor

processor = SegformerImageProcessor()
jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)

def train_transforms(example_batch):
    images = [jitter(x) for x in example_batch['pixel_values']]
    labels = [x for x in example_batch['label']]
    inputs = processor(images, labels)
    return inputs

def val_transforms(example_batch):
    images = [x for x in example_batch['pixel_values']]
    labels = [x for x in example_batch['label']]
    inputs = processor(images, labels)
    return inputs

# Set transforms
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
Fine-tune a SegFormer model Load the model to fine-tune The SegFormer authors define 5 models with increasing sizes: B0 to B5. The following chart (taken from the original paper) shows the performance of these different models on the ADE20K dataset, compared to other models.SourceHere, we'll load the smallest SegFormer model (B0), pre-trained on ImageNet-1k. It's only about 14MB in size! Using a small model will make sure that our model can run smoothly on our pizza delivery robot.from transformers import SegformerForSemanticSegmentationpretrained_model_name = "nvidia/mit-b0" model = SegformerForSemanticSegmentation.from_pretrained( pretrained_model_name, id2label=id2label, label2id=label2id) Set up the Trainer To fine-tune the model on our data, we'll use Hugging Face's Trainer API. We need to set up the training configuration and an evaluation metric to use a Trainer.First, we'll set up the TrainingArguments. This defines all training hyperparameters, such as the learning rate, the number of epochs, the frequency at which to save the model, and so on. We also specify that we want to push the model to the Hub after training (push_to_hub=True) and specify a model name (hub_model_id).from transformers import TrainingArgumentsepochs = 50lr = 0.00006batch_size = 2hub_model_id = "segformer-b0-finetuned-segments-sidewalk-2"training_args = TrainingArguments( "segformer-b0-finetuned-segments-sidewalk-outputs", learning_rate=lr, num_train_epochs=epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, save_total_limit=3, evaluation_strategy="steps", save_strategy="steps", save_steps=20, eval_steps=20, logging_steps=1, eval_accumulation_steps=5, load_best_model_at_end=True, push_to_hub=True, hub_model_id=hub_model_id, hub_strategy="end",)Next, we'll define a function that computes the evaluation metric we want to work with. Because we're doing semantic segmentation, we'll use the mean Intersection over Union (mIoU), directly accessible in the evaluate library. IoU represents the overlap of segmentation masks. Mean IoU is the average of the IoU of all semantic classes.
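For intuition, here is a minimal, self-contained sketch of how IoU is computed for a single class; the evaluate library handles all of this (per class, and averaged) for us:

import numpy as np

def iou(pred_mask, gt_mask, class_id):
    # Binary masks selecting the pixels that belong to this class
    pred = pred_mask == class_id
    gt = gt_mask == class_id
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union > 0 else float("nan")

# Toy 2x2 example: class 1 could be "sidewalk", class 0 the background
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
print(iou(pred, gt, class_id=1))  # 1 overlapping pixel / 2 pixels in the union = 0.5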
Take a look at this blogpost for an overview of evaluation metrics for image segmentation.Because our model outputs logits with dimensions height/4 and width/4, we have to upscale them before we can compute the mIoU.import torchfrom torch import nnimport evaluatemetric = evaluate.load("mean_iou")def compute_metrics(eval_pred): with torch.no_grad(): logits, labels = eval_pred logits_tensor = torch.from_numpy(logits) # scale the logits to the size of the label logits_tensor = nn.functional.interpolate( logits_tensor, size=labels.shape[-2:], mode="bilinear", align_corners=False, ).argmax(dim=1) pred_labels = logits_tensor.detach().cpu().numpy() metrics = metric.compute( predictions=pred_labels, references=labels, num_labels=len(id2label), ignore_index=0, reduce_labels=processor.do_reduce_labels, ) # add per category metrics as individual key-value pairs per_category_accuracy = metrics.pop("per_category_accuracy").tolist() per_category_iou = metrics.pop("per_category_iou").tolist() metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)}) metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)}) return metricsFinally, we can instantiate a Trainer object.from transformers import Trainertrainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, compute_metrics=compute_metrics,)Now that our trainer is set up, training is as simple as calling the train function. We don't need to worry about managing our GPU(s), the trainer will take care of that.trainer.train()When we're done with training, we can push our fine-tuned model and the image processor to the Hub.This will also automatically create a model card with our results. We'll supply some extra information in kwargs to make the model card more complete.kwargs = { "tags": ["vision", "image-segmentation"], "finetuned_from": pretrained_model_name, "dataset": hf_dataset_identifier,}processor.push_to_hub(hub_model_id)trainer.push_to_hub(**kwargs) 4. Inference Now comes the exciting part, using our fine-tuned model! In this section, we'll show how you can load your model from the hub and use it for inference. However, you can also try out your model directly on the Hugging Face Hub, thanks to the cool widgets powered by the hosted inference API. If you pushed your model to the Hub in the previous step, you should see an inference widget on your model page. You can add default examples to the widget by defining example image URLs in your model card. See this model card as an example. Use the model from the Hub We'll first load the model from the Hub using SegformerForSemanticSegmentation.from_pretrained().from transformers import SegformerImageProcessor, SegformerForSemanticSegmentationprocessor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")model = SegformerForSemanticSegmentation.from_pretrained(f"{hf_username}/{hub_model_id}")Next, we'll load an image from our test dataset.image = test_ds[0]['pixel_values']gt_seg = test_ds[0]['label']imageTo segment this test image, we first need to prepare the image using the image processor. Then we forward it through the model.We also need to remember to upscale the output logits to the original image size. 
In order to get the actual category predictions, we just have to apply an argmax on the logits.from torch import nninputs = processor(images=image, return_tensors="pt")outputs = model(**inputs)logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)# First, rescale logits to original image sizeupsampled_logits = nn.functional.interpolate( logits, size=image.size[::-1], # (height, width) mode='bilinear', align_corners=False)# Second, apply argmax on the class dimensionpred_seg = upsampled_logits.argmax(dim=1)[0]Now it's time to display the result. We'll display the result next to the ground-truth mask.What do you think? Would you send our pizza delivery robot on the road with this segmentation information?The result might not be perfect yet, but we can always expand our dataset to make the model more robust. We can now also go train a larger SegFormer model, and see how it stacks up. 5. Conclusion That's it! You now know how to create your own image segmentation dataset and how to use it to fine-tune a semantic segmentation model.We introduced you to some useful tools along the way, such as:Segments.ai for labeling your data🤗 datasets for creating and sharing a dataset🤗 transformers for easily fine-tuning a state-of-the-art segmentation modelHugging Face Hub for sharing our dataset and model, and for creating an inference widget for our modelWe hope you enjoyed this post and learned something. Feel free to share your own model with us on Twitter (@TobiasCornille, @NielsRogge, and @huggingface).
https://huggingface.co/blog/bert-inferentia-sagemaker
Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia
Philipp Schmid
March 16, 2022
notebook: sagemaker/18_inferentia_inferenceThe adoption of BERT and Transformers continues to grow. Transformer-based models are now not only achieving state-of-the-art performance in Natural Language Processing but also for Computer Vision, Speech, and Time-Series. 💬 🖼 🎤 ⏳Companies are now slowly moving from the experimentation and research phase to the production phase in order to use transformer models for large-scale workloads. But by default BERT and its friends are relatively slow, big, and complex models compared to the traditional Machine Learning algorithms. Accelerating Transformers and BERT is and will become an interesting challenge to solve in the future.AWS's take to solve this challenge was to design a custom machine learning chip designed for optimized inference workload called AWS Inferentia. AWS says that AWS Inferentia “delivers up to 80% lower cost per inference and up to 2.3X higher throughput than comparable current generation GPU-based Amazon EC2 instances.” The real value of AWS Inferentia instances compared to GPU comes through the multiple Neuron Cores available on each device. A Neuron Core is the custom accelerator inside AWS Inferentia. Each Inferentia chip comes with 4x Neuron Cores. This enables you to either load 1 model on each core (for high throughput) or 1 model across all cores (for lower latency).TutorialIn this end-to-end tutorial, you will learn how to speed up BERT inference for text classification with Hugging Face Transformers, Amazon SageMaker, and AWS Inferentia. You can find the notebook here: sagemaker/18_inferentia_inferenceYou will learn how to: 1. Convert your Hugging Face Transformer to AWS Neuron2. Create a custom inference.py script for text-classification3. Create and upload the neuron model and inference script to Amazon S34. Deploy a Real-time Inference Endpoint on Amazon SageMaker5. Run and evaluate Inference performance of BERT on InferentiaLet's get started! 🚀If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for Sagemaker. You can find here more about it.1. Convert your Hugging Face Transformer to AWS NeuronWe are going to use the AWS Neuron SDK for AWS Inferentia. The Neuron SDK includes a deep learning compiler, runtime, and tools for converting and compiling PyTorch and TensorFlow models to neuron compatible models, which can be run on EC2 Inf1 instances. As a first step, we need to install the Neuron SDK and the required packages.Tip: If you are using Amazon SageMaker Notebook Instances or Studio you can go with the conda_python3 conda kernel.# Set Pip repository to point to the Neuron repository!pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com# Install Neuron PyTorch!pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] sagemaker>=2.79.0 transformers==4.12.3 --upgradeAfter we have installed the Neuron SDK we can load and convert our model. Neuron models are converted using torch_neuron with its trace method similar to torchscript. You can find more information in our documentation.To be able to convert our model we first need to select the model we want to use for our text classification pipeline from hf.co/models. For this example, let's go with distilbert-base-uncased-finetuned-sst-2-english but this can be easily adjusted with other BERT-like models. 
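Before compiling anything, it can be useful to sanity-check the vanilla checkpoint locally with the standard pipeline API. This quick sketch is not part of the original notebook; it just confirms the model and task behave as expected on CPU:

from transformers import pipeline

# Same checkpoint that we will compile for Neuron below
classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("it 's a charming and often affecting journey ."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

With that out of the way, we define the model id and move on to tracing the model.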
model_id = "distilbert-base-uncased-finetuned-sst-2-english"At the time of writing, the AWS Neuron SDK does not support dynamic shapes, which means that the input size needs to be static for compiling and inference. In simpler terms, this means that when the model is compiled with e.g. an input of batch size 1 and sequence length of 16, the model can only run inference on inputs with that same shape. When using a t2.medium instance the compilation takes around 3 minutesimport osimport tensorflow # to workaround a protobuf version conflict issueimport torchimport torch.neuronfrom transformers import AutoTokenizer, AutoModelForSequenceClassification# load tokenizer and modeltokenizer = AutoTokenizer.from_pretrained(model_id)model = AutoModelForSequenceClassification.from_pretrained(model_id, torchscript=True)# create dummy input for max length 128dummy_input = "dummy input which will be padded later"max_length = 128embeddings = tokenizer(dummy_input, max_length=max_length, padding="max_length",return_tensors="pt")neuron_inputs = tuple(embeddings.values())# compile model with torch.neuron.trace and update configmodel_neuron = torch.neuron.trace(model, neuron_inputs)model.config.update({"traced_sequence_length": max_length})# save tokenizer, neuron model and config for later usesave_dir="tmp"os.makedirs("tmp",exist_ok=True)model_neuron.save(os.path.join(save_dir,"neuron_model.pt"))tokenizer.save_pretrained(save_dir)model.config.save_pretrained(save_dir)2. Create a custom inference.py script for text-classificationThe Hugging Face Inference Toolkit supports zero-code deployments on top of the pipeline feature from 🤗 Transformers. This allows users to deploy Hugging Face transformers without an inference script [Example]. Currently, this feature is not supported with AWS Inferentia, which means we need to provide an inference.py script for running inference. If you would be interested in support for zero-code deployments for Inferentia let us know on the forum.To use the inference script, we need to create an inference.py script. In our example, we are going to overwrite the model_fn to load our neuron model and the predict_fn to create a text-classification pipeline. If you want to know more about the inference.py script check out this example. It explains amongst other things what model_fn and predict_fn are.!mkdir codeWe are using the NEURON_RT_NUM_CORES=1 to make sure that each HTTP worker uses 1 Neuron core to maximize throughput. 
%%writefile code/inference.pyimport osfrom transformers import AutoConfig, AutoTokenizerimport torchimport torch.neuron# To use one neuron core per workeros.environ["NEURON_RT_NUM_CORES"] = "1"# saved weights nameAWS_NEURON_TRACED_WEIGHTS_NAME = "neuron_model.pt"def model_fn(model_dir):# load tokenizer and neuron model from model_dirtokenizer = AutoTokenizer.from_pretrained(model_dir)model = torch.jit.load(os.path.join(model_dir, AWS_NEURON_TRACED_WEIGHTS_NAME))model_config = AutoConfig.from_pretrained(model_dir)return model, tokenizer, model_configdef predict_fn(data, model_tokenizer_model_config):# unpack model, tokenizer and model configmodel, tokenizer, model_config = model_tokenizer_model_config# create embeddings for inputsinputs = data.pop("inputs", data)embeddings = tokenizer(inputs,return_tensors="pt",max_length=model_config.traced_sequence_length,padding="max_length",truncation=True,)# convert to tuple for neuron modelneuron_inputs = tuple(embeddings.values())# run predictionwith torch.no_grad():predictions = model(*neuron_inputs)[0]scores = torch.nn.Softmax(dim=1)(predictions)# return dictionary, which will be json serializablereturn [{"label": model_config.id2label[item.argmax().item()], "score": item.max().item()} for item in scores]3. Create and upload the neuron model and inference script to Amazon S3Before we can deploy our neuron model to Amazon SageMaker, we need to create a model.tar.gz archive with all our model artifacts saved into tmp/, e.g. neuron_model.pt, and upload this to Amazon S3.To do this we need to set up our permissions.import sagemakerimport boto3sess = sagemaker.Session()# sagemaker session bucket -> used for uploading data, models and logs# sagemaker will automatically create this bucket if it does not existsagemaker_session_bucket=Noneif sagemaker_session_bucket is None and sess is not None:# set to default bucket if a bucket name is not givensagemaker_session_bucket = sess.default_bucket()try:role = sagemaker.get_execution_role()except ValueError:iam = boto3.client('iam')role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)print(f"sagemaker role arn: {role}")print(f"sagemaker bucket: {sess.default_bucket()}")print(f"sagemaker session region: {sess.boto_region_name}")Next, we create our model.tar.gz. The inference.py script will be placed into a code/ folder.# copy inference.py into the code/ directory of the model directory.!cp -r code/ tmp/code/# create a model.tar.gz archive with all the model artifacts and the inference.py script.%cd tmp!tar zcvf model.tar.gz *%cd ..Now we can upload our model.tar.gz to our session S3 bucket with sagemaker.from sagemaker.s3 import S3Uploader# create s3 uris3_model_path = f"s3://{sess.default_bucket()}/{model_id}"# upload model.tar.gzs3_model_uri = S3Uploader.upload(local_path="tmp/model.tar.gz",desired_s3_uri=s3_model_path)print(f"model artifacts uploaded to {s3_model_uri}")4. Deploy a Real-time Inference Endpoint on Amazon SageMakerAfter we have uploaded our model.tar.gz to Amazon S3, we can create a custom HuggingFaceModel. 
This class will be used to create and deploy our real-time inference endpoint on Amazon SageMaker.from sagemaker.huggingface.model import HuggingFaceModel# create Hugging Face Model Classhuggingface_model = HuggingFaceModel(model_data=s3_model_uri, # path to your model and scriptrole=role, # iam role with permissions to create an Endpointtransformers_version="4.12", # transformers version usedpytorch_version="1.9", # pytorch version usedpy_version='py37', # python version used)# Let SageMaker know that we've already compiled the model via neuron-cchuggingface_model._is_compiled_model = True# deploy the endpointpredictor = huggingface_model.deploy(initial_instance_count=1, # number of instancesinstance_type="ml.inf1.xlarge" # AWS Inferentia Instance)5. Run and evaluate Inference performance of BERT on InferentiaThe .deploy() call returns a HuggingFacePredictor object which can be used to request inference.data = {"inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",}res = predictor.predict(data=data)resWe managed to deploy our Neuron-compiled BERT to AWS Inferentia on Amazon SageMaker. Now, let's test its performance. As a dummy load test, we will loop and send 10,000 synchronous requests to our endpoint. # send 10000 requestsfor i in range(10000):resp = predictor.predict(data={"inputs": "it 's a charming and often affecting journey ."})Let's inspect the performance in CloudWatch.print(f"https://console.aws.amazon.com/cloudwatch/home?region={sess.boto_region_name}#metricsV2:graph=~(metrics~(~(~'AWS*2fSageMaker~'ModelLatency~'EndpointName~'{predictor.endpoint_name}~'VariantName~'AllTraffic))~view~'timeSeries~stacked~false~region~'{sess.boto_region_name}~start~'-PT5M~end~'P0D~stat~'Average~period~30);query=~'*7bAWS*2fSageMaker*2cEndpointName*2cVariantName*7d*20{predictor.endpoint_name}")The average latency for our BERT model is 5-6ms for a sequence length of 128.Figure 1. Model LatencyDelete model and endpointTo clean up, we can delete the model and endpoint.predictor.delete_model()predictor.delete_endpoint()ConclusionWe successfully managed to compile a vanilla Hugging Face Transformers model to an AWS Inferentia-compatible Neuron model. After that, we deployed our Neuron model to Amazon SageMaker using the new Hugging Face Inference DLC. We managed to achieve 5-6ms latency per Neuron core, which is faster than a CPU in terms of latency, and achieves a higher throughput than GPUs since we ran 4 models in parallel. If you or your company are currently using a BERT-like Transformer for encoder tasks (text-classification, token-classification, question-answering etc.), and the latency meets your requirements, you should switch to AWS Inferentia. This will not only save costs, but can also increase efficiency and performance for your models. We are planning to do a more detailed case study on the cost-performance of transformers in the future, so stay tuned! Also, if you want to learn more about accelerating transformers, you should check out Hugging Face Optimum. Thanks for reading! If you have any questions, feel free to contact me through GitHub or on the forum. You can also connect with me on Twitter or LinkedIn.
https://huggingface.co/blog/image-search-datasets
Image search with 🤗 datasets
Daniel van Strien
March 16, 2022
🤗 datasets is a library that makes it easy to access and share datasets. It also makes it easy to process data efficiently -- including working with data which doesn't fit into memory.When datasets was first launched, it was associated mostly with text data. However, recently, datasets has added increased support for audio as well as images. In particular, there is now a datasets feature type for images. A previous blog post showed how datasets can be used with 🤗 transformers to train an image classification model. In this blog post, we'll see how we can combine datasets and a few other libraries to create an image search application.First, we'll install datasets. Since we're going to be working with images, we'll also install pillow. We'll also need sentence_transformers and faiss. We'll introduce those in more detail below. We also install rich - we'll only briefly use it here, but it's a super handy package to have around -- I'd really recommend exploring it further!!pip install datasets pillow rich faiss-gpu sentence_transformers To start, let's take a look at the image feature. We can use the wonderful rich library to poke around pythonobjects (functions, classes etc.)from rich import inspectimport datasetsinspect(datasets.Image, help=True)╭───────────────────────── <class 'datasets.features.image.Image'> ─────────────────────────╮│ class Image(decode: bool = True, id: Union[str, NoneType] = None) -> None:││ ││ Image feature to read image data from an image file. ││ ││ Input: The Image feature accepts as input:││ - A :obj:`str`: Absolute path to the image file (i.e. random access is allowed). ││ - A :obj:`dict` with the keys: ││ ││ - path: String with relative path of the image file to the archive file. ││ - bytes: Bytes of the image file. ││ ││ This is useful for archived files with sequential access. ││ ││ - An :obj:`np.ndarray`: NumPy array representing an image.││ - A :obj:`PIL.Image.Image`: PIL image object. ││ ││ Args: ││ decode (:obj:`bool`, default ``True``): Whether to decode the image data. If `False`, ││ returns the underlying dictionary in the format {"path": image_path, "bytes": ││ image_bytes}. ││ ││ decode = True ││ dtype = 'PIL.Image.Image' ││ id = None ││ pa_type = StructType(struct<bytes: binary, path: string>) │╰───────────────────────────────────────────────────────────────────────────────────────────╯We can see there a few different ways in which we can pass in our images. We'll come back to this in a little while.A really nice feature of the datasets library (beyond the functionality for processing data, memory mapping etc.) is that you getsome nice things 'for free'. One of these is the ability to add a faiss index to a dataset. faiss is a "library for efficient similarity search and clustering of densevectors".The datasets docs shows an example of using a faiss index for text retrieval. In this post we'll see if we can do the same for images. The dataset: "Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900" This is a dataset of images which have been pulled from a collection of digitised books from the British Library. These images come from books across a wide time period and from a broad range of domains. The images were extracted using information contained in the OCR output for each book. As a result, it's known which book the images came from, but not necessarily anything else about that image i.e. what is shown in the image. Some attempts to help overcome this have included uploading the images to flickr. 
This allows people to tag the images or put them into various different categories.There have also been projects to tag the dataset using machine learning. This work makes it possible to search by tags, but we might want a 'richer' ability to search. For this particular experiment, we'll work with a subset of the collections which contain "embellishments". This dataset is a bit smaller, so it will be better for experimenting with. We can get the full data from the British Library's data repository: https://doi.org/10.21250/db17. Since the full dataset is still fairly large, you'll probably want to start with a smaller sample. Creating our dataset Our dataset consists of a folder containing subdirectories inside which are images. This is a fairly standard format for sharing image datasets. Thanks to a recently merged pull request we can directly load this dataset using datasets ImageFolder loader 🤯from datasets import load_datasetdataset = load_dataset("imagefolder", data_files="https://zenodo.org/record/6224034/files/embellishments_sample.zip?download=1")Let's see what we get back.datasetDatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 10000 })})We can get back a DatasetDict, and we have a Dataset with image and label features. Since we don't have any train/validation splits here, let's grab the train part of our dataset. Let's also take a look at one example from our dataset to see what this looks like.dataset = dataset["train"]dataset[0]{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=358x461 at 0x7F9488DBB090>, 'label': 208}Let's start with the label column. It contains the parent folder for our images. In this case, the label column represents the year of publication for the books from which the images are taken. We can see the mappings for this using dataset.features:dataset.features['label']In this particular dataset, the image filenames also contain some metadata about the book from which the image was taken. There are a few ways we can get this information.When we look at one example from our dataset that the image feature was a PIL.JpegImagePlugin.JpegImageFile. Since PIL.Images have a filename attribute, one way in which we can grab our filenames is by accessing this. dataset[0]['image'].filename/root/.cache/huggingface/datasets/downloads/extracted/f324a87ed7bf3a6b83b8a353096fbd9500d6e7956e55c3d96d2b23cc03146582/embellishments_sample/1920/000499442_0_000579_1_[The Ring and the Book etc ]_1920.jpgSince we might want easy access to this information later, let's create a new column to extract the filename. For this, we'll use the map method.dataset = dataset.map(lambda example: {"fname": example['image'].filename.split("/")[-1]})We can look at one example to see what this looks like now.dataset[0]{'fname': '000499442_0_000579_1_[The Ring and the Book etc ]_1920.jpg', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=358x461 at 0x7F94862A9650>, 'label': 208}We've got our metadata now. Let's see some pictures already! If we access an example and index into the image column we'll see our image 😃dataset[10]['image']Note in an earlier version of this blog post the steps to download and load the images was much more convoluted. The new ImageFolder loader makes this process much easier 😀 In particular, we don't need to worry about how to load our images since datasets took care of this for us. Push all the things to the hub! One of the super awesome things about the 🤗 ecosystem is the Hugging Face Hub. 
We can use the Hub to access models and datasets. It is often used for sharing work with others, but it can also be a useful tool for work in progress. datasets recently added a push_to_hub method that allows you to push a dataset to the Hub with minimal fuss. This can be really helpful by allowing you to pass around a dataset with all the transforms etc. already done.For now, we'll push the dataset to the Hub and keep it private initially.Depending on where you are running the code, you may need to authenticate. You can either do this using the huggingface-cli login command or, if you are running in a notebook, using notebook_loginfrom huggingface_hub import notebook_loginnotebook_login()dataset.push_to_hub('davanstrien/embellishments-sample', private=True)Note: in a previous version of this blog post we had to do a few more steps to ensure images were embedded when using push_to_hub. Thanks to this pull request we no longer need to worry about these extra steps. We just need to make sure embed_external_files=True (which is the default behaviour). Switching machines At this point, we've created a dataset and moved it to the Hub. This means it is possible to pick up the work/dataset elsewhere.In this particular example, having access to a GPU is important. Using the Hub as a way to pass around our data we could start on a laptopand pick up the work on Google Colab.If we move to a new machine, we may need to login again. Once we've done this we can load our datasetfrom datasets import load_datasetdataset = load_dataset("davanstrien/embellishments-sample", use_auth_token=True) Creating embeddings 🕸 We now have a dataset with a bunch of images in it. To begin creating our image search app, we need to embed these images. There are various ways to try and do this, but one possible way is to use the CLIP models via the sentence_transformers library. The CLIP model from OpenAI learns a joint representation for both images and text, which is very useful for what we want to do since we want to input text and get back an image.We can download the model using the SentenceTransformer class.from sentence_transformers import SentenceTransformermodel = SentenceTransformer('clip-ViT-B-32')This model will take as input either an image or some text and return an embedding. We can use the datasets map method to encode all our images using this model. When we call map, we return a dictionary with the key embeddings containing the embeddings returned by the model. We also pass device='cuda' when we call the model; this ensures that we're doing the encoding on the GPU.ds_with_embeddings = dataset.map( lambda example: {'embeddings':model.encode(example['image'], device='cuda')}, batched=True, batch_size=32)We can 'save' our work by pushing back to the Hub usingpush_to_hub.ds_with_embeddings.push_to_hub('davanstrien/embellishments-sample', private=True)If we were to move to a different machine, we could grab our work again by loading it from the Hub 😃from datasets import load_datasetds_with_embeddings = load_dataset("davanstrien/embellishments-sample", use_auth_token=True)We now have a new column which contains the embeddings for our images. We could manually search through these and compare them to some input embedding but datasets has an add_faiss_index method. This uses the faiss library to create an efficient index for searching embeddings. 
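For intuition, here is roughly what that involves when using faiss directly, with random vectors standing in for our CLIP embeddings (a toy sketch; datasets does all of this bookkeeping for us):

import faiss
import numpy as np

# Build an exact (flat) L2 index over some embedding vectors and query it
embeddings = np.random.rand(1000, 512).astype("float32")  # CLIP ViT-B/32 embeddings are 512-dimensional
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)

query = np.random.rand(1, 512).astype("float32")
distances, indices = index.search(query, k=9)  # the 9 nearest neighbours
print(indices[0])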
For more background on this library, you can watch this YouTube video.ds_with_embeddings['train'].add_faiss_index(column='embeddings')Dataset({ features: ['fname', 'year', 'path', 'image', 'embeddings'], num_rows: 10000 }) Image search Note that these examples were generated from the full version of the dataset so you may get slightly different results.We now have everything we need to create a simple image search. We can use the same model we used to encode our images to encode some input text. This will act as the prompt we try and find close examples for. Let's start with 'a steam engine'.prompt = model.encode("A steam engine")We can use another method from the datasets library, get_nearest_examples, to get images which have an embedding close to our input prompt embedding. We can pass in the number of results we want to get back.scores, retrieved_examples = ds_with_embeddings['train'].get_nearest_examples('embeddings', prompt, k=9)We can index into the first example this retrieves:retrieved_examples['image'][0]This isn't quite a steam engine, but it's also not a completely weird result. We can plot the other results to see what was returned.import matplotlib.pyplot as pltplt.figure(figsize=(20, 20))columns = 3for i in range(9): image = retrieved_examples['image'][i] plt.subplot(9 // columns + 1, columns, i + 1) plt.imshow(image)Some of these results look fairly close to our input prompt. We can wrap this in a function so we can more easily play around with different prompts.def get_image_from_text(text_prompt, number_to_retrieve=9): prompt = model.encode(text_prompt) scores, retrieved_examples = ds_with_embeddings['train'].get_nearest_examples('embeddings', prompt, k=number_to_retrieve) plt.figure(figsize=(20, 20)) columns = 3 for i in range(9): image = retrieved_examples['image'][i] plt.title(text_prompt) plt.subplot(9 // columns + 1, columns, i + 1) plt.imshow(image)get_image_from_text("An illustration of the sun behind a mountain") Trying a bunch of prompts ✨ Now that we have a function for getting a few results, we can try a bunch of different prompts:For some of these I'll choose prompts which are a broad 'category' i.e. 'a musical instrument' or 'an animal', while others are specific i.e. 'a guitar'.Out of interest I also tried a boolean operator: "An illustration of a cat or a dog".Finally I tried something a little more abstract: "an empty abyss"prompts = ["A musical instrument", "A guitar", "An animal", "An illustration of a cat or a dog", "an empty abyss"]for prompt in prompts: get_image_from_text(prompt)We can see these results aren't always right, but they are usually reasonable. It already seems like this could be useful for searching for the semantic content of an image in this dataset. However we might hold off on sharing this as is... Creating a Hugging Face Space? 🤷🏼 One obvious next step for this kind of project is to create a Hugging Face Space demo. This is what I've done for other models.It was a fairly simple process to get a Gradio app set up from the point we got to here. Here is a screenshot of this app:However, I'm a little bit wary about making this public straight away. Looking at the model card for the CLIP model we can look at the primary intended uses: Primary intended uses We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.sourceThis is fairly close to what we are interested in here. 
Particularly we might be interested in how well the model deals with the kinds of images in our dataset (illustrations from mostly 19th century books). The images in our dataset are (probably) fairly different from the training data. The fact that some of the images also contain text might help CLIP since it displays some OCR ability.However, looking at the out-of-scope use cases in the model card: Out-of-Scope Use Cases Any deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case > currently potentially harmful. > sourcesuggests that 'deployment' is not a good idea. Whilst the results I got are interesting, I haven't played around with the model enough yet (and haven't done anything more systematic to evaluate its performance and biases) to be confident about 'deploying' it. Another additional consideration is the target dataset itself. The images are drawn from books covering a variety of subjects and time periods. There are plenty of books which represent colonial attitudes and as a result some of the images included may represent certain groups of people in a negative way. This could potentially be a bad combo with a tool which allows any arbitrary text input to be encoded as a prompt.There may be ways around this issue but this will require a bit more thought. Conclusion Although we don't have a nice demo to show for it, we've seen how we can use datasets to:load images into the new Image feature type'save' our work using push_to_hub and use this to move data between machines/sessionscreate a faiss index for images that we can use to retrieve images from a text (or image) input.
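As a closing illustration of that last point, querying with an image rather than text only requires encoding the image with the same CLIP model and searching the index as before. A short sketch reusing the objects defined earlier (the example index 42 is arbitrary):

# Use an image from the dataset itself as the query
query_image = ds_with_embeddings['train'][42]['image']
image_prompt = model.encode(query_image)

scores, retrieved_examples = ds_with_embeddings['train'].get_nearest_examples('embeddings', image_prompt, k=9)
retrieved_examples['image'][0]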
https://huggingface.co/blog/constrained-beam-search
Guiding Text Generation with Constrained Beam Search in 🤗 Transformers
Chan Woo Kim
March 11, 2022
IntroductionThis blog post assumes that the reader is familiar with text generation methods using the different variants of beam search, as explained in the blog post: "How to generate text: using different decoding methods for language generation with Transformers"Unlike ordinary beam search, constrained beam search allows us to exert control over the output of text generation. This is useful because we sometimes know exactly what we want inside the output. For example, in a Neural Machine Translation task, we might know which words must be included in the final translation with a dictionary lookup. Sometimes, generation outputs that are almost equally probable under a language model might not be equally desirable for the end-user due to the particular context. Both of these situations could be solved by allowing the users to tell the model which words must be included in the end output. Why It's DifficultHowever, this is actually a very non-trivial problem. This is because the task requires us to force the generation of certain subsequences somewhere in the final output, at some point during the generation. Let's say that we want to generate a sentence $S$ that has to include the phrase $p_1 = \{ t_1, t_2 \}$ with tokens $t_1, t_2$ in order. Let's define the expected sentence $S$ as: $S_{expected} = \{ s_1, s_2, ..., s_k, t_1, t_2, s_{k+1}, ..., s_n \}$. The problem is that beam search generates the sequence token-by-token. Though not entirely accurate, one can think of beam search as the function $B(\mathbf{s}_{0:i}) = s_{i+1}$, where it looks at the currently generated sequence of tokens from $0$ to $i$ and then predicts the next token at $i+1$. But how can this function know, at an arbitrary step $i < k$, that the tokens must be generated at some future step $k$? Or when it's at the step $i = k$, how can it know for sure that this is the best spot to force the tokens, instead of some future step $i > k$?And what if you have multiple constraints with varying requirements? What if you want to force the phrase $p_1 = \{ t_1, t_2 \}$ and also the phrase $p_2 = \{ t_3, t_4, t_5, t_6 \}$? What if you want the model to choose between the two phrases? What if we want to force the phrase $p_1$ and force just one phrase among the list of phrases $\{ p_{21}, p_{22}, p_{23} \}$? The above examples are actually very reasonable use-cases, as will be shown below, and the new constrained beam search feature allows for all of them!This post will quickly go over what the new constrained beam search feature can do for you and then go into deeper details about how it works under the hood.Example 1: Forcing a WordLet's say we're trying to translate "How old are you?" to German. "Wie alt bist du?" is what you'd say in an informal setting, and "Wie alt sind Sie?" 
is what you'd say in a formal setting.And depending on the context, we might want one form of formality over the other, but how do we tell the model that?Traditional Beam SearchHere's how we would do text translation in the traditional beam search setting.!pip install -q git+https://github.com/huggingface/transformers.gitfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLMtokenizer = AutoTokenizer.from_pretrained("t5-base")model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")encoder_input_str = "translate English to German: How old are you?"input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_idsoutputs = model.generate( input_ids, num_beams=10, num_return_sequences=1, no_repeat_ngram_size=1, remove_invalid_values=True,)print("Output:" + 100 * '-')print(tokenizer.decode(outputs[0], skip_special_tokens=True))Output:----------------------------------------------------------------------------------------------------Wie alt bist du?With Constrained Beam SearchBut what if we knew that we wanted a formal output instead of the informal one? What if we knew from prior knowledge what the generation must include, and we could inject it into the generation?The following is what is possible now with the force_words_ids keyword argument to model.generate():tokenizer = AutoTokenizer.from_pretrained("t5-base")model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")encoder_input_str = "translate English to German: How old are you?"force_words = ["Sie"]input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_idsforce_words_ids = tokenizer(force_words, add_special_tokens=False).input_idsoutputs = model.generate( input_ids, force_words_ids=force_words_ids, num_beams=5, num_return_sequences=1, no_repeat_ngram_size=1, remove_invalid_values=True,)print("Output:" + 100 * '-')print(tokenizer.decode(outputs[0], skip_special_tokens=True))Output:----------------------------------------------------------------------------------------------------Wie alt sind Sie?As you can see, we were able to guide the generation with prior knowledge about our desired output. Previously we would've had to generate a bunch of possible outputs, then filter the ones that fit our requirement. Now we can do that at the generation stage.Example 2: Disjunctive ConstraintsWe mentioned above a use-case where we know which words we want to be included in the final output. An example of this might be using a dictionary lookup during neural machine translation.But what if we don't know which word forms to use, where we'd want outputs like ["raining", "rained", "rains", ...] to be equally possible? In a more general sense, there are always cases when we don't want the exact word verbatim, letter by letter, and might be open to other related possibilities too.Constraints that allow for this behavior are Disjunctive Constraints, which allow the user to input a list of words, whose purpose is to guide the generation such that the final output must contain just at least one among the list of words. 
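This disjunctive behavior is also exposed as a constraint object, DisjunctiveConstraint, which shipped alongside this feature. The following is only a sketch, under the assumption that your installed version of transformers exports that class; it mirrors the higher-level example shown next:

from transformers import GPT2LMHeadModel, GPT2Tokenizer, DisjunctiveConstraint

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# At least one of these word forms must appear somewhere in the output
force_flexible = ["scream", "screams", "screaming", "screamed"]
constraint = DisjunctiveConstraint(
    tokenizer(force_flexible, add_prefix_space=True, add_special_tokens=False).input_ids
)

input_ids = tokenizer("The child", return_tensors="pt").input_ids
outputs = model.generate(
    input_ids,
    constraints=[constraint],
    num_beams=10,
    num_return_sequences=1,
    no_repeat_ngram_size=1,
    remove_invalid_values=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))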
Here's an example that uses a mix of the above two types of constraints: from transformers import GPT2LMHeadModel, GPT2Tokenizermodel = GPT2LMHeadModel.from_pretrained("gpt2")tokenizer = GPT2Tokenizer.from_pretrained("gpt2")force_word = "scared"force_flexible = ["scream", "screams", "screaming", "screamed"]force_words_ids = [ tokenizer([force_word], add_prefix_space=True, add_special_tokens=False).input_ids, tokenizer(force_flexible, add_prefix_space=True, add_special_tokens=False).input_ids,]starting_text = ["The soldiers", "The child"]input_ids = tokenizer(starting_text, return_tensors="pt").input_idsoutputs = model.generate( input_ids, force_words_ids=force_words_ids, num_beams=10, num_return_sequences=1, no_repeat_ngram_size=1, remove_invalid_values=True,)print("Output:" + 100 * '-')print(tokenizer.decode(outputs[0], skip_special_tokens=True))print(tokenizer.decode(outputs[1], skip_special_tokens=True))Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.Output:----------------------------------------------------------------------------------------------------The soldiers, who were all scared and screaming at each other as they tried to get out of theThe child was taken to a local hospital where she screamed and scared for her life, police said.As you can see, the first output used "screaming", the second output used "screamed", and both used "scared" verbatim. The list to choose from ["screaming", "screamed", ...] doesn't have to be word forms; this can satisfy any use-case where we need just one from a list of words.Traditional Beam searchThe following is an example of traditional beam search, taken from a previous blog post:Unlike greedy search, beam search works by keeping a longer list of hypotheses. In the above picture, we have displayed three next possible tokens at each possible step in the generation.Here's another way to look at the first step of the beam search for the above example, in the case of num_beams=3:Instead of only choosing "The dog" like what a greedy search would do, a beam search would allow further consideration of "The nice" and "The car".In the next step, we consider the next possible tokens for each of the three branches we created in the previous step.Though we end up considering significantly more than num_beams outputs, we reduce them down to num_beams at the end of the step. We can't just keep branching out, then the number of beams we'd have to keep track of would be beamsn \text{beams}^{n} beamsn for n n n steps, which becomes very large very quickly ( 10 10 10 beams after 10 10 10 steps is 10,000,000,000 10,000,000,000 10,000,000,000 beams!). For the rest of the generation, we repeat the above step until the ending criteria has been met, like generating the <eos> token or reaching max_length, for example. Branch out, rank, reduce, and repeat.Constrained Beam SearchConstrained beam search attempts to fulfill the constraints by injecting the desired tokens at every step of the generation. Let's say that we're trying to force the phrase "is fast" in the generated output. In the traditional beam search setting, we find the top k most probable next tokens at each branch and append them for consideration. In the constrained setting, we do the same but also append the tokens that will take us closer to fulfilling our constraints. 
Here's a demonstration:On top of the usual high-probability next tokens like "dog" and "nice", we force the token "is" in order to get us closer to fulfilling our constraint of "is fast".For the next step, the branched-out candidates below are mostly the same as those of traditional beam search. But like the above example, constrained beam search adds onto the existing candidates by forcing the constraints at each new branch:BanksBefore we talk about the next step, we need to think about the resulting undesirable behavior we can see in the above step. The problem with naively just forcing the desired phrase "is fast" in the output is that, most of the time, you'd end up with nonsensical outputs like "The is fast" above. This is actually what makes this a nontrivial problem to solve. A deeper discussion about the complexities of solving this problem can be found in the original feature request issue that was raised in huggingface/transformers.Banks solve this problem by creating a balance between fulfilling the constraints and creating sensible output. Bank $n$ refers to the list of beams that have made $n$ steps of progress in fulfilling the constraints. After sorting all the possible beams into their respective banks, we do a round-robin selection. With the above example, we'd select the most probable output from Bank 2, then the most probable from Bank 1, one from Bank 0, the second most probable from Bank 2, the second most probable from Bank 1, and so forth. Since we're using num_beams=3, we just do the above process three times to end up with ["The is fast", "The dog is", "The dog and"].This way, even though we're forcing the model to consider the branch where we've manually appended the desired token, we still keep track of other highly probable sequences that probably make more sense. Even though "The is fast" fulfills our constraint completely, it's not a very sensible phrase. Luckily, we have "The dog is" and "The dog and" to work with in future steps, which hopefully will lead to more sensible outputs later on.This behavior is demonstrated in the third step of the above example:Notice how "The is fast" doesn't require any manual appending of constraint tokens since it's already fulfilled (i.e., it already contains the phrase "is fast"). Also, notice how beams like "The dog is slow" or "The dog is mad" are actually in Bank 0, since, although they include the token "is", they must restart from the beginning to generate "is fast". By appending something like "slow" after "is", a beam has effectively reset its progress. And finally notice how we ended up at a sensible output that contains our constraint phrase: "The dog is fast"! We were worried initially because blindly appending the desired tokens led to nonsensical phrases like "The is fast". However, using round-robin selection from banks, we implicitly ended up getting rid of nonsensical outputs in preference for the more sensible outputs. More About Constraint Classes and Custom ConstraintsThe main takeaway from the explanation can be summarized as the following. At every step, we keep pestering the model to consider the tokens that fulfill our constraints, all the while keeping track of beams that don't, until we end up with reasonably high probability sequences that contain our desired phrases. So a principled way to design this implementation was to represent each constraint as a Constraint object, whose purpose is to keep track of its progress and tell the beam search which tokens to generate next. 
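Conceptually, such an object only needs to track how far along the constraint it is and report which token would advance it. Here is a deliberately simplified toy sketch of that idea (this is not the real transformers Constraint API, just an illustration of the bookkeeping):

class ToyPhrasalConstraint:
    def __init__(self, token_ids):
        self.token_ids = token_ids
        self.progress = 0  # how many constraint tokens have been fulfilled so far

    def next_token(self):
        # The token the beam search should consider forcing next
        return self.token_ids[self.progress]

    def update(self, generated_token):
        if generated_token == self.token_ids[self.progress]:
            self.progress += 1  # one step closer to fulfilling the constraint
        else:
            self.progress = 0  # e.g. generating "slow" after "is" resets the progress

    @property
    def completed(self):
        return self.progress == len(self.token_ids)

The actual implementation handles many more edge cases, but this is the gist of the progress tracking described above.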
Although we have provided the keyword argument force_words_ids for model.generate(), the following is what actually happens in the back-end:from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, PhrasalConstrainttokenizer = AutoTokenizer.from_pretrained("t5-base")model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")encoder_input_str = "translate English to German: How old are you?"constraints = [ PhrasalConstraint( tokenizer("Sie", add_special_tokens=False).input_ids )]input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_idsoutputs = model.generate( input_ids, constraints=constraints, num_beams=10, num_return_sequences=1, no_repeat_ngram_size=1, remove_invalid_values=True,)print("Output:" + 100 * '-')print(tokenizer.decode(outputs[0], skip_special_tokens=True))Output:----------------------------------------------------------------------------------------------------Wie alt sind Sie?You can define one yourself and input it into the constraints keyword argument to design your unique constraints. You just have to create a sub-class of the Constraint abstract interface class and follow its requirements. You can find more information in the definition of Constraint found here.Some unique ideas (not yet implemented; maybe you can give it a try!) include constraints like OrderedConstraints, TemplateConstraints that may be added further down the line. Currently, the generation is fulfilled by including the sequences, wherever in the output. For example, a previous example had one sequence with scared -> screaming and the other with screamed -> scared. OrderedConstraints could allow the user to specify the order in which these constraints are fulfilled. TemplateConstraints could allow for a more niche use of the feature, where the objective can be something like:starting_text = "The woman"template = ["the", "", "School of", "", "in"]possible_outputs == [ "The woman attended the Ross School of Business in Michigan.", "The woman was the administrator for the Harvard School of Business in MA."]or:starting_text = "The woman"template = ["the", "", "", "University", "", "in"]possible_outputs == [ "The woman attended the Carnegie Mellon University in Pittsburgh.",]impossible_outputs == [ "The woman attended the Harvard University in MA."]or if the user does not care about the number of tokens that can go in between two words, then one can just use OrderedConstraint.ConclusionConstrained beam search gives us a flexible means to inject external knowledge and requirements into text generation. Previously, there was no easy way to tell the model to 1. include a list of sequences where 2. some of which are optional and some are not, such that 3. they're generated somewhere in the sequence at respective reasonable positions. Now, we can have full control over our generation with a mix of different subclasses of Constraint objects! This new feature is based mainly on the following papers:Guided Open Vocabulary Image Captioning with Constrained Beam SearchFast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine TranslationImproved Lexically Constrained Decoding for Translation and Monolingual RewritingGuided Generation of Cause and EffectLike the ones above, many new research papers are exploring ways of using external knowledge (e.g., KGs, KBs) to guide the outputs of large deep learning models. 
Hopefully, this constrained beam search feature becomes another effective way to achieve this purpose.Thanks to everybody that gave guidance for this feature contribution: Patrick von Platen for being involved from the initial issue to the final PR, and Narsil Patry, for providing detailed feedback on the code.Thumbnail of this post uses an icon with the attribution: Shorthand icons created by Freepik - Flaticon
https://huggingface.co/blog/bert-101
BERT 101 🤗 State Of The Art NLP Model Explained
Britney Muller
March 2, 2022
What is BERT?BERT, short for Bidirectional Encoder Representations from Transformers, is a Machine Learning (ML) model for natural language processing. It was developed in 2018 by researchers at Google AI Language and serves as a swiss army knife solution to 11+ of the most common language tasks, such as sentiment analysis and named entity recognition. Language has historically been difficult for computers to ‘understand’. Sure, computers can collect, store, and read text inputs but they lack basic language context.So, along came Natural Language Processing (NLP): the field of artificial intelligence aiming for computers to read, analyze, interpret and derive meaning from text and spoken words. This practice combines linguistics, statistics, and Machine Learning to assist computers in ‘understanding’ human language.Individual NLP tasks have traditionally been solved by individual models created for each specific task. That is, until— BERT!BERT revolutionized the NLP space by solving for 11+ of the most common NLP tasks (and better than previous models) making it the jack of all NLP trades. In this guide, you'll learn what BERT is, why it’s different, and how to get started using BERT:What is BERT used for?How does BERT work?BERT model size & architectureBERT’s performance on common language tasksEnvironmental impact of deep learningThe open source power of BERTHow to get started using BERTBERT FAQsConclusionLet's get started! 🚀1. What is BERT used for?BERT can be used on a wide variety of language tasks:Can determine how positive or negative a movie’s reviews are. (Sentiment Analysis)Helps chatbots answer your questions. (Question answering)Predicts your text when writing an email (Gmail). (Text prediction)Can write an article about any topic with just a few sentence inputs. (Text generation)Can quickly summarize long legal contracts. (Summarization)Can differentiate words that have multiple meanings (like ‘bank’) based on the surrounding text. (Polysemy resolution)There are many more language/NLP tasks + more detail behind each of these.Fun Fact: You interact with NLP (and likely BERT) almost every single day! NLP is behind Google Translate, voice assistants (Alexa, Siri, etc.), chatbots, Google searches, voice-operated GPS, and more.1.1 Example of BERTBERT helps Google better surface (English) results for nearly all searches since November of 2020. Here’s an example of how BERT helps Google better understand specific searches like:SourcePre-BERT Google surfaced information about getting a prescription filled. Post-BERT Google understands that “for someone” relates to picking up a prescription for someone else and the search results now help to answer that.2. How does BERT Work?BERT works by leveraging the following:2.1 Large amounts of training dataA massive dataset of 3.3 Billion words has contributed to BERT’s continued success. BERT was specifically trained on Wikipedia (~2.5B words) and Google’s BooksCorpus (~800M words). These large informational datasets contributed to BERT’s deep knowledge not only of the English language but also of our world! 🚀Training on a dataset this large takes a long time. BERT’s training was made possible thanks to the novel Transformer architecture and sped up by using TPUs (Tensor Processing Units - Google’s custom circuit built specifically for large ML models). 
64 TPUs trained BERT over the course of 4 days.Note: Demand for smaller BERT models is increasing in order to use BERT within smaller computational environments (like cell phones and personal computers). 23 smaller BERT models were released in March 2020. DistilBERT offers a lighter version of BERT that runs 60% faster while maintaining over 95% of BERT's performance.2.2 What is a Masked Language Model?MLM enables/enforces bidirectional learning from text by masking (hiding) a word in a sentence and forcing BERT to bidirectionally use the words on either side of the covered word to predict the masked word. This had never been done before!Fun Fact: We naturally do this as humans! Masked Language Model Example:Imagine your friend calls you while camping in Glacier National Park and their service begins to cut out. The last thing you hear before the call drops is:Friend: "Dang! I'm out fishing and a huge trout just [blank] my line!"Can you guess what your friend said?? You're naturally able to predict the missing word by considering the words bidirectionally before and after the missing word as context clues (in addition to your historical knowledge of how fishing works). Did you guess that your friend said, 'broke'? That's what we predicted as well, but even we humans are error-prone at this. Note: This is why you'll often see a "Human Performance" comparison to a language model's performance scores. And yes, newer models like BERT can be more accurate than humans! 🤯The bidirectional reasoning you used to fill in the [blank] word above is similar to how BERT attains state-of-the-art accuracy. A random 15% of tokenized words are hidden during training and BERT's job is to correctly predict the hidden words, directly teaching the model about the English language (and the words we use). Isn't that neat?You can play around with BERT's masking predictions using the hosted inference widget on the model page.Fun Fact: Masking has been around a long time - 1953 Paper on Cloze procedure (or 'Masking'). 2.3 What is Next Sentence Prediction?NSP (Next Sentence Prediction) is used to help BERT learn about relationships between sentences by predicting if a given sentence follows the previous sentence or not. Next Sentence Prediction Example:Paul went shopping. He bought a new shirt. (correct sentence pair)Ramona made coffee. Vanilla ice cream cones for sale. (incorrect sentence pair)In training, 50% correct sentence pairs are mixed in with 50% random sentence pairs to help BERT increase next sentence prediction accuracy.Fun Fact: BERT is trained on both MLM (50%) and NSP (50%) at the same time.2.4 TransformersThe Transformer architecture makes it possible to parallelize ML training extremely efficiently. Massive parallelization thus makes it feasible to train BERT on large amounts of data in a relatively short period of time. Transformers use an attention mechanism to observe relationships between words, a concept originally proposed in the popular 2017 Attention Is All You Need paper that sparked the use of Transformers in NLP models all around the world.Since their introduction in 2017, Transformers have rapidly become the state-of-the-art approach to tackle tasks in many domains such as natural language processing, speech recognition, and computer vision. 
In short, if you’re doing deep learning, then you need Transformers!Lewis Tunstall, Hugging Face ML Engineer & Author of Natural Language Processing with TransformersTimeline of popular Transformer model releases.2.4.1 How do Transformers work?Transformers work by leveraging attention, a powerful deep-learning algorithm, first seen in computer vision models. It’s not all that different from how we humans process information through attention. We are incredibly good at forgetting/ignoring mundane daily inputs that don’t pose a threat or require a response from us. For example, can you remember everything you saw and heard coming home last Tuesday? Of course not! Our brain’s memory is limited and valuable. Our recall is aided by our ability to forget trivial inputs. Similarly, Machine Learning models need to learn how to pay attention only to the things that matter and not waste computational resources processing irrelevant information. Transformers create differential weights signaling which words in a sentence are the most critical to further process.A transformer does this by successively processing an input through a stack of transformer layers, usually called the encoder. If necessary, another stack of transformer layers - the decoder - can be used to predict a target output. BERT, however, doesn’t use a decoder. Transformers are uniquely suited for unsupervised learning because they can efficiently process millions of data points.Fun Fact: Google has been using your reCAPTCHA selections to label training data since 2011. The entire Google Books archive and 13 million articles from the New York Times catalog have been transcribed/digitized via people entering reCAPTCHA text. Now, reCAPTCHA is asking us to label Google Street View images, vehicles, stoplights, airplanes, etc. Would be neat if Google made us aware of our participation in this effort (as the training data likely has future commercial intent) but I digress..To learn more about Transformers check out our Hugging Face Transformers Course.3. BERT model size & architectureLet’s break down the architecture for the two original BERT models:ML Architecture Glossary:Parameters: Number of learnable variables/values available for the model.Transformer Layers: Number of Transformer blocks. A transformer block transforms a sequence of word representations into a sequence of contextualized word representations.Hidden Size: The dimensionality of the representation each layer produces for every token.Attention Heads: The number of parallel attention mechanisms inside each Transformer block.Processing: Type of processing unit used to train the model.Length of Training: Time it took to train the model.Here’s how the two models compare on those parts:BERTbase: 12 Transformer layers, a hidden size of 768, 12 attention heads, and 110M parameters, trained on 4 TPUs for 4 days.BERTlarge: 24 Transformer layers, a hidden size of 1024, 16 attention heads, and 340M parameters, trained on 16 TPUs for 4 days.Let’s take a look at how BERTlarge’s additional layers, attention heads, and parameters have increased its performance across NLP tasks.4. BERT's performance on common language tasksBERT has successfully achieved state-of-the-art accuracy on 11 common NLP tasks, outperforming previous top NLP models, and is the first to outperform humans!
But, how are these achievements measured?NLP Evaluation Methods:4.1 SQuAD v1.1 & v2.0SQuAD (Stanford Question Answering Dataset) is a reading comprehension dataset of around 108k questions that can be answered via a corresponding paragraph of Wikipedia text. BERT’s performance on this evaluation method was a big achievement, beating previous state-of-the-art models and human-level performance:4.2 SWAGSWAG (Situations With Adversarial Generations) is an interesting evaluation in that it detects a model’s ability to infer commonsense! It does this through a large-scale dataset of 113k multiple choice questions about common sense situations. These questions are transcribed from a video scene/situation and SWAG provides the model with four possible outcomes in the next scene. The model then does its best at predicting the correct answer.BERT outperformed the previous top models, including human-level performance:4.3 GLUE BenchmarkGLUE (General Language Understanding Evaluation) benchmark is a group of resources for training, measuring, and analyzing language models relative to one another. These resources consist of nine “difficult” tasks designed to test an NLP model’s understanding. Here’s a summary of each of those tasks:While some of these tasks may seem irrelevant and banal, it’s important to note that these evaluation methods are incredibly powerful in indicating which models are best suited for your next NLP application. Attaining performance of this caliber isn’t without consequences. Next up, let’s learn about Machine Learning's impact on the environment.5. Environmental impact of deep learningLarge Machine Learning models require massive amounts of data which is expensive in both time and compute resources.These models also have an environmental impact.Machine Learning’s environmental impact is one of the many reasons we believe in democratizing the world of Machine Learning through open source! Sharing large pre-trained language models is essential in reducing the overall compute cost and carbon footprint of our community-driven efforts.6. The open source power of BERTUnlike other large language models such as GPT-3, BERT’s source code is publicly accessible (view BERT’s code on Github) allowing BERT to be more widely used all around the world. This is a game-changer!Developers are now able to get a state-of-the-art model like BERT up and running quickly without spending large amounts of time and money. 🤯 Developers can instead focus their efforts on fine-tuning BERT to customize the model’s performance to their unique tasks. It’s important to note that thousands of open-source and free, pre-trained BERT models are currently available for specific use cases if you don’t want to fine-tune BERT. BERT models pre-trained for specific tasks:Twitter sentiment analysisAnalysis of Japanese textEmotion categorizer (English - anger, fear, joy, etc.)Clinical Notes analysisSpeech to text translationToxic comment detectionYou can also find hundreds of pre-trained, open-source Transformer models available on the Hugging Face Hub.7. How to get started using BERTWe've created this notebook so you can try BERT through this easy tutorial in Google Colab. Open the notebook or add the following code to your own.
Pro Tip: Use (Shift + Click) to run a code cell.Note: Hugging Face's pipeline class makes it incredibly easy to pull in open source ML models like transformers with just a single line of code.7.1 Install TransformersFirst, let's install Transformers via the following code:!pip install transformers7.2 Try out BERTFeel free to swap out the sentence below for one of your own. However, leave [MASK] in somewhere to allow BERT to predict the missing wordfrom transformers import pipelineunmasker = pipeline('fill-mask', model='bert-base-uncased')unmasker("Artificial Intelligence [MASK] take over the world.")When you run the above code you should see an output like this:[{'score': 0.3182411789894104,'sequence': 'artificial intelligence can take over the world.','token': 2064,'token_str': 'can'},{'score': 0.18299679458141327,'sequence': 'artificial intelligence will take over the world.','token': 2097,'token_str': 'will'},{'score': 0.05600147321820259,'sequence': 'artificial intelligence to take over the world.','token': 2000,'token_str': 'to'},{'score': 0.04519503191113472,'sequence': 'artificial intelligences take over the world.','token': 2015,'token_str': '##s'},{'score': 0.045153118669986725,'sequence': 'artificial intelligence would take over the world.','token': 2052,'token_str': 'would'}]Kind of frightening right? 🙃7.3 Be aware of model biasLet's see what jobs BERT suggests for a "man":unmasker("The man worked as a [MASK].")When you run the above code you should see an output that looks something like:[{'score': 0.09747546911239624,'sequence': 'the man worked as a carpenter.','token': 10533,'token_str': 'carpenter'},{'score': 0.052383411675691605,'sequence': 'the man worked as a waiter.','token': 15610,'token_str': 'waiter'},{'score': 0.04962698742747307,'sequence': 'the man worked as a barber.','token': 13362,'token_str': 'barber'},{'score': 0.037886083126068115,'sequence': 'the man worked as a mechanic.','token': 15893,'token_str': 'mechanic'},{'score': 0.037680838257074356,'sequence': 'the man worked as a salesman.','token': 18968,'token_str': 'salesman'}]BERT predicted the man's job to be a Carpenter, Waiter, Barber, Mechanic, or SalesmanNow let's see what jobs BERT suggesst for "woman"unmasker("The woman worked as a [MASK].")You should see an output that looks something like:[{'score': 0.21981535851955414,'sequence': 'the woman worked as a nurse.','token': 6821,'token_str': 'nurse'},{'score': 0.1597413569688797,'sequence': 'the woman worked as a waitress.','token': 13877,'token_str': 'waitress'},{'score': 0.11547300964593887,'sequence': 'the woman worked as a maid.','token': 10850,'token_str': 'maid'},{'score': 0.03796879202127457,'sequence': 'the woman worked as a prostitute.','token': 19215,'token_str': 'prostitute'},{'score': 0.030423851683735847,'sequence': 'the woman worked as a cook.','token': 5660,'token_str': 'cook'}]BERT predicted the woman's job to be a Nurse, Waitress, Maid, Prostitute, or Cook displaying a clear gender bias in professional roles.7.4 Some other BERT Notebooks you might enjoy:A Visual Notebook to BERT for the First TimeTrain your tokenizer+Don't forget to checkout the Hugging Face Transformers Course to learn more 🎉8. BERT FAQsCan BERT be used with PyTorch?Yes! Our experts at Hugging Face have open-sourced the PyTorch transformers repository on GitHub. 
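For readers who want to see that answer in action, here is a minimal sketch (not part of the original tutorial) of loading BERT with PyTorch through the transformers library and inspecting its contextual embeddings; the example sentence is arbitrary:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the pre-trained BERT-base tokenizer and weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Tokenize an example sentence and run a forward pass (no gradients needed for inference)
inputs = tokenizer("BERT plays nicely with PyTorch.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional contextual vector per token (768 is BERT-base's hidden size)
print(outputs.last_hidden_state.shape)  # torch.Size([1, number_of_tokens, 768])
```

The from_pretrained calls download and cache the checkpoint the first time they run, so subsequent loads are fast.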
Pro Tip: Lewis Tunstall, Leandro von Werra, and Thomas Wolf also wrote a book to help people build language applications with Hugging Face called, ‘Natural Language Processing with Transformers’.Can BERT be used with Tensorflow?Yes! You can use Tensorflow as the backend of Transformers.How long does it take to pre-train BERT?The 2 original BERT models were trained on 4(BERTbase) and 16(BERTlarge) Cloud TPUs for 4 days.How long does it take to fine-tune BERT?For common NLP tasks discussed above, BERT takes between 1-25mins on a single Cloud TPU or between 1-130mins on a single GPU.What makes BERT different?BERT was one of the first models in NLP that was trained in a two-step way: BERT was trained on massive amounts of unlabeled data (no human annotation) in an unsupervised fashion.BERT was then trained on small amounts of human-annotated data starting from the previous pre-trained model resulting in state-of-the-art performance.9. ConclusionBERT is a highly complex and advanced language model that helps people automate language understanding. Its ability to accomplish state-of-the-art performance is supported by training on massive amounts of data and leveraging Transformers architecture to revolutionize the field of NLP. Thanks to BERT’s open-source library, and the incredible AI community’s efforts to continue to improve and share new BERT models, the future of untouched NLP milestones looks bright.What will you create with BERT? Learn how to fine-tune BERT for your particular use case 🤗
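As a closing, illustrative sketch of what that fine-tuning can look like (assumptions: the IMDB reviews dataset, tiny subsets, and default hyperparameters chosen purely to keep the example short and fast), the two-step recipe described in the FAQ above boils down to roughly this:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Pad/truncate so every review becomes a fixed-length sequence of token ids
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

# Small shuffled subsets so this sketch finishes quickly; use more data for real results
imdb = load_dataset("imdb")
train_ds = imdb["train"].shuffle(seed=42).select(range(1000)).map(tokenize, batched=True)
eval_ds = imdb["test"].shuffle(seed=42).select(range(200)).map(tokenize, batched=True)

# Start from the pre-trained (unsupervised) BERT weights and add a fresh classification head
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-imdb-demo", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
print(trainer.evaluate())
```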
https://huggingface.co/blog/fine-tune-vit
Fine-Tune ViT for Image Classification with 🤗 Transformers
Nate Raw
February 11, 2022
Just as transformers-based models have revolutionized NLP, we're now seeing an explosion of papers applying them to all sorts of other domains. One of the most revolutionary of these was the Vision Transformer (ViT), which was introduced in June 2021 by a team of researchers at Google Brain.This paper explored how you can tokenize images, just as you would tokenize sentences, so that they can be passed to transformer models for training. It's quite a simple concept, really...Split an image into a grid of sub-image patchesEmbed each patch with a linear projectionEach embedded patch becomes a token, and the resulting sequence of embedded patches is the sequence you pass to the model.It turns out that once you've done the above, you can pre-train and fine-tune transformers just as you're used to with NLP tasks. Pretty sweet 😎.In this blog post, we'll walk through how to leverage 🤗 datasets to download and process image classification datasets, and then use them to fine-tune a pre-trained ViT with 🤗 transformers. To get started, let's first install both those packages.pip install datasets transformersLoad a datasetLet's start by loading a small image classification dataset and taking a look at its structure.We'll use the beans dataset, which is a collection of pictures of healthy and unhealthy bean leaves. 🍃from datasets import load_datasetds = load_dataset('beans')dsLet's take a look at the 400th example from the 'train' split from the beans dataset. You'll notice each example from the dataset has 3 features:image: A PIL Imageimage_file_path: The str path to the image file that was loaded as imagelabels: A datasets.ClassLabel feature, which is an integer representation of the label. (Later you'll see how to get the string class names, don't worry!)ex = ds['train'][400]ex{'image': <PIL.JpegImagePlugin ...>,'image_file_path': '/root/.cache/.../bean_rust_train.4.jpg','labels': 1}Let's take a look at the image 👀image = ex['image']imageThat's definitely a leaf! But what kind? 😅Since the 'labels' feature of this dataset is a datasets.features.ClassLabel, we can use it to look up the corresponding name for this example's label ID.First, let's access the feature definition for the 'labels'.labels = ds['train'].features['labels']labelsClassLabel(num_classes=3, names=['angular_leaf_spot', 'bean_rust', 'healthy'], names_file=None, id=None)Now, let's print out the class label for our example. You can do that by using the int2str function of ClassLabel, which, as the name implies, allows to pass the integer representation of the class to look up the string label.labels.int2str(ex['labels'])'bean_rust'Turns out the leaf shown above is infected with Bean Rust, a serious disease in bean plants. 
😢Let's write a function that'll display a grid of examples from each class to get a better idea of what you're working with.import randomfrom PIL import ImageDraw, ImageFont, Imagedef show_examples(ds, seed: int = 1234, examples_per_class: int = 3, size=(350, 350)):w, h = sizelabels = ds['train'].features['labels'].namesgrid = Image.new('RGB', size=(examples_per_class * w, len(labels) * h))draw = ImageDraw.Draw(grid)font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationMono-Bold.ttf", 24)for label_id, label in enumerate(labels):# Filter the dataset by a single label, shuffle it, and grab a few samplesds_slice = ds['train'].filter(lambda ex: ex['labels'] == label_id).shuffle(seed).select(range(examples_per_class))# Plot this label's examples along a rowfor i, example in enumerate(ds_slice):image = example['image']idx = examples_per_class * label_id + ibox = (idx % examples_per_class * w, idx // examples_per_class * h)grid.paste(image.resize(size), box=box)draw.text(box, label, (255, 255, 255), font=font)return gridshow_examples(ds, seed=random.randint(0, 1337), examples_per_class=3)A grid of a few examples from each class in the datasetFrom what I'm seeing, Angular Leaf Spot: Has irregular brown patchesBean Rust: Has circular brown spots surrounded with a white-ish yellow ringHealthy: ...looks healthy. 🤷‍♂️Loading ViT Image ProcessorNow we know what our images look like and better understand the problem we're trying to solve. Let's see how we can prepare these images for our model!When ViT models are trained, specific transformations are applied to images fed into them. Use the wrong transformations on your image, and the model won't understand what it's seeing! 🖼 ➡️ 🔢To make sure we apply the correct transformations, we will use a ViTImageProcessor initialized with a configuration that was saved along with the pretrained model we plan to use. In our case, we'll be using the google/vit-base-patch16-224-in21k model, so let's load its image processor from the Hugging Face Hub.from transformers import ViTImageProcessormodel_name_or_path = 'google/vit-base-patch16-224-in21k'processor = ViTImageProcessor.from_pretrained(model_name_or_path)You can see the image processor configuration by printing it.ViTImageProcessor {"do_normalize": true,"do_resize": true,"image_mean": [0.5,0.5,0.5],"image_std": [0.5,0.5,0.5],"resample": 2,"size": 224}To process an image, simply pass it to the image processor's call function. This will return a dict containing pixel values, which is the numeric representation to be passed to the model.You get a NumPy array by default, but if you add the return_tensors='pt' argument, you'll get back torch tensors instead.processor(image, return_tensors='pt')Should give you something like...{'pixel_values': tensor([[[[ 0.2706, 0.3255, 0.3804, ...]]]])}...where the shape of the tensor is (1, 3, 224, 224).Processing the DatasetNow that you know how to read images and transform them into inputs, let's write a function that will put those two things together to process a single example from the dataset.def process_example(example):inputs = processor(example['image'], return_tensors='pt')inputs['labels'] = example['labels']return inputsprocess_example(ds['train'][0]){'pixel_values': tensor([[[[-0.6157, -0.6000, -0.6078, ..., ]]]]),'labels': 0}While you could call ds.map and apply this to every example at once, this can be very slow, especially if you use a larger dataset. Instead, you can apply a transform to the dataset. 
Transforms are only applied to examples as you index them.First, though, you'll need to update the last function to accept a batch of data, as that's what ds.with_transform expects.ds = load_dataset('beans')def transform(example_batch):# Take a list of PIL images and turn them to pixel valuesinputs = processor([x for x in example_batch['image']], return_tensors='pt')# Don't forget to include the labels!inputs['labels'] = example_batch['labels']return inputsYou can directly apply this to the dataset using ds.with_transform(transform).prepared_ds = ds.with_transform(transform)Now, whenever you get an example from the dataset, the transform will be applied in real time (on both samples and slices, as shown below)prepared_ds['train'][0:2]This time, the resulting pixel_values tensor will have shape (2, 3, 224, 224).{'pixel_values': tensor([[[[-0.6157, -0.6000, -0.6078, ..., ]]]]),'labels': [0, 0]}Training and EvaluationThe data is processed and you are ready to start setting up the training pipeline. This blog post uses 🤗's Trainer, but that'll require us to do a few things first:Define a collate function.Define an evaluation metric. During training, the model should be evaluated on its prediction accuracy. You should define a compute_metrics function accordingly.Load a pretrained checkpoint. You need to load a pretrained checkpoint and configure it correctly for training.Define the training configuration.After fine-tuning the model, you will correctly evaluate it on the evaluation data and verify that it has indeed learned to correctly classify the images.Define our data collatorBatches are coming in as lists of dicts, so you can just unpack + stack those into batch tensors.Since the collate_fn will return a batch dict, you can **unpack the inputs to the model later. ✨import torchdef collate_fn(batch):return {'pixel_values': torch.stack([x['pixel_values'] for x in batch]),'labels': torch.tensor([x['labels'] for x in batch])}Define an evaluation metricThe accuracy metric from datasets can easily be used to compare the predictions with the labels. Below, you can see how to use it within a compute_metrics function that will be used by the Trainer.import numpy as npfrom datasets import load_metricmetric = load_metric("accuracy")def compute_metrics(p):return metric.compute(predictions=np.argmax(p.predictions, axis=1), references=p.label_ids)Let's load the pretrained model. We'll add num_labels on init so the model creates a classification head with the right number of units. We'll also include the id2label and label2id mappings to have human-readable labels in the Hub widget (if you choose to push_to_hub).from transformers import ViTForImageClassificationlabels = ds['train'].features['labels'].namesmodel = ViTForImageClassification.from_pretrained(model_name_or_path,num_labels=len(labels),id2label={str(i): c for i, c in enumerate(labels)},label2id={c: str(i) for i, c in enumerate(labels)})Almost ready to train! The last thing needed before that is to set up the training configuration by defining TrainingArguments.Most of these are pretty self-explanatory, but one that is quite important here is remove_unused_columns=False. This one will drop any features not used by the model's call function. By default it's True because usually it's ideal to drop unused feature columns, making it easier to unpack inputs into the model's call function. 
But, in our case, we need the unused features ('image' in particular) in order to create 'pixel_values'.What I'm trying to say is that you'll have a bad time if you forget to set remove_unused_columns=False.from transformers import TrainingArgumentstraining_args = TrainingArguments(output_dir="./vit-base-beans",per_device_train_batch_size=16,evaluation_strategy="steps",num_train_epochs=4,fp16=True,save_steps=100,eval_steps=100,logging_steps=10,learning_rate=2e-4,save_total_limit=2,remove_unused_columns=False,push_to_hub=False,report_to='tensorboard',load_best_model_at_end=True,)Now, all instances can be passed to Trainer and we are ready to start training!from transformers import Trainertrainer = Trainer(model=model,args=training_args,data_collator=collate_fn,compute_metrics=compute_metrics,train_dataset=prepared_ds["train"],eval_dataset=prepared_ds["validation"],tokenizer=processor,)Train 🚀train_results = trainer.train()trainer.save_model()trainer.log_metrics("train", train_results.metrics)trainer.save_metrics("train", train_results.metrics)trainer.save_state()Evaluate 📊metrics = trainer.evaluate(prepared_ds['validation'])trainer.log_metrics("eval", metrics)trainer.save_metrics("eval", metrics)Here were my evaluation results - Cool beans! Sorry, had to say it.***** eval metrics *****epoch = 4.0eval_accuracy = 0.985eval_loss = 0.0637eval_runtime = 0:00:02.13eval_samples_per_second = 62.356eval_steps_per_second = 7.97Finally, if you want, you can push your model up to the hub. Here, we'll push it up if you specified push_to_hub=True in the training configuration. Note that in order to push to hub, you'll have to have git-lfs installed and be logged into your Hugging Face account (which can be done via huggingface-cli login).kwargs = {"finetuned_from": model.config._name_or_path,"tasks": "image-classification","dataset": 'beans',"tags": ['image-classification'],}if training_args.push_to_hub:trainer.push_to_hub('🍻 cheers', **kwargs)else:trainer.create_model_card(**kwargs)The resulting model has been shared to nateraw/vit-base-beans. I'm assuming you don't have pictures of bean leaves laying around, so I added some examples for you to give it a try! 🚀
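If you would rather try the resulting model from code than from the Hub widget, a small inference sketch like the following works too (it borrows an image from the beans validation split so you don't need your own leaf photos; the image-classification pipeline applies the processor transformations for you):

```python
from datasets import load_dataset
from transformers import pipeline

# Load the fine-tuned checkpoint shared above and grab one validation example
classifier = pipeline("image-classification", model="nateraw/vit-base-beans")
example = load_dataset("beans", split="validation")[0]

# The pipeline handles resizing/normalization of the PIL image and returns scored labels
print("ground truth label id:", example["labels"])
print(classifier(example["image"]))
# e.g. a list of dicts like [{'label': 'angular_leaf_spot', 'score': ...}, ...]
```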
https://huggingface.co/blog/sentiment-analysis-python
Getting Started with Sentiment Analysis using Python
Federico Pascual
February 2, 2022
Sentiment analysis is the automated process of tagging data according to their sentiment, such as positive, negative and neutral. Sentiment analysis allows companies to analyze data at scale, detect insights and automate processes.In the past, sentiment analysis used to be limited to researchers, machine learning engineers or data scientists with experience in natural language processing. However, the AI community has built awesome tools to democratize access to machine learning in recent years. Nowadays, you can use sentiment analysis with a few lines of code and no machine learning experience at all! 🤯In this guide, you'll learn everything to get started with sentiment analysis using Python, including:What is sentiment analysis?How to use pre-trained sentiment analysis models with PythonHow to build your own sentiment analysis modelHow to analyze tweets with sentiment analysisLet's get started! 🚀1. What is Sentiment Analysis?Sentiment analysis is a natural language processing technique that identifies the polarity of a given text. There are different flavors of sentiment analysis, but one of the most widely used techniques labels data into positive, negative and neutral. For example, let's take a look at these tweets mentioning @VerizonSupport:"dear @verizonsupport your service is straight 💩 in dallas.. been with y’all over a decade and this is all time low for y’all. i’m talking no internet at all." → Would be tagged as "Negative"."@verizonsupport ive sent you a dm" → would be tagged as "Neutral"."thanks to michelle et al at @verizonsupport who helped push my no-show-phone problem along. order canceled successfully and ordered this for pickup today at the apple store in the mall." → would be tagged as "Positive".Sentiment analysis allows processing data at scale and in real-time. For example, do you want to analyze thousands of tweets, product reviews or support tickets? Instead of sorting through this data manually, you can use sentiment analysis to automatically understand how people are talking about a specific topic, get insights for data-driven decisions and automate business processes.Sentiment analysis is used in a wide variety of applications, for example:Analyze social media mentions to understand how people are talking about your brand vs your competitors.Analyze feedback from surveys and product reviews to quickly get insights into what your customers like and dislike about your product.Analyze incoming support tickets in real-time to detect angry customers and act accordingly to prevent churn.2. How to Use Pre-trained Sentiment Analysis Models with PythonNow that we have covered what sentiment analysis is, we are ready to play with some sentiment analysis models! 🎉On the Hugging Face Hub, we are building the largest collection of models and datasets publicly available in order to democratize machine learning 🚀. In the Hub, you can find more than 27,000 models shared by the AI community with state-of-the-art performances on tasks such as sentiment analysis, object detection, text generation, speech recognition and more. 
The Hub is free to use and most models have a widget that allows to test them directly on your browser!There are more than 215 sentiment analysis models publicly available on the Hub and integrating them with Python just takes 5 lines of code:pip install -q transformersfrom transformers import pipelinesentiment_pipeline = pipeline("sentiment-analysis")data = ["I love you", "I hate you"]sentiment_pipeline(data)This code snippet uses the pipeline class to make predictions from models available in the Hub. It uses the default model for sentiment analysis to analyze the list of texts data and it outputs the following results:[{'label': 'POSITIVE', 'score': 0.9998},{'label': 'NEGATIVE', 'score': 0.9991}]You can use a specific sentiment analysis model that is better suited to your language or use case by providing the name of the model. For example, if you want a sentiment analysis model for tweets, you can specify the model id:specific_model = pipeline(model="finiteautomata/bertweet-base-sentiment-analysis")specific_model(data)You can test these models with your own data using this Colab notebook:The following are some popular models for sentiment analysis models available on the Hub that we recommend checking out:Twitter-roberta-base-sentiment is a roBERTa model trained on ~58M tweets and fine-tuned for sentiment analysis. Fine-tuning is the process of taking a pre-trained large language model (e.g. roBERTa in this case) and then tweaking it with additional training data to make it perform a second similar task (e.g. sentiment analysis).Bert-base-multilingual-uncased-sentiment is a model fine-tuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian.Distilbert-base-uncased-emotion is a model fine-tuned for detecting emotions in texts, including sadness, joy, love, anger, fear and surprise.Are you interested in doing sentiment analysis in languages such as Spanish, French, Italian or German? On the Hub, you will find many models fine-tuned for different use cases and ~28 languages. You can check out the complete list of sentiment analysis models here and filter at the left according to the language of your interest.3. Building Your Own Sentiment Analysis ModelUsing pre-trained models publicly available on the Hub is a great way to get started right away with sentiment analysis. These models use deep learning architectures such as transformers that achieve state-of-the-art performance on sentiment analysis and other machine learning tasks. However, you can fine-tune a model with your own data to further improve the sentiment analysis results and get an extra boost of accuracy in your particular use case.In this section, we'll go over two approaches on how to fine-tune a model for sentiment analysis with your own data and criteria. The first approach uses the Trainer API from the 🤗Transformers, an open source library with 50K stars and 1K+ contributors and requires a bit more coding and experience. The second approach is a bit easier and more straightforward, it uses AutoNLP, a tool to automatically train, evaluate and deploy state-of-the-art NLP models without code or ML experience.Let's dive in!a. Fine-tuning model with PythonIn this tutorial, you'll use the IMDB dataset to fine-tune a DistilBERT model for sentiment analysis. The IMDB dataset contains 25,000 movie reviews labeled by sentiment for training a model and 25,000 movie reviews for testing it. DistilBERT is a smaller, faster and cheaper version of BERT. 
It is 40% smaller than BERT and runs 60% faster while preserving over 95% of BERT’s performance. You'll use the IMDB dataset to fine-tune a DistilBERT model that is able to classify whether a movie review is positive or negative. Once you train the model, you will use it to analyze new data! ⚡️We have created this notebook so you can use it through this tutorial in Google Colab.1. Activate GPU and Install DependenciesAs a first step, let's set up Google Colab to use a GPU (instead of CPU) to train the model much faster. You can do this by going to the menu, clicking on 'Runtime' > 'Change runtime type', and selecting 'GPU' as the Hardware accelerator. Once you do this, you should check if a GPU is available in your notebook by running the following code: import torchtorch.cuda.is_available()Then, install the libraries you will be using in this tutorial:!pip install datasets transformers huggingface_hubYou should also install git-lfs to use git in your model repository:!apt-get install git-lfs2. Preprocess dataYou need data to fine-tune DistilBERT for sentiment analysis. So, let's use the 🤗Datasets library to download and preprocess the IMDB dataset so you can then use this data for training your model:from datasets import load_datasetimdb = load_dataset("imdb")IMDB is a huge dataset, so let's create smaller datasets to enable faster training and testing:small_train_dataset = imdb["train"].shuffle(seed=42).select([i for i in list(range(3000))])small_test_dataset = imdb["test"].shuffle(seed=42).select([i for i in list(range(300))])To preprocess our data, you will use the DistilBERT tokenizer:from transformers import AutoTokenizertokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")Next, you will prepare the text inputs for the model for both splits of our dataset (training and test) by using the map method:def preprocess_function(examples):return tokenizer(examples["text"], truncation=True)tokenized_train = small_train_dataset.map(preprocess_function, batched=True)tokenized_test = small_test_dataset.map(preprocess_function, batched=True)To speed up training, let's use a data_collator to convert your training samples to PyTorch tensors and concatenate them with the correct amount of padding:from transformers import DataCollatorWithPaddingdata_collator = DataCollatorWithPadding(tokenizer=tokenizer)3. Training the modelNow that the preprocessing is done, you can go ahead and train your model 🚀You will be throwing away the pretraining head of the DistilBERT model and replacing it with a classification head fine-tuned for sentiment analysis.
This enables you to transfer the knowledge from DistilBERT to your custom model 🔥For training, you will be using the Trainer API, which is optimized for fine-tuning Transformers🤗 models such as DistilBERT, BERT and RoBERTa.First, let's define DistilBERT as your base model:from transformers import AutoModelForSequenceClassificationmodel = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)Then, let's define the metrics you will be using to evaluate how good is your fine-tuned model (accuracy and f1 score):import numpy as npfrom datasets import load_metricdef compute_metrics(eval_pred):load_accuracy = load_metric("accuracy")load_f1 = load_metric("f1")logits, labels = eval_predpredictions = np.argmax(logits, axis=-1)accuracy = load_accuracy.compute(predictions=predictions, references=labels)["accuracy"]f1 = load_f1.compute(predictions=predictions, references=labels)["f1"]return {"accuracy": accuracy, "f1": f1}Next, let's login to your Hugging Face account so you can manage your model repositories. notebook_login will launch a widget in your notebook where you'll need to add your Hugging Face token:from huggingface_hub import notebook_loginnotebook_login()You are almost there! Before training our model, you need to define the training arguments and define a Trainer with all the objects you constructed up to this point:from transformers import TrainingArguments, Trainerrepo_name = "finetuning-sentiment-model-3000-samples"training_args = TrainingArguments(output_dir=repo_name,learning_rate=2e-5,per_device_train_batch_size=16,per_device_eval_batch_size=16,num_train_epochs=2,weight_decay=0.01,save_strategy="epoch",push_to_hub=True,)trainer = Trainer(model=model,args=training_args,train_dataset=tokenized_train,eval_dataset=tokenized_test,tokenizer=tokenizer,data_collator=data_collator,compute_metrics=compute_metrics,)Now, it's time to fine-tune the model on the sentiment analysis dataset! 🙌 You just have to call the train() method of your Trainer: trainer.train()And voila! You fine-tuned a DistilBERT model for sentiment analysis! 🎉Training time depends on the hardware you use and the number of samples in the dataset. In our case, it took almost 10 minutes using a GPU and fine-tuning the model with 3,000 samples. The more samples you use for training your model, the more accurate it will be but training could be significantly slower.Next, let's compute the evaluation metrics to see how good your model is: trainer.evaluate()In our case, we got 88% accuracy and 89% f1 score. Quite good for a sentiment analysis model just trained with 3,000 samples!4. Analyzing new data with the modelNow that you have trained a model for sentiment analysis, let's use it to analyze new data and get 🤖 predictions! This unlocks the power of machine learning; using a model to automatically analyze data at scale, in real-time ⚡️First, let's upload the model to the Hub:trainer.push_to_hub()Now that you have pushed the model to the Hub, you can use it pipeline class to analyze two new movie reviews and see how your model predicts its sentiment with just two lines of code 🤯:from transformers import pipelinesentiment_model = pipeline(model="federicopascual/finetuning-sentiment-model-3000-samples")sentiment_model(["I love this move", "This movie sucks!"])These are the predictions from our model:[{'label': 'LABEL_1', 'score': 0.9558},{'label': 'LABEL_0', 'score': 0.9413}]In the IMDB dataset, Label 1 means positive and Label 0 is negative. Quite good! 🔥b. 
Training a sentiment model with AutoNLPAutoNLP is a tool to train state-of-the-art machine learning models without code. It provides a friendly and easy-to-use user interface, where you can train custom models by simply uploading your data. AutoNLP will automatically fine-tune various pre-trained models with your data, take care of the hyperparameter tuning and find the best model for your use case. All models trained with AutoNLP are deployed and ready for production.Training a sentiment analysis model using AutoNLP is super easy and it just takes a few clicks 🤯. Let's give it a try!As a first step, let's get some data! You'll use Sentiment140, a popular sentiment analysis dataset that consists of Twitter messages labeled with 3 sentiments: 0 (negative), 2 (neutral), and 4 (positive). The dataset is quite big; it contains 1,600,000 tweets. As you don't need this amount of data to get your feet wet with AutoNLP and train your first models, we have prepared a smaller version of the Sentiment140 dataset with 3,000 samples that you can download from here. This is how the dataset looks like:Sentiment 140 datasetNext, let's create a new project on AutoNLP to train 5 candidate models:Creating a new project on AutoNLPThen, upload the dataset and map the text column and target columns:Adding a dataset to AutoNLPOnce you add your dataset, go to the "Trainings" tab and accept the pricing to start training your models. AutoNLP pricing can be as low as $10 per model:Adding a dataset to AutoNLPAfter a few minutes, AutoNLP has trained all models, showing the performance metrics for all of them: Trained sentiment analysis models by AutoNLPThe best model has 77.87% accuracy 🔥 Pretty good for a sentiment analysis model for tweets trained with just 3,000 samples! All these models are automatically uploaded to the Hub and deployed for production. You can use any of these models to start analyzing new data right away by using the pipeline class as shown in previous sections of this post.4. Analyzing Tweets with Sentiment Analysis and PythonIn this last section, you'll take what you have learned so far in this post and put it into practice with a fun little project: analyzing tweets about NFTs with sentiment analysis! First, you'll use Tweepy, an easy-to-use Python library for getting tweets mentioning #NFTs using the Twitter API. Then, you will use a sentiment analysis model from the 🤗Hub to analyze these tweets. Finally, you will create some visualizations to explore the results and find some interesting insights. You can use this notebook to follow this tutorial. Let’s jump into it!1. Install dependenciesFirst, let's install all the libraries you will use in this tutorial:!pip install -q transformers tweepy wordcloud matplotlib2. Set up Twitter API credentialsNext, you will set up the credentials for interacting with the Twitter API. First, you'll need to sign up for a developer account on Twitter. Then, you have to create a new project and connect an app to get an API key and token. You can follow this step-by-step guide to get your credentials.Once you have the API key and token, let's create a wrapper with Tweepy for interacting with the Twitter API:import tweepy# Add Twitter API key and secretconsumer_key = "XXXXXX"consumer_secret = "XXXXXX"# Handling authentication with Twitterauth = tweepy.AppAuthHandler(consumer_key, consumer_secret)# Create a wrapper for the Twitter APIapi = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)3. 
Search for tweets using TweepyAt this point, you are ready to start using the Twitter API to collect tweets 🎉. You will use Tweepy Cursor to extract 1,000 tweets mentioning #NFTs: # Helper function for handling pagination in our search and handle rate limitsdef limit_handled(cursor):while True:try:yield cursor.next()except tweepy.RateLimitError:print('Reached rate limite. Sleeping for >15 minutes')time.sleep(15 * 61)except StopIteration:break# Define the term you will be using for searching tweetsquery = '#NFTs'query = query + ' -filter:retweets'# Define how many tweets to get from the Twitter APIcount = 1000# Let's search for tweets using Tweepysearch = limit_handled(tweepy.Cursor(api.search,q=query,tweet_mode='extended',lang='en',result_type="recent").items(count))4. Run sentiment analysis on the tweetsNow you can put our new skills to work and run sentiment analysis on your data! 🎉You will use one of the models available on the Hub fine-tuned for sentiment analysis of tweets. Like in other sections of this post, you will use the pipeline class to make the predictions with this model:from transformers import pipeline# Set up the inference pipeline using a model from the 🤗 Hubsentiment_analysis = pipeline(model="finiteautomata/bertweet-base-sentiment-analysis")# Let's run the sentiment analysis on each tweettweets = []for tweet in search:try:content = tweet.full_textsentiment = sentiment_analysis(content)tweets.append({'tweet': content, 'sentiment': sentiment[0]['label']})except:pass5. Explore the results of sentiment analysisHow are people talking about NFTs on Twitter? Are they talking mostly positively or negatively? Let's explore the results of the sentiment analysis to find out!First, let's load the results on a dataframe and see examples of tweets that were labeled for each sentiment:import pandas as pd# Load the data in a dataframedf = pd.DataFrame(tweets)pd.set_option('display.max_colwidth', None)# Show a tweet for each sentimentdisplay(df[df["sentiment"] == 'POS'].head(1))display(df[df["sentiment"] == 'NEU'].head(1))display(df[df["sentiment"] == 'NEG'].head(1))Output:Tweet: @NFTGalIery Warm, exquisite and elegant palette of charming beauty Its price is 2401 ETH. 
https://t.co/Ej3BfVOAqc#NFTs #NFTartists #art #Bitcoin #Crypto #OpenSeaNFT #Ethereum #BTC Sentiment: POSTweet: How much our followers made on #Crypto in December:#DAPPRadar airdrop — $200Free #VPAD tokens — $800#GasDAO airdrop — up to $1000StarSharks_SSS IDO — $3500CeloLaunch IDO — $300012 Binance XMas #NFTs — $360 TOTAL PROFIT: $8500+Join and earn with us https://t.co/fS30uj6SYx Sentiment: NEUTweet: Stupid guy #2https://t.co/8yKzYjCYIl#NFT #NFTs #nftcollector #rarible https://t.co/O4V19gMmVk Sentiment: NEGThen, let's see how many tweets you got for each sentiment and visualize these results:# Let's count the number of tweets by sentimentssentiment_counts = df.groupby(['sentiment']).size()print(sentiment_counts)# Let's visualize the sentimentsfig = plt.figure(figsize=(6,6), dpi=100)ax = plt.subplot(111)sentiment_counts.plot.pie(ax=ax, autopct='%1.1f%%', startangle=270, fontsize=12, label="")Interestingly, most of the tweets about NFTs are positive (56.1%) and almost none are negative(2.0%):Sentiment analysis result of NFTs tweetsFinally, let's see what words stand out for each sentiment by creating a word cloud:from wordcloud import WordCloudfrom wordcloud import STOPWORDS# Wordcloud with positive tweetspositive_tweets = df['tweet'][df["sentiment"] == 'POS']stop_words = ["https", "co", "RT"] + list(STOPWORDS)positive_wordcloud = WordCloud(max_font_size=50, max_words=100, background_color="white", stopwords = stop_words).generate(str(positive_tweets))plt.figure()plt.title("Positive Tweets - Wordcloud")plt.imshow(positive_wordcloud, interpolation="bilinear")plt.axis("off")plt.show()# Wordcloud with negative tweetsnegative_tweets = df['tweet'][df["sentiment"] == 'NEG']stop_words = ["https", "co", "RT"] + list(STOPWORDS)negative_wordcloud = WordCloud(max_font_size=50, max_words=100, background_color="white", stopwords = stop_words).generate(str(negative_tweets))plt.figure()plt.title("Negative Tweets - Wordcloud")plt.imshow(negative_wordcloud, interpolation="bilinear")plt.axis("off")plt.show()Some of the words associated with positive tweets include Discord, Ethereum, Join, Mars4 and Shroom:Word cloud for positive tweetsIn contrast, words associated with negative tweets include: cookies chaos, Solana, and OpenseaNFT:Word cloud for negative tweetsAnd that is it! With just a few lines of python code, you were able to collect tweets, analyze them with sentiment analysis and create some cool visualizations to analyze the results! Pretty cool, huh?5. Wrapping upSentiment analysis with Python has never been easier! Tools such as 🤗Transformers and the 🤗Hub makes sentiment analysis accessible to all developers. You can use open source, pre-trained models for sentiment analysis in just a few lines of code 🔥Do you want to train a custom model for sentiment analysis with your own data? Easy peasy! You can fine-tune a model using Trainer API to build on top of large language models and get state-of-the-art results. If you want something even easier, you can use AutoNLP to train custom machine learning models by simply uploading data.If you have questions, the Hugging Face community can help answer and/or benefit from, please ask them in the Hugging Face forum. Also, join our discord server to talk with us and with the Hugging Face community.
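One small, optional follow-up to the fine-tuning section above: if you would rather see human-readable labels than LABEL_0/LABEL_1 from your own fine-tuned checkpoint, you can attach the label names to the model config before building the pipeline. A sketch, reusing the checkpoint trained earlier in this post and the IMDB convention (0 = negative, 1 = positive):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# The checkpoint fine-tuned in section 3a of this post
repo = "federicopascual/finetuning-sentiment-model-3000-samples"
model = AutoModelForSequenceClassification.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

# Map the raw class ids to readable names (IMDB: 0 = negative, 1 = positive)
model.config.id2label = {0: "NEGATIVE", 1: "POSITIVE"}
model.config.label2id = {"NEGATIVE": 0, "POSITIVE": 1}

sentiment = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(sentiment(["I love this movie", "This movie sucks!"]))
# [{'label': 'POSITIVE', 'score': ...}, {'label': 'NEGATIVE', 'score': ...}]
```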
https://huggingface.co/blog/asr-chunking
Making automatic speech recognition work on large files with Wav2Vec2 in 🤗 Transformers
Nicolas Patry
February 1, 2022
Wav2Vec2 is a popular pre-trained model for speech recognition. Released in September 2020 by Meta AI Research, the novel architecture catalyzed progress in self-supervised pretraining for speech recognition, e.g. G. Ng et al., 2021, Chen et al., 2021, Hsu et al., 2021 and Babu et al., 2021. On the Hugging Face Hub, Wav2Vec2's most popular pre-trained checkpoint currently amounts to over 250,000 monthly downloads.Wav2Vec2 is at its core a transformer model, and one caveat of transformers is that they usually have a finite sequence length they can handle, either because they use position encodings (not the case here) or simply because the cost of attention in transformers is actually O(n²) in sequence_length, meaning that using a very large sequence_length explodes in complexity/memory. So with finite hardware (even a very large GPU like an A100), you simply cannot run Wav2Vec2 on an hour-long file: your program will crash. Let's try it!pip install transformersfrom transformers import pipeline# This will work on any of the thousands of models at# https://huggingface.co/models?pipeline_tag=automatic-speech-recognitionpipe = pipeline(model="facebook/wav2vec2-base-960h")# The Public Domain LibriVox file used for the test#!wget https://ia902600.us.archive.org/8/items/thecantervilleghostversion_2_1501_librivox/thecantervilleghostversion2_01_wilde_128kb.mp3 -O very_long_file.mp3pipe("very_long_file.mp3")# Crash out of memory !pipe("very_long_file.mp3", chunk_length_s=10)# This works and prints a very long string ! # This whole blogpost will explain how to make things workSimple ChunkingThe simplest way to achieve inference on very long files would be to simply chunk the initial audio into shorter samples, let's say 10 seconds each, run inference on those, and end up with a final reconstruction. This is efficient computationally but usually leads to subpar results, the reason being that in order to do good inference, the model needs some context, so around the chunking border, inference tends to be of poor quality.Look at the following diagram:There are ways to try and work around the problem in a general fashion, but they are never entirely robust. You can try to chunk only when you encounter silence, but you may have non-silent audio for a long time (a song, or noisy café audio). You can also try to cut only when there's no voice, but that requires another model and this is not an entirely solved problem. You could also have a continuous voice for a very long time.As it turns out, the CTC structure used by Wav2Vec2 can be exploited in order to achieve very robust speech recognition even on very long files without falling into those pitfalls.Chunking with strideWav2Vec2 uses the CTC algorithm, which means that every frame of audio is mapped to a single letter prediction (logit).That's the main feature we're going to use in order to add a stride.This link explains it in the image context, but it's the same concept for audio.Because of this property, we can:Start doing inference on overlapping chunks so that the model actually has proper context in the center.Drop the inferred logits on the side.
Chain the logits without their dropped sides to recover something extremely close to what the model would have predicted on the full-length audio.This is not technically 100% the same thing as running the model on the whole file, so it is not enabled by default, but as you saw in the earlier example you only need to add chunk_length_s to your pipeline for it to work.In practice, we observed that most of the bad inference is kept within the strides, which get dropped before the final decoding, leading to a proper transcription of the full text.Let's note that you can choose every argument of this technique:from transformers import pipelinepipe = pipeline(model="facebook/wav2vec2-base-960h")# stride_length_s is a tuple of the left and right stride length.# With only 1 number, both sides get the same stride, by default# the stride_length on one side is 1/6th of the chunk_length_soutput = pipe("very_long_file.mp3", chunk_length_s=10, stride_length_s=(4, 2))Chunking with stride on LM augmented modelsIn transformers, we also added support for adding a language model (LM) to Wav2Vec2 in order to boost the WER performance of the models without even finetuning. See this excellent blogpost explaining how it works.It turns out that the LM works directly on the logits themselves, so we can actually apply the exact same technique as before without any modification!So chunking large files on these LM-boosted models still works out of the box.Live inferenceA very nice perk of using a CTC model like Wav2Vec2 is that it is a single-pass model, so it is very fast, especially on GPU. We can exploit that in order to do live inference.The principle is exactly the same as regular striding, but this time we can feed the pipeline data as it is coming in and simply use striding on full chunks of length 10s, for instance, with 1s striding to get proper context.That requires running many more inference steps than simple file chunking, but it can make the live experience much better because the model can print things as you are speaking, without having to wait for X seconds before seeing something displayed.
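To make the live-inference idea a bit more concrete, here is a rough sketch (not the exact streaming implementation; the silent placeholder buffer below stands in for real microphone audio resampled to the model's rate) of feeding successive ~10s chunks, each with one extra second of left context, to the same pipeline:

```python
import numpy as np
from transformers import pipeline

pipe = pipeline(model="facebook/wav2vec2-base-960h")
sampling_rate = pipe.feature_extractor.sampling_rate  # 16 kHz for this checkpoint

# Placeholder "stream": one minute of silence. Swap in real audio samples
# (float32, resampled to `sampling_rate`) coming from a microphone or a file.
stream = np.zeros(60 * sampling_rate, dtype=np.float32)

chunk_samples = 10 * sampling_rate   # new audio processed per step
context_samples = 1 * sampling_rate  # extra left context, analogous to the stride above

for start in range(0, len(stream), chunk_samples):
    left = max(0, start - context_samples)
    chunk = stream[left:start + chunk_samples]
    # The pipeline accepts raw numpy audio together with its sampling rate
    out = pipe({"raw": chunk, "sampling_rate": sampling_rate})
    print(out["text"])  # print partial transcriptions as the "stream" comes in
```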
https://huggingface.co/blog/searching-the-hub
Supercharged Searching on the Hugging Face Hub
Zachary Mueller
January 25, 2022
The huggingface_hub library is a lightweight interface that provides a programmatic approach to exploring the hosting endpoints Hugging Face provides: models, datasets, and Spaces.Up until now, searching on the Hub through this interface was tricky to pull off, and there were many aspects of it a user had to "just know" and get accustomed to. In this article, we will be looking at a few exciting new features added to huggingface_hub to help lower that bar and provide users with a friendly API to search for the models and datasets they want to use without leaving their Jupyter or Python interfaces.Before we begin, if you do not have the latest version of the huggingface_hub library on your system, please run the following cell:!pip install huggingface_hub -USituating the Problem:First, let's imagine the scenario you are in. You'd like to find all models hosted on the Hugging Face Hub for Text Classification, were trained on the GLUE dataset, and are compatible with PyTorch.You may simply just open https://huggingface.co/models and use the widgets on there. But this requires leaving your IDE and scanning those results, all of which requires a few button clicks to get you the information you need. What if there were a solution to this without having to leave your IDE? With a programmatic interface, it also could be easy to see this being integrated into workflows for exploring the Hub.This is where the huggingface_hub comes in. For those familiar with the library, you may already know that we can search for these type of models. However, getting the query right is a painful process of trial and error.Could we simplify that? Let's find out!Finding what we needFirst we'll import the HfApi, which is a class that helps us interact with the backend hosting for Hugging Face. We can interact with the models, datasets, and more through it. Along with this, we'll import a few helper classes: the ModelFilter and ModelSearchArgumentsfrom huggingface_hub import HfApi, ModelFilter, ModelSearchArgumentsapi = HfApi()These two classes can help us frame a solution to our above problem. The ModelSearchArguments class is a namespace-like one that contains every single valid parameter we can search for! Let's take a peek:>>> model_args = ModelSearchArguments()>>> model_argsAvailable Attributes or Keys:* author* dataset* language* library* license* model_name* pipeline_tagWe can see a variety of attributes available to us (more on how this magic is done later). If we were to categorize what we wanted, we could likely separate them out as:pipeline_tag (or task): Text Classificationdataset: GLUElibrary: PyTorchGiven this separation, it would make sense that we would find them within our model_args we've declared:>>> model_args.pipeline_tag.TextClassification'text-classification'>>> model_args.dataset.glue'dataset:glue'>>> model_args.library.PyTorch'pytorch'What we begin to notice though is some of the convience wrapping we perform here. ModelSearchArguments (and the complimentary DatasetSearchArguments) have a human-readable interface with formatted outputs the API wants, such as how the GLUE dataset should be searched with dataset:glue. 
This is key because without this "cheat sheet" of knowing how certain parameters should be written, you can very easily sit in frustration as you're trying to search for models with the API!Now that we know what the right parameters are, we can search the API easily:>>> models = api.list_models(filter = (>>> model_args.pipeline_tag.TextClassification, >>> model_args.dataset.glue, >>> model_args.library.PyTorch)>>> )>>> print(len(models))140We find that there were 140 matching models that fit our criteria! (at the time of writing this). And if we take a closer look at one, we can see that it does indeed look right:>>> models[0]ModelInfo: {modelId: Jiva/xlm-roberta-large-it-mnlisha: c6e64469ec4aa17fedbd1b2522256f90a90b5b86lastModified: 2021-12-10T14:56:38.000Ztags: ['pytorch', 'xlm-roberta', 'text-classification', 'it', 'dataset:multi_nli', 'dataset:glue', 'arxiv:1911.02116', 'transformers', 'tensorflow', 'license:mit', 'zero-shot-classification']pipeline_tag: zero-shot-classificationsiblings: [ModelFile(rfilename='.gitattributes'), ModelFile(rfilename='README.md'), ModelFile(rfilename='config.json'), ModelFile(rfilename='pytorch_model.bin'), ModelFile(rfilename='sentencepiece.bpe.model'), ModelFile(rfilename='special_tokens_map.json'), ModelFile(rfilename='tokenizer.json'), ModelFile(rfilename='tokenizer_config.json')]config: Noneprivate: Falsedownloads: 680library_name: transformerslikes: 1}It's a bit more readable, and there's no guessing involved with "Did I get this parameter right?"Did you know you can also get the information of this model programmatically with its model ID? Here's how you would do it:api.model_info('Jiva/xlm-roberta-large-it-mnli')Taking it up a NotchWe saw how we could use the ModelSearchArguments and DatasetSearchArguments to remove the guesswork from when we want to search the Hub, but what about if we have a very complex, messy query?Such as:I want to search for all models trained for both text-classification and zero-shot classification, were trained on the Multi NLI and GLUE datasets, and are compatible with both PyTorch and TensorFlow (a more exact query to get the above model). To setup this query, we'll make use of the ModelFilter class. 
It's designed to handle these types of situations, so we don't need to scratch our heads:>>> filt = ModelFilter(>>> task = ["text-classification", "zero-shot-classification"],>>> trained_dataset = [model_args.dataset.multi_nli, model_args.dataset.glue],>>> library = ['pytorch', 'tensorflow']>>> )>>> api.list_models(filt)[ModelInfo: {modelId: Jiva/xlm-roberta-large-it-mnlisha: c6e64469ec4aa17fedbd1b2522256f90a90b5b86lastModified: 2021-12-10T14:56:38.000Ztags: ['pytorch', 'xlm-roberta', 'text-classification', 'it', 'dataset:multi_nli', 'dataset:glue', 'arxiv:1911.02116', 'transformers', 'tensorflow', 'license:mit', 'zero-shot-classification']pipeline_tag: zero-shot-classificationsiblings: [ModelFile(rfilename='.gitattributes'), ModelFile(rfilename='README.md'), ModelFile(rfilename='config.json'), ModelFile(rfilename='pytorch_model.bin'), ModelFile(rfilename='sentencepiece.bpe.model'), ModelFile(rfilename='special_tokens_map.json'), ModelFile(rfilename='tokenizer.json'), ModelFile(rfilename='tokenizer_config.json')]config: Noneprivate: Falsedownloads: 680library_name: transformerslikes: 1}]Very quickly we see that it's a much more coordinated approach for searching through the API, with no added headache for you!What is the magic?Very briefly we'll talk about the underlying magic at play that gives us this enum-dictionary-like datatype, the AttributeDictionary.Heavily inspired by the AttrDict class from the fastcore library, the general idea is we take a normal dictionary and supercharge it for exploratory programming by providing tab-completion for every key in the dictionary. As we saw earlier, this gets even stronger when we have nested dictionaries we can explore through, such as model_args.dataset.glue!For those familiar with JavaScript, we mimic how the object class is working.This simple utility class can provide a much more user-focused experience when exploring nested datatypes and trying to understand what is there, such as the return of an API request!As mentioned before, we expand on the AttrDict in a few key ways:You can delete keys with del model_args[key] or with del model_args.keyThat clean __repr__ we saw earlierOne very important concept to note though, is that if a key contains a number or special character it must be indexed as a dictionary, and not as an object.>>> from huggingface_hub.utils.endpoint_helpers import AttributeDictionaryA very brief example of this is if we have an AttributeDictionary with a key of 3_c:>>> d = {"a":2, "b":3, "3_c":4}>>> ad = AttributeDictionary(d)>>> # As an attribute>>> ad.3_cFile "<ipython-input-6-c0fe109cf75d>", line 2ad.3_c^SyntaxError: invalid token>>> # As a dictionary key>>> ad["3_c"]4Concluding thoughtsHopefully by now you have a brief understanding of how this new searching API can directly impact your workflow and exploration of the Hub! Along with this, perhaps you know of a place in your code where the AttributeDictionary might be useful for you to use.From here, make sure to check out the official documentation on Searching the Hub Efficiently and don't forget to give us a star!
https://huggingface.co/blog/sb3
Welcome Stable-baselines3 to the Hugging Face Hub 🤗
Thomas Simonini
January 21, 2022
At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. That’s why we’re happy to announce that we integrated Stable-Baselines3 to the Hugging Face Hub.Stable-Baselines3 is one of the most popular PyTorch Deep Reinforcement Learning library that makes it easy to train and test your agents in a variety of environments (Gym, Atari, MuJoco, Procgen...).With this integration, you can now host your saved models 💾 and load powerful models from the community.In this article, we’re going to show how you can do it. InstallationTo use stable-baselines3 with Hugging Face Hub, you just need to install these 2 libraries:pip install huggingface_hubpip install huggingface_sb3Finding ModelsWe’re currently uploading saved models of agents playing Space Invaders, Breakout, LunarLander and more. On top of this, you can find all stable-baselines-3 models from the community hereWhen you found the model you need, you just have to copy the repository id:Download a model from the HubThe coolest feature of this integration is that you can now very easily load a saved model from Hub to Stable-baselines3. In order to do that you just need to copy the repo-id that contains your saved model and the name of the saved model zip file in the repo.For instancesb3/demo-hf-CartPole-v1:import gymfrom huggingface_sb3 import load_from_hubfrom stable_baselines3 import PPOfrom stable_baselines3.common.evaluation import evaluate_policy# Retrieve the model from the hub## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})## filename = name of the model zip file from the repository including the extension .zipcheckpoint = load_from_hub(repo_id="sb3/demo-hf-CartPole-v1",filename="ppo-CartPole-v1.zip",)model = PPO.load(checkpoint)# Evaluate the agent and watch iteval_env = gym.make("CartPole-v1")mean_reward, std_reward = evaluate_policy(model, eval_env, render=True, n_eval_episodes=5, deterministic=True, warn=False)print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")Sharing a model to the HubIn just a minute, you can get your saved model in the Hub.First, you need to be logged in to Hugging Face to upload a model:If you're using Colab/Jupyter Notebooks:from huggingface_hub import notebook_loginnotebook_login()Else:huggingface-cli loginThen, in this example, we train a PPO agent to play CartPole-v1 and push it to a new repo ThomasSimonini/demo-hf-CartPole-v1`from huggingface_sb3 import push_to_hubfrom stable_baselines3 import PPO# Define a PPO model with MLP policy networkmodel = PPO("MlpPolicy", "CartPole-v1", verbose=1)# Train it for 10000 timestepsmodel.learn(total_timesteps=10_000)# Save the modelmodel.save("ppo-CartPole-v1")# Push this saved model to the hf repo# If this repo does not exists it will be created## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})## filename: the name of the file == "name" inside model.save("ppo-CartPole-v1")push_to_hub(repo_id="ThomasSimonini/demo-hf-CartPole-v1",filename="ppo-CartPole-v1.zip",commit_message="Added Cartpole-v1 model trained with PPO",)Try it out and share your models with the community!What's next?In the coming weeks and months, we will be extending the ecosystem by:Integrating RL-baselines3-zooUploading RL-trained-agents models into the Hub: a big collection of pre-trained Reinforcement Learning agents using stable-baselines3Integrating other Deep Reinforcement Learning librariesImplementing Decision Transformers 🔥And 
more to come 🥳The best way to keep in touch is to join our Discord server to exchange with us and with the community.And if you want to dive deeper, we wrote a tutorial where you’ll learn:How to train a Deep Reinforcement Learning lander agent to land correctly on the Moon 🌕 How to upload it to the Hub 🚀How to download and use a saved model from the Hub that plays Space Invaders 👾.👉 The tutorialConclusionWe're excited to see what you're working on with Stable-baselines3 and to try your models on the Hub 😍.And we would love to hear your feedback 💖. 📧 Feel free to reach out to us.Finally, we would like to thank the SB3 team, and in particular Antonin Raffin, for their invaluable help with integrating the library 🤗.Would you like to integrate your library with the Hub?This integration is possible thanks to the huggingface_hub library, which provides all our widgets and the API for all our supported libraries. If you would like to integrate your library with the Hub, we have a guide for you!
https://huggingface.co/blog/infinity-cpu-performance
Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs
Philipp Schmid, Jeff Boudier, Morgan Funtowicz
January 13, 2022
Inference Endpoints to easily deploy models on dedicated infrastructure managed by Hugging Face.Our open-source optimization libraries, 🤗 Optimum Intel and 🤗 Optimum ONNX Runtime, to get the highest efficiency out of training and running models for inference.Hugging Face Expert Acceleration Program, a commercial service for Hugging Face experts to work directly with your team to accelerate your Machine Learning roadmap and models.IntroductionTransfer learning has changed Machine Learning by reaching new levels of accuracy from Natural Language Processing (NLP) to Audio and Computer Vision tasks. At Hugging Face, we work hard to make these new complex models and large checkpoints as easily accessible and usable as possible. But while researchers and data scientists have converted to the new world of Transformers, few companies have been able to deploy these large, complex models in production at scale.The main bottleneck is the latency of predictions which can make large deployments expensive to run and real-time use cases impractical. Solving this is a difficult engineering challenge for any Machine Learning Engineering team and requires the use of advanced techniques to optimize models all the way down to the hardware.With Hugging Face Infinity, we offer a containerized solution that makes it easy to deploy low-latency, high-throughput, hardware-accelerated inference pipelines for the most popular Transformer models. Companies can get both the accuracy of Transformers and the efficiency necessary for large volume deployments, all in a simple to use package. In this blog post, we want to share detailed performance results for Infinity running on the latest generation of Intel Xeon CPU, to achieve optimal cost, efficiency, and latency for your Transformer deployments.What is Hugging Face InfinityHugging Face Infinity is a containerized solution for customers to deploy end-to-end optimized inference pipelines for State-of-the-Art Transformer models, on any infrastructure.Hugging Face Infinity consists of 2 main services:The Infinity Container is a hardware-optimized inference solution delivered as a Docker container.Infinity Multiverse is a Model Optimization Service through which a Hugging Face Transformer model is optimized for the Target Hardware. Infinity Multiverse is compatible with Infinity Container.The Infinity Container is built specifically to run on a Target Hardware architecture and exposes an HTTP /predict endpoint to run inference.Figure 1. Infinity OverviewAn Infinity Container is designed to serve 1 Model and 1 Task. A Task corresponds to machine learning tasks as defined in the Transformers Pipelines documentation. As of the writing of this blog post, supported tasks include feature extraction/document embedding, ranking, sequence classification, and token classification.You can find more information about Hugging Face Infinity at hf.co/infinity, and if you are interested in testing it for yourself, you can sign up for a free trial at hf.co/infinity-trial.BenchmarkInference performance benchmarks often only measure the execution of the model. In this blog post, and when discussing the performance of Infinity, we always measure the end-to-end pipeline including pre-processing, prediction, post-processing. Please keep this in mind when comparing these results with other latency measurements. Figure 2. 
Infinity End-to-End PipelineEnvironmentAs a benchmark environment, we are going to use the Amazon EC2 C6i instances, which are compute-optimized instances powered by the 3rd generation of Intel Xeon Scalable processors. These new Intel-based instances use the Ice Lake process technology and support Intel AVX-512, Intel Turbo Boost, and Intel Deep Learning Boost.In addition to superior performance for machine learning workloads, the Intel Ice Lake C6i instances offer great cost-performance and are our recommendation to deploy Infinity on Amazon Web Services. To learn more, visit the EC2 C6i instance page. MethodologiesWhen it comes to benchmarking BERT-like models, two metrics are most commonly adopted:Latency: the time it takes for a single prediction of the model (pre-processing, prediction, post-processing).Throughput: the number of executions performed in a fixed amount of time for one benchmark configuration, respecting physical CPU cores, sequence length, and batch size.These two metrics will be used to benchmark Hugging Face Infinity across different setups to understand the benefits and tradeoffs in this blog post.ResultsTo run the benchmark, we created an Infinity container for the EC2 C6i instance (Ice Lake) and optimized a DistilBERT model for sequence classification using Infinity Multiverse. This Ice Lake-optimized Infinity Container can achieve up to 34% better latency & throughput compared to existing Cascade Lake-based instances, and up to 800% better latency & throughput compared to vanilla transformers running on Ice Lake.The benchmark we created consists of 192 different experiments and configurations. We ran experiments for: physical CPU cores: 1, 2, 4, 8; sequence length: 8, 16, 32, 64, 128, 256, 384, 512; batch size: 1, 2, 4, 8, 16, 32.In each experiment, we collect numbers for:Throughput (requests per second)Latency (min, max, avg, p90, p95, p99)You can find the full data of the benchmark in this Google spreadsheet: 🤗 Infinity: CPU Ice-Lake Benchmark.In this blog post, we will highlight a few results of the benchmark, including the best latency and throughput configurations.In addition to this, we deployed the DistilBERT model we used for the benchmark as an API endpoint on 2 physical cores. You can test it and get a feeling for the performance of Infinity. Below you will find a curl command on how to send a request to the hosted endpoint. The API returns an x-compute-time HTTP header, which contains the duration of the end-to-end pipeline.curl --request POST -i \--url https://infinity.huggingface.co/cpu/distilbert-base-uncased-emotion \--header 'Content-Type: application/json' \--data '{"inputs":"I like you. I love you"}'ThroughputBelow you can find the throughput comparison for running Infinity on 2 physical cores with batch size 1, compared with vanilla transformers.Figure 3. Throughput: Infinity vs Transformers
Sequence Length | Infinity | Transformers | Improvement
8 | 248 req/sec | 49 req/sec | +506%
16 | 212 req/sec | 50 req/sec | +424%
32 | 150 req/sec | 40 req/sec | +375%
64 | 97 req/sec | 28 req/sec | +346%
128 | 55 req/sec | 18 req/sec | +305%
256 | 27 req/sec | 9 req/sec | +300%
384 | 17 req/sec | 5 req/sec | +340%
512 | 12 req/sec | 4 req/sec | +300%
LatencyBelow, you can find the latency results for an experiment running Hugging Face Infinity on 2 physical cores with batch size 1. It is remarkable to see how robust and constant Infinity is, with minimal deviation for p95, p99, or p100 (max latency). This result is confirmed for other experiments as well in the benchmark. Figure 4.
Latency (Batch=1, Physical Cores=2)ConclusionIn this post, we showed how Hugging Face Infinity performs on the new Intel Ice Lake Xeon CPU. We created a detailed benchmark with over 190 different configurations sharing the results you can expect when using Hugging Face Infinity on CPU, what would be the best configuration to optimize your Infinity Container for latency, and what would be the best configuration to maximize throughput.Hugging Face Infinity can deliver up to 800% higher throughput compared to vanilla transformers, and down to 1-4ms latency for sequence lengths up to 64 tokens.The flexibility to optimize transformer models for throughput, latency, or both enables businesses to either reduce the amount of infrastructure cost for the same workload or to enable real-time use cases that were not possible before. If you are interested in trying out Hugging Face Infinity sign up for your trial at hf.co/infinity-trial ResourcesHugging Face InfinityHugging Face Infinity TrialAmazon EC2 C6i instances DistilBERTDistilBERT paperDistilBERT model🤗 Infinity: CPU Ice-Lake Benchmark
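For readers who prefer Python to curl, below is a rough equivalent of the request shown earlier in the post. The demo endpoint may no longer be live, so treat the URL as illustrative:

```python
# Rough Python equivalent of the curl command above (illustrative URL; the
# hosted demo endpoint from the post may have been taken down since).
import requests

response = requests.post(
    "https://infinity.huggingface.co/cpu/distilbert-base-uncased-emotion",
    json={"inputs": "I like you. I love you"},
    timeout=10,
)
print(response.json())
# The x-compute-time header reports the duration of the end-to-end pipeline.
print("x-compute-time:", response.headers.get("x-compute-time"))
```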
https://huggingface.co/blog/wav2vec2-with-ngram
Boosting Wav2Vec2 with n-grams in 🤗 Transformers
Patrick von Platen
January 12, 2022
Wav2Vec2 is a popular pre-trained model for speech recognition.Released in September 2020by Meta AI Research, the novel architecture catalyzed progress inself-supervised pretraining for speech recognition, e.g. G. Ng etal., 2021, Chen et al,2021, Hsu et al.,2021 and Babu et al.,2021. On the Hugging Face Hub,Wav2Vec2's most popular pre-trained checkpoint currently amounts toover 250,000 monthlydownloads.Using Connectionist Temporal Classification (CTC), pre-trainedWav2Vec2-like checkpoints are extremely easy to fine-tune on downstreamspeech recognition tasks. In a nutshell, fine-tuning pre-trainedWav2Vec2 checkpoints works as follows:A single randomly initialized linear layer is stacked on top of thepre-trained checkpoint and trained to classify raw audio input to asequence of letters. It does so by:extracting audio representations from the raw audio (using CNNlayers),processing the sequence of audio representations with a stack oftransformer layers, and,classifying the processed audio representations into a sequence ofoutput letters.Previously audio classification models required an additional languagemodel (LM) and a dictionary to transform the sequence of classified audioframes to a coherent transcription. Wav2Vec2's architecture is based ontransformer layers, thus giving each processed audio representationcontext from all other audio representations. In addition, Wav2Vec2leverages the CTC algorithm forfine-tuning, which solves the problem of alignment between a varying"input audio length"-to-"output text length" ratio.Having contextualized audio classifications and no alignment problems,Wav2Vec2 does not require an external language model or dictionary toyield acceptable audio transcriptions.As can be seen in Appendix C of the officialpaper, Wav2Vec2 gives impressivedownstream performances on LibriSpeech without using a language model atall. However, from the appendix, it also becomes clear that using Wav2Vec2in combination with a language model can yield a significantimprovement, especially when the model was trained on only 10 minutes oftranscribed audio.Until recently, the 🤗 Transformers library did not offer a simple userinterface to decode audio files with a fine-tuned Wav2Vec2 and alanguage model. This has thankfully changed. 🤗 Transformers now offersan easy-to-use integration with Kensho Technologies' pyctcdecodelibrary. This blogpost is a step-by-step technical guide to explain how one can createan n-gram language model and combine it with an existing fine-tunedWav2Vec2 checkpoint using 🤗 Datasets and 🤗 Transformers.We start by:How does decoding audio with an LM differ from decoding audiowithout an LM?How to get suitable data for a language model?How to build an n-gram with KenLM?How to combine the n-gram with a fine-tuned Wav2Vec2 checkpoint?For a deep dive into how Wav2Vec2 functions - which is not necessary forthis blog post - the reader is advised to consult the followingmaterial:wav2vec 2.0: A Framework for Self-Supervised Learning of SpeechRepresentationsFine-Tune Wav2Vec2 for English ASR with 🤗TransformersAn Illustrated Tour of Wav2vec2.01. 
Decoding audio data with Wav2Vec2 and a language modelAs shown in 🤗 Transformers' example docs of Wav2Vec2, audio can be transcribed as follows.First, we install datasets and transformers.pip install datasets transformersLet's load a small excerpt of the Librispeech dataset to demonstrate Wav2Vec2's speech transcription capabilities.from datasets import load_datasetdataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")datasetOutput:Reusing dataset librispeech_asr (/root/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr/clean/2.1.0/f2c70a4d03ab4410954901bde48c54b85ca1b7f9bf7d616e7e2a72b5ee6ddbfc)Dataset({features: ['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'],num_rows: 73})We can pick one of the 73 audio samples and listen to it.audio_sample = dataset[2]audio_sample["text"].lower()Output:he tells us that at this festive season of the year with christmas and roast beef looming before us similes drawn from eating and its results occur most readily to the mindHaving chosen a data sample, we now load the fine-tuned model and processor.from transformers import Wav2Vec2Processor, Wav2Vec2ForCTCprocessor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h")Next, we process the data:inputs = processor(audio_sample["audio"]["array"], sampling_rate=audio_sample["audio"]["sampling_rate"], return_tensors="pt")forward it to the model:import torchwith torch.no_grad():logits = model(**inputs).logitsand decode it:predicted_ids = torch.argmax(logits, dim=-1)transcription = processor.batch_decode(predicted_ids)transcription[0].lower()Output:'he tells us that at this festive season of the year with christmaus and rose beef looming before us simalyis drawn from eating and its results occur most readily to the mind'Comparing the transcription to the target transcription above, we can see that some words sound correct but are not spelled correctly, e.g.: christmaus vs. christmas, rose vs. roast, simalyis vs. similes.Let's see whether combining Wav2Vec2 with an n-gram language model can help here.First, we need to install pyctcdecode and kenlm.pip install https://github.com/kpu/kenlm/archive/master.zip pyctcdecodeFor demonstration purposes, we have prepared a new model repository patrickvonplaten/wav2vec2-base-100h-with-lm which contains the same Wav2Vec2 checkpoint but has an additional 4-gram language model for English.Instead of using Wav2Vec2Processor, this time we use Wav2Vec2ProcessorWithLM to load the 4-gram model in addition to the feature extractor and tokenizer.from transformers import Wav2Vec2ProcessorWithLMprocessor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")In contrast to decoding the audio without a language model, the processor now directly receives the model's output logits instead of the argmax(logits) (called predicted_ids) above. The reason is that when decoding with a language model, at each time step, the processor takes the probabilities of all possible output characters into account. Let's take a look at the dimension of the logits output.logits.shapeOutput:torch.Size([1, 624, 32])We can see that the logits correspond to a sequence of 624 vectors, each having 32 entries.
Each of the 32 entries thereby stands for thelogit probability of one of the 32 possible output characters of themodel:" ".join(sorted(processor.tokenizer.get_vocab()))Output:"' </s> <pad> <s> <unk> A B C D E F G H I J K L M N O P Q R S T U V W X Y Z |"Intuitively, one can understand the decoding process ofWav2Vec2ProcessorWithLM as applying beam search through a matrix ofsize 624 $\times$ 32 probabilities while leveraging the probabilities ofthe next letters as given by the n-gram language model.OK, let's run the decoding step again. pyctcdecode language modeldecoder does not automatically convert torch tensors to numpy sowe'll have to convert them ourselves before.transcription = processor.batch_decode(logits.numpy()).texttranscription[0].lower()Output:'he tells us that at this festive season of the year with christmas and rose beef looming before us similes drawn from eating and its results occur most readily to the mind'Cool! Recalling the words facebook/wav2vec2-base-100h without alanguage model transcribed incorrectly previously, e.g.,christmaus vs. christmasrose vs. roastsimalyis vs. simileswe can take another look at the transcription offacebook/wav2vec2-base-100h with a 4-gram language model. 2 out of3 errors are corrected; christmas and similes have been correctlytranscribed.Interestingly, the incorrect transcription of rose persists. However,this should not surprise us very much. Decoding audio without a languagemodel is much more prone to yield spelling mistakes, such aschristmaus or similes (those words don't exist in the Englishlanguage as far as I know). This is because the speech recognitionsystem almost solely bases its prediction on the acoustic input it wasgiven and not really on the language modeling context of previous andsuccessive predicted letters 1 {}^1 1. If on the other hand, we add alanguage model, we can be fairly sure that the speech recognitionsystem will heavily reduce spelling errors since a well-trained n-grammodel will surely not predict a word that has spelling errors. But theword rose is a valid English word and therefore the 4-gram willpredict this word with a probability that is not insignificant.The language model on its own most likely does favor the correct wordroast since the word sequence roast beef is much more common inEnglish than rose beef. Because the final transcription is derivedfrom a weighted combination of facebook/wav2vec2-base-100h outputprobabilities and those of the n-gram language model, it is quitecommon to see incorrectly transcribed words such as rose.For more information on how you can tweak different parameters whendecoding with Wav2Vec2ProcessorWithLM, please take a look at theofficial documentationhere.1{}^1 1 Some research shows that a model such asfacebook/wav2vec2-base-100h - when sufficiently large and trained onenough data - can learn language modeling dependencies betweenintermediate audio representations similar to a language model.Great, now that you have seen the advantages adding an n-gram languagemodel can bring, let's dive into how to create an n-gram andWav2Vec2ProcessorWithLM from scratch.2. Getting data for your language modelA language model that is useful for a speech recognition system shouldsupport the acoustic model, e.g. 
Wav2Vec2, in predicting the next word(or token, letter) and therefore model the following distribution:P(wn∣w0t−1) \mathbf{P}(w_n | \mathbf{w}_0^{t-1}) P(wn​∣w0t−1​) with wn w_n wn​ being the next wordand w0t−1 \mathbf{w}_0^{t-1} w0t−1​ being the sequence of all previous words sincethe beginning of the utterance. Simply said, the language model shouldbe good at predicting the next word given all previously transcribedwords regardless of the audio input given to the speech recognitionsystem.As always a language model is only as good as the data it is trained on.In the case of speech recognition, we should therefore ask ourselves forwhat kind of data, the speech recognition will be used for:conversations, audiobooks, movies, speeches, , etc, ...?The language model should be good at modeling language that correspondsto the target transcriptions of the speech recognition system. Fordemonstration purposes, we assume here that we have fine-tuned apre-trainedfacebook/wav2vec2-xls-r-300mon Common Voice7in Swedish. The fine-tuned checkpoint can be foundhere. Common Voice 7 isa relatively crowd-sourced read-out audio dataset and we will evaluatethe model on its test data.Let's now look for suitable text data on the Hugging Face Hub. Wesearch all datasets for those that contain Swedishdata.Browsing a bit through the datasets, we are looking for a dataset thatis similar to Common Voice's read-out audio data. The obvious choicesof oscar andmc4 might not be the mostsuitable here because they:are generated from crawling the web, which might not be veryclean and correspond well to spoken languagerequire a lot of pre-processingare very large which is not ideal for demonstration purposeshere 😉A dataset that seems sensible here and which is relatively clean andeasy to pre-process iseuroparl_bilingualas it's a dataset that is based on discussions and talks of theEuropean parliament. It should therefore be relatively clean andcorrespond well to read-out audio data. The dataset is originally designedfor machine translation and can therefore only be accessed intranslation pairs. We will only extract the text of the targetlanguage, Swedish (sv), from the English-to-Swedish translations.target_lang="sv" # change to your target langLet's download the data.from datasets import load_datasetdataset = load_dataset("europarl_bilingual", lang1="en", lang2=target_lang, split="train")We see that the data is quite large - it has over a milliontranslations. Since it's only text data, it should be relatively easyto process though.Next, let's look at how the data was preprocessed when training thefine-tuned XLS-R checkpoint in Swedish. Looking at the run.shfile, wecan see that the following characters were removed from the officialtranscriptions:chars_to_ignore_regex = '[,?.!\-\;\:"“%‘”�—’…–]' # change to the ignored characters of your fine-tuned modelLet's do the same here so that the alphabet of our language modelmatches the one of the fine-tuned acoustic checkpoints.We can write a single map function to extract the Swedish text andprocess it right away.import redef extract_text(batch):text = batch["translation"][target_lang]batch["text"] = re.sub(chars_to_ignore_regex, "", text.lower())return batchLet's apply the .map() function. This should take roughly 5 minutes.dataset = dataset.map(extract_text, remove_columns=dataset.column_names)Great. 
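Before uploading, it can be worth spot-checking a couple of processed rows. A quick, optional sketch, reusing the dataset and chars_to_ignore_regex objects defined above:

```python
# Optional sanity check before uploading: print one processed example and
# verify that the ignored characters really were stripped out.
import re

print(dataset[0]["text"][:200])
assert re.search(chars_to_ignore_regex, dataset[0]["text"]) is None
```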
Let's upload it to the Hub sothat we can inspect and reuse it better.You can log in by executing the following cell.from huggingface_hub import notebook_loginnotebook_login()Output:Login successfulYour token has been saved to /root/.huggingface/tokenAuthenticated through git-credential store but this isn't the helper defined on your machine.You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the defaultgit config --global credential.helper storeNext, we call 🤗 Hugging Face'spush_to_hubmethod to upload the dataset to the repo"sv_corpora_parliament_processed".dataset.push_to_hub(f"{target_lang}_corpora_parliament_processed", split="train")That was easy! The dataset viewer is automatically enabled whenuploading a new dataset, which is very convenient. You can now directlyinspect the dataset online.Feel free to look through our preprocessed dataset directly onhf-test/sv_corpora_parliament_processed.Even if we are not a native speaker in Swedish, we can see that the datais well processed and seems clean.Next, let's use the data to build a language model.3. Build an n-gram with KenLMWhile large language models based on the Transformer architecture have become the standard in NLP, it is still very common to use an n-gram LM to boost speech recognition systems - as shown in Section 1.Looking again at Table 9 of Appendix C of the official Wav2Vec2 paper, it can be noticed that using a Transformer-based LM for decoding clearly yields better results than using an n-gram model, but the difference between n-gram and Transformer-based LM is much less significant than the difference between n-gram and no LM. E.g., for the large Wav2Vec2 checkpoint that was fine-tuned on 10min only, an n-gram reduces the word error rate (WER) compared to no LM by ca. 80% while a Transformer-based LM only reduces the WER by another 23% compared to the n-gram. This relative WER reduction becomes less, the more data the acoustic model has been trained on. E.g., for the large checkpoint a Transformer-based LM reduces the WER by merely 8% compared to an n-gram LM whereas the n-gram still yields a 21% WER reduction compared to no language model.The reason why an n-gram is preferred over a Transformer-based LM is that n-grams come at a significantly smaller computational cost. For an n-gram, retrieving the probability of a word given previous words is almost only as computationally expensive as querying a look-up table or tree-like data storage - i.e. it's very fast compared to modern Transformer-based language models that would require a full forward pass to retrieve the next word probabilities.For more information on how n-grams function and why they are (still) so useful for speech recognition, the reader is advised to take a look at this excellent summary from Stanford.Great, let's see step-by-step how to build an n-gram. We will use thepopular KenLM library to do so. Let'sstart by installing the Ubuntu library prerequisites:sudo apt install build-essential cmake libboost-system-dev libboost-thread-dev libboost-program-options-dev libboost-test-dev libeigen3-dev zlib1g-dev libbz2-dev liblzma-devbefore downloading and unpacking the KenLM repo.wget -O - https://kheafield.com/code/kenlm.tar.gz | tar xzKenLM is written in C++, so we'll make use of cmake to build thebinaries.mkdir kenlm/build && cd kenlm/build && cmake .. 
&& make -j2ls kenlm/build/binGreat, as we can see, the executable functions have successfullybeen built under kenlm/build/bin/.KenLM by default computes an n-gram with Kneser-Neysmooting.All text data used to create the n-gram is expected to be stored in atext file. We download our dataset and save it as a .txt file.from datasets import load_datasetusername = "hf-test" # change to your usernamedataset = load_dataset(f"{username}/{target_lang}_corpora_parliament_processed", split="train")with open("text.txt", "w") as file:file.write(" ".join(dataset["text"]))Now, we just have to run KenLM's lmplz command to build our n-gram,called "5gram.arpa". As it's relatively common in speech recognition,we build a 5-gram by passing the -o 5 parameter.For more information on the different n-gram LM that can be built with KenLM, one can take a look at the official website of KenLM.Executing the command below might take a minute or so.kenlm/build/bin/lmplz -o 5 <"text.txt" > "5gram.arpa"Output:=== 1/5 Counting and sorting n-grams ===Reading /content/swedish_text.txt----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100tcmalloc: large alloc 1918697472 bytes == 0x55d40d0f0000 @ 0x7fdccb1a91e7 0x55d40b2f17a2 0x55d40b28c51e 0x55d40b26b2eb 0x55d40b257066 0x7fdcc9342bf7 0x55d40b258baatcmalloc: large alloc 8953896960 bytes == 0x55d47f6c0000 @ 0x7fdccb1a91e7 0x55d40b2f17a2 0x55d40b2e07ca 0x55d40b2e1208 0x55d40b26b308 0x55d40b257066 0x7fdcc9342bf7 0x55d40b258baa****************************************************************************************************Unigram tokens 42153890 types 360209=== 2/5 Calculating and sorting adjusted counts ===Chain sizes: 1:4322508 2:1062772928 3:1992699264 4:3188318720 5:4649631744tcmalloc: large alloc 4649631744 bytes == 0x55d40d0f0000 @ 0x7fdccb1a91e7 0x55d40b2f17a2 0x55d40b2e07ca 0x55d40b2e1208 0x55d40b26b8d7 0x55d40b257066 0x7fdcc9342bf7 0x55d40b258baatcmalloc: large alloc 1992704000 bytes == 0x55d561ce0000 @ 0x7fdccb1a91e7 0x55d40b2f17a2 0x55d40b2e07ca 0x55d40b2e1208 0x55d40b26bcdd 0x55d40b257066 0x7fdcc9342bf7 0x55d40b258baatcmalloc: large alloc 3188326400 bytes == 0x55d695a86000 @ 0x7fdccb1a91e7 0x55d40b2f17a2 0x55d40b2e07ca 0x55d40b2e1208 0x55d40b26bcdd 0x55d40b257066 0x7fdcc9342bf7 0x55d40b258baaStatistics:1 360208 D1=0.686222 D2=1.01595 D3+=1.336852 5476741 D1=0.761523 D2=1.06735 D3+=1.325593 18177681 D1=0.839918 D2=1.12061 D3+=1.337944 30374983 D1=0.909146 D2=1.20496 D3+=1.372355 37231651 D1=0.944104 D2=1.25164 D3+=1.344Memory estimate for binary LM:type MBprobing 1884 assuming -p 1.5probing 2195 assuming -r models -p 1.5trie 922 without quantizationtrie 518 assuming -q 8 -b 8 quantization trie 806 assuming -a 22 array pointer compressiontrie 401 assuming -a 22 -q 8 -b 8 array pointer compression and quantization=== 3/5 Calculating and sorting initial probabilities ===Chain sizes: 1:4322496 2:87627856 3:363553620 4:728999592 5:1042486228----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100####################################################################################################=== 4/5 Calculating and writing order-interpolated probabilities ===Chain sizes: 1:4322496 2:87627856 3:363553620 4:728999592 5:1042486228----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100####################################################################################################=== 5/5 Writing ARPA model 
===----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100****************************************************************************************************Name:lmplz VmPeak:14181536 kB VmRSS:2199260 kB RSSMax:4160328 kB user:120.598 sys:26.6659 CPU:147.264 real:136.344Great, we have built a 5-gram LM! Let's inspect the first couple oflines.head -20 5gram.arpaOutput:\data\ngram 1=360208ngram 2=5476741ngram 3=18177681ngram 4=30374983ngram 5=37231651\1-grams:-6.770219 <unk> 00 <s> -0.11831701-4.6095004 återupptagande -1.2174699-2.2361007 av -0.79668784-4.8163533 sessionen -0.37327805-2.2251768 jag -1.4205662-4.181505 förklarar -0.56261665-3.5790775 europaparlamentets -0.63611007-4.771945 session -0.3647111-5.8043895 återupptagen -0.3058712-2.8580177 efter -0.7557702-5.199537 avbrottet -0.43322718There is a small problem that 🤗 Transformers will not be happy aboutlater on. The 5-gram correctly includes a "Unknown" or <unk>, aswell as a begin-of-sentence, <s> token, but no end-of-sentence,</s> token. This sadly has to be corrected currently after the build.We can simply add the end-of-sentence token by adding the line0 </s> -0.11831701 below the begin-of-sentence token and increasingthe ngram 1 count by 1. Because the file has roughly 100 millionlines, this command will take ca. 2 minutes.with open("5gram.arpa", "r") as read_file, open("5gram_correct.arpa", "w") as write_file:has_added_eos = Falsefor line in read_file:if not has_added_eos and "ngram 1=" in line:count=line.strip().split("=")[-1]write_file.write(line.replace(f"{count}", f"{int(count)+1}"))elif not has_added_eos and "<s>" in line:write_file.write(line)write_file.write(line.replace("<s>", "</s>"))has_added_eos = Trueelse:write_file.write(line)Let's now inspect the corrected 5-gram.head -20 5gram_correct.arpaOutput:\data\ngram 1=360209ngram 2=5476741ngram 3=18177681ngram 4=30374983ngram 5=37231651\1-grams:-6.770219 <unk> 00 <s> -0.118317010 </s> -0.11831701-4.6095004 återupptagande -1.2174699-2.2361007 av -0.79668784-4.8163533 sessionen -0.37327805-2.2251768 jag -1.4205662-4.181505 förklarar -0.56261665-3.5790775 europaparlamentets -0.63611007-4.771945 session -0.3647111-5.8043895 återupptagen -0.3058712-2.8580177 efter -0.7557702Great, this looks better! We're done at this point and all that is leftto do is to correctly integrate the "ngram" withpyctcdecode and🤗 Transformers.4. Combine an n-gram with Wav2Vec2In a final step, we want to wrap the 5-gram into aWav2Vec2ProcessorWithLM object to make the 5-gram boosted decodingas seamless as shown in Section 1. We start by downloading the currently"LM-less" processor ofxls-r-300m-sv.from transformers import AutoProcessorprocessor = AutoProcessor.from_pretrained("hf-test/xls-r-300m-sv")Next, we extract the vocabulary of its tokenizer as it represents the"labels" of pyctcdecode's BeamSearchDecoder class.vocab_dict = processor.tokenizer.get_vocab()sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}The "labels" and the previously built 5gram_correct.arpa file is allthat's needed to build the decoder.from pyctcdecode import build_ctcdecoderdecoder = build_ctcdecoder(labels=list(sorted_vocab_dict.keys()),kenlm_model_path="5gram_correct.arpa",)Output:Found entries of length > 1 in alphabet. This is unusual unless style is BPE, but the alphabet was not recognized as BPE type. 
Is this correct?Unigrams and labels don't seem to agree.We can safely ignore the warning and all that is left to do now is towrap the just created decoder, together with the processor'stokenizer and feature_extractor into a Wav2Vec2ProcessorWithLMclass.from transformers import Wav2Vec2ProcessorWithLMprocessor_with_lm = Wav2Vec2ProcessorWithLM(feature_extractor=processor.feature_extractor,tokenizer=processor.tokenizer,decoder=decoder)We want to directly upload the LM-boosted processor into the modelfolder ofxls-r-300m-sv to haveall relevant files in one place.Let's clone the repo, add the new decoder files and upload themafterward. First, we need to install git-lfs.sudo apt-get install git-lfs treeCloning and uploading of modeling files can be done conveniently withthe huggingface_hub's Repository class.More information on how to use the huggingface_hub to upload anyfiles, please take a look at the officialdocs.from huggingface_hub import Repositoryrepo = Repository(local_dir="xls-r-300m-sv", clone_from="hf-test/xls-r-300m-sv")Output:Cloning https://huggingface.co/hf-test/xls-r-300m-sv into local empty directory.Having cloned xls-r-300m-sv, let's save the new processor with LMinto it.processor_with_lm.save_pretrained("xls-r-300m-sv")Let's inspect the local repository. The tree command conveniently canalso show the size of the different files.tree -h xls-r-300m-sv/Output:xls-r-300m-sv/├── [ 23] added_tokens.json├── [ 401] all_results.json├── [ 253] alphabet.json├── [2.0K] config.json├── [ 304] emissions.csv├── [ 226] eval_results.json├── [4.0K] language_model│   ├── [4.1G] 5gram_correct.arpa│   ├── [ 78] attrs.json│   └── [4.9M] unigrams.txt├── [ 240] preprocessor_config.json├── [1.2G] pytorch_model.bin├── [3.5K] README.md├── [4.0K] runs│   └── [4.0K] Jan09_22-00-50_brutasse│   ├── [4.0K] 1641765760.8871996│   │   └── [4.6K] events.out.tfevents.1641765760.brutasse.31164.1│   ├── [ 42K] events.out.tfevents.1641765760.brutasse.31164.0│   └── [ 364] events.out.tfevents.1641794162.brutasse.31164.2├── [1.2K] run.sh├── [ 30K] run_speech_recognition_ctc.py├── [ 502] special_tokens_map.json├── [ 279] tokenizer_config.json├── [ 29K] trainer_state.json├── [2.9K] training_args.bin├── [ 196] train_results.json├── [ 319] vocab.json└── [4.0K] wandb├── [ 52] debug-internal.log -> run-20220109_220240-1g372i3v/logs/debug-internal.log├── [ 43] debug.log -> run-20220109_220240-1g372i3v/logs/debug.log├── [ 28] latest-run -> run-20220109_220240-1g372i3v└── [4.0K] run-20220109_220240-1g372i3v├── [4.0K] files│   ├── [8.8K] conda-environment.yaml│   ├── [140K] config.yaml│   ├── [4.7M] output.log│   ├── [5.4K] requirements.txt│   ├── [2.1K] wandb-metadata.json│   └── [653K] wandb-summary.json├── [4.0K] logs│   ├── [3.4M] debug-internal.log│   └── [8.2K] debug.log└── [113M] run-1g372i3v.wandb9 directories, 34 filesAs can be seen the 5-gram LM is quite large - it amounts to more than4 GB. To reduce the size of the n-gram and make loading faster,kenLM allows converting .arpa files to binary ones using thebuild_binary executable.Let's make use of it here.kenlm/build/bin/build_binary xls-r-300m-sv/language_model/5gram_correct.arpa xls-r-300m-sv/language_model/5gram.binOutput:Reading xls-r-300m-sv/language_model/5gram_correct.arpa----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100****************************************************************************************************SUCCESSGreat, it worked! 
Let's remove the .arpa file and check the size ofthe binary 5-gram LM.rm xls-r-300m-sv/language_model/5gram_correct.arpa && tree -h xls-r-300m-sv/Output:xls-r-300m-sv/├── [ 23] added_tokens.json├── [ 401] all_results.json├── [ 253] alphabet.json├── [2.0K] config.json├── [ 304] emissions.csv├── [ 226] eval_results.json├── [4.0K] language_model│   ├── [1.8G] 5gram.bin│   ├── [ 78] attrs.json│   └── [4.9M] unigrams.txt├── [ 240] preprocessor_config.json├── [1.2G] pytorch_model.bin├── [3.5K] README.md├── [4.0K] runs│   └── [4.0K] Jan09_22-00-50_brutasse│   ├── [4.0K] 1641765760.8871996│   │   └── [4.6K] events.out.tfevents.1641765760.brutasse.31164.1│   ├── [ 42K] events.out.tfevents.1641765760.brutasse.31164.0│   └── [ 364] events.out.tfevents.1641794162.brutasse.31164.2├── [1.2K] run.sh├── [ 30K] run_speech_recognition_ctc.py├── [ 502] special_tokens_map.json├── [ 279] tokenizer_config.json├── [ 29K] trainer_state.json├── [2.9K] training_args.bin├── [ 196] train_results.json├── [ 319] vocab.json└── [4.0K] wandb├── [ 52] debug-internal.log -> run-20220109_220240-1g372i3v/logs/debug-internal.log├── [ 43] debug.log -> run-20220109_220240-1g372i3v/logs/debug.log├── [ 28] latest-run -> run-20220109_220240-1g372i3v└── [4.0K] run-20220109_220240-1g372i3v├── [4.0K] files│   ├── [8.8K] conda-environment.yaml│   ├── [140K] config.yaml│   ├── [4.7M] output.log│   ├── [5.4K] requirements.txt│   ├── [2.1K] wandb-metadata.json│   └── [653K] wandb-summary.json├── [4.0K] logs│   ├── [3.4M] debug-internal.log│   └── [8.2K] debug.log└── [113M] run-1g372i3v.wandb9 directories, 34 filesNice, we reduced the n-gram by more than half to less than 2GB now. Inthe final step, let's upload all files.repo.push_to_hub(commit_message="Upload lm-boosted decoder")Output:Git LFS: (1 of 1 files) 1.85 GB / 1.85 GBCounting objects: 9, done.Delta compression using up to 2 threads.Compressing objects: 100% (9/9), done.Writing objects: 100% (9/9), 1.23 MiB | 1.92 MiB/s, done.Total 9 (delta 3), reused 0 (delta 0)To https://huggingface.co/hf-test/xls-r-300m-sv27d0c57..5a191e2 main -> mainThat's it. Now you should be able to use the 5gram for LM-boosteddecoding as shown in Section 1.As can be seen on xls-r-300m-sv's modelcardour 5gram LM-boosted decoder yields a WER of 18.85% on Common Voice's 7test set which is a relative performance of ca. 30% 🔥.
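To close the loop, here is a minimal sketch of how the LM-boosted processor uploaded above could be used for decoding, mirroring Section 1. The 16 kHz audio_array is a placeholder you would load yourself, for example from a Common Voice sample:

```python
# Minimal decoding sketch with the LM-boosted Swedish checkpoint from above.
# `audio_array` is a placeholder: a 1-D float array of 16 kHz audio you load yourself.
import torch
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("hf-test/xls-r-300m-sv")
model = AutoModelForCTC.from_pretrained("hf-test/xls-r-300m-sv")

inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# pyctcdecode expects numpy arrays, so convert the logits before decoding.
transcription = processor.batch_decode(logits.numpy()).text
print(transcription[0])
```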
https://huggingface.co/blog/gptj-sagemaker
Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker
Philipp Schmid
January 11, 2022
Almost 6 months ago to the day, EleutherAI released GPT-J 6B, an open-source alternative to OpenAIs GPT-3. GPT-J 6B is the 6 billion parameter successor to EleutherAIs GPT-NEO family, a family of transformer-based language models based on the GPT architecture for text generation.EleutherAI's primary goal is to train a model that is equivalent in size to GPT⁠-⁠3 and make it available to the public under an open license. Over the last 6 months, GPT-J gained a lot of interest from Researchers, Data Scientists, and even Software Developers, but it remained very challenging to deploy GPT-J into production for real-world use cases and products. There are some hosted solutions to use GPT-J for production workloads, like the Hugging Face Inference API, or for experimenting using EleutherAIs 6b playground, but fewer examples on how to easily deploy it into your own environment. In this blog post, you will learn how to easily deploy GPT-J using Amazon SageMaker and the Hugging Face Inference Toolkit with a few lines of code for scalable, reliable, and secure real-time inference using a regular size GPU instance with NVIDIA T4 (~500$/m). But before we get into it, I want to explain why deploying GPT-J into production is challenging. BackgroundThe weights of the 6 billion parameter model represent a ~24GB memory footprint. To load it in float32, one would need at least 2x model size CPU RAM: 1x for initial weights and another 1x to load the checkpoint. So for GPT-J it would require at least 48GB of CPU RAM to just load the model.To make the model more accessible, EleutherAI also provides float16 weights, and transformers has new options to reduce the memory footprint when loading large language models. Combining all this it should take roughly 12.1GB of CPU RAM to load the model.from transformers import GPTJForCausalLMimport torchmodel = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B",revision="float16",torch_dtype=torch.float16,low_cpu_mem_usage=True)The caveat of this example is that it takes a very long time until the model is loaded into memory and ready for use. In my experiments, it took 3 minutes and 32 seconds to load the model with the code snippet above on a P3.2xlarge AWS EC2 instance (the model was not stored on disk). This duration can be reduced by storing the model already on disk, which reduces the load time to 1 minute and 23 seconds, which is still very long for production workloads where you need to consider scaling and reliability. For example, Amazon SageMaker has a 60s limit for requests to respond, meaning the model needs to be loaded and the predictions to run within 60s, which in my opinion makes a lot of sense to keep the model/endpoint scalable and reliable for your workload. If you have longer predictions, you could use batch-transform.In Transformers the models loaded with the from_pretrained method are following PyTorch's recommended practice, which takes around 1.97 seconds for BERT [REF]. PyTorch offers an additional alternative way of saving and loading models using torch.save(model, PATH) and torch.load(PATH).“Saving a model in this way will save the entire module using Python’s pickle module. The disadvantage of this approach is that the serialized data is bound to the specific classes and the exact directory structure used when the model is saved.” This means that when we save a model with transformers==4.13.2 it could be potentially incompatible when trying to load with transformers==4.15.0. 
However, loading models this way reduces the loading time by ~12x, down to 0.166s for BERT. Applying this to GPT-J means that we can reduce the loading time from 1 minute and 23 seconds down to 7.7 seconds, which is ~10.5x faster.Figure 1. Model load time of BERT and GPTJTutorialWith this method of saving and loading models, we achieved model loading performance for GPT-J compatible with production scenarios. But we need to keep in mind that we need to align: Align PyTorch and Transformers version when saving the model with torch.save(model,PATH) and loading the model with torch.load(PATH) to avoid incompatibility.Save GPT-J using torch.saveTo create our torch.load() compatible model file we load GPT-J using Transformers and the from_pretrained method, and then save it with torch.save().from transformers import AutoTokenizer,GPTJForCausalLMimport torch# load fp 16 modelmodel = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16)# save model with torch.savetorch.save(model, "gptj.pt")Now we are able to load our GPT-J model with torch.load() to run predictions. from transformers import pipelineimport torch# load modelmodel = torch.load("gptj.pt")# load tokenizertokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")# create pipelinegen = pipeline("text-generation",model=model,tokenizer=tokenizer,device=0)# run predictiongen("My Name is philipp")#[{'generated_text': 'My Name is philipp k. and I live just outside of Detroit....Create model.tar.gz for the Amazon SageMaker real-time endpointSince we can load our model quickly and run inference on it let’s deploy it to Amazon SageMaker. There are two ways you can deploy transformers to Amazon SageMaker. You can either “Deploy a model from the Hugging Face Hub” directly or “Deploy a model with model_data stored on S3”. Since we are not using the default Transformers method we need to go with the second option and deploy our endpoint with the model stored on S3. For this, we need to create a model.tar.gz artifact containing our model weights and additional files we need for inference, e.g. tokenizer.json. We provide uploaded and publicly accessible model.tar.gz artifacts, which can be used with the HuggingFaceModel to deploy GPT-J to Amazon SageMaker.See “Deploy GPT-J as Amazon SageMaker Endpoint” on how to use them.If you still want or need to create your own model.tar.gz, e.g. because of compliance guidelines, you can use the helper script convert_gpt.py for this purpose, which creates the model.tar.gz and uploads it to S3. # clone directorygit clone https://github.com/philschmid/amazon-sagemaker-gpt-j-sample.git# change directory to amazon-sagemaker-gpt-j-samplecd amazon-sagemaker-gpt-j-sample# create and upload model.tar.gzpip3 install -r requirements.txtpython3 convert_gptj.py --bucket_name {model_storage}The convert_gpt.py should print out an S3 URI similar to this. s3://hf-sagemaker-inference/gpt-j/model.tar.gz.Deploy GPT-J as Amazon SageMaker EndpointTo deploy our Amazon SageMaker Endpoint we are going to use the Amazon SageMaker Python SDK and the HuggingFaceModel class. The snippet below uses the get_execution_role which is only available inside Amazon SageMaker Notebook Instances or Studio. If you want to deploy a model outside of it check the documentation. The model_uri defines the location of our GPT-J model artifact. We are going to use the publicly available one provided by us. 
from sagemaker.huggingface import HuggingFaceModelimport sagemaker# IAM role with permissions to create endpointrole = sagemaker.get_execution_role()# public S3 URI to gpt-j artifactmodel_uri="s3://huggingface-sagemaker-models/transformers/4.12.3/pytorch/1.9.1/gpt-j/model.tar.gz"# create Hugging Face Model Classhuggingface_model = HuggingFaceModel(model_data=model_uri,transformers_version='4.12.3',pytorch_version='1.9.1',py_version='py38',role=role, )# deploy model to SageMaker Inferencepredictor = huggingface_model.deploy(initial_instance_count=1, # number of instancesinstance_type='ml.g4dn.xlarge' #'ml.p3.2xlarge' # ec2 instance type)If you want to use your own model.tar.gz just replace the model_uri with your S3 Uri.The deployment should take around 3-5 minutes.Run predictionsWe can run predictions using the predictor instances created by our .deploy method. To send a request to our endpoint we use the predictor.predict with our inputs.predictor.predict({"inputs": "Can you please let us know more details about your "})If you want to customize your predictions using additional kwargs like min_length, check out “Usage best practices” below. Usage best practicesWhen using generative models, most of the time you want to configure or customize your prediction to fit your needs, for example by using beam search, configuring the max or min length of the generated sequence, or adjust the temperature to reduce repetition. The Transformers library provides different strategies and kwargs to do this, the Hugging Face Inference toolkit offers the same functionality using the parameters attribute of your request payload. Below you can find examples on how to generate text without parameters, with beam search, and using custom configurations. If you want to learn about different decoding strategies check out this blog post.Default requestThis is an example of a default request using greedy search.Inference-time after the first request: 3spredictor.predict({"inputs": "Can you please let us know more details about your "})Beam search requestThis is an example of a request using beam search with 5 beams.Inference-time after the first request: 3.3spredictor.predict({"inputs": "Can you please let us know more details about your ","parameters" : {"num_beams": 5,}})Parameterized requestThis is an example of a request using a custom parameter, e.g. min_length for generating at least 512 tokens.Inference-time after the first request: 38spredictor.predict({"inputs": "Can you please let us know more details about your ","parameters" : {"max_length": 512,"temperature": 0.9,}})Few-Shot example (advanced)This is an example of how you could eos_token_id to stop the generation on a certain token, e.g. ,. or ### for few-shot predictions. 
Below is a few-shot example for generating tweets for keywords.Inference-time after the first request: 15-45sfrom transformers import AutoTokenizertokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")end_sequence="###"temperature=4max_generated_token_length=25prompt= """key: marketstweet: Take feedback from nature and markets, not from people.###key: childrentweet: Maybe we die so we can come back as children.###key: startupstweet: Startups shouldn’t worry about how to put out fires, they should worry about how to start them.###key: hugging facetweet:"""predictor.predict({'inputs': prompt,"parameters" : {"max_length": int(len(prompt) + max_generated_token_length),"temperature": float(temperature),"eos_token_id": int(tokenizer.convert_tokens_to_ids(end_sequence)),"return_full_text":False}})To delete your endpoint you can run. predictor.delete_endpoint()ConclusionWe successfully managed to deploy GPT-J, a 6 billion parameter language model created by EleutherAI, using Amazon SageMaker. We reduced the model load time from 3.5 minutes down to 8 seconds to be able to run scalable, reliable inference. Remember that using torch.save() and torch.load() can create incompatibility issues. If you want to learn more about scaling out your Amazon SageMaker Endpoints check out my other blog post: “MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines”.Thanks for reading! If you have any question, feel free to contact me, through Github, or on the forum. You can also connect with me on Twitter or LinkedIn.
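If you want to reproduce the load-time comparison from Figure 1 on your own hardware, a rough sketch is shown below. The numbers will vary with instance type, disk speed, and library versions, and the torch/transformers versions should stay aligned between saving and loading:

```python
# Rough timing sketch for the two loading paths discussed above.
# Requires enough CPU RAM for the fp16 GPT-J weights; results vary by machine.
import time

import torch
from transformers import GPTJForCausalLM

start = time.time()
model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
print(f"from_pretrained: {time.time() - start:.1f}s")

torch.save(model, "gptj.pt")

start = time.time()
# The post used torch 1.9; newer torch versions may need weights_only=False here.
model = torch.load("gptj.pt")
print(f"torch.load: {time.time() - start:.1f}s")
```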
https://huggingface.co/blog/autonlp-prodigy
Active Learning with AutoNLP and Prodigy
Abhishek Thakur
December 23, 2021
Active learning in the context of Machine Learning is a process in which you iteratively add labeled data, retrain a model and serve it to the end user. It is an endless process and requires human interaction for labeling/creating the data. In this article, we will discuss how to use AutoNLP and Prodigy to build an active learning pipeline.AutoNLPAutoNLP is a framework created by Hugging Face that helps you to build your own state-of-the-art deep learning models on your own dataset with almost no coding at all. AutoNLP is built on the giant shoulders of Hugging Face's transformers, datasets, inference-api and many other tools.With AutoNLP, you can train SOTA transformer models on your own custom dataset, fine-tune them (automatically) and serve them to the end-user. All models trained with AutoNLP are state-of-the-art and production-ready.At the time of writing this article, AutoNLP supports tasks like binary classification, regression, multi class classification, token classification (such as named entity recognition or part of speech), question answering, summarization and more. You can find a list of all the supported tasks here. AutoNLP supports languages like English, French, German, Spanish, Hindi, Dutch, Swedish and many more. There is also support for custom models with custom tokenizers (in case your language is not supported by AutoNLP).ProdigyProdigy is an annotation tool developed by Explosion (the makers of spaCy). It is a web-based tool that allows you to annotate your data in real time. Prodigy supports NLP tasks such as named entity recognition (NER) and text classification, but it's not limited to NLP! It supports Computer Vision tasks and even creating your own tasks! You can try the Prodigy demo: here.Note that Prodigy is a commercial tool. You can find out more about it here.We chose Prodigy as it is one of the most popular tools for labeling data and is infinitely customizable. It is also very easy to setup and use.DatasetNow begins the most interesting part of this article. After looking at a lot of datasets and different types of problems, we stumbled upon BBC News Classification dataset on Kaggle. This dataset was used in an inclass competition and can be accessed here.Let's take a look at this dataset:As we can see this is a classification dataset. There is a Text column which is the text of the news article and a Category column which is the class of the article. Overall, there are 5 different classes: business, entertainment, politics, sport & tech. Training a multi-class classification model on this dataset using AutoNLP is a piece of cake. Step 1: Download the dataset.Step 2: Open AutoNLP and create a new project.Step 3: Upload the training dataset and choose auto-splitting.Step 4: Accept the pricing and train your models.Please note that in the above example, we are training 15 different multi-class classification models. AutoNLP pricing can be as low as $10 per model. AutoNLP will select the best models and do hyperparameter tuning for you on its own. So, now, all we need to do is sit back, relax and wait for the results.After around 15 minutes, all models finished training and the results are ready. It seems like the best model scored 98.67% accuracy! So, we are now able to classify the articles in the dataset with an accuracy of 98.67%! But wait, we were talking about active learning and Prodigy. What happened to those? 🤔 We did use Prodigy as we will see soon. We used it to label this dataset for the named entity recognition task. 
Before starting the labeling part, we thought it would be cool to have a project in which we are not only able to detect the entities in news articles but also categorize them. That's why we built this classification model on existing labels.Active LearningThe dataset we used did have categories but it didn't have labels for entity recognition. So, we decided to use Prodigy to label the dataset for another task: named entity recognition.Once you have Prodigy installed, you can simply run:$ prodigy ner.manual bbc blank:en BBC_News_Train.csv --label PERSON,ORG,PRODUCT,LOCATIONLet's look at the different values:bbc is the dataset that will be created by Prodigy. blank:en is the spaCy tokenizer being used. BBC_News_Train.csv is the dataset that will be used for labeling. PERSON,ORG,PRODUCT,LOCATION is the list of labels that will be used for labeling.Once you run the above command, you can go to the prodigy web interface (usually at localhost:8080) and start labelling the dataset. Prodigy interface is very simple, intuitive and easy to use. The interface looks like the following:All you have to do is select which entity you want to label (PERSON, ORG, PRODUCT, LOCATION) and then select the text that belongs to the entity. Once you are done with one document, you can click on the green button and Prodigy will automatically provide you with next unlabelled document.Using Prodigy, we started labelling the dataset. When we had around 20 samples, we trained a model using AutoNLP. Prodigy doesn't export the data in AutoNLP format, so we wrote a quick and dirty script to convert the data into AutoNLP format:import jsonimport spacyfrom prodigy.components.db import connectdb = connect()prodigy_annotations = db.get_dataset("bbc")examples = ((eg["text"], eg) for eg in prodigy_annotations)nlp = spacy.blank("en")dataset = []for doc, eg in nlp.pipe(examples, as_tuples=True):try:doc.ents = [doc.char_span(s["start"], s["end"], s["label"]) for s in eg["spans"]]iob_tags = [f"{t.ent_iob_}-{t.ent_type_}" if t.ent_iob_ else "O" for t in doc]iob_tags = [t.strip("-") for t in iob_tags]tokens = [str(t) for t in doc]temp_data = {"tokens": tokens,"tags": iob_tags}dataset.append(temp_data)except:passwith open('data.jsonl', 'w') as outfile:for entry in dataset:json.dump(entry, outfile)outfile.write('')This will provide us with a JSONL file which can be used for training a model using AutoNLP. The steps will be same as before except we will select Token Classification task when creating the AutoNLP project. Using the initial data we had, we trained a model using AutoNLP. The best model had an accuracy of around 86% with 0 precision and recall. We knew the model didn't learn anything. It's pretty obvious, we had only around 20 samples. After labelling around 70 samples, we started getting some results. The accuracy went up to 92%, precision was 0.52 and recall around 0.42. We were getting some results, but still not satisfactory. In the following image, we can see how this model performs on an unseen sample.As you can see, the model is struggling. But it's much better than before! Previously, the model was not even able to predict anything in the same text. At least now, it's able to figure out that Bruce and David are names.Thus, we continued. We labelled a few more samples. Please note that, in each iteration, our dataset is getting bigger. All we are doing is uploading the new dataset to AutoNLP and let it do the rest.After labelling around ~150 samples, we started getting some good results. 
The accuracy went up to 95.7%, precision was 0.64 and recall around 0.76. Let's take a look at how this model performs on the same unseen sample.WOW! This is amazing! As you can see, the model is now performing extremely well! It's able to detect many entities in the same text. The precision and recall were still a bit low and thus we continued labeling even more data. After labeling around ~250 samples, we had the best results in terms of precision and recall. The accuracy went up to ~95.9% and precision and recall were 0.73 and 0.79 respectively. At this point, we decided to stop labelling and end the experimentation process. The following graph shows how the accuracy of the best model improved as we added more samples to the dataset:Well, it's a well-known fact that more relevant data will lead to better models and thus better results. With this experimentation, we successfully created a model that can not only detect the entities in news articles but also categorize the articles. Using tools like Prodigy and AutoNLP, we invested our time and effort only to label the dataset (even that was made simpler by the interface Prodigy offers). AutoNLP saved us a lot of time and effort: we didn't have to figure out which models to use, how to train them, how to evaluate them, how to tune the parameters, which optimizer and scheduler to use, pre-processing, post-processing etc. We just needed to label the dataset and let AutoNLP do everything else.We believe that with tools like AutoNLP and Prodigy it's very easy to create data and state-of-the-art models. And since the whole process requires almost no coding at all, even someone without a coding background can create datasets which are generally not available to the public, train their own models using AutoNLP and share the model with everyone else in the community (or just use them for their own research / business).We have open-sourced the best model created using this process. You can try it here. The labelled dataset can also be downloaded here.Models are only state-of-the-art because of the data they are trained on.
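For reference, the open-sourced NER model can be used like any other token-classification model on the Hub. A minimal sketch (the repository id below is a placeholder; substitute the actual model id linked in the post):

from transformers import pipeline

# Placeholder repo id; replace with the open-sourced model linked above
ner = pipeline("token-classification", model="<username>/autonlp-bbc-ner-67890", aggregation_strategy="simple")

for entity in ner("Bruce Willis met David at the BBC headquarters in London."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))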
https://huggingface.co/blog/gradio-joins-hf
Gradio is joining Hugging Face!
Abubakar Abid
December 21, 2021
Gradio is joining Hugging Face!
https://huggingface.co/blog/perceiver
Perceiver IO: a scalable, fully-attentional model that works on any modality
Niels Rogge
December 15, 2021
We've added Perceiver IO to Transformers, the first Transformer-based neural network that works on all kinds of modalities (text, images, audio, video, point clouds,...) and combinations thereof. Take a look at the following Spaces to view some examples:predicting optical flow between imagesclassifying images.We also provide several notebooks.Below, you can find a technical explanation of the model.IntroductionThe Transformer, originally introduced by Vaswani et al. in 2017, caused a revolution in the AI community, initially improvingstate-of-the-art (SOTA) results in machine translation. In 2018, BERTwas released, a Transformer encoder-only model that crushed the benchmarks of natural languageprocessing (NLP), most famously the GLUE benchmark. Not long after that, AI researchers started to apply the idea of BERT to other domains. To name a few examples:Wav2Vec2 by Facebook AI illustrated that the architecture could be extended to audiothe Vision Transformer (ViT) by Google AI showed that the architecture works really well for visionmost recently the Video Vision transformer (ViViT), also by Google AI, applied the architecture to video.In all of these domains, state-of-the-art results were improved dramatically, thanks to the combination of this powerful architecture with large-scale pre-training.However, there's an important limitation to the architecture of the Transformer: due to its self-attention mechanism, it scales very poorly in both compute and memory. In every layer, all inputs are used to produce queries and keys, for which a pairwise dot product is computed. Hence, it is not possible to apply self-attention on high-dimensional data without some form of preprocessing. Wav2Vec2, for example, solves this by employing a feature encoder to turn a raw waveform into a sequence of time-based features. The Vision Transformer (ViT) divides an image into a sequence of non-overlapping patches, which serve as "tokens". The Video Vision Transformer (ViViT) extracts non-overlapping, spatio-temporal“tubes” from a video, which serve as "tokens". To make the Transformer work on a particular modality, one typically discretizes it to a sequence of tokens to make it work.The PerceiverThe Perceiver aims to solve this limitation by employing the self-attention mechanism on a set of latent variables, rather than on the inputs. The inputs (which could be text, image, audio, video) are only used for doing cross-attention with the latents. This has the advantage that the bulk of compute happens in a latent space, where compute is cheap (one typically uses 256 or 512 latents). The resulting architecture has no quadratic dependence on the input size: the Transformer encoder only depends linearly on the input size, while latent attention is independent of it. In a follow-up paper, called Perceiver IO, the authors extend this idea to let the Perceiver also handle arbitrary outputs. The idea is similar: one only uses the outputs for doing cross-attention with the latents. Note that I'll use the terms "Perceiver" and "Perceiver IO" interchangeably to refer to the Perceiver IO model throughout this blog post.In the following section, we look in a bit more detail at how Perceiver IO actually works by going over its implementation in HuggingFace Transformers, a popular library that initially implemented Transformer-based models for NLP, but is now starting to implement them for other domains as well. 
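To make the scaling argument concrete before diving into the implementation, here is a minimal shape sketch (illustrative dimensions only, not the actual Perceiver code): the cross-attention score matrix between a fixed set of latents and the inputs grows linearly with the input length, whereas a vanilla self-attention matrix over the inputs grows quadratically.

import torch

batch_size, num_latents, d = 1, 256, 64   # d is an illustrative per-head dimension
seq_len = 2048                            # e.g. 2048 UTF-8 bytes for the text model

q = torch.randn(batch_size, num_latents, d)  # queries come from the latents
k = torch.randn(batch_size, seq_len, d)      # keys come from the (preprocessed) inputs

scores = q @ k.transpose(-1, -2)
print(scores.shape)  # torch.Size([1, 256, 2048]): linear in seq_len, vs. (2048, 2048) for self-attention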
In the sections below, we explain in detail - in terms of shapes of tensors - how the Perceiver actually pre and post processes modalities of any kind.All Perceiver variants in HuggingFace Transformers are based on the PerceiverModel class. To initialize a PerceiverModel, one can provide 3 additional instances to the model:a preprocessora decodera postprocessor.Note that each of these are optional. A preprocessor is only required in case one hasn't already embedded the inputs (such as text, image, audio, video) themselves. A decoder is only required in case one wants to decode the output of the Perceiver encoder (i.e. the last hidden states of the latents) into something more useful, such as classification logits or optical flow. A postprocessor is only required in case one wants to turn the output of the decoder into a specific feature (this is only required when doing auto-encoding, as we will see further). An overview of the architecture is depicted below. The Perceiver architecture.In other words, the inputs (which could be any modality, or a combination thereof) are first optionally preprocessed using a preprocessor. Next, the preprocessed inputs perform a cross-attention operation with the latent variables of the Perceiver encoder. In this operation, the latent variables produce queries (Q), while the preprocessed inputs produce keys and values (KV). After this operation, the Perceiver encoder employs a (repeatable) block of self-attention layers to update the embeddings of the latents. The encoder will finally produce a tensor of shape (batch_size, num_latents, d_latents), containing the last hidden states of the latents. Next, there's an optional decoder, which can be used to decode the final hidden states of the latents into something more useful, such as classification logits. This is done by performing a cross-attention operation, in which trainable embeddings are used to produce queries (Q), while the latents are used to produce keys and values (KV). Finally, there's an optional postprocessor, which can be used to postprocess the decoder outputs to specific features.Let's start off by showing how the Perceiver is implemented to work on text.Perceiver for textSuppose that one wants to apply the Perceiver to perform text classification. As the memory and time requirements of the Perceiver's self-attention mechanism don't depend on the size of the inputs, one can directly provide raw UTF-8 bytes to the model. This is beneficial, as familar Transformer-based models (like BERT and RoBERTa) all employ some form of explicit tokenization, such as WordPiece, BPE or SentencePiece, which may be harmful. For a fair comparison to BERT (which uses a sequence length of 512 subword tokens), the authors used input sequences of 2048 bytes. Let's say one also adds a batch dimension, then the inputs to the model are of shape (batch_size, 2048). The inputs contain the byte IDs (similar to the input_ids of BERT) for a single piece of text. One can use PerceiverTokenizer to turn a text into a sequence of byte IDs, padded up to a length of 2048:from transformers import PerceiverTokenizertokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")text = "hello world"inputs = tokenizer(text, padding="max_length", return_tensors="pt").input_idsIn this case, one provides PerceiverTextPreprocessor as preprocessor to the model, which will take care of embedding the inputs (i.e. turn each byte ID into a corresponding vector), as well as adding absolute position embeddings. 
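As a quick sanity check, the padded sequence length and the embedding and latent sizes used in the rest of this section can be read directly off the checkpoint (a small sketch, assuming the deepmind/language-perceiver checkpoint is reachable on the Hub):

from transformers import PerceiverConfig, PerceiverTokenizer

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
inputs = tokenizer("hello world", padding="max_length", return_tensors="pt").input_ids
print(inputs.shape)  # torch.Size([1, 2048]): byte IDs padded up to 2048

config = PerceiverConfig.from_pretrained("deepmind/language-perceiver")
print(config.d_model, config.num_latents, config.d_latents)  # 768, 256, 1280 for this checkpoint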
As decoder, one provides PerceiverClassificationDecoder to the model (which will turn the last hidden states of the latents into classification logits). No postprocessor is required. In other words, a Perceiver model for text classification (which is called PerceiverForSequenceClassification in HuggingFace Transformers) is implemented as follows:from torch import nnfrom transformers import PerceiverModelfrom transformers.models.perceiver.modeling_perceiver import PerceiverTextPreprocessor, PerceiverClassificationDecoderclass PerceiverForSequenceClassification(nn.Module):def __init__(self, config):super().__init__(config)self.perceiver = PerceiverModel(config,input_preprocessor=PerceiverTextPreprocessor(config),decoder=PerceiverClassificationDecoder(config,num_channels=config.d_latents,trainable_position_encoding_kwargs=dict(num_channels=config.d_latents, index_dims=1),use_query_residual=True,),)One can already see here that the decoder is initialized with trainable position encoding arguments. Why is that? Well, let's take a look in detail at how Perceiver IO works. At initialization, PerceiverModel internally defines a set of latent variables, as follows:from torch import nnself.latents = nn.Parameter(torch.randn(config.num_latents, config.d_latents))In the Perceiver IO paper, one uses 256 latents, and sets the dimensionality of the latents to 1280. If one also adds a batch dimension, the Perceiver has latents of shape (batch_size, 256, 1280). First, the preprocessor (which one provides at initialization) will take care of embedding the UTF-8 byte IDs to embedding vectors. Hence, PerceiverTextPreprocessor will turn the inputs of shape (batch_size, 2048) to a tensor of shape (batch_size, 2048, 768) - assuming that each byte ID is turned into a vector of size 768 (this is determined by the d_model attribute of PerceiverConfig).After this, Perceiver IO applies cross-attention between the latents (which produce queries) of shape (batch_size, 256, 1280) and the preprocessed inputs (which produce keys and values) of shape (batch_size, 2048, 768). The output of this initial cross-attention operation is a tensor that has the same shape as the queries (which are the latents, in this case). In other words, the output of the cross-attention operation is of shape (batch_size, 256, 1280). Next, a (repeatable) block of self-attention layers is applied to update the representations of the latents. Note that these don't depend on the length of the inputs (i.e. the bytes) one provided, as these were only used during the cross-attention operation. In the Perceiver IO paper, a single block of 26 self-attention layers (each of which has 8 attention heads) were used to update the representations of the latents of the text model. Note that the output after these 26 self-attention layers still has the same shape as what one initially provided as input to the encoder: (batch_size, 256, 1280). These are also called the "last hidden states" of the latents. This is very similar to the "last hidden states" of the tokens one provides to BERT. Ok, so now one has final hidden states of shape (batch_size, 256, 1280). Great, but one actually wants to turn these into classification logits of shape (batch_size, num_labels). How can we make the Perceiver output these? This is handled by PerceiverClassificationDecoder. The idea is very similar to what was done when mapping the inputs to the latent space: one uses cross-attention. 
But now, the latent variables will produce keys and values, and one provides a tensor of whatever shape we'd like - in this case we'll provide a tensor of shape (batch_size, 1, num_labels) which will act as queries (the authors refer to these as "decoder queries", because they are used in the decoder). This tensor will be randomly initialized at the beginning of training, and trained end-to-end. As one can see, one just provides a dummy sequence length dimension of 1. Note that the output of a QKV attention layer always has the same shape as the shape of the queries - hence the decoder will output a tensor of shape (batch_size, 1, num_labels). The decoder then simply squeezes this tensor to have shape (batch_size, num_labels) and boom, one has classification logits1.Great, isn't it? The Perceiver authors also show that it is straightforward to pre-train the Perceiver for masked language modeling, similar to BERT. This model is also available in HuggingFace Transformers, and called PerceiverForMaskedLM. The only difference with PerceiverForSequenceClassification is that it doesn't use PerceiverClassificationDecoder as decoder, but rather PerceiverBasicDecoder, to decode the latents to a tensor of shape (batch_size, 2048, 1280). After this, a language modeling head is added, which turns it into a tensor of shape (batch_size, 2048, vocab_size). The vocabulary size of the Perceiver is only 262, namely the 256 UTF-8 byte IDs, as well as 6 special tokens. By pre-training the Perceiver on English Wikipedia and C4, the authors show that it is possible to achieve an overall score of 81.8 on GLUE after fine-tuning.Perceiver for imagesNow that we've seen how to apply the Perceiver to perform text classification, it is straightforward to apply the Perceiver to do image classification. The only difference is that we'll provide a different preprocessor to the model, which will embed the image inputs. The Perceiver authors actually tried out 3 different ways of preprocessing: flattening the pixel values, applying a convolutional layer with kernel size 1 and adding learned absolute 1D position embeddings.flattening the pixel values and adding fixed 2D Fourier position embeddings.applying a 2D convolutional + maxpool layer and adding fixed 2D Fourier position embeddings.Each of these are implemented in the Transformers library, and called PerceiverForImageClassificationLearned, PerceiverForImageClassificationFourier and PerceiverForImageClassificationConvProcessing respectively. They only differ in their configuration of PerceiverImagePreprocessor. Let's take a closer look at PerceiverForImageClassificationLearned. 
It initializes a PerceiverModel as follows: from torch import nnfrom transformers import PerceiverModelfrom transformers.models.perceiver.modeling_perceiver import PerceiverImagePreprocessor, PerceiverClassificationDecoderclass PerceiverForImageClassificationLearned(nn.Module):def __init__(self, config):super().__init__(config)self.perceiver = PerceiverModel(config,input_preprocessor=PerceiverImagePreprocessor(config,prep_type="conv1x1",spatial_downsample=1,out_channels=256,position_encoding_type="trainable",concat_or_add_pos="concat",project_pos_dim=256,trainable_position_encoding_kwargs=dict(num_channels=256, index_dims=config.image_size ** 2),),decoder=PerceiverClassificationDecoder(config,num_channels=config.d_latents,trainable_position_encoding_kwargs=dict(num_channels=config.d_latents, index_dims=1),use_query_residual=True,),)One can see that PerceiverImagePreprocessor is initialized with prep_type = "conv1x1" and that one adds arguments for the trainable position encodings. So how does this preprocessor work in detail? Suppose that one provides a batch of images to the model. Let's say one applies center cropping to a resolution of 224 and normalization of the color channels first, such that the inputs are of shape (batch_size, num_channels, height, width) = (batch_size, 3, 224, 224). One can use PerceiverImageProcessor for this, as follows:from transformers import PerceiverImageProcessorimport requestsfrom PIL import Imageprocessor = PerceiverImageProcessor.from_pretrained("deepmind/vision-perceiver")url = 'http://images.cocodataset.org/val2017/000000039769.jpg'image = Image.open(requests.get(url, stream=True).raw)inputs = processor(image, return_tensors="pt").pixel_valuesPerceiverImagePreprocessor (with the settings defined above) will first apply a convolutional layer with kernel size (1, 1) to turn the inputs into a tensor of shape (batch_size, 256, 224, 224) - hence increasing the channel dimension. It will then place the channel dimension last - so now one has a tensor of shape (batch_size, 224, 224, 256). Next, it flattens the spatial (height + width) dimensions such that one has a tensor of shape (batch_size, 50176, 256). Next, it concatenates it with trainable 1D position embeddings. As the dimensionality of the position embeddings is defined to be 256 (see the num_channels argument above), one is left with a tensor of shape (batch_size, 50176, 512). This tensor will be used for the cross-attention operation with the latents.The authors use 512 latents for all image models, and set the dimensionality of the latents to 1024. Hence, the latents are a tensor of shape (batch_size, 512, 1024) - assuming we add a batch dimension. The cross-attention layer takes the queries of shape (batch_size, 512, 1024) and keys + values of shape (batch_size, 50176, 512) as input, and produces a tensor that has the same shape as the queries, so outputs a new tensor of shape (batch_size, 512, 1024). Next, a block of 6 self-attention layers is applied repeatedly (8 times), to produce final hidden states of the latents of shape (batch_size, 512, 1024). To turn these into classification logits, PerceiverClassificationDecoder is used, which works similarly to the one for text classification: it uses the latents as keys + values, and uses trainable position embeddings of shape (batch_size, 1, num_labels) as queries. 
The output of the cross-attention operation is a tensor of shape (batch_size, 1, num_labels), which is squeezed to have classification logits of shape (batch_size, num_labels).The Perceiver authors show that the model is capable of achieving strong results compared to models designed primarily for image classification (such as ResNet or ViT). After large-scale pre-training on JFT, the model that uses conv+maxpool preprocessing (PerceiverForImageClassificationConvProcessing) achieves 84.5 top-1 accuracy on ImageNet. Remarkably, PerceiverForImageClassificationLearned, the model that only employs a 1D fully learned position encoding, achieves a top-1 accuracy of 72.7 despite having no privileged information about the 2D structure of images. Perceiver for optical flowThe authors show that it's straightforward to make the Perceiver also work on optical flow, which is a decades-old problem in computer vision, with many broader applications. For an introduction to optical flow, I refer to this blog post. Given two images of the same scene (e.g. two consecutive frames of a video), the task is to estimate the 2D displacement for each pixel in the first image. Existing algorithms are quite hand-engineered and complex, however with the Perceiver, this becomes relatively simple. The model is implemented in the Transformers library, and available as PerceiverForOpticalFlow. It is implemented as follows:from torch import nnfrom transformers import PerceiverModelfrom transformers.models.perceiver.modeling_perceiver import PerceiverImagePreprocessor, PerceiverOpticalFlowDecoderclass PerceiverForOpticalFlow(nn.Module):def __init__(self, config):super().__init__(config)fourier_position_encoding_kwargs_preprocessor = dict(num_bands=64,max_resolution=config.train_size,sine_only=False,concat_pos=True,)fourier_position_encoding_kwargs_decoder = dict(concat_pos=True, max_resolution=config.train_size, num_bands=64, sine_only=False)image_preprocessor = PerceiverImagePreprocessor(config,prep_type="patches",spatial_downsample=1,conv_after_patching=True,conv_after_patching_in_channels=54,temporal_downsample=2,position_encoding_type="fourier",# position_encoding_kwargsfourier_position_encoding_kwargs=fourier_position_encoding_kwargs_preprocessor,)self.perceiver = PerceiverModel(config,input_preprocessor=image_preprocessor,decoder=PerceiverOpticalFlowDecoder(config,num_channels=image_preprocessor.num_channels,output_image_shape=config.train_size,rescale_factor=100.0,use_query_residual=False,output_num_channels=2,position_encoding_type="fourier",fourier_position_encoding_kwargs=fourier_position_encoding_kwargs_decoder,),)As one can see, PerceiverImagePreprocessor is used as preprocessor (i.e. to prepare the 2 images for the cross-attention operation with the latents) and PerceiverOpticalFlowDecoder is used as decoder (i.e. to decode the final hidden states of the latents to an actual predicted flow). For each of the 2 frames, the authors extract a 3 x 3 patch around each pixel, leading to 3 x 3 x 3 = 27 values for each pixel (as each pixel also has 3 color channels). The authors use a training resolution of (368, 496). If one stacks 2 frames of size (368, 496) of each training example on top of each other, the inputs to the model are of shape (batch_size, 2, 27, 368, 496). The preprocessor (with the settings defined above) will first concatenate the frames along the channel dimension, leading to a tensor of shape (batch_size, 368, 496, 54) - assuming one also moves the channel dimension to be last. 
The authors explain in their paper (page 8) why concatenation along the channel dimension makes sense. Next, the spatial dimensions are flattened, leading to a tensor of shape (batch_size, 368*496, 54) = (batch_size, 182528, 54). Then, position embeddings (each of which have dimensionality 258) are concatenated, leading to a final preprocessed input of shape (batch_size, 182528, 322). These will be used to perform cross-attention with the latents.The authors use 2048 latents for the optical flow model (yes, 2048!), with a dimensionality of 512 for each latent. Hence, the latents have shape (batch_size, 2048, 512). After the cross-attention, one again has a tensor of the same shape (as the latents act as queries). Next, a single block of 24 self-attention layers (each of which has 16 attention heads) are applied to update the embeddings of the latents. To decode the final hidden states of the latents to an actual predicted flow, PerceiverOpticalFlowDecoder simply uses the preprocessed inputs of shape (batch_size, 182528, 322) as queries for the cross-attention operation. Next, these are projected to a tensor of shape (batch_size, 182528, 2). Finally, one rescales and reshapes this back to the original image size to get a predicted flow of shape (batch_size, 368, 496, 2). The authors claim state-of-the-art results on important benchmarks including Sintel and KITTI when training on AutoFlow, a large synthetic dataset of 400,000 annotated image pairs.The video below shows the predicted flow on 2 examples. Optical flow estimation by Perceiver IO. The colour of each pixel shows the direction and speed of motion estimated by the model, as indicated by the legend on the right.Perceiver for multimodal autoencodingThe authors also use the Perceiver for multimodal autoencoding. The goal of multimodal autoencoding is to learn a model that can accurately reconstruct multimodal inputs in the presence of a bottleneck induced by an architecture. The authors train the model on the Kinetics-700 dataset, in which each example consists of a sequence of images (i.e. frames), audio and a class label (one of 700 possible labels). This model is also implemented in HuggingFace Transformers, and available as PerceiverForMultimodalAutoencoding. For brevity, I will omit the code of defining this model, but important to note is that it uses PerceiverMultimodalPreprocessor to prepare the inputs for the model. This preprocessor will first use the respective preprocessor for each modality (image, audio, label) separately. Suppose one has a video of 16 frames of resolution 224x224 and 30,720 audio samples, then the modalities are preprocessed as follows: The images - actually a sequence of frames - of shape (batch_size, 16, 3, 224, 224) are turned into a tensor of shape (batch_size, 50176, 243) using PerceiverImagePreprocessor. This is a “space to depth” transformation, after which fixed 2D Fourier position embeddings are concatenated.The audio has shape (batch_size, 30720, 1) and is turned into a tensor of shape (batch_size, 1920, 401) using PerceiverAudioPreprocessor (which concatenates fixed Fourier position embeddings to the raw audio).The class label of shape (batch_size, 700) is turned into a tensor of shape (batch_size, 1, 700) using PerceiverOneHotPreprocessor. In other words, this preprocessor just adds a dummy time (index) dimension. 
Note that one initializes the class label with a tensor of zeros during evaluation, so as to let the model act as a video classifier.Next, PerceiverMultimodalPreprocessor will pad the preprocessed modalities with modality-specific trainable embeddings to make concatenation along the time dimension possible. In this case, the modality with the highest channel dimension is the class label (it has 700 channels). The authors enforce a minimum padding size of 4, hence each modality will be padded to have 704 channels. They can then be concatenated, hence the final preprocessed input is a tensor of shape (batch_size, 50176 + 1920 + 1, 704) = (batch_size, 52097, 704). The authors use 784 latents, with a dimensionality of 512 for each latent. Hence, the latents have shape (batch_size, 784, 512). After the cross-attention, one again has a tensor of the same shape (as the latents act as queries). Next, a single block of 8 self-attention layers (each of which has 8 attention heads) is applied to update the embeddings of the latents. Next, there is PerceiverMultimodalDecoder, which will first create output queries for each modality separately. However, as it is not possible to decode an entire video in a single forward pass, the authors instead auto-encode in chunks. Each chunk will subsample certain index dimensions for every modality. Let's say we process the video in 128 chunks, then the decoder queries will be produced as follows:For the image modality, the total size of the decoder query is 16x3x224x224 = 802,816. However, when auto-encoding the first chunk, one subsamples the first 802,816/128 = 6272 values. The shape of the image output query is (batch_size, 6272, 195) - the 195 comes from the fact that fixed Fourier position embeddings are used.For the audio modality, the total input has 30,720 values. However, one only subsamples the first 30720/128/16 = 15 values. Hence, the shape of the audio query is (batch_size, 15, 385). Here, the 385 comes from the fact that fixed Fourier position embeddings are used.For the class label modality, there's no need to subsample. Hence, the subsampled index is set to 1. The shape of the label output query is (batch_size, 1, 1024). One uses trainable position embeddings (of size 1024) for the queries.Similarly to the preprocessor, PerceiverMultimodalDecoder pads the different modalities to the same number of channels, to make concatenation of the modality-specific queries possible along the time dimension. Here, the class label has again the highest number of channels (1024), and the authors enforce a minimum padding size of 2, hence every modality will be padded to have 1026 channels. After concatenation, the final decoder query has shape (batch_size, 6272 + 15 + 1, 1026) = (batch_size, 6288, 1026). This tensor produces queries in the cross-attention operation, while the latents act as keys and values. Hence, the output of the cross-attention operation is a tensor of shape (batch_size, 6288, 1026). Next, PerceiverMultimodalDecoder employs a linear layer to reduce the output channels to get a tensor of shape (batch_size, 6288, 512). Finally, there is PerceiverMultimodalPostprocessor. This class postprocesses the output of the decoder to produce an actual reconstruction of each modality. It first splits up the time dimension of the decoder output according to the different modalities: (batch_size, 6272, 512) for image, (batch_size, 15, 512) for audio and (batch_size, 1, 512) for the class label. 
Next, the respective postprocessors for each modality are applied:The image post processor (which is called PerceiverProjectionPostprocessor in Transformers) simply turns the (batch_size, 6272, 512) tensor into a tensor of shape (batch_size, 6272, 3) - i.e. it projects the final dimension to RGB values.PerceiverAudioPostprocessor turns the (batch_size, 15, 512) tensor into a tensor of shape (batch_size, 240).PerceiverClassificationPostprocessor simply takes the first (and only index), to get a tensor of shape (batch_size, 700).So now one ends up with tensors containing the reconstruction of the image, audio and class label modalities respectively. As one auto-encodes an entire video in chunks, one needs to concatenate the reconstruction of each chunk to have a final reconstruction of an entire video. The figure below shows an example:Above: original video (left), reconstruction of the first 16 frames (right). Video taken from the UCF101 dataset. Below: reconstructed audio (taken from the paper). Top 5 predicted labels for the video above. By masking the class label, the Perceiver becomes a video classifier. With this approach, the model learns a joint distribution across 3 modalities. The authors do note that because the latent variables are shared across modalities and not explicitly allocated between them, the quality of reconstructions for each modality is sensitive to the weight of its loss term and other training hyperparameters. By putting stronger emphasis on classification accuracy, they are able to reach 45% top-1 accuracy while maintaining 20.7 PSNR (peak signal-to-noise ratio) for video.Other applications of the PerceiverNote that there are no limits on the applications of the Perceiver! In the original Perceiver paper, the authors showed that the architecture can be used to process 3D point clouds – a common concern for self-driving cars equipped with Lidar sensors. They trained the model on ModelNet40, a dataset of point clouds derived from 3D triangular meshes spanning 40 object categories. The model was shown to achieve a top-1 accuracy of 85.7 % on the test set, competing with PointNet++, a highly specialized model that uses extra geometric features and performs more advanced augmentations.The authors also used the Perceiver to replace the original Transformer in AlphaStar, the state-of-the-art reinforcement learning system for the complex game of StarCraft II. Without tuning any additional parameters, the authors observed that the resulting agent reached the same level of performance as the original AlphaStar agent, reaching an 87% win-rate versus the Elite bot after behavioral cloning on human data.It is important to note that the models currently implemented (such as PerceiverForImageClassificationLearned, PerceiverForOpticalFlow) are just examples of what you can do with the Perceiver. Each of these are different instances of PerceiverModel, just with a different preprocessor and/or decoder (and optionally, a postprocessor as is the case for multimodal autoencoding). People can come up with new preprocessors, decoders and postprocessors to make the model solve different problems. For instance, one could extend the Perceiver to perform named-entity recognition (NER) or question-answering similar to BERT, audio classification similar to Wav2Vec2 or object detection similar to DETR. ConclusionIn this blog post, we went over the architecture of Perceiver IO, an extension of the Perceiver by Google Deepmind, and showed its generality of handling all kinds of modalities. 
The big advantage of the Perceiver is that the compute and memory requirements of the self-attention mechanism don't depend on the size of the inputs and outputs, as the bulk of compute happens in a latent space (a not-too-large set of vectors). Despite its task-agnostic architecture, the model is capable of achieving great results on modalities such as language, vision, multimodal data, and point clouds. In the future, it might be interesting to train a single (shared) Perceiver encoder on several modalities at the same time, and use modality-specific preprocessors and postprocessors. As Karpathy puts it, it may well be that this architecture can unify all modalities into a shared space, with a library of encoders/decoders. Speaking of a library, the model is available in HuggingFace Transformers as of today. It will be exciting to see what people build with it, as its applications seem endless!AppendixThe implementation in HuggingFace Transformers is based on the original JAX/Haiku implementation which can be found here.The documentation of the Perceiver IO model in HuggingFace Transformers is available here.Tutorial notebooks regarding the Perceiver on several modalities can be found here.Footnotes1 Note that in the official paper, the authors used a two-layer MLP to generate the output logits, which was omitted here for brevity. ↩
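As a closing practical sketch, running image classification end-to-end takes only a few lines. The checkpoint identifier below is an assumption based on the naming of the released Perceiver checkpoints; check the Hub for the exact name:

import requests
from PIL import Image
from transformers import PerceiverImageProcessor, PerceiverForImageClassificationLearned

ckpt = "deepmind/vision-perceiver-learned"  # assumed checkpoint name; verify on the Hub
processor = PerceiverImageProcessor.from_pretrained(ckpt)
model = PerceiverForImageClassificationLearned.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(image, return_tensors="pt").pixel_values
logits = model(inputs=inputs).logits            # shape (1, num_labels), e.g. 1000 ImageNet classes
print(model.config.id2label[logits.argmax(-1).item()])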
https://huggingface.co/blog/codeparrot
Training CodeParrot 🦜 from Scratch
Christo
December 8, 2021
In this blog post we'll take a look at what it takes to build the technology behind GitHub CoPilot, an application that provides suggestions to programmers as they code. In this step by step guide, we'll learn how to train a large GPT-2 model called CodeParrot 🦜, entirely from scratch. CodeParrot can auto-complete your Python code - give it a spin here. Let's get to building it from scratch!Creating a Large Dataset of Source CodeThe first thing we need is a large training dataset. With the goal to train a Python code generation model, we accessed the GitHub dump available on Google's BigQuery and filtered for all Python files. The result is a 180 GB dataset with 20 million files (available here). After initial training experiments, we found that the duplicates in the dataset severely impacted the model performance. Further investigating the dataset we found that:0.1% of the unique files make up 15% of all files1% of the unique files make up 35% of all files10% of the unique files make up 66% of all filesYou can learn more about our findings in this Twitter thread. We removed the duplicates and applied the same cleaning heuristics found in the Codex paper. Codex is the model behind CoPilot and is a GPT-3 model fine-tuned on GitHub code. The cleaned dataset is still 50GB big and available on the Hugging Face Hub: codeparrot-clean. With that we can setup a new tokenizer and train a model.Initializing the Tokenizer and ModelFirst we need a tokenizer. Let's train one specifically on code so it splits code tokens well. We can take an existing tokenizer (e.g. GPT-2) and directly train it on our own dataset with the train_new_from_iterator() method. We then push it to the Hub. Note that we omit imports, arguments parsing and logging from the code examples to keep the code blocks compact. But you'll find the full code including preprocessing and downstream task evaluation here.# Iterator for Trainingdef batch_iterator(batch_size=10):for _ in tqdm(range(0, args.n_examples, batch_size)):yield [next(iter_dataset)["content"] for _ in range(batch_size)]# Base tokenizertokenizer = GPT2Tokenizer.from_pretrained("gpt2")base_vocab = list(bytes_to_unicode().values())# Load datasetdataset = load_dataset("lvwerra/codeparrot-clean", split="train", streaming=True)iter_dataset = iter(dataset)# Training and savingnew_tokenizer = tokenizer.train_new_from_iterator(batch_iterator(),vocab_size=args.vocab_size,initial_alphabet=base_vocab)new_tokenizer.save_pretrained(args.tokenizer_name, push_to_hub=args.push_to_hub)Learn more about tokenizers and how to build them in the Hugging Face course. See that inconspicuous streaming=True argument? This small change has a big impact: instead of downloading the full (50GB) dataset this will stream individual samples as needed saving a lot of disk space! Checkout the Hugging Face course for more information on streaming.Now, we initialize a new model. We’ll use the same hyperparameters as GPT-2 large (1.5B parameters) and adjust the embedding layer to fit our new tokenizer also adding some stability tweaks. The scale_attn_by_layer_idx flag makes sure we scale the attention by the layer id and reorder_and_upcast_attn mainly makes sure that we compute the attention in full precision to avoid numerical issues. 
We push the freshly initialized model to the same repo as the tokenizer.# Load codeparrot tokenizer trained for Python code tokenizationtokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name)# Configurationconfig_kwargs = {"vocab_size": len(tokenizer),"scale_attn_by_layer_idx": True,"reorder_and_upcast_attn": True}# Load model with config and push to hubconfig = AutoConfig.from_pretrained('gpt2-large', **config_kwargs)model = AutoModelForCausalLM.from_config(config)model.save_pretrained(args.model_name, push_to_hub=args.push_to_hub)Now that we have an efficient tokenizer and a freshly initialized model we can start with the actual training loop.Implementing the Training LoopWe train with the 🤗 Accelerate library which allows us to scale the training from our laptop to a multi-GPU machine without changing a single line of code. We just create an accelerator and do some argument housekeeping:accelerator = Accelerator()acc_state = {str(k): str(v) for k, v in accelerator.state.__dict__.items()}parser = HfArgumentParser(TrainingArguments)args = parser.parse_args()args = Namespace(**vars(args), **acc_state)samples_per_step = accelerator.state.num_processes * args.train_batch_sizeset_seed(args.seed)We are now ready to train! Let's use the huggingface_hub client library to clone the repository with the new tokenizer and model. We will checkout to a new branch for this experiment. With that setup, we can run many experiments in parallel and in the end we just merge the best one into the main branch.# Clone model repositoryif accelerator.is_main_process:hf_repo = Repository(args.save_dir, clone_from=args.model_ckpt)# Checkout new branch on repoif accelerator.is_main_process:hf_repo.git_checkout(run_name, create_branch_ok=True)We can directly load the tokenizer and model from the local repository. Since we are dealing with big models we might want to turn on gradient checkpointing to decrease the GPU memory footprint during training.# Load model and tokenizermodel = AutoModelForCausalLM.from_pretrained(args.save_dir)if args.gradient_checkpointing:model.gradient_checkpointing_enable()tokenizer = AutoTokenizer.from_pretrained(args.save_dir)Next up is the dataset. We make training simpler with a dataset that yields examples with a fixed context size. To not waste too much data (some samples are too short or too long) we can concatenate many examples with an EOS token and then chunk them.The more sequences we prepare together, the smaller the fraction of tokens we discard (the grey ones in the previous figure). Since we want to stream the dataset instead of preparing everything in advance we use an IterableDataset. 
The full dataset class looks as follows:class ConstantLengthDataset(IterableDataset):def __init__(self, tokenizer, dataset, infinite=False, seq_length=1024, num_of_sequences=1024, chars_per_token=3.6):self.tokenizer = tokenizerself.concat_token_id = tokenizer.bos_token_idself.dataset = datasetself.seq_length = seq_lengthself.input_characters = seq_length * chars_per_token * num_of_sequencesself.epoch = 0self.infinite = infinitedef __iter__(self):iterator = iter(self.dataset)more_examples = Truewhile more_examples:buffer, buffer_len = [], 0while True:if buffer_len >= self.input_characters:breaktry:buffer.append(next(iterator)["content"])buffer_len += len(buffer[-1])except StopIteration:if self.infinite:iterator = iter(self.dataset)self.epoch += 1logger.info(f"Dataset epoch: {self.epoch}")else:more_examples = Falsebreaktokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"]all_token_ids = []for tokenized_input in tokenized_inputs:all_token_ids.extend(tokenized_input + [self.concat_token_id])for i in range(0, len(all_token_ids), self.seq_length):input_ids = all_token_ids[i : i + self.seq_length]if len(input_ids) == self.seq_length:yield torch.tensor(input_ids)Texts in the buffer are tokenized in parallel and then concatenated. Chunked samples are then yielded until the buffer is empty and the process starts again. If we set infinite=True the dataset iterator restarts at its end.def create_dataloaders(args):ds_kwargs = {"streaming": True}train_data = load_dataset(args.dataset_name_train, split="train", streaming=True)train_data = train_data.shuffle(buffer_size=args.shuffle_buffer, seed=args.seed)valid_data = load_dataset(args.dataset_name_valid, split="train", streaming=True)train_dataset = ConstantLengthDataset(tokenizer, train_data, infinite=True, seq_length=args.seq_length)valid_dataset = ConstantLengthDataset(tokenizer, valid_data, infinite=False, seq_length=args.seq_length)train_dataloader = DataLoader(train_dataset, batch_size=args.train_batch_size)eval_dataloader = DataLoader(valid_dataset, batch_size=args.valid_batch_size)return train_dataloader, eval_dataloadertrain_dataloader, eval_dataloader = create_dataloaders(args)Before we start training we need to set up the optimizer and learning rate schedule. We don’t want to apply weight decay to biases and LayerNorm weights so we use a helper function to exclude those.def get_grouped_params(model, args, no_decay=["bias", "LayerNorm.weight"]):params_with_wd, params_without_wd = [], []for n, p in model.named_parameters():if any(nd in n for nd in no_decay): params_without_wd.append(p)else: params_with_wd.append(p)return [{"params": params_with_wd, "weight_decay": args.weight_decay},{"params": params_without_wd, "weight_decay": 0.0},]optimizer = AdamW(get_grouped_params(model, args), lr=args.learning_rate)lr_scheduler = get_scheduler(name=args.lr_scheduler_type, optimizer=optimizer,num_warmup_steps=args.num_warmup_steps,num_training_steps=args.max_train_steps,)A big question that remains is how all the data and models will be distributed across several GPUs. This sounds like a complex task but actually only requires a single line of code with 🤗 Accelerate.model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(model, optimizer, train_dataloader, eval_dataloader)Under the hood it'll use DistributedDataParallel, which means a batch is sent to each GPU worker which has its own copy of the model. 
There the gradients are computed and then aggregated to update the model on each worker.We also want to evaluate the model from time to time on the validation set so let’s write a function to do just that. This is done automatically in a distributed fashion and we just need to gather all the losses from the workers. We also want to report the perplexity.def evaluate(args):model.eval()losses = []for step, batch in enumerate(eval_dataloader):with torch.no_grad():outputs = model(batch, labels=batch)loss = outputs.loss.repeat(args.valid_batch_size)losses.append(accelerator.gather(loss))if args.max_eval_steps > 0 and step >= args.max_eval_steps:breakloss = torch.mean(torch.cat(losses))try:perplexity = torch.exp(loss)except OverflowError:perplexity = float("inf")return loss.item(), perplexity.item()We are now ready to write the main training loop. It will look pretty much like a normal PyTorch training loop. Here and there you can see that we use the accelerator functions rather than native PyTorch. Also, we push the model to the branch after each evaluation.# Train modelmodel.train()completed_steps = 0for step, batch in enumerate(train_dataloader, start=1):loss = model(batch, labels=batch, use_cache=False).lossloss = loss / args.gradient_accumulation_stepsaccelerator.backward(loss)if step % args.gradient_accumulation_steps == 0:accelerator.clip_grad_norm_(model.parameters(), 1.0)optimizer.step()lr_scheduler.step()optimizer.zero_grad()completed_steps += 1if step % args.save_checkpoint_steps == 0:eval_loss, perplexity = evaluate(args)accelerator.wait_for_everyone()unwrapped_model = accelerator.unwrap_model(model)unwrapped_model.save_pretrained(args.save_dir, save_function=accelerator.save)if accelerator.is_main_process:hf_repo.push_to_hub(commit_message=f"step {step}")model.train()if completed_steps >= args.max_train_steps:breakWhen we call wait_for_everyone() and unwrap_model() we make sure that all workers are ready and any model layers that have been added by prepare() earlier are removed. We also use gradient accumulation and gradient clipping that are easily implemented. Lastly, after training is complete we run a last evaluation and save the final model and push it to the hub. # Evaluate and save the last checkpointlogger.info("Evaluating and saving model after training")eval_loss, perplexity = evaluate(args)log_metrics(step, {"loss/eval": eval_loss, "perplexity": perplexity})accelerator.wait_for_everyone()unwrapped_model = accelerator.unwrap_model(model)unwrapped_model.save_pretrained(args.save_dir, save_function=accelerator.save)if accelerator.is_main_process:hf_repo.push_to_hub(commit_message="final model")Done! That's all the code to train a full GPT-2 model from scratch with as little as 150 lines. We did not show the imports and logs of the scripts to make the code a little bit more compact. Now let's actually train it!With this code we trained models for our upcoming book on Transformers and NLP: a 110M and 1.5B parameter GPT-2 model. We used a 16 x A100 GPU machine to train these models for 1 day and 1 week, respectively. Enough time to get a coffee and read a book or two!EvaluationThis is still relatively short training time for pretraining but we can already observe good downstream performance as compared to similar models. We evaluated the models on OpenAI's HumanEval benchmark that was introduced in the Codex paper. 
It measures the performance of code generation models on almost 200 coding challenges.Note that we trained CodeParrot on roughly 25-30B tokens whereas GPT-neo was trained on 300B tokens and Codex on 300B (GPT-3 checkpoint) + 100B (code) tokens. So you definitely get a good bang for your buck here.But what is this pass@k metric exactly? Simply put it measures the probability of at least one program passing the unittest of a programming problem given k candidate generations from the model. Naturally, the metric increases with k as more candidates can potentially solve the coding challenge. You can use the metric with datasets:from datasets import load_metriccode_eval = datasets.load_metric("code_eval")test_cases = ["assert add(2,3)==5"]candidates = [["def add(a,b): return a*b", "def add(a, b): return a+b"]]pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])print(pass_at_k)# -> {'pass@1': 0.5, 'pass@2': 1.0}You can also load OpenAI's HumanEval dataset with datasets:from datasets import load_datasetdataset = load_dataset("openai_humaneval")Now let's have a look at what we get after anxiously waiting for so long!ResultsLet's start slow and check if we can prompt the model to write a function to return the size of a file given its path:Prompt:def get_files_size(filename):Completion:"""Returns the file size of a file.:param filepath: The path to the file.:type filepath: str:return: The file size.:rtype: int"""return os.path.getsize(filepath)So far so good, but let's take this a step further and try something a bit more interesting. One of the less exciting aspects of building production ready code is writing unittest. Let's see if we can use CodeParrot to do this for us!Prompt:def is_even(value):"""Returns True if value is an even number."""return value % 2 == 0# setup unit tests for is_evenimport unittestCompletion:class TestIsEven(unittest.TestCase):"""Tests for is_even function."""def test_is_even(self):"""Test is_even function."""self.assertTrue(is_even(0))self.assertTrue(is_even(1))self.assertTrue Although the test condition itself does not look quite right the model has built all the boilerplate code for us. This alone could already be quite useful and save us a couple of minutes time!But what we really want to do is build better transformer models. Let's use CodeParrot to help us do that:Prompt:from transformers import AutoTokenizer, AutoModelForSequenceClassification# build a BERT classifierCompletion:tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased')Looking at a few examples is nice but by now you probably want to try it yourself. We prepared a few online demos just for that!DemosYou can play with the models in two demos we added to Hugging Face Spaces. With the first you can quickly generate code with the model and with the second you can highlight your code with the model to spot bugs!Code GenerationCode HighlightingYou can also directly use the models from the transformers library:from transformers import pipelinepipe = pipeline('text-generation', model='lvwerra/codeparrot')pipe('def hello_world():')SummaryIn this short blog post we walked through all the steps involved for training a large GPT-2 model called CodeParrot 🦜 for code generation. Using 🤗 Accelerate we built a training script with less than 200 lines of code that we can effortlessly scale across many GPUs. 
With that you can now train your own GPT-2 model!This post gives a brief overview of CodeParrot 🦜, but if you are interested in diving deeper into how to pretrain these models, we recommend reading the dedicated chapter in the upcoming book on Transformers and NLP. That chapter provides many more details around building custom datasets, design considerations when training a new tokenizer, and architecture choices.
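As a final aside on the pass@k metric used in the evaluation section above, the unbiased estimator described in the Codex paper can be computed directly from n generated samples of which c pass the unit tests; a minimal sketch:

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimate given n samples with c correct ones (Codex paper estimator)
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 2 candidate programs, 1 passes the tests (matches the code_eval output shown earlier)
print(pass_at_k(n=2, c=1, k=1), pass_at_k(n=2, c=1, k=2))  # 0.5 1.0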
https://huggingface.co/blog/snowball-fight
Introducing Snowball Fight ☃️, our First ML-Agents Environment
Thomas Simonini
December 2, 2021
We're excited to share our first custom Deep Reinforcement Learning environment: Snowball Fight 1vs1 🎉.Snowball Fight is a game made with Unity ML-Agents, where you shoot snowballs against a Deep Reinforcement Learning agent. The game is hosted on Hugging Face Spaces. 👉 You can play it online hereIn this post, we'll cover the ecosystem we are working on for Deep Reinforcement Learning researchers and enthusiasts that use Unity ML-Agents.Unity ML-Agents at Hugging FaceThe Unity Machine Learning Agents Toolkit is an open source library that allows you to build games and simulations with Unity game engine to serve as environments for training intelligent agents.With this first step, our goal is to build an ecosystem on Hugging Face for Deep Reinforcement Learning researchers and enthusiasts that uses ML-Agents, with three features.Building and sharing custom environments. We are developing and sharing exciting environments to experiment with new problems: snowball fights, racing, puzzles... All of them will be open source and hosted on the Hugging Face's Hub.Allowing you to easily host your environments, save models and share them on the Hugging Face Hub. We have already published the Snowball Fight training environment here, but there will be more to come!You can now easily host your demos on Spaces and showcase your results quickly with the rest of the ecosystem.Be part of the conversation: join our discord server!If you're using ML-Agents or interested in Deep Reinforcement Learning and want to be part of the conversation, you can join our discord server. We just added two channels (and we'll add more in the future):Deep Reinforcement LearningML-AgentsOur discord is the place where you can exchange about Hugging Face, NLP, Deep RL, and more! It's also in this discord that we'll announce all our new environments and features in the future.What's next?In the coming weeks and months, we will be extending the ecosystem by:Writing some technical tutorials on ML-Agents.Working on a Snowball Fight 2vs2 version, where the agents will collaborate in teams using MA-POCA, a new Deep Reinforcement Learning algorithm that trains cooperative behaviors in a team.And we're building new custom environments that will be hosted in Hugging Face.ConclusionWe're excited to see what you're working on with ML-Agents and how we can build features and tools that help you to empower your work.Don't forget to join our discord server to be alerted of the new features.
https://huggingface.co/blog/graphcore-getting-started
Getting Started with Hugging Face Transformers for IPUs with Optimum
Tim Santos, Julien Simon
November 30, 2021
Transformer models have proven to be extremely efficient on a wide range of machine learning tasks, such as natural language processing, audio processing, and computer vision. However, the prediction speed of these large models can make them impractical for latency-sensitive use cases like conversational applications or search. Furthermore, optimizing their performance in the real world requires considerable time, effort and skills that are beyond the reach of many companies and organizations. Luckily, Hugging Face has introduced Optimum, an open source library which makes it much easier to reduce the prediction latency of Transformer models on a variety of hardware platforms. In this blog post, you will learn how to accelerate Transformer models for the Graphcore Intelligence Processing Unit (IPU), a highly flexible, easy-to-use parallel processor designed from the ground up for AI workloads.Optimum Meets Graphcore IPU Through this partnership between Graphcore and Hugging Face, we are now introducing BERT as the first IPU-optimized model. We will be introducing many more of these IPU-optimized models in the coming months, spanning applications such as vision, speech, translation and text generation.Graphcore engineers have implemented and optimized BERT for our IPU systems using Hugging Face transformers to help developers easily train, fine-tune and accelerate their state-of-the-art models.Getting started with IPUs and Optimum Let’s use BERT as an example to help you get started with using Optimum and IPUs.In this guide, we will use an IPU-POD16 system in Graphcloud, Graphcore’s cloud-based machine learning platform and follow PyTorch setup instructions found in Getting Started with Graphcloud.Graphcore’s Poplar SDK is already installed on the Graphcloud server. If you have a different setup, you can find the instructions that apply to your system in the PyTorch for the IPU: User Guide.Set up the Poplar SDK Environment You will need to run the following commands to set several environment variables that enable Graphcore tools and Poplar libraries. On the latest system running Poplar SDK version 2.3 on Ubuntu 18.04, you can find in the folder /opt/gc/poplar_sdk-ubuntu_18_04-2.3.0+774-b47c577c2a/. You would need to run both enable scripts for Poplar and PopART (Poplar Advanced Runtime) to use PyTorch:$ cd /opt/gc/poplar_sdk-ubuntu_18_04-2.3.0+774-b47c577c2a/$ source poplar-ubuntu_18_04-2.3.0+774-b47c577c2a/enable.sh$ source popart-ubuntu_18_04-2.3.0+774-b47c577c2a/enable.shSet up PopTorch for the IPU PopTorch is part of the Poplar SDK. It provides functions that allow PyTorch models to run on the IPU with minimal code changes. You can create and activate a PopTorch environment following the guide Setting up PyTorch for the IPU:$ virtualenv -p python3 ~/workspace/poptorch_env$ source ~/workspace/poptorch_env/bin/activate$ pip3 install -U pip$ pip3 install /opt/gc/poplar_sdk-ubuntu_18_04-2.3.0+774-b47c577c2a/poptorch-<sdk-version>.whlInstall Optimum Graphcore Now that your environment has all the Graphcore Poplar and PopTorch libraries available, you need to install the latest 🤗 Optimum Graphcore package in this environment. This will be the interface between the 🤗 Transformers library and Graphcore IPUs.Please make sure that the PopTorch virtual environment you created in the previous step is activated. 
Your terminal should have a prefix showing the name of the poptorch environment like below:(poptorch_env) user@host:~/workspace/poptorch_env$ pip3 install optimum[graphcore] optunaClone Optimum Graphcore Repository The Optimum Graphcore repository contains the sample code for using Optimum models in IPU. You should clone the repository and change the directory to the example/question-answering folder which contains the IPU implementation of BERT.$ git clone https://github.com/huggingface/optimum-graphcore.git$ cd optimum-graphcore/examples/question-answeringNow, we will use run_qa.py to fine-tune the IPU implementation of BERT on the SQUAD1.1 dataset. Run a sample to fine-tune BERT on SQuAD1.1 The run_qa.py script only works with models that have a fast tokenizer (backed by the 🤗 Tokenizers library), as it uses special features of those tokenizers. This is the case for our BERT model, and you should pass its name as the input argument to --model_name_or_path. In order to use the IPU, Optimum will look for the ipu_config.json file from the path passed to the argument --ipu_config_name. $ python3 run_qa.py \ --ipu_config_name=./ \ --model_name_or_path bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --output_dir output \ --overwrite_output_dir \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 2 \--learning_rate 6e-5 \--num_train_epochs 3 \--max_seq_length 384 \--doc_stride 128 \--seed 1984 \--lr_scheduler_type linear \--loss_scaling 64 \--weight_decay 0.01 \--warmup_ratio 0.1 \--output_dir /tmp/debug_squad/A closer look at Optimum-Graphcore Getting the data A very simple way to get datasets is to use the Hugging Face Datasets library, which makes it easy for developers to download and share datasets on the Hugging Face hub. It also has pre-built data versioning based on git and git-lfs, so you can iterate on updated versions of the data by just pointing to the same repo. Here, the dataset comes with the training and validation files, and dataset configs to help facilitate which inputs to use in each model execution phase. The argument --dataset_name==squad points to SQuAD v1.1 on the Hugging Face Hub. You could also provide your own CSV/JSON/TXT training and evaluation files as long as they follow the same format as the SQuAD dataset or another question-answering dataset in Datasets library.Loading the pretrained model and tokenizer To turn words into tokens, this script will require a fast tokenizer. It will show an error if you didn't pass one. For reference, here's the list of supported tokenizers. # Tokenizer check: this script requires a fast tokenizer. if not isinstance(tokenizer, PreTrainedTokenizerFast): raise ValueError("This example script only works for models that have a fast tokenizer. Checkout the big table of models "at https://huggingface.co/transformers/index.html#supported-frameworks to find the model types that meet this " "requirement" )The argument ```--model_name_or_path==bert-base-uncased`` loads the bert-base-uncased model implementation available in the Hugging Face Hub.From the Hugging Face Hub description:"BERT base model (uncased): Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. 
This model is uncased: it does not make a difference between english and English."Training and Validation You can now use the IPUTrainer class available in Optimum to leverage the entire Graphcore software and hardware stack, and train your models in IPUs with minimal code changes. Thanks to Optimum, you can plug-and-play state of the art hardware to train your state of the art models. In order to train and validate the BERT model, you can pass the arguments --do_train and --do_eval to the run_qa.py script. After executing the script with the hyper-parameters above, you should see the following training and validation results:"epoch": 3.0,"train_loss": 0.9465060763888888,"train_runtime": 368.4015,"train_samples": 88524,"train_samples_per_second": 720.877,"train_steps_per_second": 2.809The validation step yields the following results:***** eval metrics ***** epoch = 3.0 eval_exact_match = 80.6623 eval_f1 = 88.2757 eval_samples = 10784You can see the rest of the IPU BERT implementation in the Optimum-Graphcore: SQuAD Examples.Resources for Optimum Transformers on IPU Systems Optimum-Graphcore: SQuAD ExamplesGraphcore Hugging Face Models & DatasetsGitHub Tutorial: BERT Fine-tuning on IPU using Hugging Face transformers Graphcore Developer PortalGraphcore GitHubGraphcore SDK Containers on Docker Hub
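To make the IPUTrainer step described above more concrete, here is a rough, illustrative sketch of the plug-and-play pattern. To keep it short it uses a text-classification task instead of the question-answering example above, and several names should be treated as assumptions to verify against the current Optimum Graphcore documentation: the optimum.graphcore import path, the Graphcore/bert-base-ipu configuration repository, and whether plain TrainingArguments or an IPU-specific arguments class is expected.

from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments
from optimum.graphcore import IPUConfig, IPUTrainer  # assumed import path

raw_dataset = load_dataset("glue", "sst2", split="train[:1%]")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # a fast tokenizer, as required

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

train_dataset = raw_dataset.map(tokenize, batched=True).rename_column("label", "labels")

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")  # assumed Hub config, analogous to ipu_config.json

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    # Depending on the optimum-graphcore version, an IPU-specific arguments class may be expected here.
    args=TrainingArguments(output_dir="./out", per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()

In practice, the run_qa.py script above takes care of all of this for the SQuAD use case, including the question-answering preprocessing.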
https://huggingface.co/blog/data-measurements-tool
Introducing the 🤗 Data Measurements Tool: an Interactive Tool for Looking at Datasets
Sasha Luccioni, Yacine Jernite, Margaret Mitchell
November 29, 2021
tl;dr: We made a tool you can use online to build, measure, and compare datasets.Click to access the 🤗 Data Measurements Tool here.As developers of a fast-growing unified repository for Machine Learning datasets (Lhoest et al. 2021), the 🤗 Hugging Face team has been working on supporting good practices for dataset documentation (McMillan-Major et al., 2021). While static (if evolving) documentation represents a necessary first step in this direction, getting a good sense of what is actually in a dataset requires well-motivated measurements and the ability to interact with it, dynamically visualizing different aspects of interest. To this end, we introduce an open-source Python library and no-code interface called the 🤗 Data Measurements Tool, using our Dataset and Spaces Hubs paired with the great Streamlit tool. This can be used to help understand, build, curate, and compare datasets.What is the 🤗 Data Measurements Tool?The Data Measurements Tool (DMT) is an interactive interface and open-source library that lets dataset creators and users automatically calculate metrics that are meaningful and useful for responsible data development.Why have we created this tool?Thoughtful curation and analysis of Machine Learning datasets is often overlooked in AI development. Current norms for “big data” in AI (Luccioni et al., 2021, Dodge et al., 2021) include using data scraped from various websites, with little or no attention paid to concrete measurements of what the different data sources represent, nor the nitty-gritty details of how they may influence what a model learns. Although dataset annotation approaches can help to curate datasets that are more in line with a developer’s goals, the methods for “measuring” different aspects of these datasets are fairly limited (Sambasivan et al., 2021).A new wave of research in AI has called for a fundamental paradigm shift in how the field approaches ML datasets (Paullada et al., 2020, Denton et al., 2021). This includes defining fine-grained requirements for dataset creation from the start (Hutchinson et al., 2021), curating datasets in light of problematic content and bias concerns (Yang et al., 2020, Prabhu and Birhane, 2020), and making explicit the values inherent in dataset construction and maintenance (Scheuerman et al., 2021, Birhane et al., 2021). Although there is general agreement that dataset development is a task that people from many different disciplines should be able to inform, in practice there is often a bottleneck in interfacing with the raw data itself, which tends to require complex coding skills in order to analyze and query the dataset. Despite this, there are few tools openly available to the public to enable people from different disciplines to measure, interrogate, and compare datasets. We aim to help fill this gap. We learn and build from recent tools such as Know Your Data and Data Quality for AI, as well as research proposals for dataset documentation such as Vision and Language Datasets (Ferraro et al., 2015), Datasheets for Datasets (Gebru et al, 2018), and Data Statements (Bender & Friedman 2019). The result is an open-source library for dataset measurements, and an accompanying no-code interface for detailed dataset analysis.When can I use the 🤗 Data Measurements Tool?The 🤗 Data Measurements Tool can be used iteratively for exploring one or more existing NLP datasets, and will soon support iterative development of datasets from scratch. 
It provides actionable insights informed by research on datasets and responsible dataset development, allowing users to hone in on both high-level information and specific items.What can I learn using the 🤗 Data Measurements Tool?Dataset BasicsFor a high-level overview of the datasetThis begins to answer questions like “What is this dataset? Does it have missing items?”. You can use this as “sanity checks” that the dataset you’re working with is as you expect it to be.A description of the dataset (from the Hugging Face Hub)Number of missing values or NaNsDescriptive StatisticsTo look at the surface characteristics of the datasetThis begins to answer questions like “What kind of language is in this dataset? How diverse is it?”The dataset vocabulary size and word distribution, for both open- and closed-class words.The dataset label distribution and information about class (im)balance.The mean, median, range, and distribution of instance lengths.The number of duplicates in the dataset and how many times they are repeated.You can use these widgets to check whether what is most and least represented in the dataset make sense for the goals of the dataset. These measurements are intended to inform whether the dataset can be useful in capturing a variety of contexts or if what it captures is more limited, and to measure how ''balanced'' the labels and instance lengths are. You can also use these widgets to identify outliers and duplicates you may want to remove.Distributional StatisticsTo measure the language patterns in the datasetThis begins to answer questions like “How does the language behave in this dataset?”Adherence to Zipf’s law, which provides measurements of how closely the distribution over words in the dataset fits to the expected distribution of words in natural language.You can use this to figure out whether your dataset represents language as it tends to behave in the natural world or if there are things that are more unnatural about it. If you’re someone who enjoys optimization, then you can view the alpha value this widget calculates as a value to get as close as possible to 1 during dataset development. Further details on alpha values following Zipf’s law in different languages is available here.In general, an alpha greater than 2 or a minimum rank greater than 10 (take with a grain of salt) means that your distribution is relatively unnatural for natural language. This can be a sign of mixed artefacts in the dataset, such as HTML markup. You can use this information to clean up your dataset or to guide you in determining how further language you add to the dataset should be distributed.Comparison statisticsThis begins to answer questions like “What kinds of topics, biases, and associations are in this dataset?”Embedding clusters to pinpoint any clusters of similar language in the dataset.Taking in the diversity of text represented in a dataset can be challenging when it is made up of hundreds to hundreds of thousands of sentences. Grouping these text items based on a measure of similarity can help users gain some insights into their distribution. We show a hierarchical clustering of the text fields in the dataset based on a Sentence-Transformer model and a maximum dot product single-linkage criterion. 
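As a rough illustration of the idea (this is not the DMT's implementation, and the embedding model below is simply a common choice), such a clustering could be computed along these lines:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from scipy.cluster.hierarchy import linkage, fcluster

# Embed a small sample of text items with a Sentence-Transformer model.
texts = load_dataset("imdb", split="train[:200]")["text"]
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice, not necessarily the one used by the tool
embeddings = model.encode(texts, normalize_embeddings=True)

# Single-linkage hierarchical clustering; on normalized embeddings,
# cosine distance corresponds to "1 - dot product".
Z = linkage(embeddings, method="single", metric="cosine")
cluster_ids = fcluster(Z, t=0.5, criterion="distance")
print(len(set(cluster_ids)), "clusters found")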
To explore the clusters, you can:hover over a node to see the 5 most representative examples (deduplicated)enter an example in the text box to see which leaf clusters it is most similar toselect a cluster by ID to show all of its examplesThe normalized pointwise mutual information (nPMI) between word pairs in the dataset, which may be used to identify problematic stereotypes.You can use this as a tool in dealing with dataset “bias”, where here the term “bias” refers to stereotypes and prejudices for identity groups along the axes of gender and sexual orientation. We will add further terms in the near future.What is the status of 🤗 Data Measurements Tool development?We currently present the alpha version (v0) of the tool, demonstrating its usefulness on a handful of popular English-language datasets (e.g. SQuAD, imdb, C4, ...) available on the Dataset Hub, with the functionalities described above. The words that we selected for nPMI visualization are a subset of identity terms that came up frequently in the datasets that we were working with.In coming weeks and months, we will be extending the tool to:Cover more languages and datasets present in the 🤗 Datasets library.Provide support for user-provided datasets and iterative dataset building.Add more features and functionalities to the tool itself. For example, we will make it possible to add your own terms for the nPMI visualization so you can pick the words that matter most to you.AcknowledgementsThank you to Thomas Wolf for initiating this work, as well as other members of the 🤗 team (Quentin, Lewis, Sylvain, Nate, Julien C., Julien S., Clément, Omar, and many others!) for their help and support.
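As a closing illustration for readers curious about the nPMI measure mentioned above, here is a small self-contained sketch of normalized pointwise mutual information computed from sentence-level co-occurrence counts on a toy corpus. It only illustrates the measure itself, not the DMT's implementation:

import math
from collections import Counter
from itertools import combinations

sentences = [
    "the nurse said she was tired",
    "the doctor said he was busy",
    "she is a nurse",
    "he is a doctor",
]
docs = [set(s.split()) for s in sentences]
word_counts = Counter(w for d in docs for w in d)
pair_counts = Counter(frozenset(p) for d in docs for p in combinations(sorted(d), 2))
n_docs = len(docs)

def npmi(w1, w2):
    # nPMI(x, y) = log(p(x, y) / (p(x) p(y))) / -log p(x, y), bounded in [-1, 1].
    p_xy = pair_counts[frozenset((w1, w2))] / n_docs
    if p_xy == 0:
        return -1.0
    p_x, p_y = word_counts[w1] / n_docs, word_counts[w2] / n_docs
    return math.log(p_xy / (p_x * p_y)) / -math.log(p_xy)

print(npmi("she", "nurse"), npmi("he", "nurse"))  # -> 1.0 -1.0

Values close to 1 indicate that two words almost always occur together, and values close to -1 that they almost never do, which is what makes nPMI useful for surfacing skewed associations with identity terms.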
https://huggingface.co/blog/accelerating-pytorch
Accelerating PyTorch distributed fine-tuning with Intel technologies
Julien Simon
November 19, 2021
For all their amazing performance, state of the art deep learning models often take a long time to train. In order to speed up training jobs, engineering teams rely on distributed training, a divide-and-conquer technique where clustered servers each keep a copy of the model, train it on a subset of the training set, and exchange results to converge to a final model.Graphical Processing Units (GPUs) have long been the de facto choice to train deep learning models. However, the rise of transfer learning is changing the game. Models are now rarely trained from scratch on humungous datasets. Instead, they are frequently fine-tuned on specific (and smaller) datasets, in order to build specialized models that are more accurate than the base model for particular tasks. As these training jobs are much shorter, using a CPU-based cluster can prove to be an interesting option that keeps both training time and cost under control.What this post is aboutIn this post, you will learn how to accelerate PyTorch training jobs by distributing them on a cluster of Intel Xeon Scalable CPU servers, powered by the Ice Lake architecture and running performance-optimized software libraries. We will build the cluster from scratch using virtual machines, and you should be able to easily replicate the demo on your own infrastructure, either in the cloud or on premise.Running a text classification job, we will fine-tune a BERT model on the MRPC dataset (one of the tasks included in the GLUE benchmark). The MRPC dataset contains 5,800 sentence pairs extracted from news sources, with a label telling us whether the two sentences in each pair are semantically equivalent. We picked this dataset for its reasonable training time, and trying other GLUE tasks is just a parameter away.Once the cluster is up and running, we will run a baseline job on a single server. Then, we will scale it to 2 servers and 4 servers and measure the speed-up.Along the way, we will cover the following topics:Listing the required infrastructure and software building blocks,Setting up our cluster,Installing dependencies,Running a single-node job,Running a distributed job.Let's get to work!Using Intel serversFor best performance, we will use Intel servers based on the Ice Lake architecture, which supports hardware features such as Intel AVX-512 and Intel Vector Neural Network Instructions (VNNI). These features accelerate operations typically found in deep learning training and inference. You can learn more about them in this presentation (PDF).All three major cloud providers offer virtual machines powered by Intel Ice Lake CPUs:Amazon Web Services: Amazon EC2 M6iand C6i instances.Azure: Dv5/Dsv5-series, Ddv5/Ddsv5-series and Edv5/Edsv5-series virtual machines.Google Cloud Platform: N2 Compute Engine virtual machines.Of course, you can also use your own servers. If they are based on the Cascade Lake architecture (Ice Lake's predecessor), they're good to go as Cascade Lake also includes AVX-512 and VNNI.Using Intel performance librariesTo leverage AVX-512 and VNNI in PyTorch, Intel has designed the Intel extension for PyTorch. This software library provides out of the box speedup for training and inference, so we should definitely install it.When it comes to distributed training, the main performance bottleneck is often networking. Indeed, the different nodes in the cluster need to periodically exchange model state information to stay in sync. 
As transformers are large models with billions of parameters (sometimes much more), the volume of information is significant, and things only get worse as the number of nodes increase. Thus, it's important to use a communication library optimized for deep learning.In fact, PyTorch includes the torch.distributed package, which supports different communication backends. Here, we'll use the Intel oneAPI Collective Communications Library (oneCCL), an efficient implementation of communication patterns used in deep learning (all-reduce, etc.). You can learn about the performance of oneCCL versus other backends in this PyTorch blog post.Now that we're clear on building blocks, let's talk about the overall setup of our training cluster.Setting up our clusterIn this demo, I'm using Amazon EC2 instances running Amazon Linux 2 (c6i.16xlarge, 64 vCPUs, 128GB RAM, 25Gbit/s networking). Setup will be different in other environments, but steps should be very similar.Please keep in mind that you will need 4 identical instances, so you may want to plan for some sort of automation to avoid running the same setup 4 times. Here, I will set up one instance manually, create a new Amazon Machine Image (AMI) from this instance, and use this AMI to launch three identical instances.From a networking perspective, we will need the following setup: Open port 22 for ssh access on all instances for setup and debugging.Configure password-less ssh between the master instance (the one you'll launch training from) and all other instances (master included).Open all TCP ports on all instances for oneCCL communication inside the cluster. Please make sure NOT to open these ports to the external world. AWS provides a convenient way to do this by only allowing connections from instances running a particular security group. Here's how my setup looks.Now, let's provision the first instance manually. I first create the instance itself, attach the security group above, and add 128GB of storage. To optimize costs, I have launched it as a spot instance. Once the instance is up, I connect to it with ssh in order to install dependencies.Installing dependenciesHere are the steps we will follow:Install Intel toolkits,Install the Anaconda distribution,Create a new conda environment,Install PyTorch and the Intel extension for PyTorch,Compile and install oneCCL,Install the transformers library.It looks like a lot, but there's nothing complicated. Here we go!Installing Intel toolkitsFirst, we download and install the Intel OneAPI base toolkit as well as the AI toolkit. You can learn about them on the Intel website.wget https://registrationcenter-download.intel.com/akdlm/irc_nas/18236/l_BaseKit_p_2021.4.0.3422_offline.shsudo bash l_BaseKit_p_2021.4.0.3422_offline.shwget https://registrationcenter-download.intel.com/akdlm/irc_nas/18235/l_AIKit_p_2021.4.0.1460_offline.shsudo bash l_AIKit_p_2021.4.0.1460_offline.sh Installing AnacondaThen, we download and install the Anaconda distribution.wget https://repo.anaconda.com/archive/Anaconda3-2021.05-Linux-x86_64.shsh Anaconda3-2021.05-Linux-x86_64.shCreating a new conda environmentWe log out and log in again to refresh paths. Then, we create a new conda environment to keep things neat and tidy.yes | conda create -n transformer python=3.7.9 -c anacondaeval "$(conda shell.bash hook)"conda activate transformeryes | conda install pip cmakeInstalling PyTorch and the Intel extension for PyTorchNext, we install PyTorch 1.9 and the Intel extension toolkit. 
Versions must match.yes | conda install pytorch==1.9.0 cpuonly -c pytorchpip install torch_ipex==1.9.0 -f https://software.intel.com/ipex-whl-stableCompiling and installing oneCCLThen, we install some native dependencies required to compile oneCCL.sudo yum -y updatesudo yum install -y git cmake3 gcc gcc-c++Next, we clone the oneCCL repository, build the library and install it. Again, versions must match.source /opt/intel/oneapi/mkl/latest/env/vars.shgit clone https://github.com/intel/torch-ccl.gitcd torch-cclgit checkout ccl_torch1.9git submodule syncgit submodule update --init --recursivepython setup.py installcd ..Installing the transformers libraryNext, we install the transformers library and dependencies required to run GLUE tasks.pip install transformers datasetsyes | conda install scipy scikit-learnFinally, we clone a fork of the transformersrepository containing the example we're going to run.git clone https://github.com/kding1/transformers.gitcd transformersgit checkout dist-sigoptWe're done! Let's run a single-node job.Launching a single-node jobTo get a baseline, let's launch a single-node job running the run_glue.py script in transformers/examples/pytorch/text-classification. This should work on any of the instances, and it's a good sanity check before proceeding to distributed training.python run_glue.py \--model_name_or_path bert-base-cased --task_name mrpc \--do_train --do_eval --max_seq_length 128 \--per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 \--output_dir /tmp/mrpc/ --overwrite_output_dir TrueThis job takes 7 minutes and 46 seconds. Now, let's set up distributed jobs with oneCCL and speed things up!Setting up a distributed job with oneCCLThree steps are required to run a distributed training job:List the nodes of the training cluster,Define environment variables,Modify the training script.Listing the nodes of the training clusterOn the master instance, in transformers/examples/pytorch/text-classification, we create a text file named hostfile. This file stores the names of the nodes in the cluster (IP addresses would work too). The first line should point to the master instance. Here's my file:ip-172-31-28-17.ec2.internalip-172-31-30-87.ec2.internalip-172-31-29-11.ec2.internalip-172-31-20-77.ec2.internalDefining environment variablesNext, we need to set some environment variables on the master node, most notably its IP address. You can find more information on oneCCL variables in the documentation.for nic in eth0 eib0 hib0 enp94s0f0; domaster_addr=$(ifconfig $nic 2>/dev/null | grep netmask | awk '{print $2}'| cut -f2 -d:)if [ "$master_addr" ]; thenbreakfidoneexport MASTER_ADDR=$master_addrsource /home/ec2-user/anaconda3/envs/transformer/lib/python3.7/site-packages/torch_ccl-1.3.0+43f48a1-py3.7-linux-x86_64.egg/torch_ccl/env/setvars.shexport LD_LIBRARY_PATH=/home/ec2-user/anaconda3/envs/transformer/lib/python3.7/site-packages/torch_ccl-1.3.0+43f48a1-py3.7-linux-x86_64.egg/:$LD_LIBRARY_PATHexport LD_PRELOAD="${CONDA_PREFIX}/lib/libtcmalloc.so:${CONDA_PREFIX}/lib/libiomp5.so"export CCL_WORKER_COUNT=4export CCL_WORKER_AFFINITY="0,1,2,3,32,33,34,35"export CCL_ATL_TRANSPORT=ofiexport ATL_PROGRESS_MODE=0Modifying the training scriptThe following changes have already been applied to our training script (run_glue.py) in order to enable distributed training. 
You would need to apply similar changes when using your own training code.Import the torch_cclpackage.Receive the address of the master node and the local rank of the node in the cluster.+import torch_ccl+import datasetsimport numpy as npfrom datasets import load_dataset, load_metric@@ -47,7 +49,7 @@ from transformers.utils.versions import require_version# Will error if the minimal version of Transformers is not installed. Remove at your own risks.-check_min_version("4.13.0.dev0")+# check_min_version("4.13.0.dev0")require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/text-classification/requirements.txt")@@ -191,6 +193,17 @@ def main():# or by passing the --help flag to this script.# We now keep distinct sets of args, for a cleaner separation of concerns.+ # add local rank for cpu-dist+ sys.argv.append("--local_rank")+ sys.argv.append(str(os.environ.get("PMI_RANK", -1)))++ # ccl specific environment variables+ if "ccl" in sys.argv:+ os.environ["MASTER_ADDR"] = os.environ.get("MASTER_ADDR", "127.0.0.1")+ os.environ["MASTER_PORT"] = "29500"+ os.environ["RANK"] = str(os.environ.get("PMI_RANK", -1))+ os.environ["WORLD_SIZE"] = str(os.environ.get("PMI_SIZE", -1))+parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):Setup is now complete. Let's scale our training job to 2 nodes and 4 nodes.Running a distributed job with oneCCLOn the master node, I use mpirunto launch a 2-node job: -np (number of processes) is set to 2 and -ppn (process per node) is set to 1. Hence, the first two nodes in hostfile will be selected.mpirun -f hostfile -np 2 -ppn 1 -genv I_MPI_PIN_DOMAIN=[0xfffffff0] \-genv OMP_NUM_THREADS=28 python run_glue.py \--model_name_or_path distilbert-base-uncased --task_name mrpc \--do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 \--learning_rate 2e-5 --num_train_epochs 3 --output_dir /tmp/mrpc/ \--overwrite_output_dir True --xpu_backend ccl --no_cuda TrueWithin seconds, a job starts on the first two nodes. The job completes in 4 minutes and 39 seconds, a 1.7x speedup.Setting -np to 4 and launching a new job, I now see one process running on each node of the cluster.Training completes in 2 minutes and 36 seconds, a 3x speedup.One last thing. Changing --task_name to qqp, I also ran the Quora Question Pairs GLUE task, which is based on a much larger dataset (over 400,000 training samples). The fine-tuning times were:Single-node: 11 hours 22 minutes,2 nodes: 6 hours and 38 minutes (1.71x),4 nodes: 3 hours and 51 minutes (2.95x).It looks like the speedup is pretty consistent. Feel free to keep experimenting with different learning rates, batch sizes and oneCCL settings. I'm sure you can go even faster!ConclusionIn this post, you've learned how to build a distributed training cluster based on Intel CPUs and performance libraries, and how to use this cluster to speed up fine-tuning jobs. Indeed, transfer learning is putting CPU training back into the game, and you should definitely consider it when designing and building your next deep learning workflows.Thanks for reading this long post. I hope you found it informative. Feedback and questions are welcome at julsimon@huggingface.co. Until next time, keep learning!Julien
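As a supplement to the diff shown earlier in the post, here is a minimal standalone sketch of what those changes amount to: importing torch_ccl to register the oneCCL backend and initializing torch.distributed from the MPI-provided rank information. This is illustrative only; in the actual script, the 🤗 Trainer performs the initialization once --xpu_backend ccl is passed.

import os
import torch
import torch.distributed as dist
import torch_ccl  # noqa: F401 -- importing registers the "ccl" backend with torch.distributed

# Rank and world size are provided by MPI, as in the mpirun launch above.
rank = int(os.environ.get("PMI_RANK", 0))
world_size = int(os.environ.get("PMI_SIZE", 1))
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(backend="ccl", rank=rank, world_size=world_size)

# Sanity check: sum a tensor across all workers.
t = torch.ones(1)
dist.all_reduce(t)
print(f"rank {rank}/{world_size}: {t.item()}")  # should print the world size on every rank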
https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers
Patrick von Platen
November 15, 2021
New (11/2021): This blog post has been updated to feature XLSR's successor, called XLS-R.

Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR) and was released in September 2020 by Alexei Baevski, Michael Auli, and Alex Conneau. Soon after the superior performance of Wav2Vec2 was demonstrated on one of the most popular English datasets for ASR, called LibriSpeech, Facebook AI presented a multi-lingual version of Wav2Vec2, called XLSR. XLSR stands for cross-lingual speech representations and refers to the model's ability to learn speech representations that are useful across multiple languages.

XLSR's successor, simply called XLS-R (referring to ''XLM-R for Speech''), was released in November 2021 by Arun Babu, Changhan Wang, Andros Tjandra, et al. XLS-R used almost half a million hours of audio data in 128 languages for self-supervised pre-training and comes in sizes ranging from 300 million up to two billion parameters. You can find the pretrained checkpoints on the 🤗 Hub:

Wav2Vec2-XLS-R-300M
Wav2Vec2-XLS-R-1B
Wav2Vec2-XLS-R-2B

Similar to BERT's masked language modeling objective, XLS-R learns contextualized speech representations by randomly masking feature vectors before passing them to a transformer network during self-supervised pre-training (i.e. the diagram on the left below). For fine-tuning, a single linear layer is added on top of the pre-trained network to train the model on labeled data of audio downstream tasks such as speech recognition, speech translation and audio classification (i.e. the diagram on the right below).

XLS-R shows impressive improvements over previous state-of-the-art results on speech recognition, speech translation and speaker/language identification, cf. Tables 3-6, Tables 7-10, and Tables 11-12 respectively of the official paper.

Setup

In this blog, we will give an in-detail explanation of how XLS-R - more specifically the pre-trained checkpoint Wav2Vec2-XLS-R-300M - can be fine-tuned for ASR. For demonstration purposes, we fine-tune the model on the low-resource ASR dataset of Common Voice that contains only ca. 4h of validated training data.

XLS-R is fine-tuned using Connectionist Temporal Classification (CTC), which is an algorithm that is used to train neural networks for sequence-to-sequence problems, such as ASR and handwriting recognition. I highly recommend reading the well-written blog post Sequence Modeling with CTC (2017) by Awni Hannun.

Before we start, let's install datasets and transformers. Also, we need torchaudio to load audio files and jiwer to evaluate our fine-tuned model using the word error rate (WER) metric¹.

!pip install datasets==1.18.3
!pip install transformers==4.11.3
!pip install huggingface_hub==0.1
!pip install torchaudio
!pip install librosa
!pip install jiwer

We strongly suggest uploading your training checkpoints directly to the Hugging Face Hub while training. The Hugging Face Hub has integrated version control so you can be sure that no model checkpoint is lost during training. To do so you have to store your authentication token from the Hugging Face website (sign up here if you haven't already!)

from huggingface_hub import notebook_login
notebook_login()

Print Output:

Login successful
Your token has been saved to /root/.huggingface/token

Then you need to install Git-LFS to upload your model checkpoints:

apt install git-lfs

¹ In the paper, the model was evaluated using the phoneme error rate (PER), but by far the most common metric in ASR is the word error rate (WER).
To keep this notebookas general as possible we decided to evaluate the model using WER.Prepare Data, Tokenizer, Feature ExtractorASR models transcribe speech to text, which means that we both need afeature extractor that processes the speech signal to the model's inputformat, e.g. a feature vector, and a tokenizer that processes themodel's output format to text.In 🤗 Transformers, the XLS-R model is thus accompanied by both atokenizer, calledWav2Vec2CTCTokenizer,and a feature extractor, calledWav2Vec2FeatureExtractor.Let's start by creating the tokenizer to decode the predicted outputclasses to the output transcription.Create Wav2Vec2CTCTokenizerA pre-trained XLS-R model maps the speech signal to a sequence ofcontext representations as illustrated in the figure above. However, forspeech recognition the model has to to map this sequence of contextrepresentations to its corresponding transcription which means that alinear layer has to be added on top of the transformer block (shown inyellow in the diagram above). This linear layer is used to classifyeach context representation to a token class analogous to howa linear layer is added on top of BERT's embeddingsfor further classification after pre-training (cf. with 'BERT' section of the following blogpost).after pretraining a linear layer is added on top of BERT's embeddingsfor further classification - cf. with 'BERT' section of this blogpost.The output size of this layer corresponds to the number of tokens in thevocabulary, which does not depend on XLS-R's pretraining task, butonly on the labeled dataset used for fine-tuning. So in the first step,we will take a look at the chosen dataset of Common Voice and define avocabulary based on the transcriptions.First, let's go to Common Voice officialwebsite and pick alanguage to fine-tune XLS-R on. For this notebook, we will use Turkish.For each language-specific dataset, you can find a language codecorresponding to your chosen language. On CommonVoice, look for the field"Version". The language code then corresponds to the prefix before theunderscore. For Turkish, e.g. the language code is "tr".Great, now we can use 🤗 Datasets' simple API to download the data. Thedataset name is "common_voice", the configuration name corresponds tothe language code, which is "tr" in our case.Common Voice has many different splits including invalidated, whichrefers to data that was not rated as "clean enough" to be considereduseful. In this notebook, we will only make use of the splits "train","validation" and "test".Because the Turkish dataset is so small, we will merge both thevalidation and training data into a training dataset and only use thetest data for validation.from datasets import load_dataset, load_metric, Audiocommon_voice_train = load_dataset("common_voice", "tr", split="train+validation")common_voice_test = load_dataset("common_voice", "tr", split="test")Many ASR datasets only provide the target text, 'sentence' for eachaudio array 'audio' and file 'path'. Common Voice actually providesmuch more information about each audio file, such as the 'accent',etc. 
Keeping the notebook as general as possible, we only consider thetranscribed text for fine-tuning.common_voice_train = common_voice_train.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])Let's write a short function to display some random samples of thedataset and run it a couple of times to get a feeling for thetranscriptions.from datasets import ClassLabelimport randomimport pandas as pdfrom IPython.display import display, HTMLdef show_random_elements(dataset, num_examples=10):assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."picks = []for _ in range(num_examples):pick = random.randint(0, len(dataset)-1)while pick in picks:pick = random.randint(0, len(dataset)-1)picks.append(pick)df = pd.DataFrame(dataset[picks])display(HTML(df.to_html()))Print Output:IdxSentence1Jonuz, kısa süreli görevi kabul eden tek adaydı.2Biz umudumuzu bu mücadeleden almaktayız.3Sergide beş Hırvat yeniliği sergilendi.4Herşey adıyla bilinmeli.5Kuruluş özelleştirmeye hazır.6Yerleşim yerlerinin manzarası harika.7Olayların failleri bulunamadı.8Fakat bu çabalar boşa çıktı.9Projenin değeri iki virgül yetmiş yedi milyon avro.10Büyük yeniden yapım projesi dört aşamaya bölündü.Alright! The transcriptions look fairly clean. Having translated thetranscribed sentences, it seems that the language corresponds more towritten-out text than noisy dialogue. This makes sense considering thatCommon Voice is acrowd-sourced read speech corpus.We can see that the transcriptions contain some special characters, suchas ,.?!;:. Without a language model, it is much harder to classifyspeech chunks to such special characters because they don't reallycorrespond to a characteristic sound unit. E.g., the letter "s" hasa more or less clear sound, whereas the special character "." doesnot. Also in order to understand the meaning of a speech signal, it isusually not necessary to include special characters in thetranscription.Let's simply remove all characters that don't contribute to themeaning of a word and cannot really be represented by an acoustic soundand normalize the text.import rechars_to_remove_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']'def remove_special_characters(batch):batch["sentence"] = re.sub(chars_to_remove_regex, '', batch["sentence"]).lower()return batchcommon_voice_train = common_voice_train.map(remove_special_characters)common_voice_test = common_voice_test.map(remove_special_characters)Let's look at the processed text labels again.show_random_elements(common_voice_train.remove_columns(["path","audio"]))Print Output:IdxTranscription1birisi beyazlar için dediler2maktouf'un cezası haziran ayında sona erdi3orijinalin aksine kıyafetler çıkarılmadı4bunların toplam değeri yüz milyon avroyu buluyor5masada en az iki seçenek bulunuyor6bu hiç de haksız bir heveslilik değil7bu durum bin dokuz yüz doksanlarda ülkenin bölünmesiyle değişti8söz konusu süre altı ay9ancak bedel çok daha yüksek olabilir10başkent fira bir tepenin üzerinde yer alıyorGood! This looks better. We have removed most special characters fromtranscriptions and normalized them to lower-case only.Before finalizing the pre-processing, it is always advantageous toconsult a native speaker of the target language to see whether the textcan be further simplified. 
For this blog post,Merve was kind enough to take a quicklook and noted that "hatted" characters - like â - aren't reallyused anymore in Turkish and can be replaced by their "un-hatted"equivalent, e.g. a.This means that we should replace a sentence like"yargı sistemi hâlâ sağlıksız" to "yargı sistemi hala sağlıksız".Let's write another short mapping function to further simplify the textlabels. Remember, the simpler the text labels, the easier it is for themodel to learn to predict those labels.def replace_hatted_characters(batch):batch["sentence"] = re.sub('[â]', 'a', batch["sentence"])batch["sentence"] = re.sub('[î]', 'i', batch["sentence"])batch["sentence"] = re.sub('[ô]', 'o', batch["sentence"])batch["sentence"] = re.sub('[û]', 'u', batch["sentence"])return batchcommon_voice_train = common_voice_train.map(replace_hatted_characters)common_voice_test = common_voice_test.map(replace_hatted_characters)In CTC, it is common to classify speech chunks into letters, so we willdo the same here. Let's extract all distinct letters of the trainingand test data and build our vocabulary from this set of letters.We write a mapping function that concatenates all transcriptions intoone long transcription and then transforms the string into a set ofchars. It is important to pass the argument batched=True to themap(...) function so that the mapping function has access to alltranscriptions at once.def extract_all_chars(batch):all_text = " ".join(batch["sentence"])vocab = list(set(all_text))return {"vocab": [vocab], "all_text": [all_text]}vocab_train = common_voice_train.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_train.column_names)vocab_test = common_voice_test.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_test.column_names)Now, we create the union of all distinct letters in the training datasetand test dataset and convert the resulting list into an enumerateddictionary.vocab_list = list(set(vocab_train["vocab"][0]) | set(vocab_test["vocab"][0]))vocab_dict = {v: k for k, v in enumerate(sorted(vocab_list))}vocab_dictPrint Output:{' ': 0,'a': 1,'b': 2,'c': 3,'d': 4,'e': 5,'f': 6,'g': 7,'h': 8,'i': 9,'j': 10,'k': 11,'l': 12,'m': 13,'n': 14,'o': 15,'p': 16,'q': 17,'r': 18,'s': 19,'t': 20,'u': 21,'v': 22,'w': 23,'x': 24,'y': 25,'z': 26,'ç': 27,'ë': 28,'ö': 29,'ü': 30,'ğ': 31,'ı': 32,'ş': 33,'̇': 34}Cool, we see that all letters of the alphabet occur in the dataset(which is not really surprising) and we also extracted the specialcharacters "" and '. Note that we did not exclude those specialcharacters because:The model has to learn to predict when a word is finished or else themodel prediction would always be a sequence of chars which would make itimpossible to separate words from each other.One should always keep in mind that pre-processing is a very importantstep before training your model. E.g., we don't want our model todifferentiate between a and A just because we forgot to normalizethe data. The difference between a and A does not depend on the"sound" of the letter at all, but more on grammatical rules - e.g.use a capitalized letter at the beginning of the sentence. So it issensible to remove the difference between capitalized andnon-capitalized letters so that the model has an easier time learning totranscribe speech.To make it clearer that " " has its own token class, we give it a morevisible character |. 
In addition, we also add an "unknown" token sothat the model can later deal with characters not encountered in CommonVoice's training set.vocab_dict["|"] = vocab_dict[" "]del vocab_dict[" "]Finally, we also add a padding token that corresponds to CTC's "blanktoken". The "blank token" is a core component of the CTC algorithm.For more information, please take a look at the "Alignment" sectionhere.vocab_dict["[UNK]"] = len(vocab_dict)vocab_dict["[PAD]"] = len(vocab_dict)len(vocab_dict)Cool, now our vocabulary is complete and consists of 39 tokens, whichmeans that the linear layer that we will add on top of the pretrainedXLS-R checkpoint will have an output dimension of 39.Let's now save the vocabulary as a json file.import jsonwith open('vocab.json', 'w') as vocab_file:json.dump(vocab_dict, vocab_file)In a final step, we use the json file to load the vocabulary into aninstance of the Wav2Vec2CTCTokenizer class.from transformers import Wav2Vec2CTCTokenizertokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")If one wants to re-use the just created tokenizer with the fine-tunedmodel of this notebook, it is strongly advised to upload the tokenizerto the Hugging Face Hub. Let's call the repo to whichwe will upload the files "wav2vec2-large-xlsr-turkish-demo-colab":repo_name = "wav2vec2-large-xls-r-300m-tr-colab"and upload the tokenizer to the 🤗 Hub.tokenizer.push_to_hub(repo_name)Great, you can see the just created repository underhttps://huggingface.co/<your-username>/wav2vec2-large-xls-r-300m-tr-colabCreate Wav2Vec2FeatureExtractorSpeech is a continuous signal, and, to be treated by computers, it firsthas to be discretized, which is usually called sampling. Thesampling rate hereby plays an important role since it defines how manydata points of the speech signal are measured per second. Therefore,sampling with a higher sampling rate results in a better approximationof the real speech signal but also necessitates more values persecond.A pretrained checkpoint expects its input data to have been sampled moreor less from the same distribution as the data it was trained on. Thesame speech signals sampled at two different rates have a very differentdistribution. For example, doubling the sampling rate results in data pointsbeing twice as long. Thus, before fine-tuning a pretrained checkpoint ofan ASR model, it is crucial to verify that the sampling rate of the datathat was used to pretrain the model matches the sampling rate of thedataset used to fine-tune the model.XLS-R was pretrained on audio data ofBabel,Multilingual LibriSpeech(MLS),Common Voice,VoxPopuli, andVoxLingua107 at a sampling rate of16kHz. Common Voice, in its original form, has a sampling rate of 48kHz,thus we will have to downsample the fine-tuning data to 16kHz in thefollowing.A Wav2Vec2FeatureExtractor object requires the following parameters tobe instantiated:feature_size: Speech models take a sequence of feature vectors asan input. While the length of this sequence obviously varies, thefeature size should not. In the case of Wav2Vec2, the feature sizeis 1 because the model was trained on the raw speech signal 2 {}^2 2.sampling_rate: The sampling rate at which the model is trained on.padding_value: For batched inference, shorter inputs need to bepadded with a specific valuedo_normalize: Whether the input should bezero-mean-unit-variance normalized or not. 
Usually, speech modelsperform better when normalizing the inputreturn_attention_mask: Whether the model should make use of anattention_mask for batched inference. In general, XLS-R modelscheckpoints should always use the attention_mask.from transformers import Wav2Vec2FeatureExtractorfeature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)Great, XLS-R's feature extraction pipeline is thereby fully defined!For improved user-friendliness, the feature extractor and tokenizer arewrapped into a single Wav2Vec2Processor class so that one only needsa model and processor object.from transformers import Wav2Vec2Processorprocessor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)Next, we can prepare the dataset.Preprocess DataSo far, we have not looked at the actual values of the speech signal butjust the transcription. In addition to sentence, our datasets includetwo more column names path and audio. path states the absolutepath of the audio file. Let's take a look.common_voice_train[0]["path"]XLS-R expects the input in the format of a 1-dimensional array of 16kHz. This means that the audio file has to be loaded and resampled.Thankfully, datasets does this automatically by calling the othercolumn audio. Let try it out.common_voice_train[0]["audio"]{'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ...,-8.8930130e-05, -3.8027763e-05, -2.9146671e-05], dtype=float32),'path': '/root/.cache/huggingface/datasets/downloads/extracted/05be0c29807a73c9b099873d2f5975dae6d05e9f7d577458a2466ecb9a2b0c6b/cv-corpus-6.1-2020-12-11/tr/clips/common_voice_tr_21921195.mp3','sampling_rate': 48000}Great, we can see that the audio file has automatically been loaded.This is thanks to the new "Audio"featureintroduced in datasets == 1.18.3, which loads and resamples audiofiles on-the-fly upon calling.In the example above we can see that the audio data is loaded with asampling rate of 48kHz whereas 16kHz are expected by the model. We canset the audio feature to the correct sampling rate by making use ofcast_column:common_voice_train = common_voice_train.cast_column("audio", Audio(sampling_rate=16_000))common_voice_test = common_voice_test.cast_column("audio", Audio(sampling_rate=16_000))Let's take a look at "audio" again.common_voice_train[0]["audio"]{'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ...,-7.4556941e-05, -1.4621433e-05, -5.7861507e-05], dtype=float32),'path': '/root/.cache/huggingface/datasets/downloads/extracted/05be0c29807a73c9b099873d2f5975dae6d05e9f7d577458a2466ecb9a2b0c6b/cv-corpus-6.1-2020-12-11/tr/clips/common_voice_tr_21921195.mp3','sampling_rate': 16000}This seemed to have worked! Let's listen to a couple of audio files tobetter understand the dataset and verify that the audio was correctlyloaded.import IPython.display as ipdimport numpy as npimport randomrand_int = random.randint(0, len(common_voice_train)-1)print(common_voice_train[rand_int]["sentence"])ipd.Audio(data=common_voice_train[rand_int]["audio"]["array"], autoplay=True, rate=16000)Print Output:sunulan bütün teklifler i̇ngilizce idiIt seems like the data is now correctly loaded and resampled.It can be heard, that the speakers change along with their speakingrate, accent, and background environment, etc. 
Overall, the recordingssound acceptably clear though, which is to be expected from acrowd-sourced read speech corpus.Let's do a final check that the data is correctly prepared, by printingthe shape of the speech input, its transcription, and the correspondingsampling rate.rand_int = random.randint(0, len(common_voice_train)-1)print("Target text:", common_voice_train[rand_int]["sentence"])print("Input array shape:", common_voice_train[rand_int]["audio"]["array"].shape)print("Sampling rate:", common_voice_train[rand_int]["audio"]["sampling_rate"])Print Output:Target text: makedonya bu yıl otuz adet tyetmiş iki tankı aldıInput array shape: (71040,)Sampling rate: 16000Good! Everything looks fine - the data is a 1-dimensional array, thesampling rate always corresponds to 16kHz, and the target text isnormalized.Finally, we can leverage Wav2Vec2Processor to process the data to theformat expected by Wav2Vec2ForCTC for training. To do so let's makeuse of Dataset'smap(...)function.First, we load and resample the audio data, simply by callingbatch["audio"]. Second, we extract the input_values from the loadedaudio file. In our case, the Wav2Vec2Processor only normalizes thedata. For other speech models, however, this step can include morecomplex feature extraction, such as Log-Mel featureextraction.Third, we encode the transcriptions to label ids.Note: This mapping function is a good example of how theWav2Vec2Processor class should be used. In "normal" context, callingprocessor(...) is redirected to Wav2Vec2FeatureExtractor's callmethod. When wrapping the processor into the as_target_processorcontext, however, the same method is redirected toWav2Vec2CTCTokenizer's call method. For more information please checkthedocs.def prepare_dataset(batch):audio = batch["audio"]# batched output is "un-batched"batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]batch["input_length"] = len(batch["input_values"])with processor.as_target_processor():batch["labels"] = processor(batch["sentence"]).input_idsreturn batchLet's apply the data preparation function to all examples.common_voice_train = common_voice_train.map(prepare_dataset, remove_columns=common_voice_train.column_names)common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names)Note: Currently datasets make use oftorchaudio andlibrosa for audio loadingand resampling. If you wish to implement your own costumized dataloading/sampling, feel free to just make use of the "path" columninstead and disregard the "audio" column.Long input sequences require a lot of memory. XLS-R is based onself-attention. The memory requirement scales quadratically with theinput length for long input sequences (cf. withthisreddit post). In case this demo crashes with an "Out-of-memory" errorfor you, you might want to uncomment the following lines to filter allsequences that are longer than 5 seconds for training.#max_input_length_in_sec = 5.0#common_voice_train = common_voice_train.filter(lambda x: x < max_input_length_in_sec * processor.feature_extractor.sampling_rate, input_columns=["input_length"])Awesome, now we are ready to start training!TrainingThe data is processed so that we are ready to start setting up thetraining pipeline. We will make use of 🤗'sTrainerfor which we essentially need to do the following:Define a data collator. In contrast to most NLP models, XLS-R has amuch larger input length than output length. 
E.g., a sample ofinput length 50000 has an output length of no more than 100. Giventhe large input sizes, it is much more efficient to pad the trainingbatches dynamically meaning that all training samples should only bepadded to the longest sample in their batch and not the overalllongest sample. Therefore, fine-tuning XLS-R requires a specialpadding data collator, which we will define belowEvaluation metric. During training, the model should be evaluated onthe word error rate. We should define a compute_metrics functionaccordinglyLoad a pretrained checkpoint. We need to load a pretrainedcheckpoint and configure it correctly for training.Define the training configuration.After having fine-tuned the model, we will correctly evaluate it on thetest data and verify that it has indeed learned to correctly transcribespeech.Set-up TrainerLet's start by defining the data collator. The code for the datacollator was copied from thisexample.Without going into too many details, in contrast to the common datacollators, this data collator treats the input_values and labelsdifferently and thus applies to separate padding functions on them(again making use of XLS-R processor's context manager). This isnecessary because in speech input and output are of different modalitiesmeaning that they should not be treated by the same padding function.Analogous to the common data collators, the padding tokens in the labelswith -100 so that those tokens are not taken into account whencomputing the loss.import torchfrom dataclasses import dataclass, fieldfrom typing import Any, Dict, List, Optional, Union@dataclassclass DataCollatorCTCWithPadding:"""Data collator that will dynamically pad the inputs received.Args:processor (:class:`~transformers.Wav2Vec2Processor`)The processor used for proccessing the data.padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):Select a strategy to pad the returned sequences (according to the model's padding side and padding index)among:* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a singlesequence if provided).* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to themaximum acceptable input length for the model if that argument is not provided.* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences ofdifferent lengths)."""processor: Wav2Vec2Processorpadding: Union[bool, str] = Truedef __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:# split inputs and labels since they have to be of different lengths and need# different padding methodsinput_features = [{"input_values": feature["input_values"]} for feature in features]label_features = [{"input_ids": feature["labels"]} for feature in features]batch = self.processor.pad(input_features,padding=self.padding,return_tensors="pt",)with self.processor.as_target_processor():labels_batch = self.processor.pad(label_features,padding=self.padding,return_tensors="pt",)# replace padding with -100 to ignore loss correctlylabels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)batch["labels"] = labelsreturn batchdata_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)Next, the evaluation metric is defined. 
As mentioned earlier, the predominant metric in ASR is the word error rate (WER), hence we will use it in this notebook as well.

wer_metric = load_metric("wer")

The model will return a sequence of logit vectors $\mathbf{y}_1, \ldots, \mathbf{y}_m$ with $\mathbf{y}_1 = f_{\theta}(x_1, \ldots, x_n)[0]$ and $n \gg m$. A logit vector $\mathbf{y}_1$ contains the log-odds for each word in the vocabulary we defined earlier, thus $\text{len}(\mathbf{y}_i) =$ config.vocab_size. We are interested in the most likely prediction of the model and thus take the argmax(...) of the logits. Also, we transform the encoded labels back to the original string by replacing -100 with the pad_token_id and decoding the ids while making sure that consecutive tokens are not grouped to the same token in CTC style ${}^1$.

def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)

    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id

    pred_str = processor.batch_decode(pred_ids)
    # we do not want to group tokens when computing the metrics
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

    wer = wer_metric.compute(predictions=pred_str, references=label_str)

    return {"wer": wer}

Now, we can load the pretrained checkpoint of Wav2Vec2-XLS-R-300M. The tokenizer's pad_token_id must be used to define the model's pad_token_id or, in the case of Wav2Vec2ForCTC, also CTC's blank token ${}^2$. To save GPU memory, we enable PyTorch's gradient checkpointing and also set the loss reduction to "mean".

Because the dataset is quite small (~6h of training data) and because Common Voice is quite noisy, fine-tuning Facebook's wav2vec2-xls-r-300m checkpoint seems to require some hyper-parameter tuning. Therefore, I had to play around a bit with different values for dropout, SpecAugment's masking dropout rate, layer dropout, and the learning rate until training seemed to be stable enough.

Note: When using this notebook to train XLS-R on another language of Common Voice those hyper-parameter settings might not work very well. Feel free to adapt those depending on your use case.

from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    attention_dropout=0.0,
    hidden_dropout=0.0,
    feat_proj_dropout=0.0,
    mask_time_prob=0.05,
    layerdrop=0.0,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

The first component of XLS-R consists of a stack of CNN layers that are used to extract acoustically meaningful - but contextually independent - features from the raw speech signal. This part of the model has already been sufficiently trained during pretraining and, as stated in the paper, does not need to be fine-tuned anymore. Thus, we can set requires_grad to False for all parameters of the feature extraction part.

model.freeze_feature_extractor()

In a final step, we define all parameters related to training. To give more explanation on some of the parameters:

group_by_length makes training more efficient by grouping training samples of similar input length into one batch. This can significantly speed up training time by heavily reducing the overall number of useless padding tokens that are passed through the model.

learning_rate and weight_decay were heuristically tuned until fine-tuning has become stable.
Note that those parameters strongly depend on the Common Voice dataset and might be suboptimal for other speech datasets. For more explanations on other parameters, one can take a look at the docs.

During training, a checkpoint will be uploaded asynchronously to the Hub every 400 training steps. This allows you to also play around with the demo widget even while your model is still training.

Note: If one does not want to upload the model checkpoints to the Hub, simply set push_to_hub=False.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir=repo_name,
    group_by_length=True,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    num_train_epochs=30,
    gradient_checkpointing=True,
    fp16=True,
    save_steps=400,
    eval_steps=400,
    logging_steps=400,
    learning_rate=3e-4,
    warmup_steps=500,
    save_total_limit=2,
    push_to_hub=True,
)

Now, all instances can be passed to Trainer and we are ready to start training!

from transformers import Trainer

trainer = Trainer(
    model=model,
    data_collator=data_collator,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=common_voice_train,
    eval_dataset=common_voice_test,
    tokenizer=processor.feature_extractor,
)

${}^1$ To allow models to become independent of the speaker rate, in CTC, consecutive tokens that are identical are simply grouped as a single token. However, the encoded labels should not be grouped when decoding since they don't correspond to the predicted tokens of the model, which is why the group_tokens=False parameter has to be passed. If we didn't pass this parameter, a word like "hello" would incorrectly be encoded and decoded as "helo".

${}^2$ The blank token allows the model to predict a word, such as "hello", by forcing it to insert the blank token between the two l's. A CTC-conform prediction of "hello" by our model would be [PAD] [PAD] "h" "e" "e" "l" "l" [PAD] "l" "o" "o" [PAD].

Training

Training will take multiple hours depending on the GPU allocated to this notebook. While the trained model yields somewhat satisfying results on Common Voice's Turkish test data, it is by no means an optimally fine-tuned model. The purpose of this notebook is just to demonstrate how XLS-R can be fine-tuned on an ASR dataset.

Depending on what GPU was allocated to your Google Colab it might be possible that you are seeing an "out-of-memory" error here.
In this case, it's probably best to reduce per_device_train_batch_size to 8 or even less and increase gradient_accumulation_steps.

trainer.train()

Print Output:

Training Loss | Epoch | Step | Validation Loss | Wer
3.8842 | 3.67 | 400 | 0.6794 | 0.7000
0.4115 | 7.34 | 800 | 0.4304 | 0.4548
0.1946 | 11.01 | 1200 | 0.4466 | 0.4216
0.1308 | 14.68 | 1600 | 0.4526 | 0.3961
0.0997 | 18.35 | 2000 | 0.4567 | 0.3696
0.0784 | 22.02 | 2400 | 0.4193 | 0.3442
0.0633 | 25.69 | 2800 | 0.4153 | 0.3347
0.0498 | 29.36 | 3200 | 0.4077 | 0.3195

The training loss and validation WER go down nicely.

You can now upload the result of the training to the Hub, just execute this instruction:

trainer.push_to_hub()

You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier "your-username/the-name-you-picked", so for instance:

from transformers import AutoModelForCTC, Wav2Vec2Processor

model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-large-xls-r-300m-tr-colab")
processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wav2vec2-large-xls-r-300m-tr-colab")

For more examples of how XLS-R can be fine-tuned, please take a look at the official 🤗 Transformers examples.

Evaluation

As a final check, let's load the model and verify that it has indeed learned to transcribe Turkish speech.

Let's first load the pretrained checkpoint.

model = Wav2Vec2ForCTC.from_pretrained(repo_name).to("cuda")
processor = Wav2Vec2Processor.from_pretrained(repo_name)

Now, we will just take the first example of the test set, run it through the model and take the argmax(...) of the logits to retrieve the predicted token ids.

input_dict = processor(common_voice_test[0]["input_values"], return_tensors="pt", padding=True)
logits = model(input_dict.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)[0]

Note: It is strongly recommended to pass the sampling_rate argument to this function. Failing to do so can result in silent errors that might be hard to debug.

We adapted common_voice_test quite a bit so that the dataset instance does not contain the original sentence label anymore. Thus, we re-use the original dataset to get the label of the first example.

common_voice_test_transcription = load_dataset("common_voice", "tr", data_dir="./cv-corpus-6.1-2020-12-11", split="test")

Finally, we can decode the example.

print("Prediction:")
print(processor.decode(pred_ids))

print("Reference:")
print(common_voice_test_transcription[0]["sentence"].lower())

Print Output:

pred_str: hatta küçük şeyleri için bir büyt bir şeyleri kolluyor veyınıki çuk şeyler için bir bir mizi inciltiyoruz
target_text: hayatta küçük şeyleri kovalıyor ve yine küçük şeyler için birbirimizi incitiyoruz.

Alright! The transcription can definitely be recognized from our prediction, but it is not perfect yet. Training the model a bit longer, spending more time on the data preprocessing, and especially using a language model for decoding would certainly improve the model's overall performance.

For a demonstration model on a low-resource language, the results are quite acceptable however 🤗.
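As an optional last step (not shown in the original walkthrough), the WER over the full test set can be computed with the pieces already defined above; this is a minimal sketch that re-uses model, processor, common_voice_test and wer_metric, and simply runs greedy CTC decoding example by example.

# Hedged sketch: full test-set WER with the fine-tuned checkpoint loaded above.
import torch

def map_to_result(batch):
    with torch.no_grad():
        input_values = torch.tensor(batch["input_values"], device="cuda").unsqueeze(0)
        logits = model(input_values).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_str"] = processor.batch_decode(pred_ids)[0]
    # decode the label ids back to text, without CTC-style grouping of repeated tokens
    batch["text"] = processor.decode(batch["labels"], group_tokens=False)
    return batch

results = common_voice_test.map(map_to_result, remove_columns=common_voice_test.column_names)
print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["text"])))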
https://huggingface.co/blog/bert-cpu-scaling-part-2
Scaling up BERT-like model Inference on modern CPU - Part 2
Ella Charlaix, Jeff Boudier, Morgan Funtowicz, Michael Benayoun
November 4, 2021
Introduction: Using Intel Software to Optimize AI Efficiency on CPUAs we detailed in our previous blog post, Intel Xeon CPUs provide a set of features especially designed for AI workloads such as AVX512 or VNNI (Vector Neural Network Instructions) for efficient inference using integer quantized neural network for inference along with additional system tools to ensure the work is being done in the most efficient way. In this blog post, we will focus on software optimizations and give you a sense of the performances of the new Ice Lake generation of Xeon CPUs from Intel. Our goal is to give you a full picture of what’s available on the software side to make the most out of your Intel hardware. As in the previous blog post, we show the performance with benchmark results and charts, along with new tools to make all these knobs and features easy to use.Back in April, Intel launched its latest generation of Intel Xeon processors, codename Ice Lake, targeting more efficient and performant AI workloads. More precisely, Ice Lake Xeon CPUs can achieve up to 75% faster inference on a variety of NLP tasks when comparing against the previous generation of Cascade Lake Xeon processors. This is achieved by a combination of both hardware and software improvements, such as new instructions and PCIe 4.0 featured on the new Sunny Cove architecture to supports Machine Learning and Deep Learning workloads. Last but not least, Intel worked on dedicated optimizations for various frameworks which now come with Intel’s flavors like Intel’s Extension for Scikit Learn, Intel TensorFlow and Intel PyTorch Extension.All these features are very low-level in the stack of what Data Scientists and Machine Learning Engineers use in their day-to-day toolset. In a vast majority of situations, it is more common to rely on higher level frameworks and libraries to handle multi-dimensional arrays manipulation such as PyTorch and TensorFlow and make use of highly tuned mathematical operators such as BLAS (Basic Linear Algebra Subroutines) for the computational part.In this area, Intel plays an essential role by providing software components under the oneAPI umbrella which makes it very easy to use highly efficient linear algebra routines through Intel oneMKL (Math Kernel Library), higher-level parallelization framework with Intel OpenMP or the Threading Building Blocks (oneTBB).Also, oneAPI provides some domain-specific libraries such as Intel oneDNN for deep neural network primitives (ReLU, fully-connected, etc.) or oneCCL for collective communication especially useful when using distributed setups to access efficient all-reduce operations over multiple hosts.Some of these libraries, especially MKL or oneDNN, are natively included in frameworks such as PyTorch and TensorFlow (since 2.5.0) to bring all the performance improvements to the end user out of the box. When one would like to target very specific hardware features, Intel provides custom versions of the most common software, especially optimized for the Intel platform. This is for instance the case with TensorFlow, for which Intel provides custom, highly tuned and optimized versions of the framework,or with the Intel PyTorch Extension (IPEX) framework which can be considered as a feature laboratory before upstreaming to PyTorch.Deep Dive: Leveraging advanced Intel features to improve AI performancesPerformance tuning knobsAs highlighted above, we are going to cover a new set of tunable items to improve the performance of our AI application. 
From a high-level point of view, every machine learning and deep learning framework is made of the same ingredients:A structural way of representing data in memory (vector, matrices, etc.)Implementation of mathematical operatorsEfficient parallelization of the computations on the target hardwareIn addition to the points listed above, deep learning frameworks provide ways to represent data flow and dependencies to compute gradients. This falls out of the scope of this blog post, and it leverages the same components as the ones listed above!Figure 1. Intel libraries overview under the oneAPI umbrella1. Memory allocation and management librariesThis blog post will deliberately skip the first point about the data representation as it is something rather framework specific. For reference, PyTorch uses its very own implementation, called ATen, while TensorFlow relies on the open source library Eigen for this purpose.While it’s very complex to apply generic optimizations to different object structures and layouts, there is one area where we can have an impact: Memory Allocation. As a short reminder, memory allocation here refers to the process of programmatically asking the operating system a dynamic (unknown beforehand) area on the system where we will be able to store items into, such as the malloc and derived in C or the new operator in C++. Memory efficiency, both in terms of speed but also in terms of fragmentation, is a vast scientific and engineering subject with multiple solutions depending on the task and underlying hardware. Over the past years we saw more and more work in this area, with notably:jemalloc (Facebook - 2005)mimalloc (Microsoft - 2019)tcmalloc (Google - 2020)Each pushes forward different approaches to improve aspects of the memory allocation and management on various software.2. Efficient parallelization of computationsNow that we have an efficient way to represent our data, we need a way to take the most out of the computational hardware at our disposal. Interestingly, when it comes to inference, CPUs have a potential advantage over GPUs in the sense they are everywhere, and they do not require specific application components and administration staff to operate them.Modern CPUs come with many cores and complex mechanisms to increase the general performances of software. Yet, as we highlighted on the first blog post, they also have features which can be tweaked depending on the kind of workload (CPU or I/O bound) you target, to further improve performances for your application. Still, implementing parallel algorithms might not be as simple as throwing more cores to do the work. Many factors, such as data structures used, concurrent data access, CPU caches invalidation - all of which might prevent your algorithm from being effectively faster. As a reference talk, we recommend the talk from Scott Meyers: CPU Caches and Why You Care if you are interested in diving more into the subject.Thankfully, there are libraries which make the development process of such parallel algorithms easier and less error-prone. Among the most common parallel libraries we can mention OpenMP and TBB (Threading Building Blocks), which work at various levels, from programming API in C/C++ to environment variable tuning and dynamic scheduling. On Intel hardware, it is advised to use the Intel implementation of the OpenMP specification often referred as "IOMP" available as part of the Intel oneAPI toolkit.Figure 2. Code snippet showing parallel computation done through OpenMP3. 
Optimized mathematical operatorsNow that we covered the necessary building blocks for designing efficient data structures and parallel algorithms, the last remaining piece is the one running the computation, the one implementing the variety of mathematical operators and neural network layers to do what we love most, designing neural networks! 😊In every programmer toolkit, there are multiple levels which can bring mathematical operations support, which can then be optimized differently depending on various factors such as the data storage layout being used (Contiguous memory, Chunked, Packed, etc.), the data format representing each scalar element (Float32, Integer, Long, Bfloat16, etc.) and of course the various instructions being supported by your processor.Nowadays, almost all processors support basic mathematical operations on scalar items (one single item at time) or in vectorized mode (meaning they operate on multiple items within the same CPU instructions, referred as SIMD “Single Instruction Multiple Data”).Famous sets of SIMD instructions are SSE2, AVX, AVX2 and the AVX-512 present on the latest generations of Intel CPUs being able to operate over 16 bytes of content within a single CPU clock.Most of the time, one doesn't have to worry too much about the actual assembly being generated to execute a simple element-wise addition between two vectors, but if you do, again there are some libraries which allow you to go one level higher than writing code calling CPU specific intrinsic to implement efficient mathematical kernels. This is for instance what Intel’s MKL “Math Kernel Library” provides, along with the famous BLAS “Basic Linear Algebra Subroutines” interface to implement all the basic operations for linear algebra.Finally, on top of this, one can find some domain specific libraries such as Intel's oneDNN which brings all the most common and essential building blocks required to implement neural network layers. Intel MKL and oneDNN are natively integrated within the PyTorch framework, where it can enable some performance speedup for certain operations such as Linear + ReLU or Convolution. On the TensorFlow side, oneDNN can be enabled by setting the environment variable TF_ENABLE_ONEDNN_OPTS=1 (TensorFlow >= 2.5.0) to achieve similar machinery under the hood.More Efficient AI Processing on latest Intel Ice Lake CPUsIn order to report the performances of the Ice Lake product lineup we will closely follow the methodology we used for the first blog post of this series. As a reminder, we will adopt the exact same schema to benchmark the various setups we will highlight through this second blog post. More precisely, the results presented in the following sections are based on:PyTorch: 1.9.0TensorFlow: 2.5.0Batch Sizes: 1, 4, 8, 16, 32, 128Sequence Lengths: 8, 16, 32, 64, 128, 384, 512We will present the results through metrics accepted by the field to establish the performances of the proposed optimizations: Latency: Time it takes to execute a single inference request (i.e., “forward call”) through the model, expressed in millisecond.Throughput: Number of inference requests (i.e., “forward calls”) the system can sustain within a defined period, expressed in call/sec.We will also provide an initial baseline showing out-of-the-box results and a second baseline applying all the different optimizations we highlighted in the first blogpost. 
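For reference, the latency and throughput numbers reported below can be thought of as coming out of a measurement loop along these lines; this is a hedged sketch rather than the exact benchmark harness, and the model name, batch size and sequence length are illustrative placeholders.

# Hedged sketch of a latency/throughput measurement; values below are placeholders, not the benchmark setup.
import time
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")
model.eval()

# a fixed batch of 8 sequences padded to length 128
inputs = tokenizer(["Hello, Ice Lake!"] * 8, padding="max_length", truncation=True, max_length=128, return_tensors="pt")

latencies = []
with torch.no_grad():
    for _ in range(100):
        start = time.perf_counter()
        model(**inputs)                      # one "forward call"
        latencies.append(time.perf_counter() - start)

avg_latency_ms = 1000 * sum(latencies) / len(latencies)
throughput_calls_per_s = len(latencies) / sum(latencies)
print(f"latency: {avg_latency_ms:.1f} ms/call, throughput: {throughput_calls_per_s:.1f} calls/s")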
Everything was run on an Intel provided cloud instance featuring the Ice Lake Xeon Platinum 8380 CPU operating on Ubuntu 20.04.2 LTS.You can find the same processors on the various cloud providers: AWS m6i / c6i instancesAzure Ev5 / Dv5 seriesFigure 3. Intel Ice Lake Xeon 8380 SpecificationsEstablishing the baselineAs mentioned previously, the baselines will be composed of two different setups: - Out-of-the-box: We are running the workloads as-is, without any tuning- Optimized: We apply the various knobs present in Blog #1Also, from the comments we had about the previous blog post, we wanted to change the way we present the framework within the resulting benchmarks. As such, through the rest of this second blog post, we will split framework benchmarking results according to the following:Frameworks using “eager” mode for computations (PyTorch, TensorFlow)Frameworks using “graph” mode for computations (TorchScript, TensorFlow Graph, Intel Tensorflow)Baseline: Eager frameworks latenciesFrameworks operating in eager mode usually discover the actual graph while executing it. More precisely, the actual computation graph is not known beforehand and you gradually (eagerly) execute one operatorwhich will become the input of the next one, etc. until you reach leaf nodes (outputs).These frameworks usually provide more flexibility in the algorithm you implement at the cost of increased runtime overheadand slightly potential more memory usage to keep track of all the required elements for the backward pass.Last but not least, it is usually harder through these frameworks to enable graph optimizations such as operator fusion.For instance, many deep learning libraries such as oneDNN have optimized kernels for Convolution + ReLU but you actually needto know before executing the graph that this pattern will occur within the sequence of operation, which is, by design, notsomething possible within eager frameworks.Figure 4. PyTorch latencies with respect to the number of cores involvedFigure 5. Google's TensorFlow latencies with respect to the number of cores involvedFigure 6. Google's TensorFlow with oneDNN enabled latencies with respect to the number of cores involvedFigure 7. Intel TensorFlow latencies with respect to the number of cores involvedThe global trend highlights the positive impact of the number of cores on the observed latencies. In most of the cases, increasing the number of cores reduces the computation time across the different workload sizes. Still, putting more cores to the task doesn't result in monotonic latency reductions, there is always a trade-off between the workload’s size and the number of resources you allocate to execute the job.As you can see on the charts above, one very common pattern tends to arise from using all the cores available on systems with more than one CPU (more than one socket). The inter-socket communication introduces a significant latency overhead and results in very little improvement to increased latency overall. Also, this inter-socket communication overhead tends to be less and less perceptive as the workload becomes larger, meaning the usage of all computational resources benefits from using all the available cores. In this domain, it seems PyTorch (Figure 1.) and Intel TensorFlow (Figure 4.) 
seem to have slightly better parallelism support, as showed on the sequence length 384 and 512 for which using all the cores still reduces the observed latency.Baseline: Graph frameworks latenciesThis time we compare performance when using frameworks in “Graph” mode, where the graph is fully known beforehand,and all the allocations and optimizations such as graph pruning and operators fusing can be made.Figure 8. TorchScript latencies with respect to the number of cores involvedFigure 9. Google's TensorFlow latencies with respect to the number of cores involvedFigure 10. Google's TensorFlow with oneDNN enabled latencies with respect to the number of cores involvedFigure 11. Intel TensorFlow latencies with respect to the number of cores involvedThis is often referred to as “tracing” the graph and, as you can see here, the results are not that different from TorchScript (Graph execution mode from PyTorch) vs TensorFlow(s). All TensorFlow implementations seem to perform better than TorchScript when the parallelization is limited (low number of cores involved in the intra operation computations) but this seems not to scale efficiently as we increase the computation resources, whereas TorchScript seems to be able to better leverage the power of modern CPUs. Still, the margin between all these frameworks in most cases very limited.Tuning the Memory Allocator: Can this impact the latencies observed?One crucial component every program dynamically allocating memory relies on is the memory allocator. If you are familiar with C/C++ programming this component provides the low bits to malloc/free or new/delete. Most of the time you don’t have to worry too much about it and the default ones (glibc for instance on most Linux distributions) will provide great performances out of the box. Still, in some situations it might not provide the most efficient performances, as these default allocators are most of the time designed to be “good” most of the time, and not fine-tuned for specific workloads or parallelism. So, what are the alternatives, and when are they more suitable than the default ones? Well, again, it depends on the kind of context around your software. Possible situations are a heavy number of allocations/de-allocations causing fragmentation over time, specific hardware and/or architecture you’re executing your software on and finally the level of parallelism of your application.Do you see where this is going? Deep learning and by extension all the applications doing heavy computations are heavily multi-threaded, that’s also the case for software libraries such as PyTorch, TensorFlow and any other frameworks targeting Machine Learning workloads. The default memory allocator strategies often rely on global memory pools which require the usage of synchronization primitives to operate, increasing the overall pressure on the system, reducing the performance of your application.Some recent works by companies such as Google, Facebook and Microsoft provided alternative memory allocation strategies implemented in custom memory allocator libraries one can easily integrate directly within its software components or use dynamic shared library preload to swap the library being used to achieve the allocation/de-allocation. Among these libraries, we can cite a few of them such as tcmalloc, jemalloc and mimalloc.Figure 12. Various memory allocators benchmarked on different tasksThrough this blog post we will only focus on benchmarking tcmalloc and jemalloc as potential memory allocators drop-in candidates. 
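Because both allocators are drop-in replacements, no recompilation is needed: the shared library only has to be preloaded before the benchmarked process starts. A minimal launcher could look like the sketch below; the .so paths and the benchmark.py script name are assumptions (typical Ubuntu locations), not the exact setup used for the results.

# Hedged sketch: swapping the memory allocator via LD_PRELOAD from a small Python launcher.
import os
import subprocess

env = os.environ.copy()
env["LD_PRELOAD"] = "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4"  # or a libjemalloc.so.2 path
# LD_PRELOAD must be in place before the Python/PyTorch process starts, hence the subprocess.
subprocess.run(["python", "benchmark.py"], env=env, check=True)   # benchmark.py is a placeholder script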
To be fully transparent, for the scope of the results below we used tcmalloc as part of the gperftools package available on Ubuntu distributions version 2.9 and jemalloc 5.1.0-1.Memory allocator benchmarksAgain, we first compare performance against frameworks executing in an eager fashion. This is potentially the use case where the allocator can play the biggest role: As the graph is unknown before its execution, each framework must manage the memory required for each operation when it meets the actual execution of the above node, no planning ahead possible. In this context, the allocator is a major component due to all the system calls to allocate and reclaim memory.Figure 13. PyTorch memory allocator and cores scaling latenciesFigure 14. Google's TensorFlow memory allocator and cores scaling latenciesFigure 15. Google's TensorFlow with oneDNN enabled memory allocator and cores scaling latenciesFigure 16. Intel TensorFlow memory allocator and cores scaling latenciesAs per the graph above, you can notice that the standard library allocator (glibc) is often behind performance-wise but provides reasonable performance. Jemalloc allocator is sometimes the fastest around but in very specific situations, where the concurrency is not that high, this can be explained by the underlying structure jemalloc uses internally which is out of the scope of this blog, but you can read the Facebook Engineering blog if you want to know more about it.Finally, tcmalloc seems to be the one providing generally best performances across all the workloads benchmarked here. Again, tcmalloc has a different approach than Jemalloc in the way it allocates resources, especially tcmalloc maintains a pool of memory segments locally for each thread, which reduces the necessity to have global, exclusive, critical paths. Again, for more details, I invite you to read the full blog by Google Abseil team.Now, back to the graph mode where we benchmark framework having an omniscient representation of the overall computation graph.Figure 17. TorchScript memory allocator and cores scaling latenciesFigure 18. Google's TensorFlow memory allocator and cores scaling latenciesFigure 19. Google's TensorFlow with oneDNN enabled memory allocator and cores scaling latenciesFigure 20. Intel TensorFlow memory allocator and cores scaling latenciesThis time, by knowing the underlying structure of the operator flows and matrix shapes involved then the framework can plan and reserve the required resources beforehand. In this context, and as it is shown in the chart above, the difference between framework is very small and there is no clear winner between jemalloc and tcmalloc. Of course, glibc is still slightly behind as a general-purpose memory allocator, but the margin is less significant than in the eager setup.To sum it up, tuning the memory allocator can provide an interesting item to grab the last milliseconds' improvement at the end of the optimization process, especially if you are already using traced computation graphs.OpenMPIn the previous section we talked about the memory management within machine learning software involving mostly CPU-bound workloads. Such software often relies on intermediary frameworks such as PyTorch or TensorFlow for Deep Learning which commonly abstract away all the underlying, highly parallelized, operator implementations. 
Writing such highly parallel and optimized algorithms is a real engineering challenge, and it requires a very low-level understanding of all the actual elements coming into play operated by the CPU (synchronization, memory cache, cache validity, etc.). In this context, it is very important to be able to leverage primitives to implement such powerful algorithms, reducing the delivery time and computation time by a large margincompared to implementing everything from scratch.There are many libraries available which provide such higher-level features to accelerate the development of algorithms. Among the most common, one can look at OpenMP, Thread Building Blocks and directly from the C++ when targeting a recent version of the standard. In the following part of this blog post, we will restrict ourselves to OpenMP and especially comparing the GNU, open source and community-based implementation, to the Intel OpenMP one. The latter especially targets Intel CPUs and is optimized to provide best of class performances when used as a drop-in replacement against the GNU OpenMP one.OpenMP exposes many environment variables to automatically configure the underlying resources which will be involved in the computations, such as the number of threads to use to dispatch computation to (intra-op threads), the way the system scheduler should bind each of these threads with respect to the CPU resources (threads, cores, sockets) and some other variables which bring further control to the user. Intel OpenMP exposes more of these environment variables to provide the user even more flexibility to adjust the performance of its software.Figure 21. OpenMP vs Intel OpenMP latencies running PyTorchFigure 22. OpenMP vs Intel OpenMP latencies running PyTorchAs stated above, tuning OpenMP is something you can start to tweak when you tried all the other, system related, tuning knobs. It can bring a final speed up to you model with just a single environment variable to set. Also, it is important to note that tuning OpenMP library will only work within software that uses the OpenMP API internally. More specially, now only PyTorch and TorchScript really make usage of OpenMP and thus benefit from OpenMP backend tuning. This also explains why we reported latencies only for these two frameworks.Automatic Performances Tuning: Bayesian Optimization with Intel SigOptAs mentioned above, many knobs can be tweaked to improve latency and throughput on Intel CPUs, but because there are many, tuning all of them to get optimal performance can be cumbersome. For instance, in our experiments, the following knobs were tuned:The number of cores: although using as many cores as you have is often a good idea, it does not always provide the best performance because it also means more communication between the different threads. 
On top of that, having better performance with fewer cores can be very useful as it allows to run multiple instances at the same time, resulting in both better latency and throughput.The memory allocator: which memory allocator out of the default malloc, Google's tcmalloc and Facebook's jemalloc provides the best performance?The parallelism library: which parallelism library out of GNU OpenMP and Intel OpenMP provides the best performance?Transparent Huge Pages: does enabling Transparent Huge Pages (THP) on the system provide better performance?KMP block time parameter: sets the time, in milliseconds, that a thread should wait, after completing the execution of a parallel region, before sleeping.Of course, the brute force approach, consisting of trying out all the possibilities will provide the best knob values to use to get optimal performance but, the size of the search space being N x 3 x 2 x 2 x 2 = 24N, it can take a lot of time: on a machine with 80 physical cores, this means trying out at most 24 x 80 = 1920 different setups! 😱Fortunately, Intel's SigOpt, through Bayesian optimization, allows us to make these tuning experiments both faster and more convenient to analyse, while providing similar performance than the brute force approach.When we analyse the relative difference between the absolute best latency and what SigOpt provides, we observe that although it is often not as good as brute force (except for sequence length = 512 in that specific case),it gives very close performance, with 8.6% being the biggest gap on this figure.Figure 23. Absolute best latency found by SigOpt automatic tuning vs brute forceFigure 24. Relative best latency found by SigOpt automatic tuning vs brute forceSigOpt is also very useful for analysis: it provides a lot of figures and valuable information.First, it gives the best value it was able to find, the corresponding knobs, and the history of trials and how it improved as trials went, for example, with sequence length = 20:Figure 25. SigOpt best value reportingFigure 26. SigOpt best value reportingIn this specific setup, 16 cores along with the other knobs were able to give the best results, that is very important to know, because as mentioned before,that means that multiple instances of the model can be run in parallel while still having the best latency for each.It also shows that it had converged at roughly 20 trials, meaning that maybe 25 trials instead of 40 would have been enough.A wide range of other valuable information is available, such as Parameter Importance:As expected, the number of cores is, by far, the most important parameter, but the others play a part too, and it is very experiment dependent. For instance, for the sequence length = 512 experiment, this was the Parameter Importance:Figure 27. SigOpt best value for Batch Size = 1, Sequence Length = 20Figure 28. 
SigOpt best value for Batch Size = 1, Sequence Length = 512Here not only the impact of using OpenMP vs Intel OpenMP was bigger than the impact of the allocator, the relative importance of each knob is more balanced than in the sequence length = 20 experiment.And many more figures, often interactive, are available on SigOpt such as:2D experiment history, allowing to compare knobs vs knobs or knobs vs objectives3D experiment history, allowing to do the same thing as the 2D experiment history with one more knob / objective.Conclusion - Accelerating Transformers for ProductionIn this post, we showed how the new Intel Ice Lake Xeon CPUs are suitable for running AI workloads at scale along with the software elements you can swap and tune in order to exploit the full potential of the hardware.All these items are to be considered after setting-up the various lower-level knobs detailed in the previous blog to maximize the usage of all the cores and resources.At Hugging Face, we are on a mission to democratize state-of-the-art Machine Learning, and a critical part of our work is to make these state-of-the-art models as efficient as possible, to use less energy and memory at scale, and to be more affordable to run by companies of all sizes. Our collaboration with Intel through the 🤗 Hardware Partner Program enables us to make advanced efficiency and optimization techniques easily available to the community, through our new 🤗 Optimum open source library dedicated to production performance.For companies looking to accelerate their Transformer models inference, our new 🤗 Infinity product offers a plug-and-play containerized solution, achieving down to 1ms latency on GPU and 2ms on Intel Xeon Ice Lake CPUs.If you found this post interesting or useful to your work, please consider giving Optimum a star. And if this post was music to your ears, consider joining our Machine Learning Optimization team!
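As a compact recap of the software knobs discussed in this post (intra-op threads, Intel OpenMP variables, allocator preloading), a Python entry point might set them up roughly as follows; the values shown are illustrative assumptions to be tuned per workload, not recommendations.

# Hedged recap sketch of the tuning knobs covered above -- illustrative values only.
import os

os.environ["OMP_NUM_THREADS"] = "20"                          # intra-op threads, e.g. one socket's physical cores
os.environ["KMP_BLOCKTIME"] = "1"                             # Intel OpenMP: wait time (ms) before a thread sleeps
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"   # pin threads to physical cores
# LD_PRELOAD for libiomp5 / tcmalloc must be set before the process starts (see the launcher sketch above).

import torch                                                  # import after setting the environment variables
torch.set_num_threads(int(os.environ["OMP_NUM_THREADS"]))     # align PyTorch's intra-op thread pool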
https://huggingface.co/blog/course-launch-event
Course Launch Community Event
Sylvain Gugger
October 26, 2021
We are excited to share that after a lot of work from the Hugging Face team, part 2 of the Hugging Face Course will be released on November 15th! Part 1 focused on teaching you how to use a pretrained model, fine-tune it on a text classification task then upload the result to the Model Hub. Part 2 will focus on all the other common NLP tasks: token classification, language modeling (causal and masked), translation, summarization and question answering. It will also take a deeper dive in the whole Hugging Face ecosystem, in particular 🤗 Datasets and 🤗 Tokenizers.To go with this release, we are organizing a large community event to which you are invited! The program includes two days of talks, then team projects focused on fine-tuning a model on any NLP task ending with live demos like this one. Those demos will go nicely in your portfolio if you are looking for a new job in Machine Learning. We will also deliver a certificate of completion to all the participants that achieve building one of them.AWS is sponsoring this event by offering free compute to participants via Amazon SageMaker. To register, please fill out this form. You will find below more details on the two days of talks.Day 1 (November 15th): A high-level view of Transformers and how to train themThe first day of talks will focus on a high-level presentation of Transformers models and the tools we can use to train or fine-tune them.Thomas Wolf: Transfer Learning and the birth of the Transformers libraryThomas Wolf is co-founder and Chief Science Officer of HuggingFace. The tools created by Thomas Wolf and the Hugging Face team are used across more than 5,000 research organisations including Facebook Artificial Intelligence Research, Google Research, DeepMind, Amazon Research, Apple, the Allen Institute for Artificial Intelligence as well as most university departments. Thomas Wolf is the initiator and senior chair of the largest research collaboration that has ever existed in Artificial Intelligence: “BigScience”, as well as a set of widely used libraries and tools. Thomas Wolf is also a prolific educator and a thought leader in the field of Artificial Intelligence and Natural Language Processing, a regular invited speaker to conferences all around the world (https://thomwolf.io).Margaret Mitchell: On Values in ML DevelopmentMargaret Mitchell is a researcher working on Ethical AI, currently focused on the ins and outs of ethics-informed AI development in tech. She has published over 50 papers on natural language generation, assistive technology, computer vision, and AI ethics, and holds multiple patents in the areas of conversation generation and sentiment classification. She previously worked at Google AI as a Staff Research Scientist, where she founded and co-led Google's Ethical AI group, focused on foundational AI ethics research and operationalizing AI ethics Google-internally. Before joining Google, she was a researcher at Microsoft Research, focused on computer vision-to-language generation; and was a postdoc at Johns Hopkins, focused on Bayesian modeling and information extraction. She holds a PhD in Computer Science from the University of Aberdeen and a Master's in computational linguistics from the University of Washington. While earning her degrees, she also worked from 2005-2012 on machine learning, neurological disorders, and assistive technology at Oregon Health and Science University. 
She has spearheaded a number of workshops and initiatives at the intersections of diversity, inclusion, computer science, and ethics. Her work has received awards from Secretary of Defense Ash Carter and the American Foundation for the Blind, and has been implemented by multiple technology companies. She likes gardening, dogs, and cats.Jakob Uszkoreit: It Ain't Broke So Don't Fix Let's Break ItJakob Uszkoreit is the co-founder of Inceptive. Inceptive designs RNA molecules for vaccines and therapeutics using large-scale deep learning in a tight loop with high throughput experiments with the goal of making RNA-based medicines more accessible, more effective and more broadly applicable. Previously, Jakob worked at Google for more than a decade, leading research and development teams in Google Brain, Research and Search working on deep learning fundamentals, computer vision, language understanding and machine translation.Jay Alammar: A gentle visual intro to Transformers modelsJay Alammar, Cohere. Through his popular ML blog, Jay has helped millions of researchers and engineers visually understand machine learning tools and concepts from the basic (ending up in numPy, pandas docs) to the cutting-edge (Transformers, BERT, GPT-3).Matthew Watson: NLP workflows with KerasMatthew Watson is a machine learning engineer on the Keras team, with a focus on high-level modeling APIs. He studied Computer Graphics during undergrad and a Masters at Stanford University. An almost English major who turned towards computer science, he is passionate about working across disciplines and making NLP accessible to a wider audience.Chen Qian: NLP workflows with KerasChen Qian is a software engineer from Keras team, with a focus on high-level modeling APIs. Chen got a Master degree of Electrical Engineering from Stanford University, and he is especially interested in simplifying code implementations of ML tasks and large-scale ML.Mark Saroufim: How to Train a Model with PytorchMark Saroufim is a Partner Engineer at Pytorch working on OSS production tools including TorchServe and Pytorch Enterprise. In his past lives, Mark was an Applied Scientist and Product Manager at Graphcore, yuri.ai, Microsoft and NASA's JPL. His primary passion is to make programming more fun.Day 2 (November 16th): The tools you will useDay 2 will be focused on talks by the Hugging Face, Gradio, and AWS teams, showing you the tools you will use.Lewis Tunstall: Simple Training with the 🤗 Transformers TrainerLewis is a machine learning engineer at Hugging Face, focused on developing open-source tools and making them accessible to the wider community. He is also a co-author of an upcoming O’Reilly book on Transformers and you can follow him on Twitter (@_lewtun) for NLP tips and tricks!Matthew Carrigan: New TensorFlow Features for 🤗 Transformers and 🤗 DatasetsMatt is responsible for TensorFlow maintenance at Transformers, and will eventually lead a coup against the incumbent PyTorch faction which will likely be co-ordinated via his Twitter account @carrigmat.Lysandre Debut: The Hugging Face Hub as a means to collaborate on and share Machine Learning projectsLysandre is a Machine Learning Engineer at Hugging Face where he is involved in many open source projects. 
His aim is to make Machine Learning accessible to everyone by developing powerful tools with a very simple API.Sylvain Gugger: Supercharge your PyTorch training loop with 🤗 AccelerateSylvain is a Research Engineer at Hugging Face and one of the core maintainers of 🤗 Transformers and the developer behind 🤗 Accelerate. He likes making model training more accessible.Lucile Saulnier: Get your own tokenizer with 🤗 Transformers & 🤗 TokenizersLucile is a machine learning engineer at Hugging Face, developing and supporting the use of open source tools. She is also actively involved in many research projects in the field of Natural Language Processing such as collaborative training and BigScience.Merve Noyan: Showcase your model demos with 🤗 SpacesMerve is a developer advocate at Hugging Face, working on developing tools and building content around them to democratize machine learning for everyone.Abubakar Abid: Building Machine Learning Applications FastAbubakar Abid is the CEO of Gradio. He received his Bachelor's of Science in Electrical Engineering and Computer Science from MIT in 2015, and his PhD in Applied Machine Learning from Stanford in 2021. In his role as the CEO of Gradio, Abubakar works on making machine learning models easier to demo, debug, and deploy.Mathieu Desvé: AWS ML Vision: Making Machine Learning Accessible to all CustomersTechnology enthusiast, maker on my free time. I like challenges and solving problem of clients and users, and work with talented people to learn every day. Since 2004, I work in multiple positions switching from frontend, backend, infrastructure, operations and managements. Try to solve commons technical and managerial issues in agile manner.Philipp Schmid: Managed Training with Amazon SageMaker and 🤗 TransformersPhilipp Schmid is a Machine Learning Engineer and Tech Lead at Hugging Face, where he leads the collaboration with the Amazon SageMaker team. He is passionate about democratizing and productionizing cutting-edge NLP models and improving the ease of use for Deep Learning.
https://huggingface.co/blog/large-language-models
Large Language Models: A New Moore's Law?
Julien Simon
October 26, 2021
A few days ago, Microsoft and NVIDIA introduced Megatron-Turing NLG 530B, a Transformer-based model hailed as "the world’s largest and most powerful generative language model."This is an impressive show of Machine Learning engineering, no doubt about it. Yet, should we be excited about this mega-model trend? I, for one, am not. Here's why.This is your Brain on Deep LearningResearchers estimate that the human brain contains an average of 86 billion neurons and 100 trillion synapses. It's safe to assume that not all of them are dedicated to language either. Interestingly, GPT-4 is expected to have about 100 trillion parameters... As crude as this analogy is, shouldn't we wonder whether building language models that are about the size of the human brain is the best long-term approach?Of course, our brain is a marvelous device, produced by millions of years of evolution, while Deep Learning models are only a few decades old. Still, our intuition should tell us that something doesn't compute (pun intended).Deep Learning, Deep Pockets?As you would expect, training a 530-billion parameter model on humongous text datasets requires a fair bit of infrastructure. In fact, Microsoft and NVIDIA used hundreds of DGX A100 multi-GPU servers. At $199,000 a piece, and factoring in networking equipment, hosting costs, etc., anyone looking to replicate this experiment would have to spend close to $100 million dollars. Want fries with that?Seriously, which organizations have business use cases that would justify spending $100 million on Deep Learning infrastructure? Or even $10 million? Very few. So who are these models for, really?That Warm Feeling is your GPU ClusterFor all its engineering brilliance, training Deep Learning models on GPUs is a brute force technique. According to the spec sheet, each DGX server can consume up to 6.5 kilowatts. Of course, you'll need at least as much cooling power in your datacenter (or your server closet). Unless you're the Starks and need to keep Winterfell warm in winter, that's another problem you'll have to deal with. In addition, as public awareness grows on climate and social responsibility issues, organizations need to account for their carbon footprint. According to this 2019 study from the University of Massachusetts, "training BERT on GPU is roughly equivalent to a trans-American flight".BERT-Large has 340 million parameters. One can only extrapolate what the footprint of Megatron-Turing could be... People who know me wouldn't call me a bleeding-heart environmentalist. Still, some numbers are hard to ignore.So?Am I excited by Megatron-Turing NLG 530B and whatever beast is coming next? No. Do I think that the (relatively small) benchmark improvement is worth the added cost, complexity and carbon footprint? No. Do I think that building and promoting these huge models is helping organizations understand and adopt Machine Learning ? No.I'm left wondering what's the point of it all. Science for the sake of science? Good old marketing? Technological supremacy? Probably a bit of each. I'll leave them to it, then.Instead, let me focus on pragmatic and actionable techniques that you can all use to build high quality Machine Learning solutions.Use Pretrained ModelsIn the vast majority of cases, you won't need a custom model architecture. Maybe you'll want a custom one (which is a different thing), but there be dragons. 
Experts only!A good starting point is to look for models that have been pretrained for the task you're trying to solve (say, summarizing English text).Then, you should quickly try out a few models to predict your own data. If metrics tell you that one works well enough, you're done! If you need a little more accuracy, you should consider fine-tuning the model (more on this in a minute).Use Smaller ModelsWhen evaluating models, you should pick the smallest one that can deliver the accuracy you need. It will predict faster and require fewer hardware resources for training and inference. Frugality goes a long way.It's nothing new either. Computer Vision practitioners will remember when SqueezeNet came out in 2017, achieving a 50x reduction in model size compared to AlexNet, while meeting or exceeding its accuracy. How clever that was!Downsizing efforts are also under way in the Natural Language Processing community, using transfer learning techniques such as knowledge distillation. DistilBERT is perhaps its most widely known achievement. Compared to the original BERT model, it retains 97% of language understanding while being 40% smaller and 60% faster. You can try it here. The same approach has been applied to other models, such as Facebook's BART, and you can try DistilBART here.Recent models from the Big Science project are also very impressive. As visible in this graph included in the research paper, their T0 model outperforms GPT-3 on many tasks while being 16x smaller.You can try T0 here. This is the kind of research we need more of!Fine-Tune ModelsIf you need to specialize a model, there should be very few reasons to train it from scratch. Instead, you should fine-tune it, that is to say train it only for a few epochs on your own data. If you're short on data, maybe of one these datasets can get you started.You guessed it, that's another way to do transfer learning, and it'll help you save on everything!Less data to collect, store, clean and annotate,Faster experiments and iterations,Fewer resources required in production.In other words: save time, save money, save hardware resources, save the world! If you need a tutorial, the Hugging Face course will get you started in no time.Use Cloud-Based InfrastructureLike them or not, cloud companies know how to build efficient infrastructure. Sustainability studies show that cloud-based infrastructure is more energy and carbon efficient than the alternative: see AWS, Azure, and Google. Earth.org says that while cloud infrastructure is not perfect, "[it's] more energy efficient than the alternative and facilitates environmentally beneficial services and economic growth."Cloud certainly has a lot going for it when it comes to ease of use, flexibility and pay as you go. It's also a little greener than you probably thought. If you're short on GPUs, why not try fine-tune your Hugging Face models on Amazon SageMaker, AWS' managed service for Machine Learning? We've got plenty of examples for you.Optimize Your ModelsFrom compilers to virtual machines, software engineers have long used tools that automatically optimize their code for whatever hardware they're running on. However, the Machine Learning community is still struggling with this topic, and for good reason. 
Optimizing models for size and speed is a devilishly complex task, which involves techniques such as:Specialized hardware that speeds up training (Graphcore, Habana) and inference (Google TPU, AWS Inferentia).Pruning: remove model parameters that have little or no impact on the predicted outcome.Fusion: merge model layers (say, convolution and activation).Quantization: storing model parameters in smaller values (say, 8 bits instead of 32 bits)Fortunately, automated tools are starting to appear, such as the Optimum open source library, and Infinity, a containerized solution that delivers Transformers accuracy at 1-millisecond latency.ConclusionLarge language model size has been increasing 10x every year for the last few years. This is starting to look like another Moore's Law. We've been there before, and we should know that this road leads to diminishing returns, higher cost, more complexity, and new risks. Exponentials tend not to end well. Remember Meltdown and Spectre? Do we want to find out what that looks like for AI?Instead of chasing trillion-parameter models (place your bets), wouldn't all be better off if we built practical and efficient solutions that all developers can use to solve real-world problems?Interested in how Hugging Face can help your organization build and deploy production-grade Machine Learning solutions? Get in touch at julsimon@huggingface.co (no recruiters, no sales pitches, please).
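To make the "pretrained, smaller, optimized" advice above concrete, here is a minimal sketch that loads a small distilled checkpoint and applies PyTorch dynamic quantization; the checkpoint name is just one public example, and any task-appropriate small model would work the same way.

# Hedged sketch: small distilled model + int8 dynamic quantization for lighter CPU inference.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"   # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# quantize the Linear layers to int8
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

inputs = tokenizer("Frugal models are good models.", return_tensors="pt")
with torch.no_grad():
    print(quantized(**inputs).logits.softmax(dim=-1))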
https://huggingface.co/blog/1b-sentence-embeddings
Train a Sentence Embedding Model with 1 Billion Training Pairs
Antoine SIMOULIN
October 25, 2021
Sentence embedding is a method that maps sentences to vectors of real numbers. Ideally, these vectors would capture the semantics of a sentence and be highly generic. Such representations could then be used for many downstream applications such as clustering, text mining, or question answering.

We developed state-of-the-art sentence embedding models as part of the project "Train the Best Sentence Embedding Model Ever with 1B Training Pairs". This project took place during the Community week using JAX/Flax for NLP & CV, organized by Hugging Face. We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks!

Training methodology

Model

Unlike words, we cannot define a finite set of sentences. Sentence embedding methods, therefore, compose inner words to compute the final representation. For example, the SentenceBERT model (Reimers and Gurevych, 2019) uses a Transformer, the cornerstone of many NLP applications, followed by a pooling operation over the contextualized word vectors. (c.f. Figure below.)

Multiple Negatives Ranking Loss

The parameters of the composition module are usually learned using a self-supervised objective. For the project, we used a contrastive training method illustrated in the figure below. We constitute a dataset with sentence pairs $(a_i, p_i)$ such that the sentences in a pair have a close meaning. For example, we consider pairs such as (query, answer-passage), (question, duplicate_question), (paper title, cited paper title). Our model is then trained to map pairs $(a_i, p_i)$ to close vectors while assigning unmatched pairs $(a_i, p_j), i \neq j$ to distant vectors in the embedding space. This training method is also called training with in-batch negatives, InfoNCE or NTXentLoss.

Formally, given a batch of training samples, the model optimises the following loss function:

$$-\frac{1}{n}\sum_{i=1}^n \log \frac{\exp(sim(a_i, p_i))}{\sum_j \exp(sim(a_i, p_j))}$$

An illustrative example can be seen below. The model first embeds each sentence from every pair in the batch. Then, we compute a similarity matrix between every possible pair $(a_i, p_j)$. We then compare the similarity matrix with the ground truth, which indicates the original pairs. Finally, we perform the comparison using the cross entropy loss.

Intuitively, the model should assign high similarity to the sentences « How many people live in Berlin? » and « Around 3.5 million people live in Berlin » and low similarity to other negative answers such as « The capital of France is Paris », as detailed in the figure below.

In the loss equation, sim indicates a similarity function between $(a, p)$. The similarity function could be either the Cosine-Similarity or the Dot-Product operator. Both methods have their pros and cons, summarized below (Thakur et al., 2021, Bachrach et al., 2014):

Cosine-similarity | Dot-product
A vector has the highest similarity to itself, since $cos(a, a) = 1$. | Other vectors can have higher dot-products: $dot(a, a) < dot(a, b)$.
With normalised vectors it is equal to the dot product. The max vector length equals 1. | It might be slower with certain approximate nearest neighbour methods since the max vector length is not known.
With normalised vectors, it is proportional to the Euclidean distance. It works with k-means clustering. | It does not work with k-means clustering.
It works with k-means clustering.It does not work with k-means clustering.In practice, we used a scaled similarity because score differences tends to be too small and apply a scaling factor C C C such that simscaled(a,b)=C∗sim(a,b) sim_{scaled}(a, b) = C * sim(a, b) simscaled​(a,b)=C∗sim(a,b) with typically C=20 C = 20 C=20 (Henderson and al., 2020, Radford and al., 2021).Improving Quality with Better BatchesIn our method, we build batches of sample pairs (ai,pi) (a_i , p_i) (ai​,pi​). We consider all other samples from the batch, (ai,pj),i≠j (a_i , p_j), i eq j (ai​,pj​),i=j, as negatives sample pairs. The batch composition is therefore a key training aspect. Given the literature in the domain, we mainly focused on three main aspects of the batch.1. Size mattersIn contrastive learning, a larger batch size is synonymous with better performance. As shown in the Figure extracted from Qu and al., (2021), a larger batch size increases the results.2. Hard NegativesIn the same figure, we observe that including hard negatives also improves performance. Hard negatives are sample pj p_j pj​ which are hard to distinguish from pi p_i pi​. In our example, it could be the pairs « What is the capital of France? » and « What is the capital of the US? » which have a close semantic content and requires precisely understanding the full sentence to be answered correctly. On the contrary, the samples « What is the capital of France? » and «How many Star Wars movies is there?» are less difficult to distinguish since they do not refer to the same topic.3. Cross dataset batchesWe concatenated multiple datasets to train our models. We built a large batch and gathered samples from the same batch dataset to limit the topic distribution and favor hard negatives. However, we also mix at least two datasets in the batch to learn a global structure between topics and not only a local structure within a topic.Training infrastructure and dataAs mentioned earlier, the quantity of data and the batch size directly impact the model performances. As part of the project, we benefited from efficient hardware infrastructure. We trained our models on TPUs which are compute units developed by Google and super efficient for matrix multiplications. TPUs have some hardware specificities which might require some specific code implementation.Additionally, we trained models on a large corpus as we concatenated multiple datasets up to 1 billion sentence pairs! All datasets used are detailed for each model in the model card.ConclusionYou can find all models and datasets we created during the challenge in our HuggingFace repository. We trained 20 general-purpose Sentence Transformers models such as Mini-LM (Wang and al., 2020), RoBERTa (liu and al., 2019), DistilBERT (Sanh and al., 2020) and MPNet (Song and al., 2020). Our models achieve SOTA on multiple general-purpose Sentence Similarity evaluation tasks. We also shared 8 datasets specialized for Question Answering, Sentence-Similarity, and Gender Evaluation. General sentence embeddings might be used for many applications. We built a Spaces demo to showcase several applications:The sentence similarity module compares the similarity of the main text with other texts of your choice. In the background, the demo extracts the embedding for each text and computes the similarity between the source sentence and the other using cosine similarity.Asymmetric QA compares the answer likeliness of a given query with answer candidates of your choice.Search / Cluster returns nearby answers from a query. 
For example, if you input « python », it will retrieve the closest sentences using dot-product distance. Gender Bias Evaluation reports the inherent gender bias in the training set via random sampling of the sentences. Given an anchor text that does not mention gender for a target occupation and two propositions with gendered pronouns, we check whether the model assigns a higher similarity to one of the propositions, and thereby evaluate its propensity to favor a specific gender. The Community week using JAX/Flax for NLP & CV has been an intense and highly rewarding experience! The guidance and presence of Google's Flax, JAX, and Cloud team members and of the Hugging Face team helped us all learn a lot. We hope all projects had as much fun as we did in ours. Whenever you have questions or suggestions, don't hesitate to contact us!
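To make the objective above concrete, here is a minimal PyTorch sketch of training with in-batch negatives; the scale C = 20 and the pair notation follow the post, while the random tensors merely stand in for the outputs of a sentence encoder and are illustrative only:

import torch
import torch.nn.functional as F

def in_batch_negatives_loss(anchors, positives, scale=20.0):
    # anchors, positives: (batch_size, dim); row i of positives is the
    # positive p_i for anchor a_i, and every other row acts as a negative.
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    # Scaled cosine-similarity matrix between every (a_i, p_j) pair.
    scores = scale * anchors @ positives.T
    # Ground-truth pairs lie on the diagonal: a_i matches p_i.
    labels = torch.arange(scores.size(0), device=scores.device)
    # Cross-entropy over each row implements the softmax ratio in the loss above.
    return F.cross_entropy(scores, labels)

# Illustration with random embeddings standing in for encoder outputs.
a, p = torch.randn(8, 384), torch.randn(8, 384)
print(in_batch_negatives_loss(a, p))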
https://huggingface.co/blog/the-age-of-ml-as-code
The Age of Machine Learning As Code Has Arrived
Julien Simon
October 20, 2021
The 2021 edition of the State of AI Report came out last week. So did the Kaggle State of Machine Learning and Data Science Survey. There's much to be learned and discussed in these reports, and a couple of takeaways caught my attention."AI is increasingly being applied to mission critical infrastructure like national electric grids and automated supermarket warehousing calculations during pandemics. However, there are questions about whether the maturity of the industry has caught up with the enormity of its growing deployment."There's no denying that Machine Learning-powered applications are reaching into every corner of IT. But what does that mean for companies and organizations? How do we build rock-solid Machine Learning workflows? Should we all hire 100 Data Scientists ? Or 100 DevOps engineers?"Transformers have emerged as a general purpose architecture for ML. Not just for Natural Language Processing, but also Speech, Computer Vision or even protein structure prediction."Old timers have learned the hard way that there is no silver bullet in IT. Yet, the Transformer architecture is indeed very efficient on a wide variety of Machine Learning tasks. But how can we all keep up with the frantic pace of innovation in Machine Learning? Do we really need expert skills to leverage these state of the art models? Or is there a shorter path to creating business value in less time?Well, here's what I think.Machine Learning For The Masses!Machine Learning is everywhere, or at least it's trying to be. A few years ago, Forbes wrote that "Software ate the world, now AI is eating Software", but what does this really mean? If it means that Machine Learning models should replace thousands of lines of fossilized legacy code, then I'm all for it. Die, evil business rules, die!Now, does it mean that Machine Learning will actually replace Software Engineering? There's certainly a lot of fantasizing right now about AI-generated code, and some techniques are certainly interesting, such as finding bugs and performance issues. However, not only shouldn't we even consider getting rid of developers, we should work on empowering as many as we can so that Machine Learning becomes just another boring IT workload (and boring technology is great). In other words, what we really need is for Software to eat Machine Learning!Things are not different this timeFor years, I've argued and swashbuckled that decade-old best practices for Software Engineering also apply to Data Science and Machine Learning: versioning, reusability, testability, automation, deployment, monitoring, performance, optimization, etc. I felt alone for a while, and then the Google cavalry unexpectedly showed up:"Do machine learning like the great engineer you are, not like the great machine learning expert you aren't." - Rules of Machine Learning, GoogleThere's no need to reinvent the wheel either. The DevOps movement solved these problems over 10 years ago. Now, the Data Science and Machine Learning community should adopt and adapt these proven tools and processes without delay. This is the only way we'll ever manage to build robust, scalable and repeatable Machine Learning systems in production. If calling it MLOps helps, fine: I won't argue about another buzzword.It's really high time we stopped considering proof of concepts and sandbox A/B tests as notable achievements. They're merely a small stepping stone toward production, which is the only place where assumptions and business impact can be validated. 
Every Data Scientist and Machine Learning Engineer should obsess about getting their models in production, as quickly and as often as possible. An okay production model beats a great sandbox model every time.Infrastructure? So what?It's 2021. IT infrastructure should no longer stand in the way. Software has devoured it a while ago, abstracting it away with cloud APIs, infrastructure as code, Kubeflow and so on. Yes, even on premises.The same is quickly happening for Machine Learning infrastructure. According to the Kaggle survey, 75% of respondents use cloud services, and over 45% use an Enterprise ML platform, with Amazon SageMaker, Databricks and Azure ML Studio taking the top 3 spots.With MLOps, software-defined infrastructure and platforms, it's never been easier to drag all these great ideas out of the sandbox, and to move them to production. To answer my original question, I'm pretty sure you need to hire more ML-savvy Software and DevOps engineers, not more Data Scientists. But deep down inside, you kind of knew that, right?Now, let's talk about Transformers.Transformers! Transformers! Transformers! (Ballmer style)Says the State of AI report: "The Transformer architecture has expanded far beyond NLP and is emerging as a general purpose architecture for ML". For example, recent models like Google's Vision Transformer, a convolution-free transformer architecture, and CoAtNet, which mixes transformers and convolution, have set new benchmarks for image classification on ImageNet, while requiring fewer compute resources for training.Transformers also do very well on audio (say, speech recognition), as well as on point clouds, a technique used to model 3D environments like autonomous driving scenes.The Kaggle survey echoes this rise of Transformers. Their usage keeps growing year over year, while RNNs, CNNs and Gradient Boosting algorithms are receding.On top of increased accuracy, Transformers also keep fulfilling the transfer learning promise, allowing teams to save on training time and compute costs, and to deliver business value quicker.With Transformers, the Machine Learning world is gradually moving from "Yeehaa!! Let's build and train our own Deep Learning model from scratch" to "Let's pick a proven off the shelf model, fine-tune it on our own data, and be home early for dinner."It's a Good Thing in so many ways. State of the art is constantly advancing, and hardly anyone can keep up with its relentless pace. Remember that Google Vision Transformer model I mentioned earlier? Would you like to test it here and now? With Hugging Face, it's the simplest thing.How about the latest zero-shot text generation models from the Big Science project?You can do the same with another 16,000+ models and 1,600+ datasets, with additional tools for inference, AutoNLP, latency optimization, and hardware acceleration. We can also help you get your project off the ground, from modeling to production.Our mission at Hugging Face is to make Machine Learning as friendly and as productive as possible, for beginners and experts alike. We believe in writing as little code as possible to train, optimize, and deploy models. We believe in built-in best practices. We believe in making infrastructure as transparent as possible. We believe that nothing beats high quality models in production, fast.Machine Learning as Code, right here, right now!A lot of you seem to agree. We have over 52,000 stars on Github. 
For the first year, Hugging Face is also featured in the Kaggle survey, with usage already over 10%.Thank you all. And yeah, we're just getting started.Interested in how Hugging Face can help your organization build and deploy production-grade Machine Learning solutions? Get in touch at julsimon@huggingface.co (no recruiters, no sales pitches, please).
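To illustrate the "pick a proven off-the-shelf model" workflow the post describes, here is a minimal sketch using the transformers pipeline API; the ViT checkpoint name and the image path are illustrative assumptions, not part of the original post:

from transformers import pipeline

# Zero-training inference with a pretrained Vision Transformer checkpoint.
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

# "cat.jpg" is a placeholder path to any local image.
for prediction in classifier("cat.jpg"):
    print(prediction["label"], round(prediction["score"], 3))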
https://huggingface.co/blog/fine-tune-clip-rsicd
Fine tuning CLIP with Remote Sensing (Satellite) images and captions
Arto, Dev Vidhani, Goutham, Mayank Bhaskar, Sujit Pal
October 13, 2021
Fine tuning CLIP with Remote Sensing (Satellite) images and captionsIn July this year, Hugging Face organized a Flax/JAX Community Week, and invited the community to submit projects to train Hugging Face transformers models in the areas of Natural Language Processing (NLP) and Computer Vision (CV).Participants used Tensor Processing Units (TPUs) with Flax and JAX. JAX is a linear algebra library (like numpy) that can do automatic differentiation (Autograd) and compile down to XLA, and Flax is a neural network library and ecosystem for JAX. TPU compute time was provided free by Google Cloud, who co-sponsored the event.Over the next two weeks, teams participated in lectures from Hugging Face and Google, trained one or more models using JAX/Flax, shared them with the community, and provided a Hugging Face Spaces demo showcasing the capabilities of their model. Approximately 100 teams participated in the event, and it resulted in 170 models and 36 demos.Our team, like probably many others, is a distributed one, spanning 12 time zones. Our common thread is that we all belong to the TWIML Slack Channel, where we came together based on a shared interest in Artificial Intelligence (AI) and Machine Learning (ML) topics. We fine-tuned the CLIP Network from OpenAI with satellite images and captions from the RSICD dataset. The CLIP network learns visual concepts by being trained with image and caption pairs in a self-supervised manner, by using text paired with images found across the Internet. During inference, the model can predict the most relevant image given a text description or the most relevant text description given an image. CLIP is powerful enough to be used in zero-shot manner on everyday images. However, we felt that satellite images were sufficiently different from everyday images that it would be useful to fine-tune CLIP with them. Our intuition turned out to be correct, as the evaluation results (described below) shows. In this post, we describe details of our training and evaluation process, and our plans for future work on this project.The goal of our project was to provide a useful service and demonstrate how to use CLIP for practical use cases. Our model can be used by applications to search through large collections of satellite images using textual queries. Such queries could describe the image in totality (for example, beach, mountain, airport, baseball field, etc) or search or mention specific geographic or man-made features within these images. CLIP can similarly be fine-tuned for other domains as well, as shown by the medclip-demo team for medical images.The ability to search through large collections of images using text queries is an immensely powerful feature, and can be used as much for social good as for malign purposes. Possible applications include national defense and anti-terrorism activities, the ability to spot and address effects of climate change before they become unmanageable, etc. Unfortunately, this power can also be misused, such as for military and police surveillance by authoritarian nation-states, so it does raise some ethical questions as well.You can read about the project on our project page, download our trained model to use for inference on your own data, or see it in action on our demo.TrainingDatasetWe fine-tuned the CLIP model primarily with the RSICD dataset. This dataset consists of about 10,000 images collected from Google Earth, Baidu Map, MapABC, and Tianditu. 
It is provided freely to the research community to advance remote sensing captioning via Exploring Models and Data for Remote Sensing Image Caption Generation (Lu et al., 2017). The images are (224, 224) RGB images at various resolutions, and each image has up to 5 captions associated with it.

Some examples of images from the RSICD dataset

In addition, we used the UCM dataset and the Sydney dataset for training. The UCM dataset is based on the UC Merced Land Use dataset. It consists of 2100 images belonging to 21 classes (100 images per class), and each image has 5 captions. The Sydney dataset contains images of Sydney, Australia from Google Earth. It contains 613 images belonging to 7 classes. Images are (500, 500) RGB, with 5 captions for each image. We used these additional datasets because we were not sure if the RSICD dataset would be large enough to fine-tune CLIP.

Model

Our model is just the fine-tuned version of the original CLIP model shown below. Inputs to the model are a batch of captions and a batch of images, passed through the CLIP text encoder and image encoder respectively. The training process uses contrastive learning to learn a joint embedding representation of images and captions. In this embedding space, images and their respective captions are pushed close together, as are similar images and similar captions. Conversely, images and captions for different images, or dissimilar images and captions, are likely to be pushed further apart.

CLIP Training and Inference (Image Credit: CLIP: Connecting Text and Images, https://openai.com/blog/clip/)

Data Augmentation

In order to regularize our dataset and prevent overfitting due to the size of the dataset, we used both image and text augmentation. Image augmentation was done inline using built-in transforms from PyTorch's Torchvision package. The transformations used were Random Cropping, Random Resizing and Cropping, Color Jitter, and Random Horizontal and Vertical Flipping. We augmented the text with backtranslation to generate captions for images with fewer than 5 unique captions per image. The Marian MT family of models from Hugging Face was used to translate the existing captions into French, Spanish, Italian, and Portuguese and back to English to fill out the captions for these images. As shown in the loss plots below, image augmentation reduced overfitting significantly, and text and image augmentation reduced overfitting even further.

Evaluation and training loss plots comparing (top) no augmentation vs image augmentation, and (bottom) image augmentation vs text+image augmentation

Evaluation

Metrics. A subset of the RSICD test set was used for evaluation. We found 30 categories of images in this subset. The evaluation was done by comparing each image with a set of 30 caption sentences of the form "An aerial photograph of {category}". The model produced a ranked list of the 30 captions, from most relevant to least relevant. Categories corresponding to captions with the top k scores (for k=1, 3, 5, and 10) were compared with the category provided via the image file name. The scores are averaged over the entire set of images used for evaluation and reported for various values of k, as shown below. The baseline model is the pre-trained openai/clip-vit-base-patch32 CLIP model.
This model was fine-tuned with captions and images from the RSICD dataset, which resulted in a significant performance boost, as shown below. Our best model was trained with image and text augmentation, with batch size 1024 (128 on each of the 8 TPU cores), and the Adam optimizer with learning rate 5e-6. We trained our second-best model with the same hyperparameters, except that we used the Adafactor optimizer with learning rate 1e-4. You can download either model from their model repos linked to in the table below.

Model-name | k=1 | k=3 | k=5 | k=10
baseline | 0.572 | 0.745 | 0.837 | 0.939
bs128x8-lr1e-4-augs/ckpt-2 | 0.819 | 0.950 | 0.974 | 0.994
bs128x8-lr1e-4-imgaugs/ckpt-2 | 0.812 | 0.942 | 0.970 | 0.991
bs128x8-lr1e-4-imgaugs-textaugs/ckpt-4 (2) | 0.843 | 0.958 | 0.977 | 0.993
bs128x8-lr5e-5-imgaugs-textaugs/ckpt-8 | 0.831 | 0.959 | 0.977 | 0.994
bs128x8-lr5e-5-imgaugs/ckpt-4 | 0.746 | 0.906 | 0.956 | 0.989
bs128x8-lr5e-5-imgaugs-textaugs-2/ckpt-4 | 0.811 | 0.945 | 0.972 | 0.993
bs128x8-lr5e-5-imgaugs-textaugs-3/ckpt-5 | 0.823 | 0.946 | 0.971 | 0.992
bs128x8-lr5e-5-wd02/ckpt-4 | 0.820 | 0.946 | 0.965 | 0.990
bs128x8-lr5e-6-adam/ckpt-1 (1) | 0.883 | 0.968 | 0.982 | 0.998
(1) our best model, (2) our second-best model

Demo

You can access the CLIP-RSICD Demo here. It uses our fine-tuned CLIP model to provide the following functionality: Text to Image search, Image to Image search, and Find text feature in image. The first two functionalities use the RSICD test set as their image corpus. The images are encoded using our best fine-tuned CLIP model and stored in an NMSLib index, which allows Approximate Nearest Neighbor based retrieval. For text-to-image and image-to-image search respectively, the query text or image is encoded with our model and matched against the image vectors in the corpus. For the third functionality, we divide the incoming image into patches and encode them, encode the queried text feature, match the text vector with each image patch vector, and return the probability of finding the feature in each patch.

Future Work

We are grateful that we have been given an opportunity to further refine our model. Some ideas we have for future work are as follows: construct a sequence-to-sequence model using a CLIP encoder and a GPT-3 decoder and train it for image captioning; fine-tune the model on more image-caption pairs from other datasets and investigate whether we can improve its performance; investigate how fine-tuning affects the performance of the model on non-RSICD image-caption pairs; investigate the capability of the fine-tuned model to classify outside the categories it has been fine-tuned on; and evaluate the model using other criteria such as image classification.
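A minimal sketch of the caption-ranking evaluation described above, using the CLIP classes from transformers; the category list is truncated and the image path is a placeholder, so this illustrates the protocol rather than reproducing the team's exact evaluation script:

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# One prompt per category, following the "An aerial photograph of {category}" template.
categories = ["airport", "beach", "baseball field"]  # truncated example list
captions = [f"An aerial photograph of {c}" for c in categories]

image = Image.open("rsicd_example.jpg")  # placeholder path to a test image

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image.squeeze(0)  # one score per caption

# Rank captions from most to least relevant; top-k accuracy compares the
# top-ranked categories against the category encoded in the image file name.
for idx in logits.argsort(descending=True):
    print(captions[idx], float(logits[idx]))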
https://huggingface.co/blog/streamlit-spaces
Hosting your Models and Datasets on Hugging Face Spaces using Streamlit
Merve Noyan
October 5, 2021
Showcase your Datasets and Models using Streamlit on Hugging Face Spaces

Streamlit allows you to visualize datasets and build demos of Machine Learning models in a neat way. In this blog post we will walk you through hosting models and datasets and serving your Streamlit applications in Hugging Face Spaces.

Building demos for your models

You can load any Hugging Face model and build cool UIs using Streamlit. In this particular example we will recreate "Write with Transformer" together. It's an application that lets you write anything using transformers like GPT-2 and XLNet. We will not dive deep into how the inference works. You only need to know that you need to specify some hyperparameter values for this particular application. Streamlit provides many components for you to easily implement custom applications. We will use some of them to receive the necessary hyperparameters inside the inference code. The .text_area component creates a nice area to input sentences to be completed. The Streamlit .sidebar method enables you to accept variables in a sidebar. The slider is used to take continuous values. Don't forget to give the slider a step, otherwise it will treat the values as integers. You can let the end user input integer values with number_input.

import streamlit as st

# adding the text that will show in the text box as default
default_value = "See how a modern neural network auto-completes your text 🤗 This site, built by the Hugging Face team, lets you write a whole document directly from your browser, and you can trigger the Transformer anywhere using the Tab key. It's like having a smart machine that completes your thoughts 😀 Get started by typing a custom snippet, check out the repository, or try one of the examples. Have fun!"

sent = st.text_area("Text", default_value, height=275)
max_length = st.sidebar.slider("Max Length", min_value=10, max_value=30)
temperature = st.sidebar.slider("Temperature", value=1.0, min_value=0.0, max_value=1.0, step=0.05)
top_k = st.sidebar.slider("Top-k", min_value=0, max_value=5, value=0)
top_p = st.sidebar.slider("Top-p", min_value=0.0, max_value=1.0, step=0.05, value=0.9)
num_return_sequences = st.sidebar.number_input("Number of Return Sequences", min_value=1, max_value=5, value=1, step=1)

The inference code returns the generated output; you can print it with a simple st.write.

st.write(generated_sequences[-1])

Here's what our replicated version looks like. You can check out the full code here.

Showcase your Datasets and Data Visualizations

Streamlit provides many components to help you visualize datasets. It works seamlessly with 🤗 Datasets, pandas, and visualization libraries such as matplotlib, seaborn and bokeh. Let's start by loading a dataset. A new feature in Datasets, called streaming, allows you to work immediately with very large datasets, eliminating the need to download all of the examples and load them into memory.

from datasets import load_dataset
import pandas as pd  # needed for the DataFrame below
import streamlit as st

dataset = load_dataset("merve/poetry", streaming=True)
df = pd.DataFrame.from_dict(dataset["train"])

If you have structured data like mine, you can simply use st.dataframe(df) to show your dataset. There are many Streamlit components to plot data interactively. One such component is st.bar_chart(), which I used to visualize the most used words in the poem contents.
st.write("Most appearing words including stopwords")st.bar_chart(words[0:50])If you'd like to use libraries like matplotlib, seaborn or bokeh, all you have to do is to put st.pyplot() at the end of your plotting script.st.write("Number of poems for each author")sns.catplot(x="author", data=df, kind="count", aspect = 4)plt.xticks(rotation=90)st.pyplot()You can see the interactive bar chart, dataframe component and hosted matplotlib and seaborn visualizations below. You can check out the code here.Hosting your Projects in Hugging Face SpacesYou can simply drag and drop your files as shown below. Note that you need to include your additional dependencies in the requirements.txt. Also note that the version of Streamlit you have on your local is the same. For seamless usage, refer to Spaces API reference. There are so many components and packages you can use to demonstrate your models, datasets, and visualizations. You can get started here.
https://huggingface.co/blog/gradio-spaces
Showcase Your Projects in Spaces using Gradio
Merve Noyan
October 5, 2021
It's so easy to demonstrate a Machine Learning project thanks to Gradio. In this blog post, we'll walk you through: the recent Gradio integration that helps you demo models from the Hub seamlessly with a few lines of code leveraging the Inference API, and how to use Hugging Face Spaces to host demos of your own models.

Hugging Face Hub Integration in Gradio

You can demonstrate your models in the Hub easily. You only need to define the Interface, which includes: the repository ID of the model you want to infer with, a description and title, and example inputs to guide your audience. After defining your Interface, just call .launch() and your demo will start running. You can do this in Colab, but if you want to share it with the community a great option is to use Spaces!

Spaces are a simple, free way to host your ML demo apps in Python. To do so, you can create a repository at https://huggingface.co/new-space and select Gradio as the SDK. Once done, you can create a file called app.py, copy the code below, and your app will be up and running in a few seconds!

import gradio as gr

description = "Story generation with GPT-2"
title = "Generate your own story"
examples = [["Adventurer is approached by a mysterious stranger in the tavern for a new quest."]]

interface = gr.Interface.load(
    "huggingface/pranavpsv/gpt2-genre-story-generator",
    description=description,
    examples=examples,
)
interface.launch()

You can play with the Story Generation model here.

Under the hood, Gradio calls the Inference API, which supports Transformers as well as other popular ML frameworks such as spaCy, SpeechBrain and Asteroid. This integration supports different types of models: image-to-text, speech-to-text, text-to-speech and more. You can check out this example BigGAN ImageNet text-to-image model here. The implementation is below.

import gradio as gr

description = "BigGAN text-to-image demo."
title = "BigGAN ImageNet"

interface = gr.Interface.load(
    "huggingface/osanseviero/BigGAN-deep-128",
    description=description,
    title=title,
    examples=[["american robin"]],
)
interface.launch()

Serving Custom Model Checkpoints with Gradio in Hugging Face Spaces

You can serve your models in Spaces even if the Inference API does not support your model. Just wrap your model inference in a Gradio Interface as described below and put it in Spaces.

Mix and Match Models!

Using Gradio Series, you can mix and match different models! Here, we've put a French-to-English translation model on top of the story generator and an English-to-French translation model at the end of the generator model to simply make a French story generator.

import gradio as gr
from gradio.mix import Series

description = "Generate your own D&D story!"
title = "French Story Generator using Opus MT and GPT-2"

translator_fr = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-fr-en")
story_gen = gr.Interface.load("huggingface/pranavpsv/gpt2-genre-story-generator")
translator_en = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-en-fr")

examples = [["L'aventurier est approché par un mystérieux étranger, pour une nouvelle quête."]]

Series(
    translator_fr, story_gen, translator_en,
    description=description,
    title=title,
    examples=examples,
    inputs=gr.inputs.Textbox(lines=10),
).launch()

You can check out the French Story Generator here.

Uploading your Models to Spaces

You can serve your demos in Hugging Face thanks to Spaces! To do this, simply create a new Space, and then drag and drop your demos or use Git. Easily build your first demo with Spaces here!
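For the "Serving Custom Model Checkpoints" case above, here is a minimal sketch of wrapping your own inference function in a Gradio Interface; the gpt2 checkpoint and the generation settings are illustrative assumptions:

import gradio as gr
from transformers import pipeline

# Any checkpoint you can run locally can be wrapped this way, even if the
# Inference API does not support it. "gpt2" is just an example model.
generator = pipeline("text-generation", model="gpt2")

def generate(prompt):
    # Run inference and return plain text for the output component.
    return generator(prompt, max_length=50)[0]["generated_text"]

gr.Interface(
    fn=generate,
    inputs=gr.inputs.Textbox(lines=5, label="Prompt"),
    outputs="text",
    title="Custom checkpoint demo",
).launch()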
https://huggingface.co/blog/summer-at-huggingface
Summer At Hugging Face 😎
Hugging Face
September 24, 2021
Summer is now officially over and these last few months have been quite busy at Hugging Face. From new features in the Hub to research and Open Source development, our team has been working hard to empower the community through open and collaborative technology. In this blog post you'll catch up on everything that happened at Hugging Face in June, July and August!This post covers a wide range of areas our team has been working on, so don't hesitate to skip to the parts that interest you the most 🤗New FeaturesCommunityOpen SourceSolutionsResearch New Features In the last few months, the Hub went from 10,000 public model repositories to over 16,000 models! Kudos to our community for sharing so many amazing models with the world. And beyond the numbers, we have a ton of cool new features to share with you! Spaces Beta (hf.co/spaces) Spaces is a simple and free solution to host Machine Learning demo applications directly on your user profile or your organization hf.co profile. We support two awesome SDKs that let you build cool apps easily in Python: Gradio and Streamlit. In a matter of minutes you can deploy an app and share it with the community! 🚀Spaces lets you set up secrets, permits custom requirements, and can even be managed directly from GitHub repos. You can sign up for the beta at hf.co/spaces. Here are some of our favorites!Create recipes with the help of Chef TransformerTranscribe speech to text with HuBERTDo segmentation in a video with the DINO modelUse Paint Transformer to make paintings from a given pictureOr you can just explore any of the over 100 existing Spaces! Share Some Love You can now like any model, dataset, or Space on http://huggingface.co, meaning you can share some love with the community ❤️. You can also keep an eye on who's liking what by clicking on the likes box 👀. Go ahead and like your own repos, we're not judging 😉. TensorBoard Integration In late June, we launched a TensorBoard integration for all our models. If there are TensorBoard traces in the repo, an automatic, free TensorBoard instance is launched for you. This works with both public and private repositories and for any library that has TensorBoard traces! Metrics In July, we added the ability to list evaluation metrics in model repos by adding them to their model card📈. If you add an evaluation metric under the model-index section of your model card, it will be displayed proudly in your model repo.If that wasn't enough, these metrics will be automatically linked to the corresponding Papers With Code leaderboard. That means as soon as you share your model on the Hub, you can compare your results side-by-side with others in the community. 💪Check out this repo as an example, paying close attention to model-index section of its model card to see how you can do this yourself and find the metrics in Papers with Code automatically. New Widgets The Hub has 18 widgets that allow users to try out models directly in the browser.With our latest integrations to Sentence Transformers, we also introduced two new widgets: feature extraction and sentence similarity. The latest audio classification widget enables many cool use cases: language identification, street sound detection 🚨, command recognition, speaker identification, and more! You can try this out with transformers and speechbrain models today! 🔊 (Beware, when you try some of the models, you might need to bark out loud)You can try our early demo of structured data classification with Scikit-learn. 
And finally, we also introduced new widgets for image-related models: text to image, image classification, and object detection. Try image classification with Google's ViT model here and object detection with Facebook AI's DETR model here! More Features That's not everything that has happened in the Hub. We've introduced new and improved documentation of the Hub. We also introduced two widely requested features: users can now transfer/rename repositories and directly upload new files to the Hub. Community Hugging Face Course In June, we launched the first part of our free online course! The course teaches you everything about the 🤗 Ecosystem: Transformers, Tokenizers, Datasets, Accelerate, and the Hub. You can also find links to the course lessons in the official documentation of our libraries. The live sessions for all chapters can be found on our YouTube channel. Stay tuned for the next part of the course which we'll be launching later this year! JAX/FLAX Sprint In July we hosted our biggest community event ever with almost 800 participants! In this event co-organized with the JAX/Flax and Google Cloud teams, compute-intensive NLP, Computer Vision, and Speech projects were made accessible to a wider audience of engineers and researchers by providing free TPUv3s. The participants created over 170 models, 22 datasets, and 38 Spaces demos 🤯. You can explore all the amazing demos and projects here.There were talks around JAX/Flax, Transformers, large-scale language modeling, and more! You can find all recordings here. We're really excited to share the work of the 3 winning teams!Dall-e mini. DALL·E mini is a model that generates images from any prompt you give! DALL·E mini is 27 times smaller than the original DALL·E and still has impressive results. DietNerf. DietNerf is a 3D neural view synthesis model designed for few-shot learning of 3D scene reconstruction using 2D views. This is the first Open Source implementation of the "Putting Nerf on a Diet" paper. CLIP RSIC. CLIP RSIC is a CLIP model fine-tuned on remote sensing image data to enable zero-shot satellite image classification and captioning. This project demonstrates how effective fine-tuned CLIP models can be for specialized domains. Apart from these very cool projects, we're excited about how these community events enable training large and multi-modal models for multiple languages. For example, we saw the first ever Open Source big LMs for some low-resource languages like Swahili, Polish and Marathi. Bonus On top of everything we just shared, our team has been doing lots of other things. Here are just some of them:📖 This 3-part video series shows the theory on how to train state-of-the-art sentence embedding models. We presented at PyTorch Community Voices and participated in a QA (video).Hugging Face has collaborated with NLP in Spanish and SpainAI in a Spanish course that teaches concepts and state-of-the art architectures as well as their applications through use cases.We presented at MLOps World Demo Days. Open Source New in Transformers Summer has been an exciting time for 🤗 Transformers! The library reached 50,000 stars, 30 million total downloads, and almost 1000 contributors! 🤩So what's new? JAX/Flax is now the 3rd supported framework with over 5000 models in the Hub! You can find actively maintained examples for different tasks such as text classification. We're also working hard on improving our TensorFlow support: all our examples have been reworked to be more robust, TensorFlow idiomatic, and clearer. 
This includes examples such as summarization, translation, and named entity recognition.You can now easily publish your model to the Hub, including automatically authored model cards, evaluation metrics, and TensorBoard instances. There is also increased support for exporting models to ONNX with the new transformers.onnx module. python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/The last 4 releases introduced many new cool models!DETR can do fast end-to-end object detection and image segmentation. Check out some of our community tutorials!ByT5 is the first tokenizer-free model in the Hub! You can find all available checkpoints here.CANINE is another tokenizer-free encoder-only model by Google AI, operating directly at the character level. You can find all (multilingual) checkpoints here.HuBERT shows exciting results for downstream audio tasks such as command classification and emotion recognition. Check the models here.LayoutLMv2 and LayoutXLM are two incredible models capable of parsing document images (like PDFs) by incorporating text, layout, and visual information. We built a Space demo so you can directly try it out! Demo notebooks can be found here.BEiT by Microsoft Research makes self-supervised Vision Transformers outperform supervised ones, using a clever pre-training objective inspired by BERT.RemBERT, a large multilingual Transformer that outperforms XLM-R (and mT5 with a similar number of parameters) in zero-shot transfer.Splinter which can be used for few-shot question answering. Given only 128 examples, Splinter is able to reach ~73% F1 on SQuAD, outperforming MLM-based models by 24 points!The Hub is now integrated into transformers, with the ability to push to the Hub configuration, model, and tokenizer files without leaving the Python runtime! The Trainer can now push directly to the Hub every time a checkpoint is saved: New in Datasets You can find 1400 public datasets in https://huggingface.co/datasets thanks to the awesome contributions from all our community. 💯The support for datasets keeps growing: it can be used in JAX, process parquet files, use remote files, and has wider support for other domains such as Automatic Speech Recognition and Image Classification.Users can also directly host and share their datasets to the community simply by uploading their data files in a repository on the Dataset Hub.What are the new datasets highlights? Microsoft CodeXGlue datasets for multiple coding tasks (code completion, generation, search, etc), huge datasets such as C4 and MC4, and many more such as RussianSuperGLUE and DISFL-QA. Welcoming new Libraries to the Hub Apart from having deep integration with transformers-based models, the Hub is also building great partnerships with Open Source ML libraries to provide free model hosting and versioning. We've been achieving this with our huggingface_hub Open-Source library as well as new Hub documentation. All spaCy canonical pipelines can now be found in the official spaCy organization, and any user can share their pipelines with a single command python -m spacy huggingface-hub. To read more about it, head to https://huggingface.co/blog/spacy. You can try all canonical spaCy models directly in the Hub in the demo Space!Another exciting integration is Sentence Transformers. You can read more about it in the blog announcement: you can find over 200 models in the Hub, easily share your models with the rest of the community and reuse models from the community.But that's not all! 
You can now find over 100 Adapter Transformers in the Hub and try out Speechbrain models with widgets directly in the browser for different tasks such as audio classification. If you're interested in our collaborations to integrate new ML libraries to the Hub, you can read more about them here.

Solutions

Coming soon: Infinity. Transformers latency down to 1ms? 🤯🤯🤯 We have been working on a really sleek solution to achieve unmatched efficiency for state-of-the-art Transformer models, for companies to deploy in their own infrastructure. Infinity comes as a single container and can be deployed in any production environment. It can achieve 1ms latency for BERT-like models on GPU and 4-10ms on CPU 🤯🤯🤯 Infinity meets the highest security requirements and can be integrated into your system without the need for internet access. You have control over all incoming and outgoing traffic. ⚠️ Join us for a live announcement and demo on Sep 28, where we will be showcasing Infinity for the first time in public!

NEW: Hardware Acceleration. Hugging Face is partnering with leading AI hardware accelerators such as Intel, Qualcomm and Graphcore to make state-of-the-art production performance accessible and extend training capabilities on SOTA hardware. As the first step in this journey, we introduced a new Open Source library: 🤗 Optimum - the ML optimization toolkit for production performance 🏎. Learn more in this blog post.

NEW: Inference on SageMaker. We launched a new integration with AWS to make it easier than ever to deploy 🤗 Transformers in SageMaker 🔥. Pick up the code snippet right from the 🤗 Hub model page! Learn more about how to leverage transformers in SageMaker in our docs or check out these video tutorials. For questions, reach out to us on the forum: https://discuss.huggingface.co/c/sagemaker/17

NEW: AutoNLP In Your Browser. We released a new AutoNLP experience: a web interface to train models straight from your browser! Now all it takes is a few clicks to train, evaluate and deploy 🤗 Transformers models on your own data. Try it out - NO CODE needed!

Inference API Webinar. We hosted a live webinar to show how to add Machine Learning capabilities with just a few lines of code. We also built a VSCode extension that leverages the Hugging Face Inference API to generate comments describing Python code.

Hugging Face + Zapier Demo. 20,000+ Machine Learning models connected to 3,000+ apps? 🤯 By leveraging the Inference API, you can now easily connect models right into apps like Gmail, Slack, Twitter, and more. In this demo video, we created a zap that uses this code snippet to analyze your Twitter mentions and alerts you on Slack about the negative ones.

Hugging Face + Google Sheets Demo. With the Inference API, you can easily use zero-shot classification right in your spreadsheets in Google Sheets. Just add this script in Tools -> Script Editor.

Few-shot learning in practice. We wrote a blog post that explains what Few-Shot Learning is and explores how GPT-Neo and the 🤗 Accelerated Inference API can be used to generate your own predictions.

Expert Acceleration Program. Check out the brand new home for the Expert Acceleration Program; you can now get direct, premium support from our Machine Learning experts and build better ML solutions, faster.

Research

At BigScience we held our first live event (since the kick-off) in July: BigScience Episode #1.
Our second event BigScience Episode #2 was held on September 20th, 2021 with technical talks and updates by the BigScience working groups and invited talks by Jade Abbott (Masakhane), Percy Liang (Stanford CRFM), Stella Biderman (EleutherAI) and more. We have completed the first large-scale training on Jean Zay, a 13B English only decoder model (you can find the details here), and we're currently deciding on the architecture of the second model. The organization working group has filed the application for the second half of the compute budget: Jean Zay V100 : 2,500,000 GPU hours. 🚀 In June, we shared the result of our collaboration with the Yandex research team: DeDLOC, a method to collaboratively train your large neural networks, i.e. without using an HPC cluster, but with various accessible resources such as Google Colaboratory or Kaggle notebooks, personal computers or preemptible VMs. Thanks to this method, we were able to train sahajBERT, a Bengali language model, with 40 volunteers! And our model competes with the state of the art, and even is the best for the downstream task of classification on Soham News Article Classification dataset. You can read more about it in this blog post. This is a fascinating line of research because it would make model pre-training much more accessible (financially speaking)!In June our paper, How Many Data Points is a Prompt Worth?, got a Best Paper award at NAACL! In it, we reconcile and compare traditional and prompting approaches to adapt pre-trained models, finding that human-written prompts are worth up to thousands of supervised data points on new tasks. You can also read its blog post.We're looking forward to EMNLP this year where we have four accepted papers!Our paper "Datasets: A Community Library for Natural Language Processing" documents the Hugging Face Datasets project that has over 300 contributors. This community project gives easy access to hundreds of datasets to researchers. It has facilitated new use cases of cross-dataset NLP, and has advanced features for tasks like indexing and streaming large datasets.Our collaboration with researchers from TU Darmstadt lead to another paper accepted at the conference ("Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning"). In this paper, we show that prompt-based fine-tuned language models (which achieve strong performance in few-shot setups) still suffer from learning surface heuristics (sometimes called dataset biases), a pitfall that zero-shot models don't exhibit.Our submission "Block Pruning For Faster Transformers" has also been accepted as a long paper. In this paper, we show how to use block sparsity to obtain both fast and small Transformer models. Our experiments yield models which are 2.4x faster and 74% smaller than BERT on SQuAD. Last words 😎 🔥 Summer was fun! So many things have happened! We hope you enjoyed reading this blog post and looking forward to share the new projects we're working on. See you in the winter! ❄️
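The post above mentions that the Trainer can now push to the Hub every time a checkpoint is saved; here is a minimal sketch of that setup, where the tiny toy dataset, model name, and repo name are placeholders and pushing requires being logged in with huggingface-cli login:

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-cased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy dataset, for illustration only.
raw = Dataset.from_dict({"text": ["great", "terrible"], "label": [1, 0]})
train_dataset = raw.map(lambda x: tokenizer(x["text"], truncation=True))

args = TrainingArguments(
    output_dir="my-finetuned-model",  # also used as the Hub repo name
    push_to_hub=True,                 # upload whenever a checkpoint is saved
    save_strategy="epoch",
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
trainer.push_to_hub()  # push the final model and an auto-generated model card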
https://huggingface.co/blog/graphcore
Hugging Face and Graphcore partner for IPU-optimized Transformers
Sally Doherty
September 14, 2021
Graphcore and Hugging Face are two companies with a common goal – to make it easier for innovators to harness the power of machine intelligence. Hugging Face’s Hardware Partner Program will allow developers using Graphcore systems to deploy state-of-the-art Transformer models, optimised for our Intelligence Processing Unit (IPU), at production scale, with minimum coding complexity.What is an Intelligence Processing Unit?IPUs are the processors that power Graphcore’s IPU-POD datacenter compute systems. This new type of processor is designed to support the very specific computational requirements of AI and machine learning. Characteristics such as fine-grained parallelism, low precision arithmetic, and the ability to handle sparsity have been built into our silicon.Instead of adopting a SIMD/SIMT architecture like GPUs, Graphcore’s IPU uses a massively parallel, MIMD architecture, with ultra-high bandwidth memory placed adjacent to the processor cores, right on the silicon die.This design delivers high performance and new levels of efficiency, whether running today’s most popular models, such as BERT and EfficientNet, or exploring next-generation AI applications.Software plays a vital role in unlocking the IPU’s capabilities. Our Poplar SDK has been co-designed with the processor since Graphcore’s inception. Today it fully integrates with standard machine learning frameworks, including PyTorch and TensorFlow, as well as orchestration and deployment tools such as Docker and Kubernetes.Making Poplar compatible with these widely used, third-party systems allows developers to easily port their models from their other compute platforms and start taking advantage of the IPU’s advanced AI capabilities.Optimising Transformers for ProductionTransformers have completely transformed (pun intended) the field of AI. Models such as BERT are widely used by Graphcore customers in a huge array of applications, across NLP and beyond. These multi-talented models can perform feature extraction, text generation, sentiment analysis, translation and many more functions.Already, Hugging Face plays host to hundreds of Transformers, from the French-language CamemBERT to ViT which applies lessons learned in NLP to computer vision. The Transformers library is downloaded an average of 2 million times every month and demand is growing.With a user base of more than 50,000 developers – Hugging Face has seen the fastest ever adoption of an open-source project.Now, with its Hardware Partner Program, Hugging Face is connecting the ultimate Transformer toolset with today's most advanced AI hardware.Using Optimum, a new open-source library and toolkit, developers will be able to access hardware-optimized models certified by Hugging Face.These are being developed in a collaboration between Graphcore and Hugging Face, with the first IPU-optimized models appearing on Optimum later this year. Ultimately, these will cover a wide range of applications, from vision and speech to translation and text generation.Hugging Face CEO Clément Delangue said: “Developers all want access to the latest and greatest hardware – like the Graphcore IPU, but there’s always that question of whether they’ll have to learn new code or processes. With Optimum and the Hugging Face Hardware Program, that’s just not an issue. 
It’s essentially plug-and-play".

SOTA Models meet SOTA Hardware

Prior to the announcement of the Hugging Face Partnership, we had demonstrated the power of the IPU to accelerate state-of-the-art Transformer models with a special Graphcore-optimised implementation of Hugging Face BERT using PyTorch. Full details of this example can be found in the Graphcore blog post BERT-Large training on the IPU explained. The dramatic benchmark results for BERT running on a Graphcore system, compared with a comparable GPU-based system, are surely a tantalising prospect for anyone currently running the popular NLP model on something other than the IPU. This type of acceleration can be game-changing for machine learning researchers and engineers, winning them back valuable hours of training time and allowing them many more iterations when developing new models. Now Graphcore users will be able to unlock such performance advantages through the Hugging Face platform, with its elegant simplicity and superlative range of models. Together, Hugging Face and Graphcore are helping even more people to access the power of Transformers and accelerate the AI revolution. Visit the Hugging Face Hardware Partner portal to learn more about Graphcore IPU systems and how to gain access.
https://huggingface.co/blog/hardware-partners-program
Introducing 🤗 Optimum: The Optimization Toolkit for Transformers at Scale
Morgan Funtowicz, Ella Charlaix, Michael Benayoun, Jeff Boudier
September 14, 2021
This post is the first step of a journey for Hugging Face to democratizestate-of-the-art Machine Learning production performance.To get there, we will work hand in hand with ourHardware Partners, as we have with Intel below.Join us in this journey, and follow Optimum, our new open source library!Why 🤗 Optimum?🤯 Scaling Transformers is hardWhat do Tesla, Google, Microsoft and Facebook all have in common?Well many things, but one of them is they all run billions of Transformer model predictionsevery day. Transformers for AutoPilot to drive your Tesla (lucky you!),for Gmail to complete your sentences,for Facebook to translate your posts on the fly,for Bing to answer your natural language queries.Transformers have brought a step change improvementin the accuracy of Machine Learning models, have conquered NLP and are now expandingto other modalities starting with Speechand Vision.But taking these massive models into production, and making them run fast at scale is a huge challengefor any Machine Learning Engineering team.What if you don’t have hundreds of highly skilled Machine Learning Engineers on payroll like the above companies?Through Optimum, our new open source library, we aim to build the definitive toolkit for Transformers production performance,and enable maximum efficiency to train and run models on specific hardware.🏭 Optimum puts Transformers to workTo get optimal performance training and serving models, the model acceleration techniques need to be specifically compatible with the targeted hardware.Each hardware platform offers specific software tooling,features and knobs that can have a huge impact on performance.Similarly, to take advantage of advanced model acceleration techniques like sparsity and quantization, optimized kernels need to be compatible with the operators on silicon,and specific to the neural network graph derived from the model architecture.Diving into this 3-dimensional compatibility matrix and how to use model acceleration libraries is daunting work,which few Machine Learning Engineers have experience on.Optimum aims to make this work easy, providing performance optimization tools targeting efficient AI hardware,built in collaboration with our Hardware Partners, and turn Machine Learning Engineers into ML Optimization wizards.With the Transformers library, we made it easy for researchers and engineers to use state-of-the-art models,abstracting away the complexity of frameworks, architectures and pipelines.With the Optimum library, we are making it easy for engineers to leverage all the available hardware features at their disposal,abstracting away the complexity of model acceleration on hardware platforms.🤗 Optimum in practice: how to quantize a model for Intel Xeon CPU🤔 Why quantization is important but tricky to get rightPre-trained language models such as BERT have achieved state-of-the-art results on a wide range of natural language processing tasks,other Transformer based models such as ViT and Speech2Text have achieved state-of-the-art results on computer vision and speech tasks respectively:transformers are everywhere in the Machine Learning world and are here to stay.However, putting transformer-based models into production can be tricky and expensive as they need a lot of compute power to work.To solve this many techniques exist, the most popular being quantization.Unfortunately, in most cases quantizing a model requires a lot of work, for many reasons:The model needs to be edited: some ops need to be replaced by their quantized counterparts, new ops 
need to be inserted (quantization and dequantization nodes),and others need to be adapted to the fact that weights and activations will be quantized.This part can be very time-consuming because frameworks such as PyTorch work in eager mode, meaning that the changes mentioned above need to be added to the model implementation itself.PyTorch now provides a tool called torch.fx that allows you to trace and transform your model without having to actually change the model implementation, but it is tricky to use when tracing is not supported for your model out of the box.On top of the actual editing, it is also necessary to find which parts of the model need to be edited,which ops have an available quantized kernel counterpart and which ops don't, and so on.Once the model has been edited, there are many parameters to play with to find the best quantization settings:Which kind of observers should I use for range calibration?Which quantization scheme should I use?Which quantization related data types (int8, uint8, int16) are supported on my target device?Balance the trade-off between quantization and an acceptable accuracy loss.Export the quantized model for the target device.Although PyTorch and TensorFlow made great progress in making things easy for quantization,the complexities of transformer based models makes it hard to use the provided tools out of the box and get something working without putting up a ton of effort.💡 How Intel is solving quantization and more with Neural CompressorIntel® Neural Compressor (formerly referred to as Low Precision Optimization Tool or LPOT) is an open-source python library designed to help users deploy low-precision inference solutions.The latter applies low-precision recipes for deep-learning models to achieve optimal product objectives,such as inference performance and memory usage, with expected performance criteria.Neural Compressor supports post-training quantization, quantization-aware training and dynamic quantization.In order to specify the quantization approach, objective and performance criteria, the user must provide a configuration yaml file specifying the tuning parameters.The configuration file can either be hosted on the Hugging Face's Model Hub or can be given through a local directory path.🔥 How to easily quantize Transformers for Intel Xeon CPUs with OptimumFollow 🤗 Optimum: a journey to democratize ML production performance⚡️State of the Art HardwareOptimum will focus on achieving optimal production performance on dedicated hardware, where software and hardware acceleration techniques can be applied for maximum efficiency.We will work hand in hand with our Hardware Partners to enable, test and maintain acceleration, and deliver it in an easy and accessible way through Optimum, as we did with Intel and Neural Compressor.We will soon announce new Hardware Partners who have joined us on our journey toward Machine Learning efficiency.🔮 State-of-the-Art ModelsThe collaboration with our Hardware Partners will yield hardware-specific optimized model configurations and artifacts,which we will make available to the AI community via the Hugging Face Model Hub.We hope that Optimum and hardware-optimized models will accelerate the adoption of efficiency in production workloads,which represent most of the aggregate energy spent on Machine Learning.And most of all, we hope that Optimum will accelerate the adoption of Transformers at scale, not just for the biggest tech companies, but for all of us.🌟 A journey of collaboration: join us, follow our 
progress Every journey starts with a first step, and ours was the public release of Optimum. Join us and make your first step by giving the library a Star, so you can follow along as we introduce new supported hardware, acceleration techniques and optimized models. If you would like to see new hardware and features supported in Optimum, or you are interested in joining us to work at the intersection of software and hardware, please reach out to us at hardware@huggingface.co
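To make the "quantization is tricky" discussion above a bit more concrete, here is a minimal sketch of the most basic baseline: PyTorch dynamic quantization applied to a Transformers checkpoint. Note that this is not the Optimum or Neural Compressor workflow described in the post; it is only an illustration of the kind of operator swapping (int8 Linear layers) that those tools automate and tune, and the checkpoint name is just an example.

# Illustrative baseline only: PyTorch dynamic quantization of a Transformers model.
# This is NOT the Optimum / Neural Compressor flow described above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Replace all nn.Linear modules with int8 dynamically quantized versions.
# No calibration data or manual graph edits are needed for this simple scheme.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Quantization makes CPU inference cheaper.", return_tensors="pt")
with torch.no_grad():
    logits = quantized_model(**inputs).logits
print(logits)

Dynamic quantization avoids most of the pain points listed above precisely because it is the least flexible scheme; static and aware-training approaches, which Neural Compressor tunes for you, are where the observer, scheme and accuracy-loss trade-offs really come into play.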
https://huggingface.co/blog/collaborative-training
Deep Learning over the Internet: Training Language Models Collaboratively
Max Ryabinin, Lucile Saulnier
July 15, 2021
Modern language models often require a significant amount of compute for pretraining, making it impossible to obtain them without access to tens and hundreds of GPUs or TPUs. Though in theory it might be possible to combine the resources of multiple individuals, in practice, such distributed training methods have previously seen limited success because connection speeds over the Internet are way slower than in high-performance GPU supercomputers.In this blog post, we describe DeDLOC — a new method for collaborative distributed training that can adapt itself to the network and hardware constraints of participants. We show that it can be successfully applied in real-world scenarios by pretraining sahajBERT, a model for the Bengali language, with 40 volunteers. On downstream tasks in Bengali, this model achieves nearly state-of-the-art quality with results comparable to much larger models that used hundreds of high-tier accelerators.Distributed Deep Learning in Open CollaborationsWhy should we do it?These days, many highest-quality NLP systems are based on large pretrained Transformers. In general, their quality improves with size: you can achieve unparalleled results in natural language understanding and generation by scaling up the parameter count and leveraging the abundance of unlabeled text data.Unfortunately, we use these pretrained models not only because it's convenient. The hardware resources for training Transformers on large datasets often exceed anything affordable to a single person and even most commercial or research organizations. Take, for example, BERT: its training was estimated to cost about $7,000, and for the largest models like GPT-3, this number can be as high as $12 million! This resource limitation might seem obvious and inevitable, but is there really no alternative to using pretrained models for the broader ML community?However, there might be a way out of this situation: to come up with a solution, we only need to take a look around. It might be the case that the computational resources we're looking for are already there; for example, many of us have powerful computers with gaming or workstation GPUs at home. You might've already guessed that we're going to join their power similarly to Folding@home, Rosetta@home, Leela Chess Zero or different BOINC projects that leverage volunteer computing, but the approach is even more general. For instance, several laboratories can join their smaller clusters to utilize all the available resources, and some might want to join the experiment using inexpensive cloud instances.To a skeptical mind, it might seem that we're missing a key factor here: data transfer in distributed DL is often a bottleneck, since we need to aggregate the gradients from multiple workers. Indeed, any naïve approach to distributed training over the Internet is bound to fail, as most participants don't have gigabit connections and might disconnect from the network at any time. So how on Earth can you train anything with a household data plan? :)As a solution to this problem, we propose a new training algorithm, called Distributed Deep Learning in Open Collaborations (or DeDLOC), which is described in detail in our recently released preprint. Now, let’s find out what are the core ideas behind this algorithm!Training with volunteersIn its most frequently used version, distributed training with multiple GPUs is pretty straightforward. 
Recall that when doing deep learning, you usually compute gradients of your loss function averaged across many examples in a batch of training data. In case of data-parallel distributed DL, you simply split the data across multiple workers, compute gradients separately, and then average them once the local batches are processed. When the average gradient is computed on all workers, we adjust the model weights with the optimizer and continue training our model. You can see an illustration of different tasks that are executed below.Typical machine learning tasks executed by peers in distributed training, possibly with a separation of rolesOften, to reduce the amount of synchronization and to stabilize the learning process, we can accumulate the gradients for N batches before averaging, which is equivalent to increasing the actual batch size N times. This approach, combined with the observation that most state-of-the-art language models use large batches, led us to a simple idea: let's accumulate one very large batch across all volunteer devices before each optimizer step! Along with complete equivalence to regular distributed training and easy scalability, this method also has the benefit of built-in fault tolerance, which we illustrate below.Let's consider a couple of potential failure cases that we might encounter throughout a collaborative experiment. By far, the most frequent scenario is that one or several peers disconnect from the training procedure: they might have an unstable connection or simply want to use their GPUs for something else. In this case, we only suffer a minor setback of training: the contribution of these peers gets deducted from the currently accumulated batch size, but other participants will compensate for that with their gradients. Also, if more peers join, the target batch size will simply be reached faster, and our training procedure will naturally speed up. You can see a demonstration of this in the video:Adaptive averagingNow that we have discussed the overall training procedure, there remains one more question: how do we actually aggregate the gradients of participants? Most home computers cannot easily accept incoming connections, and the download speed might also become a constraint.Since we rely on volunteer hardware for experiments, a central server is not really a viable option, as it will quickly face overload when scaling to tens of clients and hundreds of millions of parameters. Most data-parallel training runs today don't use this strategy anyway; instead, they rely on All-Reduce — an efficient all-to-all communication primitive. Thanks to clever algorithmic optimizations, each node can compute the global average without sending the entire local gradient to every peer.Because All-Reduce is decentralized, it seems like a good choice; however, we still need to take the diversity of hardware and network setups into account. For example, some volunteers might join from computers that have slow network but powerful GPUs, some might have better connectivity only to a subset of other peers, and some may be firewalled from incoming connections.It turns out we can actually come up with an optimal data transfer strategy on the fly by leveraging this information about performance! On a high level, we split the entire gradient vector into parts depending on the Internet speed of each peer: those with the fastest connection aggregate the largest parts. 
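Before moving on to how peers behind firewalls participate, here is a minimal sketch of the plain accumulate-then-average step that the adaptive strategy above generalizes. It uses vanilla torch.distributed and assumes a process group has already been initialized; it is only meant to illustrate the idea, not the fault-tolerant, bandwidth-aware averaging that DeDLOC actually implements.

# Minimal sketch: accumulate gradients locally, then average them across workers
# with All-Reduce before one optimizer step. Assumes dist.init_process_group()
# was called beforehand; this is NOT the adaptive DeDLOC averaging itself.
import torch
import torch.distributed as dist

def training_step(model, optimizer, loss_fn, micro_batches, accumulation_steps):
    optimizer.zero_grad()
    # 1) Accumulate gradients over several local micro-batches (larger effective batch).
    for inputs, targets in micro_batches[:accumulation_steps]:
        loss = loss_fn(model(inputs), targets) / accumulation_steps
        loss.backward()
    # 2) Average gradients across all peers with All-Reduce.
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size
    # 3) Take one optimizer step on the globally averaged gradient.
    optimizer.step()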
Also, if some nodes do not accept incoming connections, they simply send their data for aggregation but do not compute the average themselves. Depending on the conditions, this adaptive algorithm can recover well-known distributed DL algorithms and improve on them with a hybrid strategy, as demonstrated below.Examples of different averaging strategies with the adaptive algorithm.💡 The core techniques for decentralized training are available in Hivemind.Check out the repo and learn how to use this library in your own projects!sahajBERTAs always, having a well-designed algorithmic framework doesn't mean that it will work as intended in practice, because some assumptions may not hold true in actual training runs. To verify the competitive performance of this technology and to showcase its potential, we organized a special collaborative event to pretrain a masked language model for the Bengali language. Even though it is the fifth most spoken native language in the world, it has very few masked language models openly available, which emphasizes the importance of tools that can empower the community, unlocking a plethora of opportunities in the field.We conducted this experiment with real volunteers from the Neuropark community and used openly available datasets (OSCAR and Wikipedia), because we wanted to have a fully reproducible example that might serve as an inspiration for other groups. Below, we describe the detailed setup of our training run and demonstrate its results.ArchitectureFor our experiment, we chose ALBERT (A Lite BERT) — a model for language representations that is pretrained with Masked Language Modeling (MLM) and Sentence Order Prediction (SOP) as objectives. We use this architecture because weight sharing makes it very parameter-efficient: for example, ALBERT-large has ~18M trainable parameters and performs comparably to BERT-base with ~108M weights on the GLUE benchmark. It means that there is less data to exchange between the peers, which is crucial in our setup, as it significantly speeds up each training iteration.💡 Want to know more about ALBERT?PaperTransformers docTokenizerThe first brick of our model is called a tokenizer and takes care of transforming raw text into vocabulary indices. Because we are training a model for Bengali, which is not very similar to English, we need to implement language-specific preprocessing as a part of our tokenizer. We can view it as a sequence of operations:Normalization: includes all preprocessing operations on raw text data. This was the step at which we have made the most changes, because removing certain details can either change the meaning of the text or leave it the same, depending on the language. For example, the standard ALBERT normalizer removes the accents, while for the Bengali language, we need to keep them, because they contain information about the vowels. As a result, we use the following operations: NMT normalization, NFKC normalization, removal of multiple spaces, homogenization of recurring Unicode characters in the Bengali language, and lowercasing.Pretokenization describes rules for splitting the input (for example, by whitespace) to enforce specific token boundaries. As in the original work, we have chosen to keep the whitespace out of the tokens. Therefore, to distinguish the words from each other and not to have multiple single-space tokens, each token corresponding to the beginning of a word starts with a special character “_” (U+2581). 
In addition, we isolated all punctuation and digits from other characters to condense our vocabulary.Tokenizer modeling: It is at this level that the text is mapped into a sequence of elements of a vocabulary. There are several algorithms for this, such as Byte-Pair Encoding (BPE) or Unigram, and most of them need to build the vocabulary from a text corpus. Following the setup of ALBERT, we used the Unigram Language Model approach, training a vocabulary of 32k tokens on the deduplicated Bengali part of the OSCAR dataset.Post-processing: After tokenization, we might want to add several special tokens required by the architecture, such as starting the sequence with a special token [CLS] or separating two segments with a special token [SEP]. Since our main architecture is the same as the original ALBERT, we keep the same post-processing: specifically, we add a [CLS] token at the beginning of each example and a [SEP] token both between two segments and at the end.💡 Read more information about each component inTokenizers docYou can reuse our tokenizer by running the following code:from transformers import AutoTokenizertokenizer = AutoTokenizer.from_pretrained("neuropark/sahajBERT")DatasetThe last thing we need to cover is the training dataset. As you probably know, the great strength of pretrained models like BERT or ALBERT is that you don't need an annotated dataset, but just a lot of texts. To train sahajBERT, we used the Bengali Wikipedia dump from 03/20/2021 and the Bengali subset of OSCAR (600MB + 6GB of text). These two datasets can easily be downloaded from the HF Hub.However, loading an entire dataset requires time and storage — two things that our peers do not necessarily have. To make the most of the resources provided by the participants, we have implemented dataset streaming, which allows them to train the model nearly as soon as they join the network. Specifically, the examples in the dataset are downloaded and transformed in parallel to the training. We can also shuffle the dataset so that our peers have little chance to process the same examples at the same time. As the dataset is not downloaded and preprocessed in advance, the transformations needed to go from plain text to a training example (shown in the figure below) are done on the fly.From a raw sample to a training sampleThe dataset streaming mode is available from version v1.9 of the 🤗 datasets library, so you can use it right now as follows:from datasets import load_datasetoscar_dataset = load_dataset("oscar", name="unshuffled_deduplicated_bn", streaming=True)💡 Learn more about loading datasets in streaming mode in the documentationCollaborative eventThe sahajBERT collaborative training event took place from May 12 to May 21. The event brought together 40 participants, 30 of whom were Bengali-speaking volunteers, and 10 were volunteers from one of the authors' organizations. These 40 volunteers joined the Neuropark Discord channel to receive all information regarding the event and participate in discussions. To join the experiment, volunteers were asked to:Send their username to the moderators to be allowlisted;Open the provided notebook locally, on Google Colaboratory, or on Kaggle;Run one code cell and fill in their Hugging Face credentials when requested;Watch the training loss decrease on the shared dashboards!For security purposes, we set up an authorization system so that only members of the Neuropark community could train the model. 
Sparing you the technical details, our authorization protocol allows us to guarantee that every participant is in the allowlist and to acknowledge the individual contribution of each peer. In the following figure, you can see the activity of each volunteer. Over the experiment, the volunteers logged in 600 different sessions. Participants regularly launched multiple runs in parallel, and many of them spread out the runs they launched over time. The runs of individual participants lasted 4 hours on average, and the maximum length was 21 hours. You can read more about the participation statistics in the paper. Chart showing participants of the sahajBERT experiment. Circle radius is relative to the total number of processed batches, the circle is greyed if the participant is not active. Every purple square represents an active device, darker color corresponds to higher performance. Along with the resources provided by participants, we also used 16 preemptible (cheap but frequently interrupted) single-GPU T4 cloud instances to ensure the stability of the run. The cumulative runtime for the experiment was 234 days, and in the figure below you can see parts of the loss curve that each peer contributed to! The final model was uploaded to the Model Hub, so you can download and play with it if you want to: https://hf.co/neuropark/sahajBERT Evaluation To evaluate the performance of sahajBERT, we finetuned it on two downstream tasks in Bengali: Named entity recognition (NER) on the Bengali split of WikiANN. The goal of this task is to classify each token in the input text into one of the following categories: person, organization, location, or none of them. News Category Classification (NCC) on the Soham articles dataset from IndicGLUE. The goal of this task is to predict the category to which the input text belongs. We evaluated it during training on the NER task to check that everything was going well; as you can see on the following plot, this was indeed the case! Evaluation metrics of fine-tuned models on the NER task from different checkpoints of pre-trained models. At the end of training, we compared sahajBERT with three other pretrained language models: XLM-R Large, IndicBert, and bnRoBERTa. In the table below, you can see that our model has results comparable to the best Bengali language models available on HF Hub, even though our model has only ~18M trained parameters, while, for instance, XLM-R (a strong multilingual baseline), has ~559M parameters and was trained on several hundred V100 GPUs.
Model | NER F1 (mean ± std) | NCC Accuracy (mean ± std)
sahajBERT | 95.45 ± 0.53 | 91.97 ± 0.47
XLM-R-large | 96.48 ± 0.22 | 90.05 ± 0.38
IndicBert | 92.52 ± 0.45 | 74.46 ± 1.91
bnRoBERTa | 82.32 ± 0.67 | 80.94 ± 0.45
These models are available on the Hub as well.
You can test them directly by playing with the Hosted Inference API widget on their Model Cards or by loading them directly in your Python code.sahajBERT-NERModel card: https://hf.co/neuropark/sahajBERT-NERfrom transformers import (AlbertForTokenClassification,TokenClassificationPipeline,PreTrainedTokenizerFast,)# Initialize tokenizertokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NER")# Initialize modelmodel = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT-NER")# Initialize pipelinepipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model)raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change meoutput = pipeline(raw_text)sahajBERT-NCCModel card: https://hf.co/neuropark/sahajBERT-NERfrom transformers import (AlbertForSequenceClassification,TextClassificationPipeline,PreTrainedTokenizerFast,)# Initialize tokenizertokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NCC")# Initialize modelmodel = AlbertForSequenceClassification.from_pretrained("neuropark/sahajBERT-NCC")# Initialize pipelinepipeline = TextClassificationPipeline(tokenizer=tokenizer, model=model)raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change meoutput = pipeline(raw_text)ConclusionIn this blog post, we have discussed the method that can enable collaborative pretraining of neural networks with sahajBERT as the first truly successful example of applying it to a real-world problem.What does this all mean for the broader ML community? First, it is now possible to run large-scale distributed pretraining with your friends, and we hope to see a lot of cool new models that were previously less feasible to obtain. Also, our result might be important for multilingual NLP, since now the community for any language can train their own models without the need for significant computational resources concentrated in one place.AcknowledgementsThe DeDLOC paper and sahajBERT training experiment were created by Michael Diskin, Alexey Bukhtiyarov, Max Ryabinin, Lucile Saulnier, Quentin Lhoest, Anton Sinitsin, Dmitry Popov, Dmitry Pyrkin, Maxim Kashirin, Alexander Borzunov, Albert Villanova del Moral, Denis Mazur, Ilia Kobelev, Yacine Jernite, Thomas Wolf, and Gennady Pekhimenko. This project is the result of a collaboration betweenHugging Face, Yandex Research, HSE University, MIPT, University of Toronto and Vector Institute.In addition, we would like to thank Stas Bekman, Dmitry Abulkhanov, Roman Zhytar, Alexander Ploshkin, Vsevolod Plokhotnyuk and Roman Kail for their invaluable help with building the training infrastructure. Also, we thank Abhishek Thakur for helping with downstream evaluation and Tanmoy Sarkar with Omar Sanseviero, who helped us organize the collaborative experiment and gave regular status updates to the participants over the course of the training run.Below, you can see all participants of the collaborative experiment:References"Distributed Deep Learning in Open Collaborations", ArXivCode for sahajBERT experiments in the DeDLOC repository.
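If you just want a quick sanity check of the pretrained sahajBERT checkpoint itself, rather than the fine-tuned NER and NCC models shown above, one plausible way is a fill-mask pipeline. This is a hedged sketch: it assumes the Hub repository exposes a masked-language-modeling head and the standard ALBERT-style [MASK] token, and the Bengali sentence is only an illustrative input.

# Quick, illustrative sanity check of the pretrained checkpoint via masked-token prediction.
# Assumes the neuropark/sahajBERT repo ships an MLM head (the usual setup for ALBERT MLM checkpoints).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="neuropark/sahajBERT")
# Replace [MASK] in any Bengali sentence of your choice.
predictions = fill_mask("আমি বাংলায় [MASK] গাই।")
for pred in predictions:
    print(pred["token_str"], round(pred["score"], 3))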
https://huggingface.co/blog/spacy
Welcome spaCy to the Hugging Face Hub
Omar Sanseviero, Ines Montani
July 13, 2021
spaCy is a popular library for advanced Natural Language Processing used widely across industry. spaCy makes it easy to use and train pipelines for tasks like named entity recognition, text classification, part of speech tagging and more, and lets you build powerful applications to process and analyze large volumes of text.Hugging Face makes it really easy to share your spaCy pipelines with the community! With a single command, you can upload any pipeline package, with a pretty model card and all required metadata auto-generated for you. The inference API currently supports NER out-of-the-box, and you can try out your pipeline interactively in your browser. You'll also get a live URL for your package that you can pip install from anywhere for a smooth path from prototype all the way to production!Finding modelsOver 60 canonical models can be found in the spaCy org. These models are from the latest 3.1 release, so you can try the latest realesed models right now! On top of this, you can find all spaCy models from the community here https://huggingface.co/models?filter=spacy.WidgetsThis integration includes support for NER widgets, so all models with a NER component will have this out of the box! Coming soon there will be support for text classification and POS.spacy/en_core_web_smHosted inference API Token Classification Compute This model is currently loaded and running on the Inference API. JSON Output Maximize Using existing modelsAll models from the Hub can be directly installed using pip install. pip install https://huggingface.co/spacy/en_core_web_sm/resolve/main/en_core_web_sm-any-py3-none-any.whl# Using spacy.load().import spacynlp = spacy.load("en_core_web_sm")# Importing as module.import en_core_web_smnlp = en_core_web_sm.load()When you open a repository, you can click Use in spaCy and you will be given a working snippet that you can use to install and load the model!You can even make HTTP requests to call the models from the Inference API, which is useful in production settings. Here is an example of a simple request:curl -X POST --data '{"inputs": "Hello, this is Omar"}' https://api-inference.huggingface.co/models/spacy/en_core_web_sm>>> [{"entity_group":"PERSON","word":"Omar","start":15,"end":19,"score":1.0}]And for larger-scale use cases, you can click "Deploy > Accelerated Inference" and see how to do this with Python.Sharing your modelsBut probably the coolest feature is that now you can very easily share your models with the spacy-huggingface-hub library, which extends the spaCy CLI with a new command, huggingface-hub push. huggingface-cli loginpython -m spacy package ./en_ner_fashion ./output --build wheelcd ./output/en_ner_fashion-0.0.0/distpython -m spacy huggingface-hub push en_ner_fashion-0.0.0-py3-none-any.whlIn just a minute, you can get your packaged model in the Hub, try it out directly in the browser, and share it with the rest of the community. All the required metadata will be uploaded for you and you even get a cool model card.Try it out and share your models with the community!Would you like to integrate your library to the Hub?This integration is possible thanks to the huggingface_hub library which has all our widgets and the API for all our supported libraries. If you would like to integrate your library to the Hub, we have a guide for you!
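The snippets above cover installing, loading and sharing pipelines; for completeness, here is what actually running a loaded pipeline looks like, using standard spaCy usage with the en_core_web_sm model from the example above (the input sentence is just an illustration).

# Running the pipeline once it is installed and loaded (standard spaCy usage).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Hugging Face was founded in New York by Clément Delangue.")

# Named entities detected by the NER component (what the Hub widget displays).
for ent in doc.ents:
    print(ent.text, ent.label_)

# Token-level attributes such as part-of-speech tags and lemmas.
for token in doc[:5]:
    print(token.text, token.pos_, token.lemma_)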
https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker
Deploy Hugging Face models easily with Amazon SageMaker 🏎
No authors found
July 8, 2021
Earlier this year we announced a strategic collaboration with Amazon to make it easier for companies to use Hugging Face in Amazon SageMaker, and ship cutting-edge Machine Learning features faster. We introduced new Hugging Face Deep Learning Containers (DLCs) to train Hugging Face Transformer models in Amazon SageMaker.Today, we are excited to share a new inference solution with you that makes it easier than ever to deploy Hugging Face Transformers with Amazon SageMaker! With the new Hugging Face Inference DLCs, you can deploy your trained models for inference with just one more line of code, or select any of the 10,000+ publicly available models from the Model Hub, and deploy them with Amazon SageMaker.Deploying models in SageMaker provides you with production-ready endpoints that scale easily within your AWS environment, with built-in monitoring and a ton of enterprise features. It's been an amazing collaboration and we hope you will take advantage of it!Here's how to use the new SageMaker Hugging Face Inference Toolkit to deploy Transformers-based models:from sagemaker.huggingface import HuggingFaceModel# create Hugging Face Model Class and deploy it as SageMaker Endpointhuggingface_model = HuggingFaceModel(...).deploy()That's it! 🚀To learn more about accessing and using the new Hugging Face DLCs with the Amazon SageMaker Python SDK, check out the guides and resources below.Resources, Documentation & Samples 📄Below you can find all the important resources for deploying your models to Amazon SageMaker.Blog/VideoVideo: Deploy a Hugging Face Transformers Model from S3 to Amazon SageMakerVideo: Deploy a Hugging Face Transformers Model from the Model Hub to Amazon SageMakerSamples/DocumentationHugging Face documentation for Amazon SageMakerDeploy models to Amazon SageMakerAmazon SageMaker documentation for Hugging FacePython SDK SageMaker documentation for Hugging FaceDeep Learning ContainerNotebook: Deploy one of the 10 000+ Hugging Face Transformers to Amazon SageMaker for InferenceNotebook: Deploy a Hugging Face Transformer model from S3 to SageMaker for inferenceSageMaker Hugging Face Inference Toolkit ⚙️In addition to the Hugging Face Transformers-optimized Deep Learning Containers for inference, we have created a new Inference Toolkit for Amazon SageMaker. This new Inference Toolkit leverages the pipelines from the transformers library to allow zero-code deployments of models without writing any code for pre- or post-processing. In the "Getting Started" section below you find two examples of how to deploy your models to Amazon SageMaker.In addition to the zero-code deployment, the Inference Toolkit supports "bring your own code" methods, where you can override the default methods. You can learn more about "bring your own code" in the documentation here or you can check out the sample notebook "deploy custom inference code to Amazon SageMaker".API - Inference Toolkit DescriptionUsing the transformers pipelines, we designed an API, which makes it easy for you to benefit from all pipelines features. The API has a similar interface than the 🤗 Accelerated Inference API, meaning your inputs need to be defined in the inputs key and if you want additional supported pipelines parameters you can add them in the parameters key. Below you can find examples for requests.# text-classification request body{"inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. 
Delivery within 28 days."}# question-answering request body{"inputs": {"question": "What is used for inference?","context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."}}# zero-shot classification request body{"inputs": "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!","parameters": {"candidate_labels": ["refund","legal","faq"]}}Getting started 🧭In this guide we will use the new Hugging Face Inference DLCs and Amazon SageMaker Python SDK to deploy two transformer models for inference.In the first example, we deploy for inference a Hugging Face Transformer model trained in Amazon SageMaker.In the second example, we directly deploy one of the 10,000+ publicly available Hugging Face Transformers models from the Model Hub to Amazon SageMaker for Inference.Setting up the environmentWe will use an Amazon SageMaker Notebook Instance for the example. You can learn here how to set up a Notebook Instance. To get started, jump into your Jupyter Notebook or JupyterLab and create a new Notebook with the conda_pytorch_p36 kernel.Note: The use of Jupyter is optional: We could also launch SageMaker API calls from anywhere we have an SDK installed, connectivity to the cloud, and appropriate permissions, such as a Laptop, another IDE, or a task scheduler like Airflow or AWS Step Functions.After that we can install the required dependencies.pip install "sagemaker>=2.48.0" --upgradeTo deploy a model on SageMaker, we need to create a sagemaker Session and provide an IAM role with the right permission. The get_execution_role method is provided by the SageMaker SDK as an optional convenience. You can also specify the role by writing the specific role ARN you want your endpoint to use. This IAM role will be later attached to the Endpoint, e.g. download the model from Amazon S3. import sagemakersess = sagemaker.Session()role = sagemaker.get_execution_role()Deploy a trained Hugging Face Transformer model to SageMaker for inferenceThere are two ways to deploy your SageMaker trained Hugging Face model. You can either deploy it after your training is finished, or you can deploy it later, using the model_data pointing to your saved model on Amazon S3. In addition to the two below-mentioned options, you can also instantiate Hugging Face endpoints with lower-level SDK such as boto3 and AWS CLI, Terraform and with CloudFormation templates.Deploy the model directly after training with the Estimator classIf you deploy your model directly after training, you need to ensure that all required model artifacts are saved in your training script, including the tokenizer and the model. A benefit of deploying directly after training is that SageMaker model container metadata will contain the source training job, providing lineage from training job to deployed model.from sagemaker.huggingface import HuggingFace############ pseudo code start ############# create HuggingFace estimator for running traininghuggingface_estimator = HuggingFace(....)# starting the train job with our uploaded datasets as inputhuggingface_estimator.fit(...)############ pseudo code end ############# deploy model to SageMaker Inferencepredictor = hf_estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")# example request, you always need to define "inputs"data = {"inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. 
Delivery within 28 days."}# requestpredictor.predict(data)After we run our request we can delete the endpoint again with.# delete endpointpredictor.delete_endpoint()Deploy the model from pre-trained checkpoints using the HuggingFaceModel classIf you've already trained your model and want to deploy it at some later time, you can use the model_data argument to specify the location of your tokenizer and model weights.from sagemaker.huggingface.model import HuggingFaceModel# create Hugging Face Model Classhuggingface_model = HuggingFaceModel(model_data="s3://models/my-bert-model/model.tar.gz", # path to your trained sagemaker modelrole=role, # iam role with permissions to create an Endpointtransformers_version="4.6", # transformers version usedpytorch_version="1.7", # pytorch version used)# deploy model to SageMaker Inferencepredictor = huggingface_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")# example request, you always need to define "inputs"data = {"inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days."}# requestpredictor.predict(data)After we run our request, we can delete the endpoint again with:# delete endpointpredictor.delete_endpoint()Deploy one of the 10,000+ Hugging Face Transformers to Amazon SageMaker for InferenceTo deploy a model directly from the Hugging Face Model Hub to Amazon SageMaker, we need to define two environment variables when creating the HuggingFaceModel. We need to define:HF_MODEL_ID: defines the model id, which will be automatically loaded from huggingface.co/models when creating or SageMaker Endpoint. The 🤗 Hub provides 10,000+ models all available through this environment variable.HF_TASK: defines the task for the used 🤗 Transformers pipeline. A full list of tasks can be found here.from sagemaker.huggingface.model import HuggingFaceModel# Hub Model configuration. <https://huggingface.co/models>hub = {'HF_MODEL_ID':'distilbert-base-uncased-distilled-squad', # model_id from hf.co/models'HF_TASK':'question-answering' # NLP task you want to use for predictions}# create Hugging Face Model Classhuggingface_model = HuggingFaceModel(env=hub, # configuration for loading model from Hubrole=role, # iam role with permissions to create an Endpointtransformers_version="4.6", # transformers version usedpytorch_version="1.7", # pytorch version used)# deploy model to SageMaker Inferencepredictor = huggingface_model.deploy(initial_instance_count=1,instance_type="ml.m5.xlarge")# example request, you always need to define "inputs"data = {"inputs": {"question": "What is used for inference?","context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."}}# requestpredictor.predict(data)After we run our request we can delete the endpoint again with.# delete endpointpredictor.delete_endpoint()FAQ 🎯You can find the complete Frequently Asked Questions in the documentation.Q: Which models can I deploy for Inference?A: You can deploy:any 🤗 Transformers model trained in Amazon SageMaker, or other compatible platforms and that can accommodate the SageMaker Hosting designany of the 10,000+ publicly available Transformer models from the Hugging Face Model Hub, oryour private models hosted in your Hugging Face premium account!Q: Which pipelines, tasks are supported by the Inference Toolkit?A: The Inference Toolkit and DLC support any of the transformers pipelines. 
You can find the full list hereQ: Do I have to use the transformers pipelines when hosting SageMaker endpoints?A: No, you can also write your custom inference code to serve your own models and logic, documented here. Q: Do I have to use the SageMaker Python SDK to use the Hugging Face Deep Learning Containers (DLCs)?A: You can use the Hugging Face DLC without the SageMaker Python SDK and deploy your models to SageMaker with other SDKs, such as the AWS CLI, boto3 or Cloudformation. The DLCs are also available through Amazon ECR and can be pulled and used in any environment of choice.Q: Why should I use the Hugging Face Deep Learning Containers?A: The DLCs are fully tested, maintained, optimized deep learning environments that require no installation, configuration, or maintenance. In particular, our inference DLC comes with a pre-written serving stack, which drastically lowers the technical bar of DL serving.Q: How is my data and code secured by Amazon SageMaker?A: Amazon SageMaker provides numerous security mechanisms including encryption at rest and in transit, Virtual Private Cloud (VPC) connectivity, and Identity and Access Management (IAM). To learn more about security in the AWS cloud and with Amazon SageMaker, you can visit Security in Amazon SageMaker and AWS Cloud Security.Q: Is this available in my region?A: For a list of the supported regions, please visit the AWS region table for all AWS global infrastructure.Q: Do you offer premium support or support SLAs for this solution?A: AWS Technical Support tiers are available from AWS and cover development and production issues for AWS products and services - please refer to AWS Support for specifics and scope.If you have questions which the Hugging Face community can help answer and/or benefit from, please post them in the Hugging Face forum.If you need premium support from the Hugging Face team to accelerate your NLP roadmap, our Expert Acceleration Program offers direct guidance from our open-source, science, and ML Engineering teams.
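As the FAQ notes, the Hugging Face DLCs can also be used without the SageMaker Python SDK. As one hedged illustration, here is roughly what calling an already-deployed endpoint looks like with plain boto3; the endpoint name is hypothetical, and the payload follows the JSON input format shown in the examples above.

# Calling an already-deployed Hugging Face endpoint with plain boto3.
# The endpoint name below is hypothetical; replace it with your own.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {"inputs": "I love using Hugging Face models on SageMaker!"}

response = runtime.invoke_endpoint(
    EndpointName="my-huggingface-endpoint",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

result = json.loads(response["Body"].read().decode("utf-8"))
print(result)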
https://huggingface.co/blog/sentence-transformers-in-the-hub
Sentence Transformers in the Hugging Face Hub
Omar Sanseviero, Nils Reimers
June 28, 2021
Over the past few weeks, we've built collaborations with many Open Source frameworks in the machine learning ecosystem. One that gets us particularly excited is Sentence Transformers.Sentence Transformers is a framework for sentence, paragraph and image embeddings. This allows to derive semantically meaningful embeddings (1) which is useful for applications such as semantic search or multi-lingual zero shot classification. As part of Sentence Transformers v2 release, there are a lot of cool new features:Sharing your models in the Hub easily.Widgets and Inference API for sentence embeddings and sentence similarity.Better sentence-embeddings models available (benchmark and models in the Hub).With over 90 pretrained Sentence Transformers models for more than 100 languages in the Hub, anyone can benefit from them and easily use them. Pre-trained models can be loaded and used directly with few lines of code:from sentence_transformers import SentenceTransformersentences = ["Hello World", "Hallo Welt"]model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')embeddings = model.encode(sentences)print(embeddings)But not only this. People will probably want to either demo their models or play with other models easily, so we're happy to announce the release of two new widgets in the Hub! The first one is the feature-extraction widget which shows the sentence embedding.sentence-transformers/distilbert-base-nli-max-tokens Hosted inference API Feature Extraction Compute This model is currently loaded and running on the Inference API. JSON Output Maximize But seeing a bunch of numbers might not be very useful to you (unless you're able to understand the embeddings from a quick look, which would be impressive!). We're also introducing a new widget for a common use case of Sentence Transformers: computing sentence similarity.sentence-transformers/paraphrase-MiniLM-L6-v2Hosted inference API Sentence Similarity Source Sentence Sentences to compare to Add Sentence Compute This model can be loaded on the Inference API on-demand. JSON Output Maximize Of course, on top of the widgets, we also provide API endpoints in our Inference API that you can use to programmatically call your models!import jsonimport requestsAPI_URL = "https://api-inference.huggingface.co/models/sentence-transformers/paraphrase-MiniLM-L6-v2"headers = {"Authorization": "Bearer YOUR_TOKEN"}def query(payload):response = requests.post(API_URL, headers=headers, json=payload)return response.json()data = query({"inputs": {"source_sentence": "That is a happy person","sentences": ["That is a happy dog","That is a very happy person","Today is a sunny day"]}})Unleashing the Power of SharingSo why is this powerful? In a matter of minutes, you can share your trained models with the whole community.from sentence_transformers import SentenceTransformer# Load or train a modelmodel.save_to_hub("my_new_model")Now you will have a repository in the Hub which hosts your model. A model card was automatically created. It describes the architecture by listing the layers and shows how to use the model with both Sentence Transformers and 🤗 Transformers. You can also try out the widget and use the Inference API straight away!If this was not exciting enough, your models will also be easily discoverable by filtering for all Sentence Transformers models.What's next?Moving forward, we want to make this integration even more useful. 
In our roadmap, we expect training and evaluation data to be included in the automatically created model card, like is the case in transformers from version v4.8.And what's next for you? We're very excited to see your contributions! If you already have a Sentence Transformer repo in the Hub, you can now enable the widget and Inference API by changing the model card metadata.---tags:- sentence-transformers- sentence-similarity # Or feature-extraction!---If you don't have any model in the Hub and want to learn more about Sentence Transformers, head to www.SBERT.net!Would you like to integrate your library to the Hub?This integration is possible thanks to the huggingface_hub library which has all our widgets and the API for all our supported libraries. If you would like to integrate your library to the Hub, we have a guide for you!ReferencesSentence-BERT: Sentence Embeddings using Siamese BERT-Networks. https://arxiv.org/abs/1908.10084
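As a complement to the Inference API example above, the same sentence-similarity comparison can be reproduced locally with the library's cosine-similarity helper. This is a small sketch using the same model and sentences as the API example.

# Local equivalent of the sentence-similarity widget: encode, then cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L6-v2")

source = "That is a happy person"
candidates = ["That is a happy dog", "That is a very happy person", "Today is a sunny day"]

source_embedding = model.encode(source, convert_to_tensor=True)
candidate_embeddings = model.encode(candidates, convert_to_tensor=True)

# One similarity score per candidate sentence, in the same order.
scores = util.pytorch_cos_sim(source_embedding, candidate_embeddings)[0]
for sentence, score in zip(candidates, scores):
    print(round(float(score), 3), sentence)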
https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api
Few-shot learning in practice: GPT-Neo and the 🤗 Accelerated Inference API
Philipp Schmid
June 3, 2021
In many Machine Learning applications, the amount of available labeled data is a barrier to producing a high-performing model. The latest developments in NLP show that you can overcome this limitation by providing a few examples at inference time with a large language model - a technique known as Few-Shot Learning. In this blog post, we'll explain what Few-Shot Learning is, and explore how a large language model called GPT-Neo, and the 🤗 Accelerated Inference API, can be used to generate your own predictions.What is Few-Shot Learning?Few-Shot Learning refers to the practice of feeding a machine learning model with a very small amount of training data to guide its predictions, like a few examples at inference time, as opposed to standard fine-tuning techniques which require a relatively large amount of training data for the pre-trained model to adapt to the desired task with accuracy.This technique has been mostly used in computer vision, but with some of the latest Language Models, like EleutherAI GPT-Neo and OpenAI GPT-3, we can now use it in Natural Language Processing (NLP). In NLP, Few-Shot Learning can be used with Large Language Models, which have learned to perform a wide number of tasks implicitly during their pre-training on large text datasets. This enables the model to generalize, that is to understand related but previously unseen tasks, with just a few examples.Few-Shot NLP examples consist of three main components: Task Description: A short description of what the model should do, e.g. "Translate English to French"Examples: A few examples showing the model what it is expected to predict, e.g. "sea otter => loutre de mer"Prompt: The beginning of a new example, which the model should complete by generating the missing text, e.g. "cheese => "Image from Language Models are Few-Shot LearnersCreating these few-shot examples can be tricky, since you need to articulate the “task” you want the model to perform through them. A common issue is that models, especially smaller ones, are very sensitive to the way the examples are written.An approach to optimize Few-Shot Learning in production is to learn a common representation for a task and then train task-specific classifiers on top of this representation.OpenAI showed in the GPT-3 Paper that the few-shot prompting ability improves with the number of language model parameters.Image from Language Models are Few-Shot LearnersLet's now take a look at how at how GPT-Neo and the 🤗 Accelerated Inference API can be used to generate your own Few-Shot Learning predictions!What is GPT-Neo?GPT⁠-⁠Neo is a family of transformer-based language models from EleutherAI based on the GPT architecture. EleutherAI's primary goal is to train a model that is equivalent in size to GPT⁠-⁠3 and make it available to the public under an open license.All of the currently available GPT-Neo checkpoints are trained with the Pile dataset, a large text corpus that is extensively documented in (Gao et al., 2021). As such, it is expected to function better on the text that matches the distribution of its training text; we recommend keeping this in mind when designing your examples.🤗 Accelerated Inference APIThe Accelerated Inference API is our hosted service to run inference on any of the 10,000+ models publicly available on the 🤗 Model Hub, or your own private models, via simple API calls. 
The API includes acceleration on CPU and GPU with up to 100x speedup compared to out of the box deployment of Transformers. To integrate Few-Shot Learning predictions with GPT-Neo in your own apps, you can use the 🤗 Accelerated Inference API with the code snippet below. You can find your API Token here; if you don't have an account, you can get started here. import jsonimport requestsAPI_TOKEN = ""def query(payload='',parameters=None,options={'use_cache': False}):API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"headers = {"Authorization": f"Bearer {API_TOKEN}"}body = {"inputs":payload,'parameters':parameters,'options':options}response = requests.request("POST", API_URL, headers=headers, data= json.dumps(body))try:response.raise_for_status()except requests.exceptions.HTTPError:return "Error:"+" ".join(response.json()['error'])else:return response.json()[0]['generated_text']parameters = {'max_new_tokens':25, # number of generated tokens'temperature': 0.5, # controlling the randomness of generations'end_sequence': "###" # stopping sequence for generation}prompt="...." # few-shot promptdata = query(prompt, parameters) Practical Insights Here are some practical insights, which help you get started using GPT-Neo and the 🤗 Accelerated Inference API. Since GPT-Neo (2.7B) is about 60x smaller than GPT-3 (175B), it does not generalize as well to zero-shot problems and needs 3-4 examples to achieve good results. When you provide more examples, GPT-Neo understands the task and takes the end_sequence into account, which allows us to control the generated text pretty well. The hyperparameters End Sequence, Token Length & Temperature can be used to control the text generation of the model, and you can use this to your advantage to solve the task you need. The Temperature controls the randomness of your generations: lower temperature results in less random generations, and higher temperature results in more random generations. In the example, you can see how important it is to define your hyperparameters. These can make the difference between solving your task and failing miserably. Responsible Use Few-Shot Learning is a powerful technique, but it also presents unique pitfalls that need to be taken into account when designing use cases. To illustrate this, let's consider the default Sentiment Analysis setting provided in the widget. After seeing three examples of sentiment classification, the model makes the following predictions 4 times out of 5, with temperature set to 0.1: Tweet: "I'm a disabled happy person" Sentiment: Negative What could go wrong? Imagine that you are using sentiment analysis to aggregate reviews of products on an online shopping website: a possible outcome could be that items useful to people with disabilities would be automatically down-ranked - a form of automated discrimination. For more on this specific issue, we recommend the ACL 2020 paper Social Biases in NLP Models as Barriers for Persons with Disabilities. Because Few-Shot Learning relies more directly on information and associations picked up from pre-training, it is more sensitive to this type of failure. How to minimize the risk of harm?
Here are some practical recommendations.Best practices for responsible useMake sure people know which parts of their user experience depend on the outputs of the ML system If possible, give users the ability to opt-out Provide a mechanism for users to give feedback on the model decision, and to override it Monitor feedback, especially model failures, for groups of users that may be disproportionately affectedWhat needs most to be avoided is to use the model to automatically make decisions for, or about, a user, without opportunity for a human to provide input or correct the output. Several regulations, such as GDPR in Europe, require that users be provided an explanation for automatic decisions made about them.To use GPT-Neo or any Hugging Face model in your own application, you can start a free trial of the 🤗 Accelerated Inference API.If you need help mitigating bias in models and AI systems, or leveraging Few-Shot Learning, the 🤗 Expert Acceleration Program can offer your team direct premium support from the Hugging Face team.
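To tie the three components described above (task description, examples, prompt) back to the query() helper shown earlier in this post, here is an illustrative sketch of how a translation-style few-shot prompt could be assembled and sent. The prompt text, stopping sequence and generation parameters are only examples, and the snippet assumes the query() function defined above is in scope.

# Illustrative few-shot prompt: task description + examples + prompt to complete.
# Reuses the query() helper and parameter format defined earlier in this post.
few_shot_prompt = (
    "Translate English to French:\n"    # task description
    "sea otter => loutre de mer\n"      # example 1
    "plush girafe => girafe peluche\n"  # example 2
    "cheese =>"                         # prompt the model should complete
)

parameters = {
    "max_new_tokens": 10,   # a short completion is enough for one word
    "temperature": 0.1,     # keep generations close to deterministic
    "end_sequence": "\n",   # illustrative stopping sequence: stop at end of line
}

print(query(few_shot_prompt, parameters))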
https://huggingface.co/blog/gradio
Using & Mixing Hugging Face Models with Gradio 2.0
Abubakar Abid
May 25, 2021
Using & Mixing Hugging Face Models with Gradio 2.0
https://huggingface.co/blog/bert-cpu-scaling-part-1
Scaling up BERT-like model Inference on modern CPU - Part 1
Morgan Funtowicz
April 20, 2021
1. Context and MotivationsBack in October 2019, my colleague Lysandre Debut published a comprehensive (at the time) inference performance benchmarking blog (1).Since then, 🤗 transformers (2) welcomed a tremendous numberof new architectures and thousands of new models were added to the 🤗 hub (3)which now counts more than 9,000 of them as of first quarter of 2021.As the NLP landscape keeps trending towards more and more BERT-like models being used in production, it remains challenging to efficiently deploy and run these architectures at scale.This is why we recently introduced our 🤗 Inference API: to let you focus on building value for your users and customers, rather than digging into all the highlytechnical aspects of running such models.This blog post is the first part of a series which will cover most of the hardware and software optimizations to betterleverage CPUs for BERT model inference.For this initial blog post, we will cover the hardware part:Setting up a baseline - Out of the box resultsPractical & technical considerations when leveraging modern CPUs for CPU-bound tasksCore count scaling - Does increasing the number of cores actually give better performance?Batch size scaling - Increasing throughput with multiple parallel & independent model instancesWe decided to focus on the most famous Transformer model architecture, BERT (Delvin & al. 2018) (4). While we focus this blog post on BERT-like models to keep the article concise, all the described techniquescan be applied to any architecture on the Hugging Face model hub.In this blog post we will not describe in detail the Transformer architecture - to learn about that I can't recommend enough the Illustrated Transformer blogpost from Jay Alammar (5).Today's goals are to give you an idea of where we are from an Open Source perspective using BERT-likemodels for inference on PyTorch and TensorFlow, and also what you can easily leverage to speedup inference.2. Benchmarking methodologyWhen it comes to leveraging BERT-like models from Hugging Face's model hub, there are many knobs which canbe tuned to make things faster.Also, in order to quantify what "faster" means, we will rely on widely adopted metrics:Latency: Time it takes for a single execution of the model (i.e. forward call) Throughput: Number of executions performed in a fixed amount of timeThese two metrics will help us understand the benefits and tradeoffs along this blog post.The benchmarking methodology was reimplemented from scratch in order to integrate the latest features provided by transformersand also to let the community run and share benchmarks in an hopefully easier way.The whole framework is now based on Facebook AI & Research's Hydra configuration library allowing us to easily reportand track all the items involved while running the benchmark, hence increasing the overall reproducibility.You can find the whole structure of the project hereOn the 2021 version, we kept the ability to run inference workloads through PyTorch and Tensorflow as in the previous blog (1) along with their traced counterpartTorchScript (6), Google Accelerated Linear Algebra (XLA) (7).Also, we decided to include support for ONNX Runtime (8) as it provides many optimizations specifically targeting transformers based models which makes it a strong candidate to consider when discussing performance.Last but not least, this new unified benchmarking environment will allow us to easily run inference for different scenariossuch as Quantized Models (Zafrir & al.) 
(9) using less precise number representations (float16, int8, int4). This method, known as quantization, has seen increased adoption among all major hardware providers. In the near future, we would like to integrate additional methods we are actively working on at Hugging Face, namely Distillation, Pruning & Sparsification. 3. Baselines All the results below were run on an Amazon Web Services (AWS) c5.metal instance leveraging an Intel Xeon Platinum 8275 CPU (48 cores/96 threads). The choice of this instance provides all the useful CPU features to speed up Deep Learning workloads, such as: the AVX512 instruction set (which might not be leveraged out-of-the-box by the various frameworks); Intel Deep Learning Boost (also known as Vector Neural Network Instruction - VNNI), which provides specialized CPU instructions for running quantized networks (using the int8 data type). The choice of a metal instance is to avoid any virtualization issue which can arise when using cloud providers. This gives us full control of the hardware, especially while targeting the NUMA (Non-Uniform Memory Access) controller, which we will cover later in this post. The operating system was Ubuntu 20.04 (LTS) and all the experiments were conducted using Hugging Face transformers version 4.5.0, PyTorch 1.8.1 & Google TensorFlow 2.4.0. 4. Out of the box results Figure 1. PyTorch (1.8.1) vs Google TensorFlow (2.4.1) out of the box Figure 2. PyTorch (1.8.1) vs Google TensorFlow (2.4.1) out of the box - (Bigger Batch Size) Straight to the point: out-of-the-box, PyTorch shows better inference results than TensorFlow for all the configurations tested here. It is important to note that the out-of-the-box results might not reflect the "optimal" setup for both PyTorch and TensorFlow, and thus they can be deceiving here. One possible way to explain such a difference between the two frameworks might be the underlying technology used to execute parallel sections within operators. PyTorch internally uses OpenMP (10) along with Intel MKL (now oneDNN) (11) for efficient linear algebra computations, whereas TensorFlow relies on Eigen and its own threading implementation. 5. Scaling BERT Inference to increase overall throughput on modern CPU 5.1. Introduction There are multiple ways to improve the latency and throughput for tasks such as BERT inference. Improvements and tuning can be performed at various levels, from enabling Operating System features, swapping dependent libraries with more performant ones, carefully tuning framework properties and, last but not least, using parallelization logic leveraging all the cores on the CPU(s). For the remainder of this blog post we will focus on the latter, also known as Multiple Inference Stream. The idea is simple: allocate multiple instances of the same model and assign the execution of each instance to a dedicated, non-overlapping subset of the CPU cores in order to have truly parallel instances. 5.2. Cores and Threads on Modern CPUs On our way towards optimizing CPU inference for better usage of the CPU cores, you might have already seen (at least for the past 20 years) that modern CPU specifications report "cores" and "hardware threads" or "physical" and "logical" numbers.
These notions refer to a mechanism called Simultaneous Multi-Threading (SMT) or Hyper-Threading on Intel's platforms.To illustrate this, imagine two tasks A and B, executing in parallel, each on its own software thread.At some point, there is a high probability these two tasks will have to wait for some resources to be fetched from main memory, SSD, HDD or even the network.If the threads are scheduled on different physical cores, with no hyper-threading, during these periods the core executing the task is in an Idle state waiting for the resources to arrive, and effectively doing nothing... and hence not getting fully utilizedNow, with SMT, the two software threads for task A and B can be scheduled on the same physical core, such that their execution is interleaved on that physical core: Task A and Task B will execute simultaneously on the physical core and when one task is halted, the other task can still continue execution on the core thereby increasing the utilization of that core.Figure 3. Illustration of Intel Hyper Threading technology (SMT)The figure 3. above simplifies the situation by assuming single core setup. If you want some more details on how SMT works on multi-cores CPUs, please refer to these two articles with very deep technical explanations of the behavior:Intel® Hyper-Threading Technology - Technical User Guide (12)Introduction to Hyper-Threading Technology (13)Back to our model inference workload... If you think about it, in a perfect world with a fully optimized setup, computations take the majority of time.In this context, using the logical cores shouldn't bring us any performance benefit because both logical cores (hardware threads) compete for the core’s execution resources.As a result, the tasks being a majority of general matrix multiplications (gemms (14)), they are inherently CPU bounds and does not benefits from SMT.5.3. Leveraging Multi-Socket servers and CPU affinityNowadays servers bring many cores, some of them even support multi-socket setups (i.e. multiple CPUs on the motherboard).On Linux, the command lscpu reports all the specifications and topology of the CPUs present on the system:ubuntu@some-ec2-machine:~$ lscpuArchitecture: x86_64CPU op-mode(s): 32-bit, 64-bitByte Order: Little EndianAddress sizes: 46 bits physical, 48 bits virtualCPU(s): 96On-line CPU(s) list: 0-95Thread(s) per core: 2Core(s) per socket: 24Socket(s): 2NUMA node(s): 2Vendor ID: GenuineIntelCPU family: 6Model: 85Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHzStepping: 7CPU MHz: 1200.577CPU max MHz: 3900.0000CPU min MHz: 1200.0000BogoMIPS: 6000.00Virtualization: VT-xL1d cache: 1.5 MiBL1i cache: 1.5 MiBL2 cache: 48 MiBL3 cache: 71.5 MiBNUMA node0 CPU(s): 0-23,48-71NUMA node1 CPU(s): 24-47,72-95In our case we have a machine with 2 sockets, each socket providing 24 physical cores with 2 threads per cores (SMT).Another interesting characteristic is the notion of NUMA node (0, 1) which represents how cores and memory are being mapped on the system.Non-Uniform Memory Access (NUMA) is the opposite of Uniform Memory Access (UMA) where the whole memory pool is accessible by all the cores through a single unified bus between sockets and the main memory. NUMA on the other hand splits the memory pool and each CPU socket is responsible to address a subset of the memory, reducing the congestion on the bus.Figure 5. 
In order to fully utilize the potential of such a beefy machine, we need to ensure our model instances are correctly dispatched across all the physical cores on all sockets, along with enforcing memory allocation to be "NUMA-aware".On Linux, NUMA process configuration can be tuned through numactl, which provides an interface to bind a process to a set of CPU cores (referred to as Thread Affinity).Also, it allows tuning the memory allocation policy, making sure the memory allocated for the process is as close as possible to the cores' memory pool (referred to as Explicit Memory Allocation Directives).Note: Setting both core and memory affinities is important here. Having computations done on socket 0 and memory allocated on socket 1 would require the system to go over the sockets' shared bus to exchange memory, thus leading to an undesired overhead.5.4. Tuning Thread Affinity & Memory Allocation PolicyNow that we have all the knobs required to control the resources' allocation of our model instances, we can go further and see how to effectively deploy those and observe the impact on latency and throughput.Let's go gradually to get a sense of the impact of each command and parameter.First, we start by launching our inference model without any tuning, and we observe how the computations are being dispatched on the CPU cores (Left).python3 src/main.py model=bert-base-cased backend.name=pytorch batch_size=1 sequence_length=128Then we specify the core and memory affinity through numactl using all the physical cores and only a single thread (thread 0) per core (Right):numactl -C 0-47 -m 0,1 python3 src/main.py model=bert-base-cased backend.name=pytorch batch_size=1 sequence_length=128Figure 6. Linux htop command side-by-side results without & with Thread Affinity setAs you can see, without any specific tuning, PyTorch and TensorFlow dispatch the work on a single socket, using all the logical cores in that socket (both threads on 24 cores).Also, as we highlighted earlier, we do not want to leverage the SMT feature in our case, so we set the process' thread affinity to target only 1 hardware thread.Note, this is specific to this run and can vary depending on individual setups. Hence, it is recommended to check thread affinity settings for each specific use-case.Let's take some time here to highlight what we did with numactl:-C 0-47 indicates to numactl what the thread affinity is (cores 0 to 47).-m 0,1 indicates to numactl to allocate memory on both CPU sockets.If you wonder why we are binding the process to cores [0...47], you need to go back and look at the output of lscpu.From there you will find the sections NUMA node0 and NUMA node1, which have the form NUMA node<X> <logical ids>. In our case, each socket is one NUMA node and there are 2 NUMA nodes. Each socket, or each NUMA node, has 24 physical cores and 2 hardware threads per core, so 48 logical cores. For NUMA node 0, logical processors 0-23 are hardware thread 0 and 48-71 are hardware thread 1 on the 24 physical cores in socket 0. Likewise, for NUMA node 1, 24-47 are hardware thread 0 and 72-95 are hardware thread 1 on the 24 physical cores in socket 1.As we are targeting just 1 thread per physical core, as explained earlier, we pick only thread 0 on each core and hence logical processors 0-47 (0-23 on socket 0 and 24-47 on socket 1). Since we are using both sockets, we need to also bind the memory allocations accordingly (0,1).Please note that using both sockets may not always give the best results, particularly for small problem sizes. The benefit of using compute resources across both sockets might be reduced or even negated by cross-socket communication overhead.
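For completeness, the same thread-affinity binding can also be requested from inside a Python process. This is just a sketch of the idea: unlike numactl -m, os.sched_setaffinity only controls which CPUs the process may run on and does not set any memory allocation policy.

import os

# pin the current process (pid 0 = self) to hardware thread 0 of every physical core,
# i.e. logical processors 0-47 on the machine described above
os.sched_setaffinity(0, range(0, 48))
print("affinity:", sorted(os.sched_getaffinity(0)))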
6. Core count scaling - Does using more cores actually improve performance?When thinking about possible ways to improve our model inference performance, the first rational solution might be to throw some more resources at the same amount of work.Through the rest of this blog series, we will refer to this setup as Core Count Scaling, meaning that only the number of cores used on the system to achieve the task will vary. This is also often referred to as Strong Scaling in the HPC world.At this stage, you may wonder what the point is of allocating only a subset of the cores rather than throwing all the horses at the task to achieve minimum latency.Indeed, depending on the problem size, throwing more resources at the task might give better results.It is also possible that for small problems, putting more CPU cores at work doesn't improve the final latency.In order to illustrate this, the figure below takes different problem sizes (batch_size = 1, sequence length = {32, 128, 512}) and reports the latencies with respect to the number of CPU cores used for running computations, for both PyTorch and TensorFlow.Limiting the number of resources involved in computation is done by limiting the CPU cores involved in intra operations (intra here means inside an operator doing computation, also known as "kernel").This is achieved through the following APIs:PyTorch: torch.set_num_threads(x)TensorFlow: tf.config.threading.set_intra_op_parallelism_threads(x)Figure 7. Latency measurementsAs you can see, depending on the problem size, the number of threads involved in the computations has a significant impact on the latency measurements.For small-sized and medium-sized problems, using only one socket would give the best performance.For large-sized problems, the overhead of the cross-socket communication is covered by the computation cost, thus benefiting from using all the cores available on both sockets.
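Below is a rough sketch of how a single data point of the core count scaling experiment above can be collected for PyTorch. It is not the harness used for Figure 7; the model name, problem size and iteration counts are illustrative, and each thread count would typically be run in a fresh process (e.g. via a shell loop) since the thread pool is best configured before any computation runs.

import os
import time
import torch
from transformers import AutoModel, AutoTokenizer

num_threads = int(os.environ.get("NUM_THREADS", "24"))  # illustrative default
torch.set_num_threads(num_threads)                      # limit intra-op parallelism

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased").eval()
inputs = tokenizer("a " * 126, return_tensors="pt")     # ~128 tokens, batch_size=1

with torch.no_grad():
    for _ in range(10):                                 # warmup
        model(**inputs)
    start = time.perf_counter()
    for _ in range(50):
        model(**inputs)
    latency_ms = (time.perf_counter() - start) / 50 * 1000

print(f"{num_threads} intra-op threads -> {latency_ms:.1f} ms / inference")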
7. Multi-Stream Inference - Using multiple instances in parallelIf you're still reading this, you should now be in good shape to set up parallel inference workloads on CPU.Now, we are going to highlight some possibilities offered by the powerful hardware we have, by tuning the knobs described before, to scale our inference as linearly as possible.In the following section we will explore another possible scaling solution, Batch Size Scaling, but before diving into this, let's take a look at how we can leverage Linux tools in order to assign Thread Affinity, allowing effective model instance parallelism.Instead of throwing more cores at the task as you would do in the core count scaling setup, now we will be using more model instances.Each instance will run independently on its own subset of the hardware resources, in a truly parallel fashion, on a subset of the CPU cores. 7.1. How-to allocate multiple independent instancesLet's start simple: if we want to spawn 2 instances, one on each socket with 24 cores assigned:numactl -C 0-23 -m 0 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=24numactl -C 24-47 -m 1 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=24Starting from here, each instance does not share any resource with the other, and everything is operating at maximum efficiency from a hardware perspective.The latency measurements are identical to what a single instance would achieve, but throughput is actually 2x higher as the two instances operate in a truly parallel way.We can further increase the number of instances, lowering the number of cores assigned to each instance.Let's run 4 independent instances, each of them effectively bound to 12 CPU cores.numactl -C 0-11 -m 0 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=12numactl -C 12-23 -m 0 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=12numactl -C 24-35 -m 1 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=12numactl -C 36-47 -m 1 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=12The outcome remains the same: our 4 instances are effectively running in a truly parallel manner.The latency will be slightly higher than in the example before (half as many cores being used per instance), but the throughput will again be 2x higher.7.2. Smart dispatching - Allocating different model instances for different problem sizesOne other possibility offered by this setup is to have multiple instances carefully tuned for various problem sizes.With a smart dispatching approach, one can redirect incoming requests to the configuration giving the best latency depending on the request workload.# Small-sized problems (sequence length <= 32) use only 8 cores (on socket 0 - 8/24 cores used)numactl -C 0-7 -m 0 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=32 backend.name=pytorch backend.num_threads=8# Medium-sized problems (32 < sequence <= 384) use the remaining 16 cores (on socket 0 - (8+16)/24 cores used)numactl -C 8-23 -m 0 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=16# Large-sized problems (sequence > 384) use an entire CPU socket (on socket 1 - 24/24 cores used)numactl -C 24-47 -m 1 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=384 backend.name=pytorch backend.num_threads=24
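The commands above can of course be scripted. The snippet below is a small, hedged sketch of spawning N pinned instances from Python rather than typing each numactl command by hand; it reuses the illustrative src/main.py invocation from this post and is not the launcher script mentioned later in the conclusion.

import subprocess

NUM_INSTANCES = 4
CORES_PER_INSTANCE = 48 // NUM_INSTANCES          # 12 physical cores each on this machine

processes = []
for i in range(NUM_INSTANCES):
    first_core = i * CORES_PER_INSTANCE
    last_core = first_core + CORES_PER_INSTANCE - 1
    numa_node = 0 if last_core < 24 else 1        # socket 0 owns cores 0-23, socket 1 owns 24-47
    cmd = [
        "numactl", "-C", f"{first_core}-{last_core}", "-m", str(numa_node),
        "python3", "src/main.py", "model=bert-base-cased",
        "batch_size=1", "sequence_length=128",
        "backend.name=pytorch", f"backend.num_threads={CORES_PER_INSTANCE}",
    ]
    processes.append(subprocess.Popen(cmd))

for p in processes:
    p.wait()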
8. Batch size scaling - Improving throughput and latency with multiple parallel & independent model instancesAnother very interesting direction for scaling up inference is to actually put some more model instances into the pool, along with reducing the actual workload each instance receives proportionally.This method actually changes both the size of the problem (batch size) and the resources involved in the computation (cores).To illustrate, imagine you have a server with C CPU cores, and you want to run a workload containing B samples with S tokens.You can represent this workload as a tensor of shape [B, S], B being the size of the batch and S being the maximum sequence length within the B samples. For all the instances (N), each of them executes on C / N cores and would receive a subset of the task [B / N, S]. Each instance doesn't receive the global batch but instead they all receive a subset of it, [B / N, S], thus the name Batch Size Scaling.In order to highlight the benefits of such a scaling method, the charts below report both the latencies when scaling up model instances along with the effects on the throughput.When looking at the results, let's focus on the latency and the throughput aspects: On one hand, we are taking the maximum latency over the pool of instances to reflect the time it takes to process all the samples in the batch.Putting it differently, as instances operate in a truly parallel fashion, the time it takes to gather all the batch chunks from all the instances is driven by the longest time it takes for an individual instance in the pool to get its chunk done.As you can see in the figures below, the actual latency gain when increasing the number of instances is really dependent on the problem size.In all cases, we can find an optimal resource allocation (batch size & number of instances) to minimize our latency, but there is no specific pattern on the number of cores to involve in the computation.Also, it is important to notice that the results might look totally different on another system (i.e. Operating System, Kernel Version, Framework version, etc.)Figure 8 sums up the best multi-instance configuration when targeting minimum latency by taking the minimum over the number of instances involved.For instance, for {batch = 8, sequence length = 128}, using 4 instances (each with {batch = 2} and 12 cores) gives the best latency measurements.Figure 9 reports all the setups minimizing latency for both PyTorch and TensorFlow for various problem sizes. Spoiler: There are numerous other optimizations we will discuss in a follow-up blog post which will substantially impact this chart.Figure 8. Max latency evolution with respect to number of instances for a total batch size of 8Figure 9. Optimal number of instances minimizing overall latency for a total batch size of 8On the other hand, we observe the throughput as the sum over all the model instances executing in parallel.It allows us to visualize the scalability of the system when adding more and more instances, each of them with fewer resources but also a proportionally smaller workload.Here, the results show almost linear scalability and thus optimal hardware usage.Figure 10. Sum throughput with respect to number of instances for a total batch size of 8
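To make the aggregation explicit, here is a tiny sketch of how such batch size scaling numbers combine: each of the N instances processes a [B / N, S] chunk, the reported latency is that of the slowest instance, and the throughput is the sum over instances. The per-instance timings below are made-up values, purely for illustration.

B, S, N = 8, 128, 4                     # global batch size, sequence length, number of instances
chunk_batch = B // N                    # each instance receives a [B / N, S] chunk

per_instance_latency_s = [0.021, 0.023, 0.022, 0.024]   # illustrative measurements, one per instance

latency = max(per_instance_latency_s)                               # time until the whole batch is done
throughput = sum(chunk_batch / t for t in per_instance_latency_s)   # samples / second over the pool

print(f"chunk shape: [{chunk_batch}, {S}]  "
      f"latency: {latency * 1000:.1f} ms  throughput: {throughput:.1f} samples/s")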
9. ConclusionThrough this blog post, we covered the out-of-the-box BERT inference performance one can expect for PyTorch and TensorFlow, from a simple PyPi install and without further tuning.It is important to highlight that the results provided here reflect an out-of-the-box framework setup and hence might not provide the absolute best performance.We decided not to include optimizations as part of this blog post to focus on hardware and efficiency. Optimizations will be discussed in the second part! 🚀Then, we covered and detailed the impact and importance of setting the thread affinity, along with the trade-off between the target problem size and the number of cores required for achieving the task.Also, it is important to define which criterion (i.e. latency vs throughput) to use when optimizing your deployment, as the resulting setups might be totally different.On a more general note, small problem sizes (short sequences and/or small batches) might require much fewer cores to achieve the best possible latency than big problems (very long sequences and/or big batches).It is interesting to cover all these aspects when thinking about the final deployment platform as it might cut the cost of the infrastructure drastically.For instance, our 48-core machine costs $4.848/h, whereas a smaller instance with only 8 cores lowers the cost to $0.808/h, leading to a 6x cost reduction. Last but not least, many of the knobs discussed in this blog post can be automatically tuned through a launcher script highly inspired by the original script made by Intel and available here.The launcher script is able to automatically start your Python process(es) with the correct thread affinity, effectively splitting resources across instances, along with many other performance tips! We will detail many of these tips in the second part 🧐.In the follow-up blog post, more advanced settings and tuning techniques to decrease model latency even further will be involved, such as: Launcher script walk-throughTuning the memory allocation libraryUsing Linux's Transparent Huge Pages mechanismsUsing vendor-specific Math/Parallel librariesStay tuned! 🤗AcknowledgmentsOmry Yadan (Facebook FAIR) - Author of OmegaConf & Hydra for all the tips setting up Hydra correctly.All Intel & Intel Labs' NLP colleagues - For the ongoing optimizations and research efforts they are putting into transformers and more generally in the NLP field.Hugging Face colleagues - For all the comments and improvements in the reviewing process.ReferencesBenchmarking Transformers: PyTorch and TensorFlowHuggingFace's Transformers: State-of-the-art Natural Language ProcessingHuggingFace's Model HubBERT - Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin & al. 2018)Illustrated Transformer blogpost from Jay AlammarPyTorch - TorchScriptGoogle Accelerated Linear Algebra (XLA)ONNX Runtime - Optimize and Accelerate Machine Learning Inferencing and TrainingQ8BERT - Quantized 8Bit BERT (Zafrir & al. 2019)OpenMPIntel oneDNNIntel® Hyper-Threading Technology - Technical User GuideIntroduction to Hyper-Threading TechnologyBLAS (Basic Linear Algebra Subprogram) - WikipediaOptimizing Applications for NUMA
https://huggingface.co/blog/accelerate-library
Introducing 🤗 Accelerate
Sylvain Gugger
April 16, 2021
🤗 AccelerateRun your raw PyTorch training scripts on any kind of device.Most high-level libraries above PyTorch provide support for distributed training and mixed precision, but the abstraction they introduce require a user to learn a new API if they want to customize the underlying training loop. 🤗 Accelerate was created for PyTorch users who like to have full control over their training loops but are reluctant to write (and maintain) the boilerplate code needed to use distributed training (for multi-GPU on one or several nodes, TPUs, ...) or mixed precision training. Plans forward include support for fairscale, deepseed, AWS SageMaker specific data-parallelism and model parallelism.It provides two things: a simple and consistent API that abstracts that boilerplate code and a launcher command to easily run those scripts on various setups.Easy integration!Let's first have a look at an example:import torchimport torch.nn.functional as Ffrom datasets import load_dataset+ from accelerate import Accelerator+ accelerator = Accelerator()- device = 'cpu'+ device = accelerator.devicemodel = torch.nn.Transformer().to(device)optim = torch.optim.Adam(model.parameters())dataset = load_dataset('my_dataset')data = torch.utils.data.DataLoader(dataset, shuffle=True)+ model, optim, data = accelerator.prepare(model, optim, data)model.train()for epoch in range(10):for source, targets in data:source = source.to(device)targets = targets.to(device)optimizer.zero_grad()output = model(source)loss = F.cross_entropy(output, targets)- loss.backward()+ accelerator.backward(loss)optimizer.step()By just adding five lines of code to any standard PyTorch training script, you can now run said script on any kind of distributed setting, as well as with or without mixed precision. 🤗 Accelerate even handles the device placement for you, so you can simplify the training loop above even further:import torchimport torch.nn.functional as Ffrom datasets import load_dataset+ from accelerate import Accelerator+ accelerator = Accelerator()- device = 'cpu'- model = torch.nn.Transformer().to(device)+ model = torch.nn.Transformer()optim = torch.optim.Adam(model.parameters())dataset = load_dataset('my_dataset')data = torch.utils.data.DataLoader(dataset, shuffle=True)+ model, optim, data = accelerator.prepare(model, optim, data)model.train()for epoch in range(10):for source, targets in data:- source = source.to(device)- targets = targets.to(device)optimizer.zero_grad()output = model(source)loss = F.cross_entropy(output, targets)- loss.backward()+ accelerator.backward(loss)optimizer.step()In contrast, here are the changes needed to have this code run with distributed training are the followings:+ import osimport torchimport torch.nn.functional as Ffrom datasets import load_dataset+ from torch.utils.data import DistributedSampler+ from torch.nn.parallel import DistributedDataParallel+ local_rank = int(os.environ.get("LOCAL_RANK", -1))- device = 'cpu'+ device = device = torch.device("cuda", local_rank)model = torch.nn.Transformer().to(device)+ model = DistributedDataParallel(model) optim = torch.optim.Adam(model.parameters())dataset = load_dataset('my_dataset')+ sampler = DistributedSampler(dataset)- data = torch.utils.data.DataLoader(dataset, shuffle=True)+ data = torch.utils.data.DataLoader(dataset, sampler=sampler)model.train()for epoch in range(10):+ sampler.set_epoch(epoch) for source, targets in data:source = source.to(device)targets = targets.to(device)optimizer.zero_grad()output = model(source)loss = F.cross_entropy(output, 
targets)loss.backward()optimizer.step()These changes will make your training script work for multiple GPUs, but your script will then stop working on CPU or one GPU (unless you start adding if statements everywhere). Even more annoying, if you wanted to test your script on TPUs you would need to change different lines of codes. Same for mixed precision training. The promise of 🤗 Accelerate is:to keep the changes to your training loop to the bare minimum so you have to learn as little as possible.to have the same functions work for any distributed setup, so only have to learn one API.How does it work?To see how the library works in practice, let's have a look at each line of code we need to add to a training loop.accelerator = Accelerator()On top of giving the main object that you will use, this line will analyze from the environment the type of distributed training run and perform the necessary initialization. You can force a training on CPU or a mixed precision training by passing cpu=True or fp16=True to this init. Both of those options can also be set using the launcher for your script.model, optim, data = accelerator.prepare(model, optim, data)This is the main bulk of the API and will prepare the three main type of objects: models (torch.nn.Module), optimizers (torch.optim.Optimizer) and dataloaders (torch.data.dataloader.DataLoader).ModelModel preparation include wrapping it in the proper container (for instance DistributedDataParallel) and putting it on the proper device. Like with a regular distributed training, you will need to unwrap your model for saving, or to access its specific methods, which can be done with accelerator.unwrap_model(model).OptimizerThe optimizer is also wrapped in a special container that will perform the necessary operations in the step to make mixed precision work. It will also properly handle device placement of the state dict if its non-empty or loaded from a checkpoint.DataLoaderThis is where most of the magic is hidden. As you have seen in the code example, the library does not rely on a DistributedSampler, it will actually work with any sampler you might pass to your dataloader (if you ever had to write a distributed version of your custom sampler, there is no more need for that!). The dataloader is wrapped in a container that will only grab the indices relevant to the current process in the sampler (or skip the batches for the other processes if you use an IterableDataset) and put the batches on the proper device.For this to work, Accelerate provides a utility function that will synchronize the random number generators on each of the processes run during distributed training. By default, it only synchronizes the generator of your sampler, so your data augmentation will be different on each process, but the random shuffling will be the same. 
You can of course use this utility to synchronize more RNGs if you need it.accelerator.backward(loss)This last line adds the necessary steps for the backward pass (mostly for mixed precision but other integrations will require some custom behavior here).What about evaluation?Evaluation can either be run normally on all processes, or if you just want it to run on the main process, you can use the handy test:if accelerator.is_main_process():# Evaluation loopBut you can also very easily run a distributed evaluation using Accelerate, here is what you would need to add to your evaluation loop:+ eval_dataloader = accelerator.prepare(eval_dataloader)predictions, labels = [], []for source, targets in eval_dataloader:with torch.no_grad():output = model(source)- predictions.append(output.cpu().numpy())- labels.append(targets.cpu().numpy())+ predictions.append(accelerator.gather(output).cpu().numpy())+ labels.append(accelerator.gather(targets).cpu().numpy())predictions = np.concatenate(predictions)labels = np.concatenate(labels)+ predictions = predictions[:len(eval_dataloader.dataset)]+ labels = label[:len(eval_dataloader.dataset)]metric_compute(predictions, labels)Like for the training, you need to add one line to prepare your evaluation dataloader. Then you can just use accelerator.gather to gather across processes the tensors of predictions and labels. The last line to add truncates the predictions and labels to the number of examples in your dataset because the prepared evaluation dataloader will return a few more elements to make sure batches all have the same size on each process.One launcher to rule them allThe scripts using Accelerate will be completely compatible with your traditional launchers, such as torch.distributed.launch. But remembering all the arguments to them is a bit annoying and when you've setup your instance with 4 GPUs, you'll run most of your trainings using them all. Accelerate comes with a handy CLI that works in two steps:accelerate configThis will trigger a little questionnaire about your setup, which will create a config file you can edit with all the defaults for your training commands. Thenaccelerate launch path_to_script.py --args_to_the_scriptwill launch your training script using those default. The only thing you have to do is provide all the arguments needed by your training script.To make this launcher even more awesome, you can use it to spawn an AWS instance using SageMaker. Look at this guide to discover how!How to get involved?To get started, just pip install accelerate or see the documentation for more install options.Accelerate is a fully open-sourced project, you can find it on GitHub, have a look at its documentation or skim through our basic examples. Please let us know if you have any issue or feature you would like the library to support. For all questions, the forums is the place to check!For more complex examples in situation, you can look at the official Transformers examples. Each folder contains a run_task_no_trainer.py that leverages the Accelerate library!
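Putting the pieces above together, here is a minimal, self-contained sketch of a complete training script using Accelerate, with a toy model and random data standing in for a real workload (names and shapes are illustrative). It also shows saving the final weights from the main process after unwrapping the model, as described earlier.

import torch
import torch.nn.functional as F
from accelerate import Accelerator

accelerator = Accelerator()

# toy stand-ins for a real model and dataset
model = torch.nn.Linear(128, 2)
optimizer = torch.optim.Adam(model.parameters())
dataset = torch.utils.data.TensorDataset(torch.randn(64, 128), torch.randint(0, 2, (64,)))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for epoch in range(3):
    for source, targets in dataloader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(source), targets)
        accelerator.backward(loss)
        optimizer.step()

# unwrap the container (e.g. DistributedDataParallel) added by prepare() before saving
if accelerator.is_main_process:
    torch.save(accelerator.unwrap_model(model).state_dict(), "model.bin")

Launched with accelerate launch, the same file runs unchanged on the distributed setups described above.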
https://huggingface.co/blog/sagemaker-distributed-training-seq2seq
Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker
Philipp Schmid
April 8, 2021
In case you missed it: on March 25th we announced a collaboration with Amazon SageMaker to make it easier to create State-of-the-Art Machine Learning models, and ship cutting-edge NLP features faster. Together with the SageMaker team, we built 🤗 Transformers optimized Deep Learning Containers to accelerate training of Transformers-based models. Thanks AWS friends!🤗 🚀 With the new HuggingFace estimator in the SageMaker Python SDK, you can start training with a single line of code. The announcement blog post provides all the information you need to know about the integration, including a "Getting Started" example and links to documentation, examples, and features.listed again here:🤗 Transformers Documentation: Amazon SageMakerExample NotebooksAmazon SageMaker documentation for Hugging FacePython SDK SageMaker documentation for Hugging FaceDeep Learning ContainerIf you're not familiar with Amazon SageMaker: "Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models." [REF]TutorialWe will use the new Hugging Face DLCs and Amazon SageMaker extension to train a distributed Seq2Seq-transformer model on the summarization task using the transformers and datasets libraries, and then upload the model to huggingface.co and test it.As distributed training strategy we are going to use SageMaker Data Parallelism, which has been built into the Trainer API. To use data-parallelism we only have to define the distribution parameter in our HuggingFace estimator.# configuration for running training on smdistributed Data Paralleldistribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}In this tutorial, we will use an Amazon SageMaker Notebook Instance for running our training job. You can learn here how to set up a Notebook Instance.What are we going to do:Set up a development environment and install sagemakerChoose 🤗 Transformers examples/ scriptConfigure distributed training and hyperparametersCreate a HuggingFace estimator and start trainingUpload the fine-tuned model to huggingface.coTest inferenceModel and DatasetWe are going to fine-tune facebook/bart-large-cnn on the samsum dataset. "BART is sequence-to-sequence model trained with denoising as pretraining objective." [REF]The samsum dataset contains about 16k messenger-like conversations with summaries. {"id": "13818513","summary": "Amanda baked cookies and will bring Jerry some tomorrow.","dialogue": "Amanda: I baked cookies. 
Do you want some?\rJerry: Sure!\rAmanda: I'll bring you tomorrow :-)"}Set up a development environment and install sagemakerAfter our SageMaker Notebook Instance is running we can select either Jupyer Notebook or JupyterLab and create a new Notebook with the conda_pytorch_p36 kernel.Note: The use of Jupyter is optional: We could also launch SageMaker Training jobs from anywhere we have an SDK installed, connectivity to the cloud and appropriate permissions, such as a Laptop, another IDE or a task scheduler like Airflow or AWS Step Functions.After that we can install the required dependencies!pip install transformers "datasets[s3]" sagemaker --upgradeinstall git-lfs for model upload.!curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh | sudo bash!sudo yum install git-lfs -y!git lfs installTo run training on SageMaker we need to create a sagemaker Session and provide an IAM role with the right permission. This IAM role will be later attached to the TrainingJob enabling it to download data, e.g. from Amazon S3.import sagemakersess = sagemaker.Session()role = sagemaker.get_execution_role()print(f"IAM role arn used for running training: {role}")print(f"S3 bucket used for storing artifacts: {sess.default_bucket()}")Choose 🤗 Transformers examples/ scriptThe 🤗 Transformers repository contains several examples/scripts for fine-tuning models on tasks from language-modeling to token-classification. In our case, we are using the run_summarization.py from the seq2seq/ examples. Note: you can use this tutorial as-is to train your model on a different examples script.Since the HuggingFace Estimator has git support built-in, we can specify a training script stored in a GitHub repository as entry_point and source_dir.We are going to use the transformers 4.4.2 DLC which means we need to configure the v4.4.2 as the branch to pull the compatible example scripts.#git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.4.2'} # v4.4.2 is referring to the `transformers_version you use in the estimator.# used due an missing package in v4.4.2 git_config = {'repo': 'https://github.com/philschmid/transformers.git','branch': 'master'} # v4.4.2 is referring to the `transformers_version you use in the estimator.Configure distributed training and hyperparametersNext, we will define our hyperparameters and configure our distributed training strategy. As hyperparameter, we can define any Seq2SeqTrainingArguments and the ones defined in run_summarization.py. # hyperparameters, which are passed into the training jobhyperparameters={'per_device_train_batch_size': 4,'per_device_eval_batch_size': 4,'model_name_or_path':'facebook/bart-large-cnn','dataset_name':'samsum','do_train':True,'do_predict': True,'predict_with_generate': True,'output_dir':'/opt/ml/model','num_train_epochs': 3,'learning_rate': 5e-5,'seed': 7,'fp16': True,}# configuration for running training on smdistributed Data Paralleldistribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}Since, we are using SageMaker Data Parallelism our total_batch_size will be per_device_train_batch_size * n_gpus.Create a HuggingFace estimator and start trainingThe last step before training is creating a HuggingFace estimator. The Estimator handles the end-to-end Amazon SageMaker training. 
We define which fine-tuning script should be used as entry_point, which instance_type should be used, and which hyperparameters are passed in.from sagemaker.huggingface import HuggingFace# create the Estimatorhuggingface_estimator = HuggingFace(entry_point='run_summarization.py', # scriptsource_dir='./examples/seq2seq', # relative path to examplegit_config=git_config,instance_type='ml.p3dn.24xlarge',instance_count=2,transformers_version='4.4.2',pytorch_version='1.6.0',py_version='py36',role=role,hyperparameters = hyperparameters,distribution = distribution)As instance_type we are using ml.p3dn.24xlarge, which contains 8x NVIDIA V100 GPUs, with an instance_count of 2. This means we are going to run training on 16 GPUs and a total_batch_size of 16*4=64. We are going to train a 400 Million Parameter model with a total_batch_size of 64, which is just wow.To start our training we call the .fit() method.# starting the training jobhuggingface_estimator.fit()2021-04-01 13:00:35 Starting - Starting the training job...2021-04-01 13:01:03 Starting - Launching requested ML instancesProfilerReport-1617282031: InProgress2021-04-01 13:02:23 Starting - Preparing the instances for training......2021-04-01 13:03:25 Downloading - Downloading input data...2021-04-01 13:04:04 Training - Downloading the training image...............2021-04-01 13:06:33 Training - Training image download completed. Training in progress........2021-04-01 13:16:47 Uploading - Uploading generated training model2021-04-01 13:27:49 Completed - Training job completedTraining seconds: 2882Billable seconds: 2882The training seconds are 2882 because they are multiplied by the number of instances. If we calculate 2882/2=1441, it is the duration from "Downloading the training image" to "Training job completed". Converted to real money, our training on 16 NVIDIA Tesla V100 GPUs for a State-of-the-Art summarization model comes down to ~$28.Upload the fine-tuned model to huggingface.coSince our model achieved a pretty good score, we are going to upload it to huggingface.co, create a model_card and test it with the Hosted Inference widget. To upload a model you need to create an account here.We can download our model from Amazon S3 and unzip it using the following snippet.import osimport tarfilefrom sagemaker.s3 import S3Downloaderlocal_path = 'my_bart_model'os.makedirs(local_path, exist_ok = True)# download model from S3S3Downloader.download(s3_uri=huggingface_estimator.model_data, # s3 uri where the trained model is locatedlocal_path=local_path, # local path where *.tar.gz will be savedsagemaker_session=sess # sagemaker session used for training the model)# unzip modeltar = tarfile.open(f"{local_path}/model.tar.gz", "r:gz")tar.extractall(path=local_path)tar.close()os.remove(f"{local_path}/model.tar.gz")Before uploading our model to huggingface.co, we need to create a model_card. The model_card describes the model, includes hyperparameters and results, and specifies which dataset was used for training.
To create a model_card we create a README.md in our local_path # read eval and test results with open(f"{local_path}/eval_results.json") as f:eval_results_raw = json.load(f)eval_results={}eval_results["eval_rouge1"] = eval_results_raw["eval_rouge1"]eval_results["eval_rouge2"] = eval_results_raw["eval_rouge2"]eval_results["eval_rougeL"] = eval_results_raw["eval_rougeL"]eval_results["eval_rougeLsum"] = eval_results_raw["eval_rougeLsum"]with open(f"{local_path}/test_results.json") as f:test_results_raw = json.load(f)test_results={}test_results["test_rouge1"] = test_results_raw["test_rouge1"]test_results["test_rouge2"] = test_results_raw["test_rouge2"]test_results["test_rougeL"] = test_results_raw["test_rougeL"]test_results["test_rougeLsum"] = test_results_raw["test_rougeLsum"]After we extract all the metrics we want to include we are going to create our README.md. Additionally to the automated generation of the results table we add the metrics manually to the metadata of our model card under model-indeximport jsonMODEL_CARD_TEMPLATE = """---language: entags:- sagemaker- bart- summarizationlicense: apache-2.0datasets:- samsummodel-index:- name: {model_name}results:- task: name: Abstractive Text Summarizationtype: abstractive-text-summarizationdataset:name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization" type: samsummetrics:- name: Validation ROGUE-1type: rogue-1value: 42.621- name: Validation ROGUE-2type: rogue-2value: 21.9825- name: Validation ROGUE-Ltype: rogue-lvalue: 33.034- name: Test ROGUE-1type: rogue-1value: 41.3174- name: Test ROGUE-2type: rogue-2value: 20.8716- name: Test ROGUE-Ltype: rogue-lvalue: 32.1337widget:- text: | Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker? Philipp: Sure you can use the new Hugging Face Deep Learning Container. Jeff: ok.Jeff: and how can I get started? Jeff: where can I find documentation? Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face ---## `{model_name}`This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.For more information look at:- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)## Hyperparameters{hyperparameters}## Usagefrom transformers import pipelinesummarizer = pipeline("summarization", model="philschmid/{model_name}")conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker? Philipp: Sure you can use the new Hugging Face Deep Learning Container. Jeff: ok.Jeff: and how can I get started? Jeff: where can I find documentation? Philipp: ok, ok you can find everything here. 
https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face '''nlp(conversation)## Results| key | value || --- | ----- |{eval_table}{test_table}"""# Generate model card (todo: add more data from Trainer)model_card = MODEL_CARD_TEMPLATE.format(model_name=f"{hyperparameters['model_name_or_path'].split('/')[1]}-{hyperparameters['dataset_name']}",hyperparameters=json.dumps(hyperparameters, indent=4, sort_keys=True),eval_table="".join(f"| {k} | {v} |" for k, v in eval_results.items()),test_table="".join(f"| {k} | {v} |" for k, v in test_results.items()),)with open(f"{local_path}/README.md", "w") as f:f.write(model_card)After we have our unzipped model and model card located in my_bart_model we can use the either huggingface_hub SDK to create a repository and upload it to huggingface.co – or just to https://huggingface.co/new an create a new repository and upload it.from getpass import getpassfrom huggingface_hub import HfApi, Repositoryhf_username = "philschmid" # your username on huggingface.cohf_email = "philipp@huggingface.co" # email used for commitrepository_name = f"{hyperparameters['model_name_or_path'].split('/')[1]}-{hyperparameters['dataset_name']}" # repository name on huggingface.copassword = getpass("Enter your password:") # creates a prompt for entering password# get hf tokentoken = HfApi().login(username=hf_username, password=password)# create repositoryrepo_url = HfApi().create_repo(token=token, name=repository_name, exist_ok=True)# create a Repository instancemodel_repo = Repository(use_auth_token=token,clone_from=repo_url,local_dir=local_path,git_user=hf_username,git_email=hf_email)# push model to the hubmodel_repo.push_to_hub()Test inferenceAfter we uploaded our model we can access it at https://huggingface.co/{hf_username}/{repository_name} print(f"https://huggingface.co/{hf_username}/{repository_name}")And use the "Hosted Inference API" widget to test it. https://huggingface.co/philschmid/bart-large-cnn-samsum
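Besides the Hosted Inference API widget, the uploaded checkpoint can also be tested locally with the transformers pipeline; this quick sketch assumes the repository name created in this tutorial and requires transformers to be installed locally.

from transformers import pipeline

summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")

conversation = """Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face"""

print(summarizer(conversation))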
https://huggingface.co/blog/big-bird
Understanding BigBird's Block Sparse Attention
Vasudev Gupta
March 31, 2021
IntroductionTransformer-based models have proven to be very useful for many NLP tasks. However, a major limitation of transformer-based models is their O(n^2) time & memory complexity (where n is the sequence length). Hence, it's computationally very expensive to apply transformer-based models on long sequences (n > 512). Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention, try to remedy this problem by approximating the full attention matrix. You can check out 🤗's recent blog post in case you are unfamiliar with these models.BigBird (introduced in paper) is one such recent model to address this issue. BigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower computational cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long document summarization and question-answering with long contexts.A BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this post is to give the reader an in-depth understanding of the BigBird implementation & ease one's life in using BigBird with 🤗Transformers. But, before going into more depth, it is important to remember that BigBird's attention is an approximation of BERT's full attention and therefore does not strive to be better than BERT's full attention, but rather to be more efficient. It simply allows applying transformer-based models to much longer sequences since BERT's quadratic memory requirement quickly becomes unbearable. Simply put, if we had infinite compute & infinite time, BERT's attention would be preferred over block sparse attention (which we are going to discuss in this post).If you wonder why we need more compute when working with longer sequences, this blog post is just right for you!Some of the main questions one might have when working with standard BERT-like attention include:Do all tokens really have to attend to all other tokens?Why not compute attention only over important tokens?How to decide what tokens are important?How to attend to just a few tokens in a very efficient way?In this blog post, we will try to answer those questions.What tokens should be attended to?We will give a practical example of how attention works by considering the sentence "BigBird is now available in HuggingFace for extractive question answering".In BERT-like attention, every word would simply attend to all other tokens. Put mathematically, this would mean that each queried token query-token ∈ {BigBird, is, now, available, in, HuggingFace, for, extractive, question, answering} would attend to the full list of key-tokens = [BigBird, is, now, available, in, HuggingFace, for, extractive, question, answering].
Let's think about a sensible choice of key tokens that a queried token actually only needs to attend to, by writing some pseudo-code.We will assume that the token available is queried and build a sensible list of key tokens to attend to.>>> # let's consider the following sentence as an example>>> example = ['BigBird', 'is', 'now', 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']>>> # further, let's assume we're trying to understand the representation of 'available' i.e. >>> query_token = 'available'>>> # We will initialize an empty `set` and fill up the tokens of our interest as we proceed in this section.>>> key_tokens = set() # => currently the 'available' token doesn't have anything to attend toNearby tokens should be important because, in a sentence (sequence of words), the current word is highly dependent on neighboring past & future tokens. This intuition is the idea behind the concept of sliding attention.>>> # considering `window_size = 3`, we will consider 1 token to the left & 1 to the right of 'available'>>> # left token: 'now' ; right token: 'in'>>> sliding_tokens = ["now", "available", "in"]>>> # let's update our collection with the above tokens>>> key_tokens.update(sliding_tokens)Long-range dependencies: For some tasks, it is crucial to capture long-range relationships between tokens. E.g., in question-answering the model needs to compare each token of the context to the whole question to be able to figure out which part of the context is useful for a correct answer. If most of the context tokens would just attend to other context tokens, but not to the question, it becomes much harder for the model to filter important context tokens from less important context tokens.Now, BigBird proposes two ways of allowing long-term attention dependencies while staying computationally efficient.Global tokens: Introduce some tokens which will attend to every token and which are attended to by every token. Eg: "HuggingFace is building nice libraries for easy NLP". Now, let's say 'building' is defined as a global token, and the model needs to know the relation between 'NLP' & 'HuggingFace' for some task (Note: these 2 tokens are at the two extremes); having 'building' attend globally to all other tokens will probably help the model to associate 'NLP' with 'HuggingFace'.>>> # let's assume the 1st & last tokens to be `global`, then>>> global_tokens = ["BigBird", "answering"]>>> # fill up global tokens in our key tokens collection>>> key_tokens.update(global_tokens)Random tokens: Select some tokens randomly which will transfer information by transferring it to other tokens, which in turn can transfer it to other tokens. This may reduce the cost of information travel from one token to another.>>> # now we can choose `r` tokens randomly from our example sentence>>> # let's choose 'is' assuming `r=1`>>> random_tokens = ["is"] # Note: it is chosen completely randomly; so it can be anything else as well.>>> # fill random tokens into our collection>>> key_tokens.update(random_tokens)>>> # it's time to see what tokens are in our `key_tokens` set>>> key_tokens{'now', 'is', 'in', 'answering', 'available', 'BigBird'}# Now, 'available' (the query we chose in our 1st step) will attend only to these tokens instead of attending to the complete sequenceThis way, the query token attends only to a subset of all possible tokens while yielding a good approximation of full attention. The same approach is used for all other queried tokens.
But remember, the whole point here is to approximate BERT's full attention as efficiently as possible. Simply making each queried token attend all key tokens as it's done for BERT can be computed very effectively as a sequence of matrix multiplication on modern hardware, like GPUs. However, a combination of sliding, global & random attention appears to imply sparse matrix multiplication, which is harder to implement efficiently on modern hardware.One of the major contributions of BigBird is the proposition of a block sparse attention mechanism that allows computing sliding, global & random attention effectively. Let's look into it!Understanding the need for global, sliding, random keys with GraphsFirst, let's get a better understanding of global, sliding & random attention using graphs and try to understand how the combination of these three attention mechanisms yields a very good approximation of standard Bert-like attention.The above figure shows global (left), sliding (middle) & random (right) connections respectively as a graph. Each node corresponds to a token and each line represents an attention score. If no connection is made between 2 tokens, then an attention score is assumed to 0.BigBird block sparse attention is a combination of sliding, global & random connections (total 10 connections) as shown in gif in left. While a graph of normal attention (right) will have all 15 connections (note: total 6 nodes are present). You can simply think of normal attention as all the tokens attending globally 1 {}^1 1.Normal attention: Model can transfer information from one token to another token directly in a single layer since each token is queried over every other token and is attended by every other token. Let's consider an example similar to what is shown in the above figures. If the model needs to associate 'going' with 'now', it can simply do that in a single layer since there is a direct connection joining both the tokens.Block sparse attention: If the model needs to share information between two nodes (or tokens), information will have to travel across various other nodes in the path for some of the tokens; since all the nodes are not directly connected in a single layer.Eg., assuming model needs to associate 'going' with 'now', then if only sliding attention is present the flow of information among those 2 tokens, is defined by the path: going -> am -> i -> now (i.e. it will have to travel over 2 other tokens). Hence, we may need multiple layers to capture the entire information of the sequence. Normal attention can capture this in a single layer. In an extreme case, this could mean that as many layers as input tokens are needed. If, however, we introduce some global tokens information can travel via the path: going -> i -> now (which is shorter). If we in addition introduce random connections it can travel via: going -> am -> now. With the help of random connections & global connections, information can travel very rapidly (with just a few layers) from one token to the next.In case, we have many global tokens, then we may not need random connections since there will be multiple short paths through which information can travel. This is the idea behind keeping num_random_tokens = 0 when working with a variant of BigBird, called ETC (more on this in later sections).1 {}^1 1 In these graphics, we are assuming that the attention matrix is symmetric i.e. Aij=Aji\mathbf{A}_{ij} = \mathbf{A}_{ji}Aij​=Aji​ since in a graph if some token A attends B, then B will also attend A. 
You can see from the figure of the attention matrix shown in the next section that this assumption holds for most tokens in BigBird.
Attention Type | global_tokens | sliding_tokens | random_tokens
original_full | n | 0 | 0
block_sparse | 2 x block_size | 3 x block_size | num_random_blocks x block_size
original_full represents BERT's attention while block_sparse represents BigBird's attention. Wondering what the block_size is? We will cover that in later sections. For now, consider it to be 1 for simplicity.BigBird block sparse attentionBigBird block sparse attention is just an efficient implementation of what we discussed above. Each token is attending to some global tokens, sliding tokens, & random tokens instead of attending to all other tokens. The authors hardcoded the attention matrix for multiple query components separately, and used a cool trick to speed up training/inference on GPU and TPU.Note: on the top, we have 2 extra sentences. As you can notice, every token is just switched by one place in both sentences. This is how sliding attention is implemented. When q[i] is multiplied with k[i,0:3], we will get a sliding attention score for q[i] (where i is the index of an element in the sequence).You can find the actual implementation of block_sparse attention here. This may look very scary 😨😨 now. But this article will surely ease your life in understanding the code.Global AttentionFor global attention, each query simply attends to all the other tokens in the sequence & is attended to by every other token. Let's assume Vasudev (1st token) & them (last token) to be global (in the above figure). You can see that these tokens are directly connected to all other tokens (blue boxes).# pseudo codeQ -> Query matrix (seq_length, head_dim)K -> Key matrix (seq_length, head_dim)# 1st & last token attends all other tokensQ[0] x [K[0], K[1], K[2], ......, K[n-1]]Q[n-1] x [K[0], K[1], K[2], ......, K[n-1]]# 1st & last token getting attended by all other tokensK[0] x [Q[0], Q[1], Q[2], ......, Q[n-1]]K[n-1] x [Q[0], Q[1], Q[2], ......, Q[n-1]]Sliding AttentionThe sequence of key tokens is copied 2 times, with each element shifted to the right in one of the copies and to the left in the other copy. Now if we multiply the query sequence vectors by these 3 sequence vectors, we will cover all the sliding tokens. The computational complexity is simply O(3xn) = O(n). Referring to the above picture, the orange boxes represent the sliding attention. You can see 3 sequences at the top of the figure with 2 of them shifted by one token (1 to the left, 1 to the right).# what we want to doQ[i] x [K[i-1], K[i], K[i+1]] for i = 1:-1# efficient implementation in code (assume dot product multiplication 👇)[Q[0], Q[1], Q[2], ......, Q[n-2], Q[n-1]] x [K[1], K[2], K[3], ......, K[n-1], K[0]][Q[0], Q[1], Q[2], ......, Q[n-1]] x [K[n-1], K[0], K[1], ......, K[n-2]][Q[0], Q[1], Q[2], ......, Q[n-1]] x [K[0], K[1], K[2], ......, K[n-1]]# Each sequence is getting multiplied by only 3 sequences to keep `window_size = 3`.# Some computations might be missing; this is just a rough idea.
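As a small numeric illustration of the shifting trick described above (and not the actual 🤗 implementation), the three shifted key sequences can be built with torch.roll and multiplied element-wise with the queries to obtain, for every position, its scores against the left, center and right neighbours; shapes below are illustrative.

import torch

seq_len, head_dim = 8, 4
Q = torch.randn(seq_len, head_dim)
K = torch.randn(seq_len, head_dim)

left   = (Q * torch.roll(K, shifts=1,  dims=0)).sum(-1)   # q[i] . k[i-1]
center = (Q * K).sum(-1)                                   # q[i] . k[i]
right  = (Q * torch.roll(K, shifts=-1, dims=0)).sum(-1)   # q[i] . k[i+1]

sliding_scores = torch.stack([left, center, right], dim=-1)
print(sliding_scores.shape)   # (seq_len, 3) -> one window of scores per query position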
Random AttentionRandom attention ensures that each query token will also attend to a few random tokens. For the actual implementation, this means that the model gathers some tokens randomly and computes their attention score.# r1, r2, r are some random indices; Note: r1, r2, r3 are different for each row 👇Q[1] x [Q[r1], Q[r2], ......, Q[r]]...Q[n-2] x [Q[r1], Q[r2], ......, Q[r]]# leaving the 0th & (n-1)th tokens since they are already globalNote: The current implementation further divides the sequence into blocks & each notation is defined w.r.t. blocks instead of tokens. Let's discuss this in more detail in the next section.ImplementationRecap: In regular BERT attention, a sequence of tokens X = x_1, x_2, ...., x_n is projected through a dense layer into Q, K, V and the attention score Z is calculated as Z = Softmax(QK^T). In the case of BigBird block sparse attention, the same algorithm is used, but only with some selected query & key vectors.Let's have a look at how BigBird block sparse attention is implemented. To begin with, let's assume b, r, s, g represent block_size, num_random_blocks, num_sliding_blocks, num_global_blocks, respectively. Visually, we can illustrate the components of BigBird's block sparse attention with b=4, r=1, g=2, s=3, d=5 as follows:Attention scores for q_1, q_2, q_{3:n-2}, q_{n-1}, q_n are calculated separately as described below:The attention score for q_1, represented by a_1 where a_1 = Softmax(q_1 * K^T), is nothing but the attention score between all the tokens in the 1st block and all the other tokens in the sequence. q_1 represents the 1st block, g_i represents the i-th block. We are simply performing a normal attention operation between q_1 & g (i.e. all the keys).For calculating the attention score for tokens in the second block, we gather the first three blocks, the last block, and the fifth block. Then we can compute a_2 = Softmax(q_2 * concat(k_1, k_2, k_3, k_5, k_7)).I am representing tokens by g, r, s just to show their nature explicitly (i.e. global, random, sliding tokens), else they are k only.For calculating the attention score for q_{3:n-2}, we will gather global, sliding & random keys and compute the normal attention operation over q_{3:n-2} and the gathered keys. Note that the sliding keys are gathered using the special shifting trick discussed earlier in the sliding attention section.For calculating the attention score for tokens in the second-to-last block (i.e. q_{n-1}), we gather the first block, the last three blocks, and the third block. Then we can apply the formula a_{n-1} = Softmax(q_{n-1} * concat(k_1, k_3, k_5, k_6, k_7)). This is very similar to what we did for q_2.The attention score for q_n is represented by a_n where a_n = Softmax(q_n * K^T), and is nothing but the attention score between all the tokens in the last block and all the other tokens in the sequence. This is very similar to what we did for q_1.
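The following is a tiny, self-contained sketch (deliberately not the actual 🤗 kernel) of the score computation for one of the "middle" query blocks: gather the two global key blocks, the three sliding key blocks and one random key block, then apply a plain softmax attention over just that gathered subset. Block count, block size and the randomly picked block index are illustrative.

import torch

num_blocks, block_size, head_dim = 7, 4, 8
Q = torch.randn(num_blocks, block_size, head_dim)
K = torch.randn(num_blocks, block_size, head_dim)

i = 3                                                   # some middle block
random_block = 5                                        # chosen at random in the real model
gathered_keys = torch.cat([K[0], K[-1],                 # global blocks (first & last)
                           K[i - 1], K[i], K[i + 1],    # sliding blocks
                           K[random_block]], dim=0)     # random block

scores = torch.softmax(Q[i] @ gathered_keys.T, dim=-1)
print(scores.shape)   # (block_size, 6 * block_size) instead of (block_size, num_blocks * block_size)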
Let's combine the above matrices to get the final attention matrix. This attention matrix can be used to get a representation of all the tokens.blue -> global blocks, red -> random blocks, orange -> sliding blocks. This attention matrix is just for illustration. During the forward pass, we aren't storing the white blocks, but are computing a weighted value matrix (i.e. the representation of each token) directly for each separated component as discussed above.Now, we have covered the hardest part of block sparse attention, i.e. its implementation. Hopefully, you now have a better background to understand the actual code. Feel free to dive into it and to connect each part of the code with one of the components above.Time & Memory complexity
Attention Type | Sequence length | Time & Memory Complexity
original_full | 512 | T
original_full | 1024 | 4 x T
original_full | 4096 | 64 x T
block_sparse | 1024 | 2 x T
block_sparse | 4096 | 8 x T
Comparison of time & space complexity of BERT attention and BigBird block sparse attention.Expand this snippet in case you wanna see the calculations:
BigBird time complexity = O(w x n + r x n + g x n)
BERT time complexity = O(n^2)
Assumptions:
w = 3 x 64
r = 3 x 64
g = 2 x 64
When seqlen = 512
=> time complexity in BERT = 512^2
When seqlen = 1024
=> time complexity in BERT = (2 x 512)^2 = 4 x 512^2
=> time complexity in BigBird = (8 x 64) x (2 x 512) = 2 x 512^2
When seqlen = 4096
=> time complexity in BERT = (8 x 512)^2 = 64 x 512^2
=> time complexity in BigBird = (8 x 64) x (8 x 512) = 8 x (512 x 512) = 8 x 512^2
ITC vs ETCThe BigBird model can be trained using 2 different strategies: ITC & ETC. ITC (internal transformer construction) is simply what we discussed above. In ETC (extended transformer construction), some additional tokens are made global such that they will attend to / will be attended by all tokens.ITC requires less compute since very few tokens are global, while at the same time the model can capture sufficient global information (also with the help of random attention). On the other hand, ETC can be very helpful for tasks in which we need a lot of global tokens, such as question-answering, for which the entire question should be attended to globally by the context to be able to relate the context correctly to the question.Note: It is shown in the BigBird paper that in many ETC experiments, the number of random blocks is set to 0. This is reasonable given our discussions above in the graph section.The table below summarizes ITC & ETC. The global-attention pattern of ITC corresponds to matrix A and that of ETC to matrix B:
A = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & & & & & & 1 \\ 1 & & & & & & 1 \\ 1 & & & & & & 1 \\ 1 & & & & & & 1 \\ 1 & & & & & & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}
B = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & & & & & & 1 \\ 1 & 1 & 1 & & & & & & 1 \\ 1 & 1 & 1 & & & & & & 1 \\ 1 & 1 & 1 & & & & & & 1 \\ 1 & 1 & 1 & & & & & & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}
Property | ITC | ETC
Attention Matrix with global attention | A | B
global_tokens | 2 x block_size | extra_tokens + 2 x block_size
random_tokens | num_random_blocks x block_size | num_random_blocks x block_size
sliding_tokens | 3 x block_size | 3 x block_size
Using BigBird with 🤗TransformersYou can use BigBirdModel just like any other 🤗 model.
Let's see some code below:

from transformers import BigBirdModel

# loading bigbird from its pretrained checkpoint
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base")
# This will init the model with the default configuration, i.e. attention_type = "block_sparse", num_random_blocks = 3, block_size = 64.
# But you can freely change these arguments with any checkpoint. These 3 arguments will just change the number of tokens each query token is going to attend to.
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", num_random_blocks=2, block_size=16)

# By setting attention_type to `original_full`, BigBird will rely on full attention of n^2 complexity. This way BigBird is 99.9 % similar to BERT.
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")

There are a total of 3 checkpoints available on the 🤗 Hub (at the time of writing this article): bigbird-roberta-base, bigbird-roberta-large, bigbird-base-trivia-itc. The first two checkpoints come from pretraining BigBirdForPretraining with masked_lm loss, while the last one corresponds to the checkpoint after finetuning BigBirdForQuestionAnswering on the trivia-qa dataset.

Let's have a look at the minimal code you can write (in case you'd like to use your own PyTorch trainer) to use 🤗's BigBird model for fine-tuning on your tasks.

# let's consider our task to be question-answering as an example
from transformers import BigBirdForQuestionAnswering, BigBirdTokenizer
import torch

device = torch.device("cpu")
if torch.cuda.is_available():
    device = torch.device("cuda")

# let's initialize the bigbird model from pretrained weights with a randomly initialized head on top
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base", block_size=64, num_random_blocks=3)
tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
model.to(device)

dataset = "torch.utils.data.DataLoader object"
optimizer = "torch.optim object"
epochs = ...

# very minimal training loop
for e in range(epochs):
    for batch in dataset:
        model.train()
        batch = {k: batch[k].to(device) for k in batch}

        # forward pass
        output = model(**batch)

        # back-propagation
        output["loss"].backward()
        optimizer.step()
        optimizer.zero_grad()

# let's save the final weights in a local directory
model.save_pretrained("<YOUR-WEIGHTS-DIR>")

# let's push our weights to the 🤗 Hub
from huggingface_hub import ModelHubMixin
ModelHubMixin.push_to_hub("<YOUR-WEIGHTS-DIR>", model_id="<YOUR-FINETUNED-ID>")

# using the finetuned model for inference
question = ["How are you doing?", "How is life going?"]
context = ["<some big context having ans-1>", "<some big context having ans-2>"]
batch = tokenizer(question, context, return_tensors="pt")
batch = {k: batch[k].to(device) for k in batch}

model = BigBirdForQuestionAnswering.from_pretrained("<YOUR-FINETUNED-ID>")
model.to(device)

with torch.no_grad():
    start_logits, end_logits = model(**batch).to_tuple()
    # now decode start_logits, end_logits with whatever strategy you want.

# Note:
# This was very minimal code (in case you want to use raw PyTorch), just to show how easily BigBird can be used
# I would suggest using 🤗 Trainer to have access to a lot of features

It's important to keep the following points in mind while working with BigBird:

- Sequence length must be a multiple of block size, i.e. seqlen % block_size = 0.
  You need not worry, since 🤗 Transformers will automatically <pad> the input (to the smallest multiple of block size which is greater than the sequence length) if the batch sequence length is not a multiple of block_size.
- Currently, the Hugging Face version doesn't support ETC, and hence only the 1st & last blocks will be global.
- The current implementation doesn't support num_random_blocks = 0.
- The authors recommend setting attention_type = "original_full" when the sequence length is < 1024.
- This must hold: seq_length > global_tokens + random_tokens + sliding_tokens + buffer_tokens, where global_tokens = 2 x block_size, sliding_tokens = 3 x block_size, random_tokens = num_random_blocks x block_size & buffer_tokens = num_random_blocks x block_size. In case you fail to do that, 🤗 Transformers will automatically switch attention_type to original_full with a warning (a small sketch of this check is given at the end of this post).
- When using BigBird as a decoder (or using BigBirdForCausalLM), attention_type should be original_full. But you need not worry, 🤗 Transformers will automatically switch attention_type to original_full in case you forget to do that.

What's next?

@patrickvonplaten has made a really cool notebook on how to evaluate BigBirdForQuestionAnswering on the trivia-qa dataset. Feel free to play with BigBird using that notebook. You will soon find a BigBird Pegasus-like model in the library for long document summarization 💥.

End Notes

The original implementation of the block sparse attention matrix can be found here. You can find 🤗's version here.
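To make the sequence-length constraint above concrete, here is a tiny helper (hypothetical, not part of 🤗 Transformers) that mirrors the bookkeeping described in the points above and tells you whether block sparse attention can actually be used for a given configuration:

def can_use_block_sparse(seq_length, block_size=64, num_random_blocks=3):
    # same bookkeeping as in the points above
    global_tokens = 2 * block_size
    sliding_tokens = 3 * block_size
    random_tokens = num_random_blocks * block_size
    buffer_tokens = num_random_blocks * block_size
    return seq_length > global_tokens + sliding_tokens + random_tokens + buffer_tokens

# With the default configuration (block_size=64, num_random_blocks=3) the threshold is 11 x 64 = 704,
# so a 1024-token sequence uses block sparse attention while a 512-token one falls back to original_full.
print(can_use_block_sparse(1024))  # True
print(can_use_block_sparse(512))   # False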
https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
The Partnership: Amazon SageMaker and Hugging Face
No authors found
March 23, 2021
Look at these smiles!Today, we announce a strategic partnership between Hugging Face and Amazon to make it easier for companies to leverage State of the Art Machine Learning models, and ship cutting-edge NLP features faster.Through this partnership, Hugging Face is leveraging Amazon Web Services as its Preferred Cloud Provider to deliver services to its customers.As a first step to enable our common customers, Hugging Face and Amazon are introducing new Hugging Face Deep Learning Containers (DLCs) to make it easier than ever to train Hugging Face Transformer models in Amazon SageMaker.To learn how to access and use the new Hugging Face DLCs with the Amazon SageMaker Python SDK, check out the guides and resources below.On July 8th, 2021 we extended the Amazon SageMaker integration to add easy deployment and inference of Transformers models. If you want to learn how you can deploy Hugging Face models easily with Amazon SageMaker take a look at the new blog post and the documentation.Features & Benefits 🔥One Command is All you NeedWith the new Hugging Face Deep Learning Containers available in Amazon SageMaker, training cutting-edge Transformers-based NLP models has never been simpler. There are variants specially optimized for TensorFlow and PyTorch, for single-GPU, single-node multi-GPU and multi-node clusters.Accelerating Machine Learning from Science to ProductionIn addition to Hugging Face DLCs, we created a first-class Hugging Face extension to the SageMaker Python-sdk to accelerate data science teams, reducing the time required to set up and run experiments from days to minutes.You can use the Hugging Face DLCs with the Automatic Model Tuning capability of Amazon SageMaker, in order to automatically optimize your training hyperparameters and quickly increase the accuracy of your models.Thanks to the SageMaker Studio web-based Integrated Development Environment (IDE), you can easily track and compare your experiments and your training artifacts.Built-in PerformanceWith the Hugging Face DLCs, SageMaker customers will benefit from built-in performance optimizations for PyTorch or TensorFlow, to train NLP models faster, and with the flexibility to choose the training infrastructure with the best price/performance ratio for your workload.The Hugging Face DLCs are fully integrated with the SageMaker distributed training libraries, to train models faster than was ever possible before, using the latest generation of instances available on Amazon EC2.Resources, Documentation & Samples 📄Below you can find all the important resources to all published blog posts, videos, documentation, and sample Notebooks/scripts.Blog/VideoAWS: Embracing natural language processing with Hugging FaceDeploy Hugging Face models easily with Amazon SageMakerAWS and Hugging Face collaborate to simplify and accelerate adoption of natural language processing modelsWalkthrough: End-to-End Text ClassificationWorking with Hugging Face models on Amazon SageMakerDistributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMakerDeploy a Hugging Face Transformers Model from S3 to Amazon SageMakerDeploy a Hugging Face Transformers Model from the Model Hub to Amazon SageMakerDocumentationHugging Face documentation for Amazon SageMakerRun training on Amazon SageMakerDeploy models to Amazon SageMakerFrequently Asked QuestionsAmazon SageMaker documentation for Hugging FacePython SDK SageMaker documentation for Hugging FaceDeep Learning ContainerSageMaker's Distributed Data Parallel LibrarySageMaker's 
Distributed Model Parallel LibrarySample Notebookall NotebooksGetting Started PytorchGetting Started TensorflowDistributed Training Data ParallelismDistributed Training Model ParallelismSpot Instances and continue trainingSageMaker MetricsDistributed Training Data Parallelism TensorflowDistributed Training SummarizationImage Classification with Vision TransformerDeploy one of the 10,000+ Hugging Face Transformers to Amazon SageMaker for InferenceDeploy a Hugging Face Transformer model from S3 to SageMaker for inferenceGetting started: End-to-End Text Classification 🧭In this getting started guide, we will use the new Hugging Face DLCs and Amazon SageMaker extension to train a transformer model on binary text classification using the transformers and datasets libraries.We will use an Amazon SageMaker Notebook Instance for the example. You can learn here how to set up a Notebook Instance.What are we going to do:set up a development environment and install sagemakercreate the training script train.pypreprocess our data and upload it to Amazon S3create a HuggingFace Estimator and train our modelSet up a development environment and install sagemakerAs mentioned above we are going to use SageMaker Notebook Instances for this. To get started you need to jump into your Jupyer Notebook or JupyterLab and create a new Notebook with the conda_pytorch_p36 kernel.Note: The use of Jupyter is optional: We could also launch SageMaker Training jobs from anywhere we have an SDK installed, connectivity to the cloud and appropriate permissions, such as a Laptop, another IDE or a task scheduler like Airflow or AWS Step Functions.After that we can install the required dependenciespip install "sagemaker>=2.31.0" "transformers==4.6.1" "datasets[s3]==1.6.2" --upgradeTo run training on SageMaker we need to create a sagemaker Session and provide an IAM role with the right permission. This IAM role will be later attached to the TrainingJob enabling it to download data, e.g. from Amazon S3.import sagemakersess = sagemaker.Session()# sagemaker session bucket -> used for uploading data, models and logs# sagemaker will automatically create this bucket if it not existssagemaker_session_bucket=Noneif sagemaker_session_bucket is None and sess is not None:# set to default bucket if a bucket name is not givensagemaker_session_bucket = sess.default_bucket()role = sagemaker.get_execution_role()sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)print(f"sagemaker role arn: {role}")print(f"sagemaker bucket: {sess.default_bucket()}")print(f"sagemaker session region: {sess.boto_region_name}")Create the training script train.pyIn a SageMaker TrainingJob we are executing a python script with named arguments. In this example, we use PyTorch together with transformers. 
The script willpass the incoming parameters (hyperparameters from HuggingFace Estimator)load our datasetdefine our compute metrics functionset up our Trainerrun training with trainer.train()evaluate the training and save our model at the end to S3.from transformers import AutoModelForSequenceClassification, Trainer, TrainingArgumentsfrom sklearn.metrics import accuracy_score, precision_recall_fscore_supportfrom datasets import load_from_diskimport randomimport loggingimport sysimport argparseimport osimport torchif __name__ == "__main__":parser = argparse.ArgumentParser()# hyperparameters sent by the client are passed as command-line arguments to the script.parser.add_argument("--epochs", type=int, default=3)parser.add_argument("--train-batch-size", type=int, default=32)parser.add_argument("--eval-batch-size", type=int, default=64)parser.add_argument("--warmup_steps", type=int, default=500)parser.add_argument("--model_name", type=str)parser.add_argument("--learning_rate", type=str, default=5e-5)# Data, model, and output directoriesparser.add_argument("--output-data-dir", type=str, default=os.environ["SM_OUTPUT_DATA_DIR"])parser.add_argument("--model-dir", type=str, default=os.environ["SM_MODEL_DIR"])parser.add_argument("--n_gpus", type=str, default=os.environ["SM_NUM_GPUS"])parser.add_argument("--training_dir", type=str, default=os.environ["SM_CHANNEL_TRAIN"])parser.add_argument("--test_dir", type=str, default=os.environ["SM_CHANNEL_TEST"])args, _ = parser.parse_known_args()# Set up logginglogger = logging.getLogger(__name__)logging.basicConfig(level=logging.getLevelName("INFO"),handlers=[logging.StreamHandler(sys.stdout)],format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",)# load datasetstrain_dataset = load_from_disk(args.training_dir)test_dataset = load_from_disk(args.test_dir)logger.info(f" loaded train_dataset length is: {len(train_dataset)}")logger.info(f" loaded test_dataset length is: {len(test_dataset)}")# compute metrics function for binary classificationdef compute_metrics(pred):labels = pred.label_idspreds = pred.predictions.argmax(-1)precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="binary")acc = accuracy_score(labels, preds)return {"accuracy": acc, "f1": f1, "precision": precision, "recall": recall}# download model from model hubmodel = AutoModelForSequenceClassification.from_pretrained(args.model_name)# define training argstraining_args = TrainingArguments(output_dir=args.model_dir,num_train_epochs=args.epochs,per_device_train_batch_size=args.train_batch_size,per_device_eval_batch_size=args.eval_batch_size,warmup_steps=args.warmup_steps,evaluation_strategy="epoch",logging_dir=f"{args.output_data_dir}/logs",learning_rate=float(args.learning_rate),)# create Trainer instancetrainer = Trainer(model=model,args=training_args,compute_metrics=compute_metrics,train_dataset=train_dataset,eval_dataset=test_dataset,)# train modeltrainer.train()# evaluate modeleval_result = trainer.evaluate(eval_dataset=test_dataset)# writes eval result to file which can be accessed later in s3 outputwith open(os.path.join(args.output_data_dir, "eval_results.txt"), "w") as writer:print(f"***** Eval results *****")for key, value in sorted(eval_result.items()):writer.write(f"{key} = {value}")# Saves the model to s3; default is /opt/ml/model which SageMaker sends to S3trainer.save_model(args.model_dir)Preprocess our data and upload it to s3We use the datasets library to download and preprocess our imdb dataset. 
After preprocessing, the dataset will be uploaded to the current session’s default s3 bucket sess.default_bucket() used within our training job. The imdb dataset consists of 25000 training and 25000 testing highly polar movie reviews.import botocorefrom datasets import load_datasetfrom transformers import AutoTokenizerfrom datasets.filesystems import S3FileSystem# tokenizer used in preprocessingtokenizer_name = 'distilbert-base-uncased'# filesystem client for s3s3 = S3FileSystem()# dataset useddataset_name = 'imdb'# s3 key prefix for the datas3_prefix = 'datasets/imdb'# load datasetdataset = load_dataset(dataset_name)# download tokenizertokenizer = AutoTokenizer.from_pretrained(tokenizer_name)# tokenizer helper functiondef tokenize(batch):return tokenizer(batch['text'], padding='max_length', truncation=True)# load datasettrain_dataset, test_dataset = load_dataset('imdb', split=['train', 'test'])test_dataset = test_dataset.shuffle().select(range(10000)) # smaller the size for test dataset to 10k# tokenize datasettrain_dataset = train_dataset.map(tokenize, batched=True, batch_size=len(train_dataset))test_dataset = test_dataset.map(tokenize, batched=True, batch_size=len(test_dataset))# set format for pytorchtrain_dataset = train_dataset.rename_column("label", "labels")train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])test_dataset = test_dataset.rename_column("label", "labels")test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])# save train_dataset to s3training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'train_dataset.save_to_disk(training_input_path,fs=s3)# save test_dataset to s3test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'test_dataset.save_to_disk(test_input_path,fs=s3)Create a HuggingFace Estimator and train our modelIn order to create a SageMaker Trainingjob we can use a HuggingFace Estimator. The Estimator handles the end-to-end Amazon SageMaker training. In an Estimator, we define which fine-tuning script should be used as entry_point, which instance_type should be used, which hyperparameters are passed in. In addition to this, a number of advanced controls are available, such as customizing the output and checkpointing locations, specifying the local storage size or network configuration.SageMaker takes care of starting and managing all the required Amazon EC2 instances for us with the Hugging Face DLC, it uploads the provided fine-tuning script, for example, our train.py, then downloads the data from the S3 bucket, sess.default_bucket(), into the container. 
Once the data is ready, the training job will start automatically by running./opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32The hyperparameters you define in the HuggingFace Estimator are passed in as named arguments.from sagemaker.huggingface import HuggingFace# hyperparameters, which are passed into the training jobhyperparameters={'epochs': 1,'train_batch_size': 32,'model_name':'distilbert-base-uncased'}# create the Estimatorhuggingface_estimator = HuggingFace(entry_point='train.py',source_dir='./scripts',instance_type='ml.p3.2xlarge',instance_count=1,role=role,transformers_version='4.6',pytorch_version='1.7',py_version='py36',hyperparameters = hyperparameters)To start our training we call the .fit() method and pass our S3 uri as input.# starting the train job with our uploaded datasets as inputhuggingface_estimator.fit({'train': training_input_path, 'test': test_input_path})Additional Features 🚀In addition to the Deep Learning Container and the SageMaker SDK, we have implemented other additional features.Distributed Training: Data-ParallelYou can use SageMaker Data Parallelism Library out of the box for distributed training. We added the functionality of Data Parallelism directly into the Trainer. If your train.py uses the Trainer API you only need to define the distribution parameter in the HuggingFace Estimator.Example Notebook PyTorchExample Notebook TensorFlow# configuration for running training on smdistributed Data Paralleldistribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}# create the Estimatorhuggingface_estimator = HuggingFace(entry_point='train.py',source_dir='./scripts',instance_type='ml.p3dn.24xlarge',instance_count=2,role=role,transformers_version='4.4.2',pytorch_version='1.6.0',py_version='py36',hyperparameters = hyperparametersdistribution = distribution)The "Getting started: End-to-End Text Classification 🧭" example can be used for distributed training without any changes.Distributed Training: Model ParallelYou can use SageMaker Model Parallelism Library out of the box for distributed training. We added the functionality of Model Parallelism directly into the Trainer. 
If your train.py uses the Trainer API you only need to define the distribution parameter in the HuggingFace Estimator.For detailed information about the adjustments take a look here.Example Notebook# configuration for running training on smdistributed Model Parallelmpi_options = {"enabled" : True,"processes_per_host" : 8}smp_options = {"enabled":True,"parameters": {"microbatches": 4,"placement_strategy": "spread","pipeline": "interleaved","optimize": "speed","partitions": 4,"ddp": True,}}distribution={"smdistributed": {"modelparallel": smp_options},"mpi": mpi_options}# create the Estimatorhuggingface_estimator = HuggingFace(entry_point='train.py',source_dir='./scripts',instance_type='ml.p3dn.24xlarge',instance_count=2,role=role,transformers_version='4.4.2',pytorch_version='1.6.0',py_version='py36',hyperparameters = hyperparameters,distribution = distribution)Spot instancesWith the creation of HuggingFace Framework extension for the SageMaker Python SDK we can also leverage the benefit of fully-managed EC2 spot instances and save up to 90% of our training cost.Note: Unless your training job will complete quickly, we recommend you use checkpointing with managed spot training, therefore you need to define the checkpoint_s3_uri.To use spot instances with the HuggingFace Estimator we have to set the use_spot_instances parameter to True and define your max_wait and max_run time. You can read more about the managed spot training lifecycle here.Example Notebook# hyperparameters, which are passed into the training jobhyperparameters={'epochs': 1,'train_batch_size': 32,'model_name':'distilbert-base-uncased','output_dir':'/opt/ml/checkpoints'}# create the Estimatorhuggingface_estimator = HuggingFace(entry_point='train.py',source_dir='./scripts',instance_type='ml.p3.2xlarge',instance_count=1,checkpoint_s3_uri=f's3://{sess.default_bucket()}/checkpoints'use_spot_instances=True,max_wait=3600, # This should be equal to or greater than max_run in seconds'max_run=1000,role=role,transformers_version='4.4',pytorch_version='1.6',py_version='py36',hyperparameters = hyperparameters)# Training seconds: 874# Billable seconds: 105# Managed Spot Training savings: 88.0%Git RepositoriesWhen you create an HuggingFace Estimator, you can specify a training script that is stored in a GitHub repository as the entry point for the estimator, so that you don’t have to download the scripts locally. If Git support is enabled, then entry_point and source_dir should be relative paths in the Git repo if provided.As an example to use git_config with an example script from the transformers repository.Be aware that you need to define output_dir as a hyperparameter for the script to save your model to S3 after training. Suggestion: define output_dir as /opt/ml/model since it is the default SM_MODEL_DIR and will be uploaded to S3.Example Notebook# configure git settingsgit_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'master'}# create the Estimatorhuggingface_estimator = HuggingFace(entry_point='run_glue.py',source_dir='./examples/text-classification',git_config=git_config,instance_type='ml.p3.2xlarge',instance_count=1,role=role,transformers_version='4.4',pytorch_version='1.6',py_version='py36',hyperparameters=hyperparameters)SageMaker MetricsSageMaker Metrics can automatically parse the logs for metrics and send those metrics to CloudWatch. If you want SageMaker to parse logs you have to specify the metrics that you want SageMaker to send to CloudWatch when you configure the training job. 
You specify the name of the metrics that you want to send and the regular expressions that SageMaker uses to parse the logs that your algorithm emits to find those metrics.Example Notebook# define metrics definitionsmetric_definitions = [{"Name": "train_runtime", "Regex": "train_runtime.*=\D*(.*?)$"},{"Name": "eval_accuracy", "Regex": "eval_accuracy.*=\D*(.*?)$"},{"Name": "eval_loss", "Regex": "eval_loss.*=\D*(.*?)$"},]# create the Estimatorhuggingface_estimator = HuggingFace(entry_point='train.py',source_dir='./scripts',instance_type='ml.p3.2xlarge',instance_count=1,role=role,transformers_version='4.4',pytorch_version='1.6',py_version='py36',metric_definitions=metric_definitions,hyperparameters = hyperparameters)FAQ 🎯You can find the complete Frequently Asked Questions in the documentation.Q: What are Deep Learning Containers?A: Deep Learning Containers (DLCs) are Docker images pre-installed with deep learning frameworks and libraries (e.g. transformers, datasets, tokenizers) to make it easy to train models by letting you skip the complicated process of building and optimizing your environments from scratch.Q: Do I have to use the SageMaker Python SDK to use the Hugging Face Deep Learning Containers?A: You can use the HF DLC without the SageMaker Python SDK and launch SageMaker Training jobs with other SDKs, such as the AWS CLI or boto3. The DLCs are also available through Amazon ECR and can be pulled and used in any environment of choice.Q: Why should I use the Hugging Face Deep Learning Containers?A: The DLCs are fully tested, maintained, optimized deep learning environments that require no installation, configuration, or maintenance.Q: Why should I use SageMaker Training to train Hugging Face models?A: SageMaker Training provides numerous benefits that will boost your productivity with Hugging Face : (1) first it is cost-effective: the training instances live only for the duration of your job and are paid per second. No risk anymore to leave GPU instances up all night: the training cluster stops right at the end of your job! It also supports EC2 Spot capacity, which enables up to 90% cost reduction. (2) SageMaker also comes with a lot of built-in automation that facilitates teamwork and MLOps: training metadata and logs are automatically persisted to a serverless managed metastore, and I/O with S3 (for datasets, checkpoints and model artifacts) is fully managed. Finally, SageMaker also allows to drastically scale up and out: you can launch multiple training jobs in parallel, but also launch large-scale distributed training jobsQ: Once I've trained my model with Amazon SageMaker, can I use it with 🤗/Transformers ?A: Yes, you can download your trained model from S3 and directly use it with transformers or upload it to the Hugging Face Model Hub.Q: How is my data and code secured by Amazon SageMaker?A: Amazon SageMaker provides numerous security mechanisms including encryption at rest and in transit, Virtual Private Cloud (VPC) connectivity and Identity and Access Management (IAM). 
To learn more about security in the AWS cloud and with Amazon SageMaker, you can visit Security in Amazon SageMaker and AWS Cloud Security.Q: Is this available in my region?A: For a list of the supported regions, please visit the AWS region table for all AWS global infrastructure.Q: Do I need to pay for a license from Hugging Face to use the DLCs?A: No - the Hugging Face DLCs are open source and licensed under Apache 2.0.Q: How can I run inference on my trained models?A: You have multiple options to run inference on your trained models. One option is to use Hugging Face Accelerated Inference-API hosted service: start by uploading the trained models to your Hugging Face account to deploy them publicly, or privately. Another great option is to use SageMaker Inference to run your own inference code in Amazon SageMaker. We are working on offering an integrated solution for Amazon SageMaker with Hugging Face Inference DLCs in the future - stay tuned!Q: Do you offer premium support or support SLAs for this solution?A: AWS Technical Support tiers are available from AWS and cover development and production issues for AWS products and services - please refer to AWS Support for specifics and scope.If you have questions which the Hugging Face community can help answer and/or benefit from, please post them in the Hugging Face forum.If you need premium support from the Hugging Face team to accelerate your NLP roadmap, our Expert Acceleration Program offers direct guidance from our open source, science and ML Engineering team - contact us to learn more.Q: What are you planning next through this partnership?A: Our common goal is to democratize state of the art Machine Learning. We will continue to innovate to make it easier for researchers, data scientists and ML practitioners to manage, train and run state of the art models. If you have feature requests for integration in AWS with Hugging Face, please let us know in the Hugging Face community forum.Q: I use Hugging Face with Azure Machine Learning or Google Cloud Platform, what does this partnership mean for me?A: A foundational goal for Hugging Face is to make the latest AI accessible to as many people as possible, whichever framework or development environment they work in. While we are focusing integration efforts with Amazon Web Services as our Preferred Cloud Provider, we will continue to work hard to serve all Hugging Face users and customers, no matter what compute environment they run on.
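As a small illustration of the answer above about reusing a model trained on Amazon SageMaker with transformers, here is a minimal sketch of downloading the training artifact from S3 and loading it locally. The bucket name and object key are placeholders, and since the train.py above only saves the model (not the tokenizer), the tokenizer is loaded from the original checkpoint:

import tarfile

import boto3
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Download the model artifact produced by the training job (bucket/key are placeholders).
s3 = boto3.client("s3")
s3.download_file("my-sagemaker-bucket", "huggingface-training/output/model.tar.gz", "model.tar.gz")

# SageMaker packages whatever was written to /opt/ml/model as a model.tar.gz archive.
with tarfile.open("model.tar.gz") as tar:
    tar.extractall("model")

# Load the fine-tuned model with transformers; the tokenizer comes from the base checkpoint.
model = AutoModelForSequenceClassification.from_pretrained("model")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("This movie was surprisingly good!"))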
https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds
My Journey to a serverless transformers pipeline on Google Cloud
Dominici
March 18, 2021
This article will discuss my journey to deploy the transformers sentiment-analysis pipeline on Google Cloud. We will start with a quick introduction to transformers and then move to the technical part of the implementation. Finally, we'll summarize this implementation and review what we have achieved.The GoalI wanted to create a micro-service that automatically detects whether a customer review left in Discord is positive or negative. This would allow me to treat the comment accordingly and improve the customer experience. For instance, if the review was negative, I could create a feature which would contact the customer, apologize for the poor quality of service, and inform him/her that our support team will contact him/her as soon as possible to assist him and hopefully fix the problem. Since I don't plan to get more than 2,000 requests per month, I didn't impose any performance constraints regarding the time and the scalability.The Transformers libraryI was a bit confused at the beginning when I downloaded the .h5 file. I thought it would be compatible with tensorflow.keras.models.load_model, but this wasn't the case. After a few minutes of research I was able to figure out that the file was a weights checkpoint rather than a Keras model.After that, I tried out the API that Hugging Face offers and read a bit more about the pipeline feature they offer. Since the results of the API & the pipeline were great, I decided that I could serve the model through the pipeline on my own server.Below is the official example from the Transformers GitHub page.from transformers import pipeline# Allocate a pipeline for sentiment-analysisclassifier = pipeline('sentiment-analysis')classifier('We are very happy to include pipeline into the transformers repository.')[{'label': 'POSITIVE', 'score': 0.9978193640708923}]Deploy transformers to Google CloudGCP is chosen as it is the cloud environment I am using in my personal organization.Step 1 - ResearchI already knew that I could use an API-Service like flask to serve a transformers model. I searched in the Google Cloud AI documentation and found a service to host Tensorflow models named AI-Platform Prediction. I also found App Engine and Cloud Run there, but I was concerned about the memory usage for App Engine and was not very familiar with Docker.Step 2 - Test on AI-Platform PredictionAs the model is not a "pure TensorFlow" saved model but a checkpoint, and I couldn't turn it into a "pure TensorFlow model", I figured out that the example on this page wouldn't work.From there I saw that I could write some custom code, allowing me to load the pipeline instead of having to handle the model, which seemed is easier. I also learned that I could define a pre-prediction & post-prediction action, which could be useful in the future for pre- or post-processing the data for customers' needs.I followed Google's guide but encountered an issue as the service is still in beta and everything is not stable. This issue is detailed here.Step 3 - Test on App EngineI moved to Google's App Engine as it's a service that I am familiar with, but encountered an installation issue with TensorFlow due to a missing system dependency file. I then tried with PyTorch which worked with an F4_1G instance, but it couldn't handle more than 2 requests on the same instance, which isn't really great performance-wise.Step 4 - Test on Cloud RunLastly, I moved to Cloud Run with a docker image. I followed this guide to get an idea of how it works. 
In Cloud Run, I could configure a higher memory and more vCPUs to perform the prediction with PyTorch. I ditched Tensorflow as PyTorch seems to load the model faster.Implementation of the serverless pipelineThe final solution consists of four different components: main.py handling the request to the pipelineDockerfile used to create the image that will be deployed on Cloud Run.Model folder having the pytorch_model.bin, config.json and vocab.txt.Model : DistilBERT base uncased finetuned SST-2To download the model folder, follow the instructions in the button. You don't need to keep the rust_model.ot or the tf_model.h5 as we will use PyTorch.requirement.txt for installing the dependenciesThe content on the main.py is really simple. The idea is to receive a GET request containing two fields. First the review that needs to be analysed, second the API key to "protect" the service. The second parameter is optional, I used it to avoid setting up the oAuth2 of Cloud Run. After these arguments are provided, we load the pipeline which is built based on the model distilbert-base-uncased-finetuned-sst-2-english (provided above). In the end, the best match is returned to the client.import osfrom flask import Flask, jsonify, requestfrom transformers import pipelineapp = Flask(__name__)model_path = "./model"@app.route('/')def classify_review():review = request.args.get('review')api_key = request.args.get('api_key')if review is None or api_key != "MyCustomerApiKey":return jsonify(code=403, message="bad request")classify = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)return classify("that was great")[0]if __name__ == '__main__':# This is used when running locally only. When deploying to Google Cloud# Run, a webserver process such as Gunicorn will serve the app.app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))Then the DockerFile which will be used to create a docker image of the service. We specify that our service runs with python:3.7, plus that we need to install our requirements. Then we use gunicorn to handle our process on the port 5000.# Use Python37FROM python:3.7# Allow statements and log messages to immediately appear in the Knative logsENV PYTHONUNBUFFERED True# Copy requirements.txt to the docker image and install packagesCOPY requirements.txt /RUN pip install -r requirements.txt# Set the WORKDIR to be the folderCOPY . /app# Expose port 5000EXPOSE 5000ENV PORT 5000WORKDIR /app# Use gunicorn as the entrypointCMD exec gunicorn --bind :$PORT main:app --workers 1 --threads 1 --timeout 0It is important to note the arguments --workers 1 --threads 1 which means that I want to execute my app on only one worker (= 1 process) with a single thread. This is because I don't want to have 2 instances up at once because it might increase the billing. One of the downsides is that it will take more time to process if the service receives two requests at once. After that, I put the limit to one thread due to the memory usage needed for loading the model into the pipeline. If I were using 4 threads, I might have 4 Gb / 4 = 1 Gb only to perform the full process, which is not enough and would lead to a memory error.Finally, the requirement.txt fileFlask==1.1.2torch===1.7.1transformers~=4.2.0gunicorn>=20.0.0Deployment instructionsFirst, you will need to meet some requirements such as having a project on Google Cloud, enabling the billing and installing the gcloud cli. 
You can find more details about it in Google's guide - Before you begin. Second, we need to build the docker image and deploy it to Cloud Run by selecting the correct project (replace PROJECT-ID) and setting the name of the instance, such as ai-customer-review. You can find more information about the deployment in Google's guide - Deploying to.

gcloud builds submit --tag gcr.io/PROJECT-ID/ai-customer-review
gcloud run deploy --image gcr.io/PROJECT-ID/ai-customer-review --platform managed

After a few minutes, you will also need to upgrade the memory allocated to your Cloud Run instance from 256 MB to 4 GB. To do so, head over to the Cloud Run Console of your project. There you should find your instance; click on it. After that you will see a blue button labelled "edit and deploy new revision" at the top of the screen; click on it and you'll be prompted with many configuration fields. At the bottom you should find a "Capacity" section where you can specify the memory.

Performances

Handling a request takes less than five seconds from the moment you send the request, including loading the model into the pipeline and running the prediction. A cold start might add roughly 10 more seconds. We can improve the request handling performance by warming the model, i.e. loading it on start-up instead of on each request (with a global variable, for example); by doing so, we save time and memory usage. A short sketch of this is included at the end of this post.

Costs

I simulated the cost based on the Cloud Run instance configuration with the Google pricing simulator. For my micro-service, I am optimistically planning for close to 1,000 requests per month; 500 is more likely for my usage. That's why I considered 2,000 requests as an upper bound when designing my microservice. Due to that low number of requests, I didn't worry much about scalability, but might come back to it if my billing increases.

Nevertheless, it's important to stress that you will pay for the storage of each gigabyte of your build image. It's roughly €0.10 per GB per month, which is fine if you don't keep all your versions on the cloud, since my version is slightly above 1 GB (PyTorch for 700 MB & the model for 250 MB).

Conclusion

By using Transformers' sentiment analysis pipeline, I saved a non-negligible amount of time. Instead of training/fine-tuning a model, I could find one ready to be used in production and start the deployment in my system. I might fine-tune it in the future, but as shown by my test, the accuracy is already amazing! I would have liked a "pure TensorFlow" model, or at least a way to load it in TensorFlow without Transformers dependencies to use the AI platform. It would also be great to have a lite version.
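Here is the warming sketch mentioned above: a variant of main.py, under the same assumptions as the earlier snippet (same model folder and placeholder API key), where the pipeline is loaded once at start-up as a global instead of on every request:

import os

from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
model_path = "./model"

# Load the pipeline once when the container starts, so each request only runs the prediction.
classify = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)

@app.route('/')
def classify_review():
    review = request.args.get('review')
    api_key = request.args.get('api_key')
    if review is None or api_key != "MyCustomerApiKey":
        return jsonify(code=403, message="bad request")
    # Reuse the warm pipeline on the incoming review.
    return classify(review)[0]

if __name__ == '__main__':
    # Local development only; Cloud Run uses gunicorn as configured in the Dockerfile.
    app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))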
https://huggingface.co/blog/fine-tune-wav2vec2-english
Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers
Patrick von Platen
March 12, 2021
Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR) and was released in September 2020 by Alexei Baevski, Michael Auli, and Alex Conneau. Using a novel contrastive pretraining objective, Wav2Vec2 learns powerful speech representations from more than 50,000 hours of unlabeled speech. Similar to BERT's masked language modeling, the model learns contextualized speech representations by randomly masking feature vectors before passing them to a transformer network.

For the first time, it has been shown that pretraining, followed by fine-tuning on very little labeled speech data, achieves results competitive with state-of-the-art ASR systems. Using as little as 10 minutes of labeled data, Wav2Vec2 yields a word error rate (WER) of less than 5% on the clean test set of LibriSpeech - cf. Table 9 of the paper.

In this notebook, we will give an in-detail explanation of how Wav2Vec2's pretrained checkpoints can be fine-tuned on any English ASR dataset. Note that in this notebook, we will fine-tune Wav2Vec2 without making use of a language model. It is much simpler to use Wav2Vec2 without a language model as an end-to-end ASR system, and it has been shown that a standalone Wav2Vec2 acoustic model achieves impressive results. For demonstration purposes, we fine-tune the "base"-sized pretrained checkpoint on the rather small Timit dataset that contains just 5h of training data.

Wav2Vec2 is fine-tuned using Connectionist Temporal Classification (CTC), which is an algorithm used to train neural networks for sequence-to-sequence problems, mainly in automatic speech recognition and handwriting recognition. I highly recommend reading the very well-written blog post Sequence Modeling with CTC (2017) by Awni Hannun.

Before we start, let's install both datasets and transformers from master. Also, we need the soundfile package to load audio files and the jiwer package to evaluate our fine-tuned model using the word error rate (WER) metric ¹.

!pip install datasets>=1.18.3
!pip install transformers==4.11.3
!pip install librosa
!pip install jiwer

Next, we strongly suggest uploading your training checkpoints directly to the Hugging Face Hub while training. The Hub has integrated version control so you can be sure that no model checkpoint is getting lost during training. To do so you have to store your authentication token from the Hugging Face website (sign up here if you haven't already!)

from huggingface_hub import notebook_login
notebook_login()

Print Output:

Login successful
Your token has been saved to /root/.huggingface/token
Authenticated through git-crendential store but this isn't the helper defined on your machine.
You will have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal to set it as the default

git config --global credential.helper store

Then you need to install Git-LFS to upload your model checkpoints:

!apt install git-lfs

¹ Timit is usually evaluated using the phoneme error rate (PER), but by far the most common metric in ASR is the word error rate (WER). To keep this notebook as general as possible, we decided to evaluate the model using WER.

Prepare Data, Tokenizer, Feature Extractor

ASR models transcribe speech to text, which means that we both need a feature extractor that processes the speech signal to the model's input format, e.g.
a feature vector, and a tokenizer that processes themodel's output format to text.In 🤗 Transformers, the Wav2Vec2 model is thus accompanied by both atokenizer, calledWav2Vec2CTCTokenizer,and a feature extractor, calledWav2Vec2FeatureExtractor.Let's start by creating the tokenizer responsible for decoding themodel's predictions.Create Wav2Vec2CTCTokenizerThe pretrained Wav2Vec2 checkpoint maps the speech signal to asequence of context representations as illustrated in the figure above.A fine-tuned Wav2Vec2 checkpoint needs to map this sequence of contextrepresentations to its corresponding transcription so that a linearlayer has to be added on top of the transformer block (shown in yellow).This linear layer is used to classifies each context representation to atoken class analogous how, e.g., after pretraining a linear layer isadded on top of BERT's embeddings for further classification - cf.with "BERT" section of this blog post.The output size of this layer corresponds to the number of tokens in thevocabulary, which does not depend on Wav2Vec2's pretraining task,but only on the labeled dataset used for fine-tuning. So in the firststep, we will take a look at Timit and define a vocabulary based on thedataset's transcriptions.Let's start by loading the dataset and taking a look at its structure.from datasets import load_dataset, load_metrictimit = load_dataset("timit_asr")print(timit)Print Output:DatasetDict({train: Dataset({features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],num_rows: 4620})test: Dataset({features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],num_rows: 1680})})Many ASR datasets only provide the target text, 'text' for each audiofile 'file'. Timit actually provides much more information about eachaudio file, such as the 'phonetic_detail', etc., which is why manyresearchers choose to evaluate their models on phoneme classificationinstead of speech recognition when working with Timit. However, we wantto keep the notebook as general as possible, so that we will onlyconsider the transcribed text for fine-tuning.timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])Let's write a short function to display some random samples of thedataset and run it a couple of times to get a feeling for thetranscriptions.from datasets import ClassLabelimport randomimport pandas as pdfrom IPython.display import display, HTMLdef show_random_elements(dataset, num_examples=10):assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."picks = []for _ in range(num_examples):pick = random.randint(0, len(dataset)-1)while pick in picks:pick = random.randint(0, len(dataset)-1)picks.append(pick)df = pd.DataFrame(dataset[picks])display(HTML(df.to_html()))show_random_elements(timit["train"].remove_columns(["file", "audio"]))Print Output:IdxTranscription1Who took the kayak down the bayou?2As such it acts as an anchor for the people.3She had your dark suit in greasy wash water all year.4We're not drunkards, she said.5The most recent geological survey found seismic activity.6Alimony harms a divorced man's wealth.7Our entire economy will have a terrific uplift.8Don't ask me to carry an oily rag like that.9The gorgeous butterfly ate a lot of nectar.10Where're you takin' me?Alright! 
The transcriptions look very clean and the language seems tocorrespond more to written text than dialogue. This makes sense takinginto account that Timit isa read speech corpus.We can see that the transcriptions contain some special characters, suchas ,.?!;:. Without a language model, it is much harder to classifyspeech chunks to such special characters because they don't reallycorrespond to a characteristic sound unit. E.g., the letter "s" hasa more or less clear sound, whereas the special character "." doesnot. Also in order to understand the meaning of a speech signal, it isusually not necessary to include special characters in thetranscription.In addition, we normalize the text to only have lower case letters.import rechars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]'def remove_special_characters(batch):batch["text"] = re.sub(chars_to_ignore_regex, '', batch["text"]).lower()return batchtimit = timit.map(remove_special_characters)Let's take a look at the preprocessed transcriptions.show_random_elements(timit["train"].remove_columns(["file", "audio"]))Print Output:IdxTranscription1anyhow it was high time the boy was salted2their basis seems deeper than mere authority3only the best players enjoy popularity4tornados often destroy acres of farm land5where're you takin' me6soak up local color7satellites sputniks rockets balloons what next8i gave them several choices and let them set the priorities9reading in poor light gives you eyestrain10that dog chases cats mercilesslyGood! This looks better. We have removed most special characters fromtranscriptions and normalized them to lower-case only.In CTC, it is common to classify speech chunks into letters, so we willdo the same here. Let's extract all distinct letters of the trainingand test data and build our vocabulary from this set of letters.We write a mapping function that concatenates all transcriptions intoone long transcription and then transforms the string into a set ofchars. It is important to pass the argument batched=True to themap(...) function so that the mapping function has access to alltranscriptions at once.def extract_all_chars(batch):all_text = " ".join(batch["text"])vocab = list(set(all_text))return {"vocab": [vocab], "all_text": [all_text]}vocabs = timit.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=timit.column_names["train"])Now, we create the union of all distinct letters in the training datasetand test dataset and convert the resulting list into an enumerateddictionary.vocab_list = list(set(vocabs["train"]["vocab"][0]) | set(vocabs["test"]["vocab"][0]))vocab_dict = {v: k for k, v in enumerate(vocab_list)}vocab_dictPrint Output:{ ' ': 21,"'": 13,'a': 24,'b': 17,'c': 25,'d': 2,'e': 9,'f': 14,'g': 22,'h': 8,'i': 4,'j': 18,'k': 5,'l': 16,'m': 6,'n': 7,'o': 10,'p': 19,'q': 3,'r': 20,'s': 11,'t': 0,'u': 26,'v': 27,'w': 1,'x': 23,'y': 15,'z': 12}Cool, we see that all letters of the alphabet occur in the dataset(which is not really surprising) and we also extracted the specialcharacters " " and '. Note that we did not exclude those specialcharacters because:The model has to learn to predict when a word finished or else themodel prediction would always be a sequence of chars which wouldmake it impossible to separate words from each other.In English, we need to keep the ' character to differentiatebetween words, e.g., "it's" and "its" which have verydifferent meanings.To make it clearer that " " has its own token class, we give it a morevisible character |. 
In addition, we also add an "unknown" token sothat the model can later deal with characters not encountered inTimit's training set.vocab_dict["|"] = vocab_dict[" "]del vocab_dict[" "]Finally, we also add a padding token that corresponds to CTC's "blanktoken". The "blank token" is a core component of the CTC algorithm.For more information, please take a look at the "Alignment" sectionhere.vocab_dict["[UNK]"] = len(vocab_dict)vocab_dict["[PAD]"] = len(vocab_dict)print(len(vocab_dict))Print Output:30Cool, now our vocabulary is complete and consists of 30 tokens, whichmeans that the linear layer that we will add on top of the pretrainedWav2Vec2 checkpoint will have an output dimension of 30.Let's now save the vocabulary as a json file.import jsonwith open('vocab.json', 'w') as vocab_file:json.dump(vocab_dict, vocab_file)In a final step, we use the json file to instantiate an object of theWav2Vec2CTCTokenizer class.from transformers import Wav2Vec2CTCTokenizertokenizer = Wav2Vec2CTCTokenizer("./vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")If one wants to re-use the just created tokenizer with the fine-tuned model of this notebook, it is strongly advised to upload the tokenizer to the 🤗 Hub. Let's call the repo to which we will upload the files"wav2vec2-large-xlsr-turkish-demo-colab":repo_name = "wav2vec2-base-timit-demo-colab"and upload the tokenizer to the 🤗 Hub.tokenizer.push_to_hub(repo_name)Great, you can see the just created repository under https://huggingface.co/<your-username>/wav2vec2-base-timit-demo-colabCreate Wav2Vec2 Feature ExtractorSpeech is a continuous signal and to be treated by computers, it firsthas to be discretized, which is usually called sampling. Thesampling rate hereby plays an important role in that it defines how manydata points of the speech signal are measured per second. Therefore,sampling with a higher sampling rate results in a better approximationof the real speech signal but also necessitates more values persecond.A pretrained checkpoint expects its input data to have been sampled moreor less from the same distribution as the data it was trained on. Thesame speech signals sampled at two different rates have a very differentdistribution, e.g., doubling the sampling rate results in data pointsbeing twice as long. Thus, before fine-tuning a pretrained checkpoint ofan ASR model, it is crucial to verify that the sampling rate of the datathat was used to pretrain the model matches the sampling rate of thedataset used to fine-tune the model.Wav2Vec2 was pretrained on the audio data ofLibriSpeech andLibriVox which both were sampling with 16kHz. Our fine-tuning dataset,Timit, was luckily alsosampled with 16kHz. If the fine-tuning dataset would have been sampledwith a rate lower or higher than 16kHz, we first would have had to up ordownsample the speech signal to match the sampling rate of the data usedfor pretraining.A Wav2Vec2 feature extractor object requires the following parameters tobe instantiated:feature_size: Speech models take a sequence of feature vectors asan input. While the length of this sequence obviously varies, thefeature size should not. In the case of Wav2Vec2, the feature sizeis 1 because the model was trained on the raw speech signal 2{}^22 .sampling_rate: The sampling rate at which the model is trained on.padding_value: For batched inference, shorter inputs need to bepadded with a specific valuedo_normalize: Whether the input should bezero-mean-unit-variance normalized or not. 
Usually, speech modelsperform better when normalizing the inputreturn_attention_mask: Whether the model should make use of anattention_mask for batched inference. In general, models shouldalways make use of the attention_mask to mask padded tokens.However, due to a very specific design choice of Wav2Vec2's"base" checkpoint, better results are achieved when using noattention_mask. This is not recommended for other speechmodels. For more information, one can take a look atthis issue.Important If you want to use this notebook to fine-tunelarge-lv60,this parameter should be set to True.from transformers import Wav2Vec2FeatureExtractorfeature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=False)Great, Wav2Vec2's feature extraction pipeline is thereby fully defined!To make the usage of Wav2Vec2 as user-friendly as possible, the featureextractor and tokenizer are wrapped into a single Wav2Vec2Processorclass so that one only needs a model and processor object.from transformers import Wav2Vec2Processorprocessor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)Preprocess DataSo far, we have not looked at the actual values of the speech signal but just the transcription. In addition to sentence, our datasets include two more column names path and audio. path states the absolute path of the audio file. Let's take a look.print(timit[0]["path"])Print Output:'/root/.cache/huggingface/datasets/downloads/extracted/404950a46da14eac65eb4e2a8317b1372fb3971d980d91d5d5b221275b1fd7e0/data/TRAIN/DR4/MMDM0/SI681.WAV'Wav2Vec2 expects the input in the format of a 1-dimensional array of 16 kHz. This means that the audio file has to be loaded and resampled.Thankfully, datasets does this automatically by calling the other column audio. Let try it out.common_voice_train[0]["audio"]Print Output:{'array': array([-2.1362305e-04, 6.1035156e-05, 3.0517578e-05, ...,-3.0517578e-05, -9.1552734e-05, -6.1035156e-05], dtype=float32),'path': '/root/.cache/huggingface/datasets/downloads/extracted/404950a46da14eac65eb4e2a8317b1372fb3971d980d91d5d5b221275b1fd7e0/data/TRAIN/DR4/MMDM0/SI681.WAV','sampling_rate': 16000}We can see that the audio file has automatically been loaded. This is thanks to the new "Audio" feature introduced in datasets == 4.13.3, which loads and resamples audio files on-the-fly upon calling.The sampling rate is set to 16kHz which is what Wav2Vec2 expects as an input.Great, let's listen to a couple of audio files to better understand the dataset and verify that the audio was correctly loaded. import IPython.display as ipdimport numpy as npimport randomrand_int = random.randint(0, len(timit["train"]))print(timit["train"][rand_int]["text"])ipd.Audio(data=np.asarray(timit["train"][rand_int]["audio"]["array"]), autoplay=True, rate=16000)It can be heard, that the speakers change along with their speaking rate, accent, etc. 
Overall, the recordings sound relatively clear though, which is to be expected from a read speech corpus.Let's do a final check that the data is correctly prepared, by printing the shape of the speech input, its transcription, and the corresponding sampling rate.rand_int = random.randint(0, len(timit["train"]))print("Target text:", timit["train"][rand_int]["text"])print("Input array shape:", np.asarray(timit["train"][rand_int]["audio"]["array"]).shape)print("Sampling rate:", timit["train"][rand_int]["audio"]["sampling_rate"])Print Output:Target text: she had your dark suit in greasy wash water all yearInput array shape: (52941,)Sampling rate: 16000Good! Everything looks fine - the data is a 1-dimensional array, thesampling rate always corresponds to 16kHz, and the target text isnormalized.Finally, we can process the dataset to the format expected by the model for training. We will make use of the map(...) function.First, we load and resample the audio data, simply by calling batch["audio"].Second, we extract the input_values from the loaded audio file. In our case, the Wav2Vec2Processor only normalizes the data. For other speech models, however, this step can include more complex feature extraction, such as Log-Mel feature extraction. Third, we encode the transcriptions to label ids.Note: This mapping function is a good example of how the Wav2Vec2Processor class should be used. In "normal" context, calling processor(...) is redirected to Wav2Vec2FeatureExtractor's call method. When wrapping the processor into the as_target_processor context, however, the same method is redirected to Wav2Vec2CTCTokenizer's call method.For more information please check the docs.def prepare_dataset(batch):audio = batch["audio"]# batched output is "un-batched" to ensure mapping is correctbatch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]with processor.as_target_processor():batch["labels"] = processor(batch["text"]).input_idsreturn batchLet's apply the data preparation function to all examples.timit = timit.map(prepare_dataset, remove_columns=timit.column_names["train"], num_proc=4)Note: Currently datasets make use of torchaudio and librosa for audio loading and resampling. If you wish to implement your own costumized data loading/sampling, feel free to just make use of the "path" column instead and disregard the "audio" column.Training & EvaluationThe data is processed so that we are ready to start setting up thetraining pipeline. We will make use of 🤗'sTrainerfor which we essentially need to do the following:Define a data collator. In contrast to most NLP models, Wav2Vec2 hasa much larger input length than output length. E.g., a sample ofinput length 50000 has an output length of no more than 100. Giventhe large input sizes, it is much more efficient to pad the trainingbatches dynamically meaning that all training samples should only bepadded to the longest sample in their batch and not the overalllongest sample. Therefore, fine-tuning Wav2Vec2 requires a specialpadding data collator, which we will define belowEvaluation metric. During training, the model should be evaluated onthe word error rate. We should define a compute_metrics functionaccordinglyLoad a pretrained checkpoint. 
We need to load a pretrained checkpoint and configure it correctly for training.

Define the training configuration.

After having fine-tuned the model, we will evaluate it on the test data and verify that it has indeed learned to correctly transcribe speech.

Set-up Trainer

Let's start by defining the data collator. The code for the data collator was copied from this example. Without going into too many details, in contrast to the common data collators, this data collator treats the input_values and labels differently and thus applies separate padding functions to them (again making use of Wav2Vec2's context manager). This is necessary because, in speech, input and output are of different modalities, meaning that they should not be treated by the same padding function. Analogous to the common data collators, the padding tokens in the labels are replaced with -100 so that those tokens are not taken into account when computing the loss.

import torch

from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Union

@dataclass
class DataCollatorCTCWithPadding:
    """
    Data collator that will dynamically pad the inputs received.
    Args:
        processor (:class:`~transformers.Wav2Vec2Processor`)
            The processor used for processing the data.
        padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
            Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
            among:
            * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
              sequence is provided).
            * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
              maximum acceptable input length for the model if that argument is not provided.
            * :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
              different lengths).
        max_length (:obj:`int`, `optional`):
            Maximum length of the ``input_values`` of the returned list and optionally padding length (see above).
        max_length_labels (:obj:`int`, `optional`):
            Maximum length of the ``labels`` returned list and optionally padding length (see above).
        pad_to_multiple_of (:obj:`int`, `optional`):
            If set will pad the sequence to a multiple of the provided value.
            This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
    """

    processor: Wav2Vec2Processor
    padding: Union[bool, str] = True
    max_length: Optional[int] = None
    max_length_labels: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None
    pad_to_multiple_of_labels: Optional[int] = None

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # split inputs and labels since they have to be of different lengths and need
        # different padding methods
        input_features = [{"input_values": feature["input_values"]} for feature in features]
        label_features = [{"input_ids": feature["labels"]} for feature in features]

        batch = self.processor.pad(
            input_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="pt",
        )
        with self.processor.as_target_processor():
            labels_batch = self.processor.pad(
                label_features,
                padding=self.padding,
                max_length=self.max_length_labels,
                pad_to_multiple_of=self.pad_to_multiple_of_labels,
                return_tensors="pt",
            )

        # replace padding with -100 to ignore these tokens when computing the loss
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

        batch["labels"] = labels

        return batch
Let's initialize the data collator.

data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)

Next, the evaluation metric is defined. As mentioned earlier, the predominant metric in ASR is the word error rate (WER), hence we will use it in this notebook as well.

wer_metric = load_metric("wer")

The model will return a sequence of logit vectors y_1, …, y_m, with y_1 = f_θ(x_1, …, x_n)[0] and n >> m. A logit vector y_i contains the log-odds for each word in the vocabulary we defined earlier, thus len(y_i) = config.vocab_size. We are interested in the most likely prediction of the model and thus take the argmax(...) of the logits. Also, we transform the encoded labels back to the original string by replacing -100 with the pad_token_id and decoding the ids while making sure that consecutive tokens are not grouped to the same token in CTC style ¹.

def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)

    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id

    pred_str = processor.batch_decode(pred_ids)
    # we do not want to group tokens when computing the metrics
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

    wer = wer_metric.compute(predictions=pred_str, references=label_str)

    return {"wer": wer}

Now, we can load the pretrained Wav2Vec2 checkpoint. The tokenizer's pad_token_id must be used to define the model's pad_token_id or, in the case of Wav2Vec2ForCTC, also CTC's blank token ². To save GPU memory, we enable PyTorch's gradient checkpointing and also set the loss reduction to "mean".

from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
)

Print Output:

Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base and are newly initialized: ['lm_head.weight', 'lm_head.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

The first component of Wav2Vec2 consists of a stack of CNN layers that are used to extract acoustically meaningful - but contextually independent - features from the raw speech signal. This part of the model has already been sufficiently trained during pretraining and, as stated in the paper, does not need to be fine-tuned anymore. Thus, we can set requires_grad to False for all parameters of the feature extraction part.

model.freeze_feature_extractor()

In a final step, we define all parameters related to training. To give more explanation on some of the parameters:

group_by_length makes training more efficient by grouping training samples of similar input length into one batch. This can significantly speed up training time by heavily reducing the overall number of useless padding tokens that are passed through the model.

learning_rate and weight_decay were heuristically tuned until fine-tuning has become stable. Note that those parameters strongly depend on the Timit dataset and might be suboptimal for other speech datasets.

For more explanations on other parameters, one can take a look at the docs.

During training, a checkpoint will be uploaded asynchronously to the hub every 500 training steps (cf. save_steps below).
It allows you to also play around with the demo widget even while your model is still training.

Note: If one does not want to upload the model checkpoints to the hub, simply set push_to_hub=False.

from transformers import TrainingArguments

training_args = TrainingArguments(
  output_dir=repo_name,
  group_by_length=True,
  per_device_train_batch_size=32,
  evaluation_strategy="steps",
  num_train_epochs=30,
  fp16=True,
  gradient_checkpointing=True,
  save_steps=500,
  eval_steps=500,
  logging_steps=500,
  learning_rate=1e-4,
  weight_decay=0.005,
  warmup_steps=1000,
  save_total_limit=2,
)

Now, all instances can be passed to Trainer and we are ready to start training!

from transformers import Trainer

trainer = Trainer(
    model=model,
    data_collator=data_collator,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=timit["train"],
    eval_dataset=timit["test"],
    tokenizer=processor.feature_extractor,
)

¹ To allow models to become independent of the speaker rate, in CTC, consecutive tokens that are identical are simply grouped as a single token. However, the encoded labels should not be grouped when decoding since they don't correspond to the predicted tokens of the model, which is why the group_tokens=False parameter has to be passed. If we didn't pass this parameter, a word like "hello" would incorrectly be encoded, and decoded as "helo".

² The blank token allows the model to predict a word, such as "hello", by forcing it to insert the blank token between the two l's. A CTC-conform prediction of "hello" by our model would be [PAD] [PAD] "h" "e" "e" "l" "l" [PAD] "l" "o" "o" [PAD].

Training

Training will take between 90 and 180 minutes depending on the GPU allocated to the Google Colab attached to this notebook. While the trained model yields satisfying results on Timit's test data, it is by no means an optimally fine-tuned model. The purpose of this notebook is to demonstrate how Wav2Vec2's base, large, and large-lv60 checkpoints can be fine-tuned on any English dataset.

In case you want to use this Google Colab to fine-tune your model, you should make sure that your training doesn't stop due to inactivity. A simple hack to prevent this is to paste the following code into the console of this tab (right mouse click -> inspect -> Console tab and insert code).

function ConnectButton(){
    console.log("Connect pushed");
    document.querySelector("#top-toolbar > colab-connect-button").shadowRoot.querySelector("#connect").click()
}
setInterval(ConnectButton, 60000);

trainer.train()

Depending on your GPU, it might be possible that you are seeing an "out-of-memory" error here.
In this case, it's probably best to reduce per_device_train_batch_size to 16 or even less and, if necessary, make use of gradient accumulation (gradient_accumulation_steps).

Print Output:

Step | Training Loss | Validation Loss | WER      | Runtime   | Samples per Second
500  | 3.758100      | 1.686157        | 0.945214 | 97.299000 | 17.266000
1000 | 0.691400      | 0.476487        | 0.391427 | 98.283300 | 17.093000
1500 | 0.202400      | 0.403425        | 0.330715 | 99.078100 | 16.956000
2000 | 0.115200      | 0.405025        | 0.307353 | 98.116500 | 17.122000
2500 | 0.075000      | 0.428119        | 0.294053 | 98.496500 | 17.056000
3000 | 0.058200      | 0.442629        | 0.287299 | 98.871300 | 16.992000
3500 | 0.047600      | 0.442619        | 0.285783 | 99.477500 | 16.888000
4000 | 0.034500      | 0.456989        | 0.282200 | 99.419100 | 16.898000

The final WER should be below 0.3, which is reasonable given that state-of-the-art phoneme error rates (PER) are just below 0.1 (see leaderboard) and that WER is usually worse than PER.

You can now upload the result of the training to the Hub, just execute this instruction:

trainer.push_to_hub()

You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier "your-username/the-name-you-picked", so for instance:

from transformers import AutoModelForCTC, Wav2Vec2Processor

model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-timit-demo-colab")
processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wav2vec2-base-timit-demo-colab")

Evaluation

In the final part, we evaluate our fine-tuned model on the test set and play around with it a bit. Let's load the processor and model.

processor = Wav2Vec2Processor.from_pretrained(repo_name)
model = Wav2Vec2ForCTC.from_pretrained(repo_name)

Now, we will make use of the map(...) function to predict the transcription of every test sample and to save the prediction in the dataset itself. We will call the resulting dictionary "results".

Note: we evaluate the test data set with batch_size=1 on purpose due to this issue. Since padded inputs don't yield the exact same output as non-padded inputs, a better WER can be achieved by not padding the input at all.

def map_to_result(batch):
    with torch.no_grad():
        input_values = torch.tensor(batch["input_values"], device="cuda").unsqueeze(0)
        logits = model(input_values).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_str"] = processor.batch_decode(pred_ids)[0]
    batch["text"] = processor.decode(batch["labels"], group_tokens=False)

    return batch

results = timit["test"].map(map_to_result, remove_columns=timit["test"].column_names)

Let's compute the overall WER now.

print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["text"])))

Print Output:

Test WER: 0.221

22.1% WER - not bad! Our demo model would have probably made it onto the official leaderboard.
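Before diving into the error analysis below, note that you can also sanity-check the uploaded checkpoint on a single utterance without writing any evaluation loop, using the automatic-speech-recognition pipeline. This is a small optional sketch, not part of the original notebook; it reloads one raw test sample because the mapped timit dataset only keeps input_values and labels, and it assumes TIMIT was loaded under the "timit_asr" identifier — adjust to whatever you used at the top of the notebook:

from datasets import load_dataset
from transformers import pipeline

# reload a single raw test sample to get a 16 kHz audio array
raw_test_sample = load_dataset("timit_asr", split="test")[0]["audio"]

# repo_name is the model identifier you pushed above
asr = pipeline("automatic-speech-recognition", model=repo_name)
print(asr(raw_test_sample["array"]))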
Let's take a look at some predictions to see what errors are made by the model.

show_random_elements(results)

Print Output:

pred_str | target_text
am to balence your employe you benefits package | aim to balance your employee benefit package
the fawlg prevented them from ariving on tom | the fog prevented them from arriving on time
young children should avoide exposure to contagieous diseases | young children should avoid exposure to contagious diseases
artifficial intelligence is for real | artificial intelligence is for real
their pcrops were two step latters a chair and a polmb fan | their props were two stepladders a chair and a palm fan
if people were more generous there would be no need for wealfare | if people were more generous there would be no need for welfare
the fish began to leep frantically on the surface of the small ac | the fish began to leap frantically on the surface of the small lake
her right hand eggs whenever the barametric pressur changes | her right hand aches whenever the barometric pressure changes
only lawyers loved miliunears | only lawyers love millionaires
the nearest cennagade may not be within wallkin distance | the nearest synagogue may not be within walking distance

It becomes clear that the predicted transcriptions are acoustically very similar to the target transcriptions, but often contain spelling or grammatical errors. This shouldn't be very surprising though given that we purely rely on Wav2Vec2 without making use of a language model.

Finally, to better understand how CTC works, it is worth taking a deeper look at the exact output of the model. Let's run the first test sample through the model, take the predicted ids and convert them to their corresponding tokens.

model.to("cuda")

with torch.no_grad():
    logits = model(torch.tensor(timit["test"][:1]["input_values"], device="cuda")).logits

pred_ids = torch.argmax(logits, dim=-1)

# convert ids to tokens
" ".join(processor.tokenizer.convert_ids_to_tokens(pred_ids[0].tolist()))

Print Output:

[PAD] [PAD] [PAD] [PAD] [PAD] [PAD] t t h e e | | b b [PAD] u u n n n g g [PAD] a [PAD] [PAD] l l [PAD] o o o [PAD] | w w a a [PAD] s s | | [PAD] [PAD] p l l e e [PAD] [PAD] s s e n n t t t [PAD] l l y y | | | s s [PAD] i i [PAD] t t t [PAD] u u u u [PAD] [PAD] [PAD] a a [PAD] t t e e e d d d | n n e e a a a r | | t h h e | | s s h h h [PAD] o o o [PAD] o o r r [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]

The output should make it a bit clearer how CTC works in practice. The model is to some extent invariant to speaking rate, since it has learned to simply repeat the same token for as long as the speech chunk to be classified still corresponds to the same token. This makes CTC a very powerful algorithm for speech recognition since the speech file's transcription is often very much independent of its length.

I again advise the reader to take a look at this very nice blog post to better understand CTC.
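To make the collapsing step of CTC decoding concrete, here is a tiny self-contained sketch (ours, not from the original notebook) that reduces such a token sequence the way processor.batch_decode does conceptually — first merging repeated tokens, then dropping the [PAD] (blank) token; the word delimiter "|" would finally be mapped to a space:

from itertools import groupby

ctc_tokens = ["[PAD]", "h", "h", "e", "[PAD]", "l", "l", "[PAD]", "l", "o", "o", "[PAD]"]

collapsed = [token for token, _ in groupby(ctc_tokens)]               # merge consecutive duplicates
decoded = "".join(token for token in collapsed if token != "[PAD]")   # drop the blank token

print(decoded)  # -> "hello"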
https://huggingface.co/blog/long-range-transformers
Hugging Face Reads, Feb. 2021 - Long-range Transformers
Victor Sanh
March 9, 2021
Co-written by Teven Le Scao, Patrick Von Platen, Suraj Patil, Yacine Jernite and Victor Sanh.

Each month, we will choose a topic to focus on, reading a set of four papers recently published on the subject. We will then write a short blog post summarizing their findings and the common trends between them, and questions we had for follow-up work after reading them. The first topic for January 2021 was Sparsity and Pruning; in February 2021 we addressed Long-Range Attention in Transformers.

Introduction

After the rise of large transformer models in 2018 and 2019, two trends have quickly emerged to bring their compute requirements down. First, conditional computation, quantization, distillation, and pruning have unlocked inference of large models in compute-constrained environments; we’ve already touched upon this in part in our last reading group post. The research community then moved to reduce the cost of pre-training.

In particular, one issue has been at the center of the efforts: the quadratic cost in memory and time of transformer models with regard to the sequence length. In order to allow efficient training of very large models, 2020 saw an onslaught of papers to address that bottleneck and scale transformers beyond the usual 512- or 1024-token sequence lengths that were the default in NLP at the start of the year.

This topic has been a key part of our research discussions from the start, and our own Patrick Von Platen has already dedicated a 4-part series to Reformer. In this reading group, rather than trying to cover every approach (there are so many!), we’ll focus on four main ideas:

Custom attention patterns (with Longformer)
Recurrence (with Compressive Transformer)
Low-rank approximations (with Linformer)
Kernel approximations (with Performer)

For exhaustive views of the subject, check out Efficient Transformers: A Survey and Long Range Arena.

Summaries

Longformer - The Long-Document Transformer

Iz Beltagy, Matthew E. Peters, Arman Cohan

Longformer addresses the memory bottleneck of transformers by replacing conventional self-attention with a combination of windowed/local/sparse (cf. Sparse Transformers (2019)) attention and global attention that scales linearly with the sequence length. As opposed to previous long-range transformer models (e.g. Transformer-XL (2019), Reformer (2020), Adaptive Attention Span (2019)), Longformer’s self-attention layer is designed as a drop-in replacement for the standard self-attention, thus making it possible to leverage pre-trained checkpoints for further pre-training and/or fine-tuning on long sequence tasks.

The standard self-attention matrix (Figure a) scales quadratically with the input length. (Figure taken from Longformer.)

Longformer uses different attention patterns for autoregressive language modeling, encoder pre-training & fine-tuning, and sequence-to-sequence tasks.

For autoregressive language modeling, the strongest results are obtained by replacing causal self-attention (a la GPT2) with dilated windowed self-attention (Figure c). With n being the sequence length and w being the window length, this attention pattern reduces the memory consumption from n^2 to w*n, which, under the assumption that w << n, scales linearly with the sequence length.

For encoder pre-training, Longformer replaces the bi-directional self-attention (a la BERT) with a combination of local windowed and global bi-directional self-attention (Figure d).
This reduces the memory consumption from n^2 to w*n + g*n, with g being the number of tokens that are attended to globally, which again scales linearly with the sequence length.

For sequence-to-sequence models, only the encoder layers (a la BART) are replaced with a combination of local and global bi-directional self-attention (Figure d), because for most seq2seq tasks only the encoder processes very large inputs (e.g. summarization). The memory consumption is thus reduced from n_s^2 + n_s*n_t + n_t^2 to w*n_s + g*n_s + n_s*n_t + n_t^2, with n_s and n_t being the source (encoder input) and target (decoder input) lengths respectively. For Longformer Encoder-Decoder to be efficient, it is assumed that n_s is much bigger than n_t.

Main findings

The authors proposed the dilated windowed self-attention (Figure c) and showed that it yields better results on language modeling compared to just windowed/sparse self-attention (Figure b). The window sizes are increased through the layers. This pattern further outperforms previous architectures (such as Transformer-XL, or adaptive span attention) on downstream benchmarks.

Global attention allows the information to flow through the whole sequence, and applying the global attention to task-motivated tokens (such as the tokens of the question in QA, or the CLS token for sentence classification) leads to stronger performance on downstream tasks. Using this global pattern, Longformer can be successfully applied to document-level NLP tasks in the transfer learning setting.

Standard pre-trained models can be adapted to long-range inputs by simply replacing the standard self-attention with the long-range self-attention proposed in this paper and then fine-tuning on the downstream task. This avoids costly pre-training specific to long-range inputs.

Follow-up questions

The increasing size (throughout the layers) of the dilated windowed self-attention echoes findings in computer vision on increasing the receptive field of stacked CNNs. How do these two findings relate? What are the transposable learnings?

Longformer’s Encoder-Decoder architecture works well for tasks that do not require a long target length (e.g. summarization). However, how would it work for long-range seq2seq tasks which require a long target length (e.g. document translation, speech recognition, etc.), especially considering the cross-attention layer of encoder-decoder models?

In practice, the sliding window self-attention relies on many indexing operations to ensure a symmetric query-key weights matrix. Those operations are very slow on TPUs, which highlights the question of the applicability of such patterns on other hardware.

Compressive Transformers for Long-Range Sequence Modelling

Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Timothy P. Lillicrap

Transformer-XL (2019) showed that caching previously computed layer activations in a memory can boost performance on language modeling tasks (such as enwik8). Instead of just attending to the current n input tokens, the model can also attend to the past n_m tokens, with n_m being the memory size of the model. Transformer-XL has a memory complexity of O(n^2 + n*n_m), which shows that memory cost can increase significantly for very large n_m. Hence, Transformer-XL has to eventually discard past activations from the memory when the number of cached activations gets larger than n_m.
Compressive Transformer addresses this problem by adding an additional compressed memory to efficiently cache past activations that would have otherwise eventually been discarded. This way the model can learn better long-range sequence dependencies by having access to significantly more past activations.

(Figure taken from Compressive Transformer.)

A compression factor c (equal to 3 in the illustration) is chosen to decide the rate at which past activations are compressed. The authors experiment with different compression functions f_c such as max/mean pooling (parameter-free) and 1D convolution (trainable layer). The compression function is trained with backpropagation through time or local auxiliary compression losses. In addition to the current input of length n, the model attends to n_m cached activations in the regular memory and n_cm compressed memory activations, allowing a long temporal dependency of l * (n_m + c * n_cm), with l being the number of attention layers. This increases Transformer-XL’s range by an additional l * c * n_cm tokens, and the memory cost amounts to O(n^2 + n*n_m + n*n_cm). Experiments are conducted on reinforcement learning, audio generation, and natural language processing. The authors also introduce a new long-range language modeling benchmark called PG19.

Main findings

Compressive Transformer significantly improves on the state-of-the-art perplexity on language modeling, namely on the enwik8 and WikiText-103 datasets. In particular, compressed memory plays a crucial role in modeling rare words occurring in long sequences.

The authors show that the model learns to preserve salient information by increasingly attending to the compressed memory instead of the regular memory, which goes against the trend of older memories being accessed less frequently.

All compression functions (average pooling, max pooling, 1D convolution) yield similar results, confirming that memory compression is an effective way to store past information.

Follow-up questions

Compressive Transformer requires a special optimization schedule in which the effective batch size is progressively increased to avoid significant performance degradation for lower learning rates. This effect is not well understood and calls for more analysis.

The Compressive Transformer has many more hyperparameters compared to a simple model like BERT or GPT2: the compression rate, the compression function and loss, the regular and compressed memory sizes, etc. It is not clear whether those parameters generalize well across different tasks (other than language modeling) or whether, similar to the learning rate, they make the training very brittle.

It would be interesting to probe the regular memory and compressed memory to analyze what kind of information is memorized through the long sequences. Shedding light on the most salient pieces of information can inform methods such as Funnel Transformer, which reduces the redundancy in maintaining a full-length token-level sequence.

Linformer: Self-Attention with Linear Complexity

Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma

The goal is to reduce the complexity of the self-attention with respect to the sequence length n from quadratic to linear. This paper makes the observation that the attention matrices are low rank (i.e.
they don’t contain n * n worth of information) and explores the possibility of using high-dimensional data compression techniques to build more memory-efficient transformers.

The theoretical foundations of the proposed approach are based on the Johnson-Lindenstrauss lemma. Let’s consider m points in a high-dimensional space. We want to project them to a low-dimensional space while preserving the structure of the dataset (i.e. the mutual distances between points) with a margin of error ε. The Johnson-Lindenstrauss lemma states that we can choose a small dimension k ~ 8 log(m) / ε^2 and find a suitable projection into R^k in polynomial time by simply trying random orthogonal projections.

Linformer projects the sequence length into a smaller dimension by learning a low-rank decomposition of the attention context matrix. The matrix multiplication of the self-attention can then be cleverly re-written such that no matrix of size n * n ever needs to be computed and stored.

Standard transformer:
Attention(Q, K, V) = softmax(Q * K) * V
with shapes: (n * h) = (n * n) * (n * h)

Linformer:
LinAttention(Q, K, V) = softmax(Q * K * W^K) * W^V * V
with shapes: (n * h) = (n * d) * (d * n) * (n * h)

Main findings

The self-attention matrix is low-rank, which implies that most of its information can be recovered from its first few highest eigenvalues and that it can be approximated by a low-rank matrix.

A lot of works focus on reducing the dimensionality of the hidden states. This paper shows that reducing the sequence length with learned projections can be a strong alternative while shrinking the memory complexity of the self-attention from quadratic to linear.

Increasing the sequence length doesn’t affect the inference speed (wall-clock time) of Linformer, whereas standard transformers show a linear increase. Moreover, the convergence speed (number of updates) is not impacted by Linformer's self-attention. (Figure taken from Linformer.)

Follow-up questions

Even though the projection matrices are shared between layers, the approach presented here comes in contrast with the Johnson-Lindenstrauss lemma, which states that random orthogonal projections are sufficient (in polynomial time). Would random projections have worked here? This is reminiscent of Reformer, which uses random projections in locality-sensitive hashing to reduce the memory complexity of the self-attention.

Rethinking Attention with Performers

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller

The goal is (again!) to reduce the complexity of the self-attention with respect to the sequence length n from quadratic to linear. In contrast to other papers, the authors note that the sparsity and low-rankness priors of the self-attention may not hold in other modalities (speech, protein sequence modeling). Thus the paper explores methods to reduce the memory burden of the self-attention without any priors on the attention matrix.

The authors observe that if we could perform the matrix multiplication K * V through the softmax (softmax(Q * K) * V), we wouldn’t have to compute the Q * K matrix of size n * n, which is the memory bottleneck. They use random feature maps (aka random projections) to approximate the softmax by softmax(Q * K) ≈ Q' * K' = φ(Q) * φ(K), where φ is a suitable non-linear function, and then Attention(Q, K, V) ≈ φ(Q) * (φ(K) * V).
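To see why this rewriting removes the n * n bottleneck, here is a small numerical toy sketch (ours, not from the paper — it uses a generic positive feature map rather than the actual FAVOR+ construction) showing that φ(Q) * (φ(K)^T * V) never materializes an n * n matrix:

import numpy as np

n, h, r = 1024, 64, 256          # sequence length, head dimension, number of random features
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, h)) for _ in range(3))
W = rng.standard_normal((h, r))  # random projection defining a simple positive feature map

def phi(X):
    # an illustrative positive feature map (not the exact FAVOR+ features)
    return np.exp(X @ W - (X ** 2).sum(-1, keepdims=True) / 2) / np.sqrt(r)

# associativity: compute phi(K)^T @ V first -> an (r, h) matrix, never an (n, n) one
kv = phi(K).T @ V                          # (r, h)
normalizer = phi(Q) @ phi(K).sum(axis=0)   # (n,)
linear_attention = (phi(Q) @ kv) / normalizer[:, None]
print(linear_attention.shape)              # (1024, 64) -- no (1024, 1024) matrix was ever built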
Taking inspiration from machine learning papers from the early 2000s, the authors introduce FAVOR+ (Fast Attention Via Orthogonal Random positive (+) Features), a procedure to find unbiased or nearly-unbiased estimations of the self-attention matrix, with uniform convergence and low estimation variance.

Main findings

The FAVOR+ procedure can be used to approximate self-attention matrices with high accuracy, without any priors on the form of the attention matrix, making it applicable as a drop-in replacement for standard self-attention and leading to strong performances in multiple applications and modalities.

The very thorough mathematical investigation of how to (and how not to) approximate softmax highlights the relevance of principled methods developed in the early 2000s even in the deep learning era.

FAVOR+ can also be applied to efficiently model other kernelizable attention mechanisms beyond softmax.

Follow-up questions

Even if the approximation of the attention mechanism is tight, small errors propagate through the transformer layers. This raises the question of the convergence and stability of fine-tuning a pre-trained network with FAVOR+ as an approximation of self-attention.

The FAVOR+ algorithm is the combination of multiple components. It is not clear which of these components have the most empirical impact on the performance, especially in view of the variety of modalities considered in this work.

Reading group discussion

The developments in pre-trained transformer-based language models for natural language understanding and generation are impressive. Making these systems efficient for production purposes has become a very active research area. This emphasizes that we still have much to learn and build, both on the methodological and practical sides, to enable efficient and general deep learning based systems, in particular for applications that require modeling long-range inputs.

The four papers above offer different ways to deal with the quadratic memory complexity of the self-attention mechanism, usually by reducing it to linear complexity. Linformer and Longformer both rely on the observation that the self-attention matrix does not contain n * n worth of information (the attention matrix is low-rank and sparse). Performer gives a principled method to approximate the softmax-attention kernel (and any kernelizable attention mechanisms beyond softmax). Compressive Transformer offers an orthogonal approach to model long-range dependencies based on recurrence.

These different inductive biases have implications in terms of computational speed and generalization beyond the training setup. In particular, Linformer and Longformer lead to different trade-offs: Longformer explicitly designs the sparse attention patterns of the self-attention (fixed patterns) while Linformer learns the low-rank matrix factorization of the self-attention matrix. In our experiments, Longformer is less efficient than Linformer, and is currently highly dependent on implementation details.
On the other hand, Linformer’s decomposition only works for fixed context length (fixed at training) and cannot generalize to longer sequences without specific adaptation. Moreover, it cannot cache previous activations which can be extremely useful in the generative setup. Interestingly, Performer is conceptually different: it learns to approximate the softmax attention kernel without relying on any sparsity or low-rank assumption. The question of how these inductive biases compare to each other for varying quantities of training data remains.All these works highlight the importance of long-range inputs modeling in natural language. In the industry, it is common to encounter use-cases such as document translation, document classification or document summarization which require modeling very long sequences in an efficient and robust way. Recently, zero-shot examples priming (a la GPT3) has also emerged as a promising alternative to standard fine-tuning, and increasing the number of priming examples (and thus the context size) steadily increases the performance and robustness. Finally, it is common in other modalities such as speech or protein modeling to encounter long sequences beyond the standard 512 time steps.Modeling long inputs is not antithetical to modeling short inputs but instead should be thought from the perspective of a continuum from shorter to longer sequences. Shortformer, Longformer and BERT provide evidence that training the model on short sequences and gradually increasing sequence lengths lead to an accelerated training and stronger downstream performance. This observation is coherent with the intuition that the long-range dependencies acquired when little data is available can rely on spurious correlations instead of robust language understanding. This echoes some experiments Teven Le Scao has run on language modeling: LSTMs are stronger learners in the low data regime compared to transformers and give better perplexities on small-scale language modeling benchmarks such as Penn Treebank.From a practical point of view, the question of positional embeddings is also a crucial methodological aspect with computational efficiency trade-offs. Relative positional embeddings (introduced in Transformer-XL and used in Compressive Transformers) are appealing because they can easily be extended to yet-unseen sequence lengths, but at the same time, relative positional embeddings are computationally expensive. On the other side, absolute positional embeddings (used in Longformer and Linformer) are less flexible for sequences longer than the ones seen during training, but are computationally more efficient. Interestingly, Shortformer introduces a simple alternative by adding the positional information to the queries and keys of the self-attention mechanism instead of adding it to the token embeddings. The method is called position-infused attention and is shown to be very efficient while producing strong results. @Hugging Face 🤗: Long-range modeling The Longformer implementation and the associated open-source checkpoints are available through the Transformers library and the model hub. Performer and Big Bird, which is a long-range model based on sparse attention, are currently in the works as part of our call for models, an effort involving the community in order to promote open-source contributions. 
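As a concrete starting point, the local/global attention pattern discussed above can be tried in a few lines with the released checkpoints — a minimal sketch (the checkpoint name and the choice of giving global attention only to the first token are just illustrative):

import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("A very long document ... " * 500, return_tensors="pt", truncation=True, max_length=4096)

# local windowed attention everywhere, global attention on the first token (e.g. CLS for classification)
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)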
We would be pumped to hear from you if you’ve wondered how to contribute to transformers but did not know where to start!

For further reading, we recommend checking Patrick von Platen’s blog on Reformer, Teven Le Scao’s post on Johnson-Lindenstrauss approximation, Efficient Transformers: A Survey, and Long Range Arena: A Benchmark for Efficient Transformers.

Next month, we'll cover self-training methods and applications. See you in March!
https://huggingface.co/blog/simple-considerations
🚧 Simple considerations for simple people building fancy neural networks
Victor Sanh
February 25, 2021
Photo by Henry & Co. on UnsplashAs machine learning continues penetrating all aspects of the industry, neural networks have never been so hyped. For instance, models like GPT-3 have been all over social media in the past few weeks and continue to make headlines outside of tech news outlets with fear-mongering titles.An article from The GuardianAt the same time, deep learning frameworks, tools, and specialized libraries democratize machine learning research by making state-of-the-art research easier to use than ever. It is quite common to see these almost-magical/plug-and-play 5 lines of code that promise (near) state-of-the-art results. Working at Hugging Face 🤗, I admit that I am partially guilty of that. 😅 It can give an inexperienced user the misleading impression that neural networks are now a mature technology while in fact, the field is in constant development.In reality, building and training neural networks can often be an extremely frustrating experience:It is sometimes hard to understand if your performance comes from a bug in your model/code or is simply limited by your model’s expressiveness.You can make tons of tiny mistakes at every step of the process without realizing at first, and your model will still train and give a decent performance.In this post, I will try to highlight a few steps of my mental process when it comes to building and debugging neural networks. By “debugging”, I mean making sure you align what you have built and what you have in mind. I will also point out things you can look at when you are not sure what your next step should be by listing the typical questions I ask myself.A lot of these thoughts stem from my experience doing research in natural language processing but most of these principles can be applied to other fields of machine learning.1. 🙈 Start by putting machine learning asideIt might sound counter-intuitive but the very first step of building a neural network is to put aside machine learning and simply focus on your data. Look at the examples, their labels, the diversity of the vocabulary if you are working with text, their length distribution, etc. You should dive into the data to get a first sense of the raw product you are working with and focus on extracting general patterns that a model might be able to catch. Hopefully, by looking at a few hundred examples, you will be able to identify high-level patterns. A few standard questions you can ask yourself:Are the labels balanced?Are there gold-labels that you do not agree with?How were the data obtained? What are the possible sources of noise in this process?Are there any preprocessing steps that seem natural (tokenization, URL or hashtag removing, etc.)?How diverse are the examples?What rule-based algorithm would perform decently on this problem?It is important to get a high-level feeling (qualitative) of your dataset along with a fine-grained analysis (quantitative). If you are working with a public dataset, someone else might have already dived into the data and reported their analysis (it is quite common in Kaggle competition for instance) so you should absolutely have a look at these!2. 📚 Continue as if you just started machine learningOnce you have a deep and broad understanding of your data, I always recommend to put yourself in the shoes of your old self when you just started machine learning and were watching introduction classes from Andrew Ng on Coursera. Start as simple as possible to get a sense of the difficulty of your task and how well standard baselines would perform. 
For instance, if you work with text, standard baselines for binary text classification can include a logistic regression trained on top of word2vec or fastText embeddings. With the current tools, running these baselines is as easy (if not easier) as running BERT, which can arguably be considered one of the standard tools for many natural language processing problems. If other baselines are available, run (or implement) some of them. It will help you get even more familiar with the data.

As developers, it is easy to feel good when building something fancy, but it is sometimes hard to rationally justify it if it beats easy baselines by only a few points, so it is central to make sure you have reasonable points of comparison:

How would a random predictor perform (especially in classification problems)? Datasets can be unbalanced…
What would the loss look like for a random predictor?
What is (are) the best metric(s) to measure progress on my task?
What are the limits of this metric? If it’s perfect, what can I conclude? What can’t I conclude?
What is missing in “simple approaches” to reach a perfect score?
Are there architectures in my neural network toolbox that would be good to model the inductive bias of the data?

3. 🦸‍♀️ Don’t be afraid to look under the hood of these 5-liner templates

Next, you can start building your model based on the insights and understanding you acquired previously. As mentioned earlier, implementing neural networks can quickly become quite tricky: there are many moving parts that work together (the optimizer, the model, the input processing pipeline, etc.), and many small things can go wrong when implementing these parts and connecting them to each other. The challenge lies in the fact that you can make these mistakes, train a model without it ever crashing, and still get a decent performance…

Yet, it is a good habit, when you think you have finished implementing, to overfit a small batch of examples (16 for instance). If your implementation is (nearly) correct, your model will be able to overfit and remember these examples by displaying a 0-loss (make sure you remove any form of regularization such as weight decay). If not, it is highly possible that you did something wrong in your implementation. In some rare cases, it means that your model is not expressive enough or lacks capacity. Again, start with a small-scale model (fewer layers for instance): you are looking to debug your model, so you want a quick feedback loop, not high performance.

Pro-tip: in my experience working with pre-trained language models, freezing the embedding modules to their pre-trained values doesn’t affect the fine-tuning task performance much, while considerably speeding up the training.

Some common errors include:

Wrong indexing… (these are really the worst 😅). Make sure you are gathering tensors along the correct dimensions, for instance…
You forgot to call model.eval() in evaluation mode (in PyTorch) or model.zero_grad() to clean the gradients
Something went wrong in the pre-processing of the inputs
The loss got wrong arguments (for instance passing probabilities when it expects logits)
Initialization doesn’t break the symmetry (usually happens when you initialize a whole matrix with a single constant value)
Some parameters are never called during the forward pass (and thus receive no gradients)
The learning rate is taking funky values like 0 all the time
Your inputs are being truncated in a suboptimal way

Pro-tip: when you work with language, have a serious look at the outputs of the tokenizers. I can’t count the number of lost hours I spent trying to reproduce results (and sometimes my own old results) because something went wrong with the tokenization. 🤦‍♂️
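To make that tip concrete, here is the kind of two-minute inspection I mean — a small sketch (the checkpoint name and sentence are just examples):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Cased text, URLs like https://hf.co and emojis 🤗 can all be tokenized in surprising ways."
encoding = tokenizer(text)

# look at the actual sub-word pieces and at what the model will really see after decoding
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
print(tokenizer.decode(encoding["input_ids"]))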
Another useful tool is deep-diving into the training dynamics and plotting (in Tensorboard for instance) the evolution of multiple scalars through training. At the bare minimum, you should look at the dynamics of your loss(es), the parameters, and their gradients.

As the loss decreases, you also want to look at the model’s predictions: either by evaluating on your development set or, my personal favorite, printing a couple of model outputs. For instance, if you are training a machine translation model, it is quite satisfying to see the generations become more and more convincing through the training. You want to be especially careful about overfitting: your training loss continues to decrease while your evaluation loss is aiming at the stars. 💫

4. 👀 Tune but don’t tune blindly

Once you have everything up and running, you might want to tune your hyperparameters to find the best configuration for your setup. I generally stick with a random grid search, as it turns out to be fairly effective in practice.

Some people report successes using fancy hyperparameter tuning methods such as Bayesian optimization, but in my experience, random search over a reasonable, manually defined grid is still a tough-to-beat baseline.

Most importantly, there is no point in launching 1000 runs with different hyperparameters (or architecture tweaks like activation functions): compare a couple of runs with different hyperparameters to get an idea of which hyperparameters have the highest impact, but in general, it is delusional to expect to get your biggest jumps of performance by simply tuning a few values. For instance, if your best performing model is trained with a learning rate of 4e2, there is probably something more fundamental happening inside your neural network and you want to identify and understand this behavior so that you can re-use this knowledge outside of your current specific context.

On average, experts use fewer resources to find better solutions.

To conclude, a piece of general advice that has helped me become better at building neural networks is to favor (as much as possible) a deep understanding of each component of your neural network instead of blindly (not to say magically) tweaking the architecture. Keep it simple and avoid small tweaks that you can’t reasonably justify even after trying really hard. Obviously, there is a right balance to find between a “trial-and-error” and an “analysis” approach, but a lot of these intuitions feel more natural as you accumulate practical experience. You too are training your internal model. 🤯

A few related pointers to complete your reading:

Reproducibility (in ML) as a vehicle for engineering best practices from Joel Grus
Checklist for debugging neural networks from Cecelia Shao
How to unit test machine learning code from Chase Roberts
A recipe for Training Neural Networks from Andrej Karpathy
https://huggingface.co/blog/ray-rag
Retrieval Augmented Generation with Huggingface Transformers and Ray
Ray Project (Anyscale)
February 10, 2021
Huggingface Transformers recently added the Retrieval Augmented Generation (RAG) model, a new NLP architecture that leverages external documents (like Wikipedia) to augment its knowledge and achieve state of the art results on knowledge-intensive tasks. In this blog post, we introduce the integration of Ray, a library for building scalable applications, into the RAG contextual document retrieval mechanism. This speeds up retrieval calls by 2x and improves the scalability of RAG distributed fine-tuning.What is Retrieval Augmented Generation (RAG)?An overview of RAG. The model retrieves contextual documents from an external dataset as part of its execution. These contextual documents are used in conjunction with the original input to produce an output. The GIF is taken from Facebook's original blog post.Recently, Huggingface partnered with Facebook AI to introduce the RAG model as part of its Transformers library. RAG acts just like any other seq2seq model. However, RAG has an intermediate component that retrieves contextual documents from an external knowledge base (like a Wikipedia text corpus). These documents are then used in conjunction with the input sequence and passed into the underlying seq2seq generator.This information retrieval step allows RAG to make use of multiple sources of knowledge -- those that are baked into the model parameters and the information that is contained in the contextual passages, allowing it to outperform other state-of-the-art models in tasks like question answering. You can try it for yourself using this demo provided by Huggingface!Scaling up fine-tuningThis retrieval of contextual documents is crucial for RAG's state-of-the-art results but introduces an extra layer of complexity. When scaling up the training process via a data-parallel training routine, a naive implementation of the document lookup can become a bottleneck for training. Further, the document index used in the retrieval component is often quite large, making it infeasible for each training worker to load its own replicated copy of the index.The previous implementation of RAG fine-tuning leveraged the torch.distributed communication package for the document retrieval portion. However, this implementation sometimes proved to be inflexible and limited in scalability.Instead, a framework-agnostic and a more flexible implementation for ad-hoc concurrent programming is required. Ray fits the bill perfectly. Ray is a simple, yet powerful Python library for general-purpose distributed and parallel programming. Using Ray for distributed document retrieval, we achieved a 2x speedup per retrieval call compared to torch.distributed, and overall better fine-tuning scalability.Ray for Document RetrievalDocument retrieval with the torch.distributed implementationThe main drawback of the torch.distributed implementation for document retrieval was that it latched onto the same process group used for training and only the rank 0 training worker loaded the index into memory.As a result, this implementation had some limitations:Synchronization bottleneck: The rank 0 worker had to receive the inputs from all workers, perform the index query, and then send the results back to the other workers. 
This limited performance with multiple training workers.

PyTorch specific: The document retrieval process group had to latch onto the existing process group used for training, meaning that PyTorch had to be used for training as well.

Document retrieval with the Ray implementation

To overcome these limitations, we introduced a novel implementation of distributed retrieval based on Ray. With Ray’s stateful actor abstractions, multiple processes that are separate from the training processes are used to load the index and handle the retrieval queries. With multiple Ray actors, retrieval is no longer a bottleneck and PyTorch is no longer a requirement for RAG.

And as you can see below, using the Ray based implementation leads to better retrieval performance for multi-GPU fine-tuning. The following results show the seconds per retrieval call, and we can see that as we increase the number of GPUs that we train on, using Ray has comparatively better performance than torch.distributed. Also, if we increase the number of Ray processes that perform retrieval, we also get better performance with more training workers, since a single retrieval process is no longer a bottleneck.

                           | 2 GPU              | 3 GPU              | 4 GPU
torch.distributed          | 2.12 sec/retrieve  | 2.62 sec/retrieve  | 3.438 sec/retrieve
Ray, 2 retrieval processes | 1.49 sec/retrieve  | 1.539 sec/retrieve | 2.029 sec/retrieve
Ray, 4 retrieval processes | 1.145 sec/retrieve | 1.484 sec/retrieve | 1.66 sec/retrieve

A performance comparison of different retrieval implementations. For each document retrieval implementation, we run 500 training steps with a per-GPU batch size of 8, and measure the time it takes to retrieve the contextual documents for each batch on the rank 0 training worker. As the results show, using multiple retrieval processes improves performance, especially as we scale training to multiple GPUs.

How do I use it?

Huggingface provides a PyTorch Lightning based fine tuning script, and we extended it to add the Ray retrieval implementation as an option.
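For readers who have not used Ray before, the stateful-actor pattern that the retrieval workers rely on looks roughly like the following toy sketch (this is our illustration, not the actual RAG retriever code; the dictionary simply stands in for the real document index):

import ray

ray.init()

@ray.remote
class RetrievalActor:
    def __init__(self):
        # in the real implementation, each actor loads the (large) document index once
        self.index = {"huggingface": "An open-source NLP library and model hub."}

    def retrieve(self, query):
        return self.index.get(query.lower(), "no document found")

# several actors can serve retrieval queries concurrently, separately from the training processes
actors = [RetrievalActor.remote() for _ in range(2)]
results = ray.get([actor.retrieve.remote("HuggingFace") for actor in actors])
print(results)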
To try it out, first install the necessary requirementspip install raypip install transformerspip install -r transformers/examples/research_projects/rag/requirements.txtThen, you can specify your data paths and other configurations and run finetune-rag-ray.sh!# Sample script to finetune RAG using Ray for distributed retrieval.# Add parent directory to python path to access lightning_base.pyexport PYTHONPATH="../":"${PYTHONPATH}"# Start a single-node Ray cluster.ray start --head# A sample finetuning run, you need to specify data_dir, output_dir and model_name_or_path# run ./examples/rag/finetune_rag_ray.sh --help to see all the possible optionspython examples/rag/finetune_rag.py \--data_dir $DATA_DIR \--output_dir $OUTPUT_DIR \--model_name_or_path $MODEL_NAME_OR_PATH \--model_type rag_sequence \--fp16 \--gpus 8 \--profile \--do_train \--do_predict \--n_val -1 \--train_batch_size 8 \--eval_batch_size 1 \--max_source_length 128 \--max_target_length 25 \--val_max_target_length 25 \--test_max_target_length 25 \--label_smoothing 0.1 \--dropout 0.1 \--attention_dropout 0.1 \--weight_decay 0.001 \--adam_epsilon 1e-08 \--max_grad_norm 0.1 \--lr_scheduler polynomial \--learning_rate 3e-05 \--num_train_epochs 100 \--warmup_steps 500 \--gradient_accumulation_steps 1 \--distributed_retriever ray \--num_retrieval_workers 4# Stop the Ray cluster.ray stopWhat’s next?Using RAG with Huggingface transformers and the Ray retrieval implementation for faster distributed fine-tuning, you can leverage RAG for retrieval-based generation on your own knowledge-intensive tasks.Also, hyperparameter tuning is another aspect of transformer fine tuning and can have huge impacts on accuracy. For scalable and easy hyperparameter tuning, check out the Ray Tune library. By using Ray Tune’s integration with PyTorch Lightning, or the built-in integration with Huggingface transformers, you can run experiments to find the perfect hyperparameters for your RAG model.And lastly, stay tuned for a potential Tensorflow implementation of RAG on Huggingface!If you plan to try RAG+Ray integration out, please feel free to share your experiences on the Ray Discourse or join the Ray community Slack for further discussion -- we’d love to hear from you!Also published at https://medium.com/distributed-computing-with-ray/retrieval-augmented-generation-with-huggingface-transformers-and-ray-b09b56161b1e
https://huggingface.co/blog/pytorch-xla
Hugging Face on PyTorch / XLA TPUs: Faster and cheaper training
Daniel JinYoung Sohn, Lysandre
February 9, 2021
Training Your Favorite Transformers on Cloud TPUs using PyTorch / XLAThe PyTorch-TPU project originated as a collaborative effort between the Facebook PyTorch and Google TPU teams and officially launched at the 2019 PyTorch Developer Conference 2019. Since then, we’ve worked with the Hugging Face team to bring first-class support to training on Cloud TPUs using PyTorch / XLA. This new integration enables PyTorch users to run and scale up their models on Cloud TPUs while maintaining the exact same Hugging Face trainers interface.This blog post provides an overview of changes made in the Hugging Face library, what the PyTorch / XLA library does, an example to get you started training your favorite transformers on Cloud TPUs, and some performance benchmarks. If you can’t wait to get started with TPUs, please skip ahead to the “Train Your Transformer on Cloud TPUs” section - we handle all the PyTorch / XLA mechanics for you within the Trainer module!XLA:TPU Device TypePyTorch / XLA adds a new xla device type to PyTorch. This device type works just like other PyTorch device types. For example, here's how to create and print an XLA tensor:import torchimport torch_xlaimport torch_xla.core.xla_model as xmt = torch.randn(2, 2, device=xm.xla_device())print(t.device)print(t)This code should look familiar. PyTorch / XLA uses the same interface as regular PyTorch with a few additions. Importing torch_xla initializes PyTorch / XLA, and xm.xla_device() returns the current XLA device. This may be a CPU, GPU, or TPU depending on your environment, but for this blog post we’ll focus primarily on TPU.The Trainer module leverages a TrainingArguments dataclass in order to define the training specifics. It handles multiple arguments, from batch sizes, learning rate, gradient accumulation and others, to the devices used. Based on the above, in TrainingArguments._setup_devices() when using XLA:TPU devices, we simply return the TPU device to be used by the Trainer:@dataclassclass TrainingArguments:...@cached_property@torch_requireddef _setup_devices(self) -> Tuple["torch.device", int]:...elif is_torch_tpu_available():device = xm.xla_device()n_gpu = 0...return device, n_gpuXLA Device Step ComputationIn a typical XLA:TPU training scenario we’re training on multiple TPU cores in parallel (a single Cloud TPU device includes 8 TPU cores). So we need to ensure that all the gradients are exchanged between the data parallel replicas by consolidating the gradients and taking an optimizer step. For this we provide the xm.optimizer_step(optimizer) which does the gradient consolidation and step-taking. In the Hugging Face trainer, we correspondingly update the train step to use the PyTorch / XLA APIs:class Trainer:…def train(self, *args, **kwargs):...if is_torch_tpu_available():xm.optimizer_step(self.optimizer)PyTorch / XLA Input PipelineThere are two main parts to running a PyTorch / XLA model: (1) tracing and executing your model’s graph lazily (refer to below “PyTorch / XLA Library” section for a more in-depth explanation) and (2) feeding your model. Without any optimization, the tracing/execution of your model and input feeding would be executed serially, leaving chunks of time during which your host CPU and your TPU accelerators would be idle, respectively. 
To avoid this, we provide an API, which pipelines the two and thus is able to overlap the tracing of step n+1 while step n is still executing.import torch_xla.distributed.parallel_loader as pl...dataloader = pl.MpDeviceLoader(dataloader, device)Checkpoint Writing and LoadingWhen a tensor is checkpointed from a XLA device and then loaded back from the checkpoint, it will be loaded back to the original device. Before checkpointing tensors in your model, you want to ensure that all of your tensors are on CPU devices instead of XLA devices. This way, when you load back the tensors, you’ll load them through CPU devices and then have the opportunity to place them on whatever XLA devices you desire. We provide the xm.save() API for this, which already takes care of only writing to storage location from only one process on each host (or one globally if using a shared file system across hosts).class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin):…def save_pretrained(self, save_directory):...if getattr(self.config, "xla_device", False):import torch_xla.core.xla_model as xmif xm.is_master_ordinal():# Save configuration filemodel_to_save.config.save_pretrained(save_directory)# xm.save takes care of saving only from masterxm.save(state_dict, output_model_file)class Trainer:…def train(self, *args, **kwargs):...if is_torch_tpu_available():xm.rendezvous("saving_optimizer_states")xm.save(self.optimizer.state_dict(),os.path.join(output_dir, "optimizer.pt"))xm.save(self.lr_scheduler.state_dict(),os.path.join(output_dir, "scheduler.pt"))PyTorch / XLA LibraryPyTorch / XLA is a Python package that uses the XLA linear algebra compiler to connect the PyTorch deep learning framework with XLA devices, which includes CPU, GPU, and Cloud TPUs. Part of the following content is also available in our API_GUIDE.md.PyTorch / XLA Tensors are LazyUsing XLA tensors and devices requires changing only a few lines of code. However, even though XLA tensors act a lot like CPU and CUDA tensors, their internals are different. CPU and CUDA tensors launch operations immediately or eagerly. XLA tensors, on the other hand, are lazy. They record operations in a graph until the results are needed. Deferring execution like this lets XLA optimize it. A graph of multiple separate operations might be fused into a single optimized operation.Lazy execution is generally invisible to the caller. PyTorch / XLA automatically constructs the graphs, sends them to XLA devices, and synchronizes when copying data between an XLA device and the CPU. Inserting a barrier when taking an optimizer step explicitly synchronizes the CPU and the XLA device.This means that when you call model(input) forward pass, calculate your loss loss.backward(), and take an optimization step xm.optimizer_step(optimizer), the graph of all operations is being built in the background. Only when you either explicitly evaluate the tensor (ex. Printing the tensor or moving it to a CPU device) or mark a step (this will be done by the MpDeviceLoader everytime you iterate through it), does the full step get executed.Trace, Compile, Execute, and RepeatFrom a user’s point of view, a typical training regimen for a model running on PyTorch / XLA involves running a forward pass, backward pass, and optimizer step. From the PyTorch / XLA library point of view, things look a little different.While a user runs their forward and backward passes, an intermediate representation (IR) graph is traced on the fly. 
The IR graph leading to each root/output tensor can be inspected as following:>>> import torch>>> import torch_xla>>> import torch_xla.core.xla_model as xm>>> t = torch.tensor(1, device=xm.xla_device())>>> s = t*t>>> print(torch_xla._XLAC._get_xla_tensors_text([s]))IR {%0 = s64[] prim::Constant(), value=1%1 = s64[] prim::Constant(), value=0%2 = s64[] xla::as_strided_view_update(%1, %0), size=(), stride=(), storage_offset=0%3 = s64[] aten::as_strided(%2), size=(), stride=(), storage_offset=0%4 = s64[] aten::mul(%3, %3), ROOT=0}This live graph is accumulated while the forward and backward passes are run on the user's program, and once xm.mark_step() is called (indirectly by pl.MpDeviceLoader), the graph of live tensors is cut. This truncation marks the completion of one step and subsequently we lower the IR graph into XLA Higher Level Operations (HLO), which is the IR language for XLA.This HLO graph then gets compiled into a TPU binary and subsequently executed on the TPU devices. However, this compilation step can be costly, typically taking longer than a single step, so if we were to compile the user’s program every single step, overhead would be high. To avoid this, we have caches that store compiled TPU binaries keyed by their HLO graphs’ unique hash identifiers. So once this TPU binary cache has been populated on the first step, subsequent steps will typically not have to re-compile new TPU binaries; instead, they can simply look up the necessary binaries from the cache.Since TPU compilations are typically much slower than the step execution time, this means that if the graph keeps changing in shape, we’ll have cache misses and compile too frequently. To minimize compilation costs, we recommend keeping tensor shapes static whenever possible. Hugging Face library’s shapes are already static for the most part with input tokens being padded appropriately, so throughout training the cache should be consistently hit. This can be checked using the debugging tools that PyTorch / XLA provides. In the example below, you can see that compilation only happened 5 times (CompileTime) whereas execution happened during each of 1220 steps (ExecuteTime):>>> import torch_xla.debug.metrics as met>>> print(met.metrics_report())Metric: CompileTimeTotalSamples: 5Accumulator: 28s920ms153.731usValueRate: 092ms152.037us / secondRate: 0.0165028 / secondPercentiles: 1%=428ms053.505us; 5%=428ms053.505us; 10%=428ms053.505us; 20%=03s640ms888.060us; 50%=03s650ms126.150us; 80%=11s110ms545.595us; 90%=11s110ms545.595us; 95%=11s110ms545.595us; 99%=11s110ms545.595usMetric: DeviceLockWaitTotalSamples: 1281Accumulator: 38s195ms476.007usValueRate: 151ms051.277us / secondRate: 4.54374 / secondPercentiles: 1%=002.895us; 5%=002.989us; 10%=003.094us; 20%=003.243us; 50%=003.654us; 80%=038ms978.659us; 90%=192ms495.718us; 95%=208ms893.403us; 99%=221ms394.520usMetric: ExecuteTimeTotalSamples: 1220Accumulator: 04m22s555ms668.071usValueRate: 923ms872.877us / secondRate: 4.33049 / secondPercentiles: 1%=045ms041.018us; 5%=213ms379.757us; 10%=215ms434.912us; 20%=217ms036.764us; 50%=219ms206.894us; 80%=222ms335.146us; 90%=227ms592.924us; 95%=231ms814.500us; 99%=239ms691.472usCounter: CachedCompileValue: 1215Counter: CreateCompileHandlesValue: 5...Train Your Transformer on Cloud TPUsTo configure your VM and Cloud TPUs, please follow “Set up a Compute Engine instance” and “Launch a Cloud TPU resource” (pytorch-1.7 version as of writing) sections. 
Once you have your VM and Cloud TPU created, using them is as simple as SSHing to your GCE VM and running the following commands to get bert-large-uncased training kicked off (batch size is for v3-8 device, may OOM on v2-8):conda activate torch-xla-1.7export TPU_IP_ADDRESS="ENTER_YOUR_TPU_IP_ADDRESS" # ex. 10.0.0.2export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470"git clone -b v4.2.2 https://github.com/huggingface/transformers.gitcd transformers && pip install .pip install datasets==1.2.1python examples/xla_spawn.py \--num_cores 8 \examples/language-modeling/run_mlm.py \--dataset_name wikitext \--dataset_config_name wikitext-103-raw-v1 \--max_seq_length 512 \--pad_to_max_length \--logging_dir ./tensorboard-metrics \--cache_dir ./cache_dir \--do_train \--do_eval \--overwrite_output_dir \--output_dir language-modeling \--overwrite_cache \--tpu_metrics_debug \--model_name_or_path bert-large-uncased \--num_train_epochs 3 \--per_device_train_batch_size 8 \--per_device_eval_batch_size 8 \--save_steps 500000The above should complete training in roughly less than 200 minutes with an eval perplexity of ~3.25.Performance BenchmarkingThe following table shows the performance of training bert-large-uncased on a v3-8 Cloud TPU system (containing 4 TPU v3 chips) running PyTorch / XLA. The dataset used for all benchmarking measurements is the WikiText103 dataset, and we use the run_mlm.py script provided in Hugging Face examples. To ensure that the workloads are not host-CPU-bound, we use the n1-standard-96 CPU configuration for these tests, but you may be able to use smaller configurations as well without impacting performance.NameDatasetHardwareGlobal Batch SizePrecisionTraining Time (mins)bert-large-uncasedWikiText1034 TPUv3 chips (i.e. v3-8)64FP32178.4bert-large-uncasedWikiText1034 TPUv3 chips (i.e. v3-8)128BF16106.4Get Started with PyTorch / XLA on TPUsSee the “Running on TPUs” section under the Hugging Face examples to get started. For a more detailed description of our APIs, check out our API_GUIDE, and for performance best practices, take a look at our TROUBLESHOOTING guide. For generic PyTorch / XLA examples, run the following Colab Notebooks we offer with free Cloud TPU access. To run directly on GCP, please see our tutorials labeled “PyTorch” on our documentation site.Have any other questions or issues? Please open an issue or question at https://github.com/huggingface/transformers/issues or directly at https://github.com/pytorch/xla/issues.
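For readers who want a self-contained toy recap of the APIs discussed above before launching a full run, here is a minimal, hedged single-core sketch. It assumes torch_xla is installed and an XLA device is available; a real multi-core job would instead be launched through xla_spawn.py as in the command above.

import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met
import torch_xla.distributed.parallel_loader as pl

device = xm.xla_device()
model = nn.Linear(128, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 128), torch.randint(0, 2, (256,))
)
# MpDeviceLoader overlaps input feeding with device execution and marks a step
# on every iteration.
loader = pl.MpDeviceLoader(
    torch.utils.data.DataLoader(dataset, batch_size=32), device
)

for inputs, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    xm.optimizer_step(optimizer)  # consolidates gradients across replicas, then steps

# With static shapes, CompileTime should be far smaller than ExecuteTime.
print(met.metrics_report())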
https://huggingface.co/blog/tf-serving
Faster TensorFlow models in Hugging Face Transformers
Julien Plu
January 26, 2021
In the last few months, the Hugging Face team has been working hard on improving Transformers’ TensorFlow models to make them more robust and faster. The recent improvements are mainly focused on two aspects:Computational performance: BERT, RoBERTa, ELECTRA and MPNet have been improved in order to have a much faster computation time. This gain of computational performance is noticeable for all the computational aspects: graph/eager mode, TF Serving and for CPU/GPU/TPU devices.TensorFlow Serving: each of these TensorFlow models can be deployed with TensorFlow Serving to benefit from this gain of computational performance for inference.Computational PerformanceTo demonstrate the computational performance improvements, we have done a thorough benchmark where we compare the performance of BERT in v4.2.0, served with TensorFlow Serving, to the official implementation from Google. The benchmark was run on a V100 GPU using a sequence length of 128 (times are in milliseconds):
Batch size | Google implementation | v4.2.0 implementation | Relative difference Google/v4.2.0 implem.
1 | 6.7 | 6.26 | 6.79%
2 | 9.4 | 8.68 | 7.96%
4 | 14.4 | 13.1 | 9.45%
8 | 24 | 21.5 | 10.99%
16 | 46.6 | 42.3 | 9.67%
32 | 83.9 | 80.4 | 4.26%
64 | 171.5 | 156 | 9.47%
128 | 338.5 | 309 | 9.11%
The current implementation of Bert in v4.2.0 is faster than the Google implementation by up to ~10%. Apart from that, it is also twice as fast as the implementations in the 4.1.1 release.TensorFlow ServingThe previous section demonstrates that the brand new Bert model got a dramatic increase in computational performance in the last version of Transformers. In this section, we will show you step-by-step how to deploy a Bert model with TensorFlow Serving to benefit from the increase in computational performance in a production environment.What is TensorFlow Serving?TensorFlow Serving belongs to the set of tools provided by TensorFlow Extended (TFX) that makes the task of deploying a model to a server easier than ever. TensorFlow Serving provides two APIs, one that can be called upon using HTTP requests and another one using gRPC to run inference on the server.What is a SavedModel?A SavedModel contains a standalone TensorFlow model, including its weights and its architecture. It does not require the original source of the model to be run, which makes it useful for sharing or deploying with any backend that supports reading a SavedModel such as Java, Go, C++ or JavaScript among others. The internal structure of a SavedModel is represented as such:
savedmodel
/assets -> the assets needed by the model (if any)
/variables -> the model checkpoints that contain the weights
saved_model.pb -> protobuf file representing the model graph
How to install TensorFlow Serving?There are three ways to install and use TensorFlow Serving: through a Docker container, through an apt package, or using pip. To make things easier and compatible with all existing operating systems, we will use Docker in this tutorial.How to create a SavedModel?SavedModel is the format expected by TensorFlow Serving.
Since Transformers v4.2.0, creating a SavedModel has three additional features:The sequence length can be modified freely between runs.All model inputs are available for inference.hidden states or attention are now grouped into a single output when returning them with output_hidden_states=True or output_attentions=True.Below, you can find the inputs and outputs representations of a TFBertForSequenceClassification saved as a TensorFlow SavedModel:The given SavedModel SignatureDef contains the following input(s):inputs['attention_mask'] tensor_info:dtype: DT_INT32shape: (-1, -1)name: serving_default_attention_mask:0inputs['input_ids'] tensor_info:dtype: DT_INT32shape: (-1, -1)name: serving_default_input_ids:0inputs['token_type_ids'] tensor_info:dtype: DT_INT32shape: (-1, -1)name: serving_default_token_type_ids:0The given SavedModel SignatureDef contains the following output(s):outputs['attentions'] tensor_info:dtype: DT_FLOATshape: (12, -1, 12, -1, -1)name: StatefulPartitionedCall:0outputs['logits'] tensor_info:dtype: DT_FLOATshape: (-1, 2)name: StatefulPartitionedCall:1Method name is: tensorflow/serving/predictTo directly pass inputs_embeds (the token embeddings) instead of input_ids (the token IDs) as input, we need to subclass the model to have a new serving signature. The following snippet of code shows how to do so:from transformers import TFBertForSequenceClassificationimport tensorflow as tf# Creation of a subclass in order to define a new serving signatureclass MyOwnModel(TFBertForSequenceClassification):# Decorate the serving method with the new input_signature# an input_signature represents the name, the data type and the shape of an expected input@tf.function(input_signature=[{"inputs_embeds": tf.TensorSpec((None, None, 768), tf.float32, name="inputs_embeds"),"attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),"token_type_ids": tf.TensorSpec((None, None), tf.int32, name="token_type_ids"),}])def serving(self, inputs):# call the model to process the inputsoutput = self.call(inputs)# return the formated outputreturn self.serving_output(output)# Instantiate the model with the new serving methodmodel = MyOwnModel.from_pretrained("bert-base-cased")# save it with saved_model=True in order to have a SavedModel version along with the h5 weights.model.save_pretrained("my_model", saved_model=True)The serving method has to be overridden by the new input_signature argument of the tf.function decorator. See the official documentation to know more about the input_signature argument. The serving method is used to define how will behave a SavedModel when deployed with TensorFlow Serving. 
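Before moving on, a quick, hedged way to sanity-check what was exported is to load the SavedModel back with plain TensorFlow and print its serving signature; the path below assumes the default layout written by save_pretrained(..., saved_model=True).

import tensorflow as tf

# Load the SavedModel exported above and inspect its serving signature.
loaded = tf.saved_model.load("my_model/saved_model/1")
serving_fn = loaded.signatures["serving_default"]
print(serving_fn.structured_input_signature)
print(serving_fn.structured_outputs)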
Now the SavedModel looks like as expected, see the new inputs_embeds input:The given SavedModel SignatureDef contains the following input(s):inputs['attention_mask'] tensor_info:dtype: DT_INT32shape: (-1, -1)name: serving_default_attention_mask:0inputs['inputs_embeds'] tensor_info:dtype: DT_FLOATshape: (-1, -1, 768)name: serving_default_inputs_embeds:0inputs['token_type_ids'] tensor_info:dtype: DT_INT32shape: (-1, -1)name: serving_default_token_type_ids:0The given SavedModel SignatureDef contains the following output(s):outputs['attentions'] tensor_info:dtype: DT_FLOATshape: (12, -1, 12, -1, -1)name: StatefulPartitionedCall:0outputs['logits'] tensor_info:dtype: DT_FLOATshape: (-1, 2)name: StatefulPartitionedCall:1Method name is: tensorflow/serving/predictHow to deploy and use a SavedModel?Let’s see step by step how to deploy and use a BERT model for sentiment classification.Step 1Create a SavedModel. To create a SavedModel, the Transformers library lets you load a PyTorch model called nateraw/bert-base-uncased-imdb trained on the IMDB dataset and convert it to a TensorFlow Keras model for you:from transformers import TFBertForSequenceClassificationmodel = TFBertForSequenceClassification.from_pretrained("nateraw/bert-base-uncased-imdb", from_pt=True)# the saved_model parameter is a flag to create a SavedModel version of the model in same time than the h5 weightsmodel.save_pretrained("my_model", saved_model=True)Step 2Create a Docker container with the SavedModel and run it. First, pull the TensorFlow Serving Docker image for CPU (for GPU replace serving by serving:latest-gpu):docker pull tensorflow/servingNext, run a serving image as a daemon named serving_base:docker run -d --name serving_base tensorflow/servingcopy the newly created SavedModel into the serving_base container's models folder:docker cp my_model/saved_model serving_base:/models/bertcommit the container that serves the model by changing MODEL_NAME to match the model's name (here bert), the name (bert) corresponds to the name we want to give to our SavedModel:docker commit --change "ENV MODEL_NAME bert" serving_base my_bert_modeland kill the serving_base image ran as a daemon because we don't need it anymore:docker kill serving_baseFinally, Run the image to serve our SavedModel as a daemon and we map the ports 8501 (REST API), and 8500 (gRPC API) in the container to the host and we name the the container bert.docker run -d -p 8501:8501 -p 8500:8500 --name bert my_bert_modelStep 3Query the model through the REST API:from transformers import BertTokenizerFast, BertConfigimport requestsimport jsonimport numpy as npsentence = "I love the new TensorFlow update in transformers."# Load the corresponding tokenizer of our SavedModeltokenizer = BertTokenizerFast.from_pretrained("nateraw/bert-base-uncased-imdb")# Load the model config of our SavedModelconfig = BertConfig.from_pretrained("nateraw/bert-base-uncased-imdb")# Tokenize the sentencebatch = tokenizer(sentence)# Convert the batch into a proper dictbatch = dict(batch)# Put the example into a list of size 1, that corresponds to the batch sizebatch = [batch]# The REST API needs a JSON that contains the key instances to declare the examples to processinput_data = {"instances": batch}# Query the REST API, the path corresponds to http://host:port/model_version/models_root_folder/model_name:methodr = requests.post("http://localhost:8501/v1/models/bert:predict", data=json.dumps(input_data))# Parse the JSON result. 
The results are contained in a list with a root key called "predictions"# and as there is only one example, takes the first element of the listresult = json.loads(r.text)["predictions"][0]# The returned results are probabilities, that can be positive or negative hence we take their absolute valueabs_scores = np.abs(result)# Take the argmax that correspond to the index of the max probability.label_id = np.argmax(abs_scores)# Print the proper LABEL with its indexprint(config.id2label[label_id])This should return POSITIVE. It is also possible to pass by the gRPC (google Remote Procedure Call) API to get the same result:from transformers import BertTokenizerFast, BertConfigimport numpy as npimport tensorflow as tffrom tensorflow_serving.apis import predict_pb2from tensorflow_serving.apis import prediction_service_pb2_grpcimport grpcsentence = "I love the new TensorFlow update in transformers."tokenizer = BertTokenizerFast.from_pretrained("nateraw/bert-base-uncased-imdb")config = BertConfig.from_pretrained("nateraw/bert-base-uncased-imdb")# Tokenize the sentence but this time with TensorFlow tensors as output already batch sized to 1. Ex:# {# 'input_ids': <tf.Tensor: shape=(1, 3), dtype=int32, numpy=array([[ 101, 19082, 102]])>,# 'token_type_ids': <tf.Tensor: shape=(1, 3), dtype=int32, numpy=array([[0, 0, 0]])>,# 'attention_mask': <tf.Tensor: shape=(1, 3), dtype=int32, numpy=array([[1, 1, 1]])># }batch = tokenizer(sentence, return_tensors="tf")# Create a channel that will be connected to the gRPC port of the containerchannel = grpc.insecure_channel("localhost:8500")# Create a stub made for prediction. This stub will be used to send the gRPC request to the TF Server.stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)# Create a gRPC request made for predictionrequest = predict_pb2.PredictRequest()# Set the name of the model, for this use case it is bertrequest.model_spec.name = "bert"# Set which signature is used to format the gRPC query, here the default onerequest.model_spec.signature_name = "serving_default"# Set the input_ids input from the input_ids given by the tokenizer# tf.make_tensor_proto turns a TensorFlow tensor into a Protobuf tensorrequest.inputs["input_ids"].CopyFrom(tf.make_tensor_proto(batch["input_ids"]))# Same with attention maskrequest.inputs["attention_mask"].CopyFrom(tf.make_tensor_proto(batch["attention_mask"]))# Same with token type idsrequest.inputs["token_type_ids"].CopyFrom(tf.make_tensor_proto(batch["token_type_ids"]))# Send the gRPC request to the TF Serverresult = stub.Predict(request)# The output is a protobuf where the only one output is a list of probabilities# assigned to the key logits. As the probabilities as in float, the list is# converted into a numpy array of floats with .float_valoutput = result.outputs["logits"].float_val# Print the proper LABEL with its indexprint(config.id2label[np.argmax(np.abs(output))])ConclusionThanks to the last updates applied on the TensorFlow models in transformers, one can now easily deploy its models in production using TensorFlow Serving. One of the next steps we are thinking about is to directly integrate the preprocessing part inside the SavedModel to make things even easier.
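One last practical note on the deployment steps above: TensorFlow Serving also exposes a model status endpoint, which is handy for checking that the container has actually loaded the model before you start sending prediction requests. A minimal sketch, assuming the bert container from Step 2 is still running:

import requests

# Query TensorFlow Serving's model status endpoint for the model named "bert".
status = requests.get("http://localhost:8501/v1/models/bert")
print(status.json())  # lists the loaded version(s) and their state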
https://huggingface.co/blog/zero-deepspeed-fairscale
Fit More and Train Faster With ZeRO via DeepSpeed and FairScale
Stas Bekman
January 19, 2021
A guest blog post by Hugging Face fellow Stas BekmanAs recent Machine Learning models have been growing much faster than the amount of GPU memory added to newly released cards, many users are unable to train or even just load some of those huge models onto their hardware. While there is an ongoing effort to distill some of those huge models to be of a more manageable size -- that effort isn't producing models small enough soon enough.In the fall of 2019 Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase and Yuxiong He published a paper:ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, which contains a plethora of ingenious new ideas on how one could make their hardware do much more than what it was thought possible before. A short time later DeepSpeed has been released and it gave to the world the open source implementation of most of the ideas in that paper (a few ideas are still in works) and in parallel a team from Facebook released FairScale which also implemented some of the core ideas from the ZeRO paper.If you use the Hugging Face Trainer, as of transformers v4.2.0 you have the experimental support for DeepSpeed's and FairScale's ZeRO features. The new --sharded_ddp and --deepspeed command line Trainer arguments provide FairScale and DeepSpeed integration respectively. Here is the full documentation.This blog post will describe how you can benefit from ZeRO regardless of whether you own just a single GPU or a whole stack of them.Huge Speedups with Multi-GPU SetupsLet's do a small finetuning with translation task experiment, using a t5-large model and the finetune_trainer.py script which you can find under examples/seq2seq in the transformers GitHub repo.We have 2x 24GB (Titan RTX) GPUs to test with.This is just a proof of concept benchmarks so surely things can be improved further, so we will benchmark on a small sample of 2000 items for training and 500 items for evalulation to perform the comparisons. Evaluation does by default a beam search of size 4, so it's slower than training with the same number of samples, that's why 4x less eval items were used in these tests.Here are the key command line arguments of our baseline:export BS=16python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py \--model_name_or_path t5-large --n_train 2000 --n_val 500 \--per_device_eval_batch_size $BS --per_device_train_batch_size $BS \--task translation_en_to_ro [...]We are just using the DistributedDataParallel (DDP) and nothing else to boost the performance for the baseline. I was able to fit a batch size (BS) of 16 before hitting Out of Memory (OOM) error.Note, that for simplicity and to make it easier to understand, I have only shownthe command line arguments important for this demonstration. You will find the complete command line atthis post.Next, we are going to re-run the benchmark every time adding one of the following:--fp16--sharded_ddp (fairscale)--sharded_ddp --fp16 (fairscale)--deepspeed without cpu offloading--deepspeed with cpu offloadingSince the key optimization here is that each technique deploys GPU RAM more efficiently, we will try to continually increase the batch size and expect the training and evaluation to complete faster (while keeping the metrics steady or even improving some, but we won't focus on these here).Remember that training and evaluation stages are very different from each other, because during training model weights are being modified, gradients are being calculated, and optimizer states are stored. 
During evaluation, none of these happen, but in this particular task of translation the model will try to search for the best hypothesis, so it actually has to do multiple runs before it's satisfied. That's why it's not fast, especially when a model is large.Let's look at the results of these six test runs:
Method | max BS | train time | eval time
baseline | 16 | 30.9458 | 56.3310
fp16 | 20 | 21.4943 | 53.4675
sharded_ddp | 30 | 25.9085 | 47.5589
sharded_ddp+fp16 | 30 | 17.3838 | 45.6593
deepspeed w/o cpu offload | 40 | 10.4007 | 34.9289
deepspeed w/ cpu offload | 50 | 20.9706 | 32.1409
It's easy to see that both FairScale and DeepSpeed provide great improvements over the baseline, in the total train and evaluation time, but also in the batch size. DeepSpeed implements more magic as of this writing and seems to be the short-term winner, but FairScale is easier to deploy. For DeepSpeed you need to write a simple configuration file and change your command line's launcher, while with FairScale you only need to add the --sharded_ddp command line argument, so you may want to try it first as it's the lowest-hanging fruit.Following the 80:20 rule, I have only spent a few hours on these benchmarks and I haven't tried to squeeze every MB and second by refining the command line arguments and configuration, since it's pretty obvious from the simple table what you'd want to try next. When you face a real project that will be running for hours and perhaps days, definitely spend more time to make sure you use the most optimal hyper-parameters to get your job done faster and at a minimal cost.If you would like to experiment with this benchmark yourself or want to know more details about the hardware and software used to run it, please refer to this post.Fitting A Huge Model Onto One GPUWhile FairScale gives us a boost only with multiple GPUs, DeepSpeed has a gift even for those of us with a single GPU.Let's try the impossible - let's train t5-3b on a 24GB RTX-3090 card.First let's try to finetune the huge t5-3b using the normal single GPU setup:export BS=1CUDA_VISIBLE_DEVICES=0 ./finetune_trainer.py \--model_name_or_path t5-3b --n_train 60 --n_val 10 \--per_device_eval_batch_size $BS --per_device_train_batch_size $BS \--task translation_en_to_ro --fp16 [...]No cookie, even with BS=1 we get:RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 23.70 GiB total capacity;21.37 GiB already allocated; 45.69 MiB free; 22.05 GiB reserved in total by PyTorch)Note, as earlier I'm showing only the important parts and the full command line arguments can be found here.Now update your transformers to v4.2.0 or higher, then install DeepSpeed:pip install deepspeedand let's try again, this time adding DeepSpeed to the command line:export BS=20CUDA_VISIBLE_DEVICES=0 deepspeed --num_gpus=1 ./finetune_trainer.py \--model_name_or_path t5-3b --n_train 60 --n_val 10 \--per_device_eval_batch_size $BS --per_device_train_batch_size $BS \--task translation_en_to_ro --fp16 --deepspeed ds_config_1gpu.json [...]Et voilà! We get a batch size of 20 trained just fine. I could probably push it even further.
The program failed with OOM at BS=30.Here are the relevant results:2021-01-12 19:06:31 | INFO | __main__ | train_n_objs = 602021-01-12 19:06:31 | INFO | __main__ | train_runtime = 8.85112021-01-12 19:06:35 | INFO | __main__ | val_n_objs = 102021-01-12 19:06:35 | INFO | __main__ | val_runtime = 3.5329We can't compare these to the baseline, since the baseline won't even start and immediately failed with OOM.Simply amazing!I used only a tiny sample since I was primarily interested in being able to train and evaluate with this huge model that normally won't fit onto a 24GB GPU.If you would like to experiment with this benchmark yourself or want to know more details about the hardware and software used to run it, please, refer to this post.The Magic Behind ZeROSince transformers only integrated these fabulous solutions and wasn't part of their invention I will share the resources where you can discover all the details for yourself. But here are a few quick insights that may help understand how ZeRO manages these amazing feats.The key feature of ZeRO is adding distributed data storage to the quite familiar concept of data parallel training.The computation on each GPU is exactly the same as data parallel training, but the parameter, gradients and optimizer states are stored in a distributed/partitioned fashion across all the GPUs and fetched only when needed.The following diagram, coming from this blog post illustrates how this works:ZeRO's ingenious approach is to partition the params, gradients and optimizer states equally across all GPUs and give each GPU just a single partition (also referred to as a shard). This leads to zero overlap in data storage between GPUs. At runtime each GPU builds up each layer's data on the fly by asking participating GPUs to send the information it's lacking.This idea could be difficult to grasp, and you will find my attempt at an explanation here.As of this writing FairScale and DeepSpeed only perform Partitioning (Sharding) for the optimizer states and gradients. Model parameters sharding is supposedly coming soon in DeepSpeed and FairScale.The other powerful feature is ZeRO-Offload (paper). This feature offloads some of the processing and memory needs to the host's CPU, thus allowing more to be fit onto the GPU. You saw its dramatic impact in the success at running t5-3b on a 24GB GPU.One other problem that a lot of people complain about on pytorch forums is GPU memory fragmentation. One often gets an OOM error that may look like this:RuntimeError: CUDA out of memory. Tried to allocate 1.48 GiB (GPU 0; 23.65 GiB total capacity;16.22 GiB already allocated; 111.12 MiB free; 22.52 GiB reserved in total by PyTorch)The program wants to allocate ~1.5GB and the GPU still has some 6-7GBs of unused memory, but it reports to have only ~100MB of contiguous free memory and it fails with the OOM error. This happens as chunks of different size get allocated and de-allocated again and again, and over time holes get created leading to memory fragmentation, where there is a lot of unused memory but no contiguous chunks of the desired size. In the example above the program could probably allocate 100MB of contiguous memory, but clearly it can't get 1.5GB in a single chunk.DeepSpeed attacks this problem by managing GPU memory by itself and ensuring that long term memory allocations don't mix with short-term ones and thus there is much less fragmentation. 
While the paper doesn't go into details, the source code is available, so it's possible to see how DeepSpeed accomplishes that.As ZeRO stands for Zero Redundancy Optimizer, it's easy to see that it lives up to its name.The FutureBesides the anticipated upcoming support for model params sharding in DeepSpeed, it already released new features that we haven't explored yet. These include DeepSpeed Sparse Attention and 1-bit Adam, which are supposed to decrease memory usage and dramatically reduce inter-GPU communication overhead, which should lead to an even faster training and support even bigger models.I trust we are going to see new gifts from the FairScale team as well. I think they are working on ZeRO stage 3 as well.Even more exciting, ZeRO is being integrated into pytorch.DeploymentIf you found the results shared in this blog post enticing, please proceed here for details on how to use DeepSpeed and FairScale with the transformers Trainer.You can, of course, modify your own trainer to integrate DeepSpeed and FairScale, based on each project's instructions or you can "cheat" and see how we did it in the transformers Trainer. If you go for the latter, to find your way around grep the source code for deepspeed and/or sharded_ddp.The good news is that ZeRO requires no model modification. The only required modifications are in the training code.IssuesIf you encounter any issues with the integration part of either of these projects please open an Issue in transformers.But if you have problems with DeepSpeed and FairScale installation, configuration and deployment - you need to ask the experts in their domains, therefore, please, use DeepSpeed Issue or FairScale Issue instead.ResourcesWhile you don't really need to understand how any of these projects work and you can just deploy them via the transformers Trainer, should you want to figure out the whys and hows please refer to the following resources.FairScale GitHubDeepSpeed GitHubPaper: ZeRO: Memory Optimizations Toward Training Trillion Parameter Models. The paper is very interesting, but it's very terse.Here is a good video discussion of the paper with visualsPaper: ZeRO-Offload: Democratizing Billion-Scale Model Training. Just published - this one goes into the details of ZeRO Offload feature.DeepSpeed configuration and tutorialsIn addition to the paper, I highly recommend to read the following detailed blog posts with diagrams:DeepSpeed: Extreme-scale model training for everyoneZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parametersTuring-NLG: A 17-billion-parameter language model by MicrosoftDeepSpeed examples on GitHubGratitudeWe were quite astonished at the amazing level of support we received from the FairScale and DeepSpeed developer teams while working on integrating those projects into transformers.In particular I'd like to thank:Benjamin Lefaudeux @blefaudeuxMandeep Baines @msbainesfrom the FairScale team and:Jeff Rasley @jeffraOlatunji Ruwase @tjruwaseSamyam Rajbhandari @samyamfrom the DeepSpeed team for your generous and caring support and prompt resolution of the issues we have encountered.And HuggingFace for providing access to hardware the benchmarks were run on.Sylvain Gugger @sgugger and Stas Bekman @stas00 worked on the integration of these projects.
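For reference, here is a hedged sketch of the kind of JSON file the --deepspeed flag above points to (the ds_config_1gpu.json in the single-GPU example). The keys follow the ZeRO stage 2 / ZeRO-Offload options documented by DeepSpeed around the time of writing; treat the exact names and values as placeholders and verify them against the DeepSpeed documentation for your version.

import json

# Hypothetical minimal DeepSpeed configuration: fp16 training with ZeRO stage 2
# and optimizer-state offloading to the CPU. Values are placeholders to tune.
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "cpu_offload": True,
    },
    "train_micro_batch_size_per_gpu": 20,
    "gradient_accumulation_steps": 1,
}

with open("ds_config_1gpu.json", "w") as f:
    json.dump(ds_config, f, indent=2)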
https://huggingface.co/blog/accelerated-inference
How we sped up transformer inference 100x for 🤗 API customers
No authors found
January 18, 2021
🤗 Transformers has become the default library for data scientists all around the world to explore state of the art NLP models and build new NLP features. With over 5,000 pre-trained and fine-tuned models available, in over 250 languages, it is a rich playground, easily accessible whichever framework you are working in.While experimenting with models in 🤗 Transformers is easy, deploying these large models into production with maximum performance, and managing them into an architecture that scales with usage is a hard engineering challenge for any Machine Learning Engineer. This 100x performance gain and built-in scalability is why subscribers of our hosted Accelerated Inference API chose to build their NLP features on top of it. To get to the last 10x of performance boost, the optimizations need to be low-level, specific to the model, and to the target hardware.This post shares some of our approaches squeezing every drop of compute juice for our customers. 🍋 Getting to the first 10x speedup The first leg of the optimization journey is the most accessible, all about using the best combination of techniques offered by the Hugging Face libraries, independent of the target hardware. We use the most efficient methods built into Hugging Face model pipelines to reduce the amount of computation during each forward pass. These methods are specific to the architecture of the model and the target task, for instance for a text-generation task on a GPT architecture, we reduce the dimensionality of the attention matrices computation by focusing on the new attention of the last token in each pass:-Naive versionOptimized version-Tokenization is often a bottleneck for efficiency during inference. We use the most efficient methods from the 🤗 Tokenizers library, leveraging the Rust implementation of the model tokenizer in combination with smart caching to get up to 10x speedup for the overall latency.Leveraging the latest features of the Hugging Face libraries, we achieve a reliable 10x speed up compared to an out-of-box deployment for a given model/hardware pair. As new releases of Transformers and Tokenizers typically ship every month, our API customers do not need to constantly adapt to new optimization opportunities, their models just keep running faster. Compilation FTW: the hard to get 10x Now this is where it gets really tricky. In order to get the best possible performance we will need to modify the model and compile it targeting the specific hardware for inference. The choice of hardware itself will depend on both the model (size in memory) and the demand profile (request batching). Even when serving predictions from the same model, some API customers may benefit more from Accelerated CPU inference, and others from Accelerated GPU inference, each with different optimization techniques and libraries applied.Once the compute platform has been selected for the use case, we can go to work. Here are some CPU-specific techniques that can be applied with a static graph:Optimizing the graph (Removing unused flow)Fusing layers (with specific CPU instructions)Quantizing the operationsUsing out-of-box functions from open source libraries (e.g. 🤗 Transformers with ONNX Runtime) won’t produce the best results, or could result in a significant loss of accuracy, particularly during quantization. There is no silver bullet, and the best path is different for each model architecture. But diving deep into the Transformers code and ONNX Runtime documentation, the stars can be aligned to achieve another 10x speedup. 
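As a small, hedged illustration of the tokenization point above, you can compare the pure-Python ("slow") tokenizer with the Rust-backed ("fast") one from 🤗 Tokenizers on your own machine; the model name and batch below are just examples.

import time
from transformers import AutoTokenizer

texts = ["Hugging Face Transformers makes state-of-the-art NLP accessible."] * 1000

slow_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
fast_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)

start = time.perf_counter()
slow_tokenizer(texts, padding=True, truncation=True)
print(f"slow tokenizer: {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
fast_tokenizer(texts, padding=True, truncation=True)
print(f"fast tokenizer: {time.perf_counter() - start:.3f}s")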
Unfair advantage The Transformer architecture was a decisive inflection point for Machine Learning performance, starting with NLP, and over the last 3 years the rate of improvement in Natural Language Understanding and Generation has been steep and accelerating. Another metric which accelerated accordingly, is the average size of the models, from the 110M parameters of BERT to the now 175Bn of GPT-3.This trend has introduced daunting challenges for Machine Learning Engineers when deploying the latest models into production. While 100x speedup is a high bar to reach, that’s what it takes to serve predictions with acceptable latency in real-time consumer applications.To reach that bar, as Machine Learning Engineers at Hugging Face we certainly have an unfair advantage sitting in the same (virtual) offices as the 🤗 Transformers and 🤗 Tokenizers maintainers 😬. We are also extremely lucky for the rich partnerships we have developed through open source collaborations with hardware and cloud vendors like Intel, NVIDIA, Qualcomm, Amazon and Microsoft that enable us to tune our models x infrastructure with the latest hardware optimizations techniques.If you want to feel the speed on our infrastructure, start a free trial and we’ll get in touch.If you want to benefit from our experience optimizing inference on your own infrastructure participate in our 🤗 Expert Acceleration Program.
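As one concrete, hedged example of the "quantizing the operations" step listed earlier, PyTorch's dynamic quantization converts the Linear layers of a model to int8 in a single call. This is a generic recipe for CPU inference, not the exact pipeline used by the Accelerated Inference API, and accuracy should always be re-checked afterwards.

import torch
from transformers import AutoModelForSequenceClassification

# Dynamically quantize the Linear layers of an example model to int8 for CPU inference.
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
model.eval()

quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized_model)  # Linear layers now show up as dynamically quantized modules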
https://huggingface.co/blog/ray-tune
Hyperparameter Search with Transformers and Ray Tune
Ray Project (Anyscale)
November 2, 2020
With cutting-edge research implementations and thousands of trained models easily accessible, the Hugging Face transformers library has become critical to the success and growth of natural language processing today.For any machine learning model to achieve good performance, users often need to implement some form of parameter tuning. Yet, nearly everyone (1, 2) either ends up disregarding hyperparameter tuning or opting to do a simplistic grid search with a small search space.However, simple experiments are able to show the benefit of using an advanced tuning technique. Below is a recent experiment run on a BERT model from Hugging Face transformers on the RTE dataset. Genetic optimization techniques like PBT can provide large performance improvements compared to standard hyperparameter optimization techniques.
Algorithm | Best Val Acc. | Best Test Acc. | Total GPU min | Total $ cost
Grid Search | 74% | 65.4% | 45 min | $2.30
Bayesian Optimization + Early Stop | 77% | 66.9% | 104 min | $5.30
Population-based Training | 78% | 70.5% | 48 min | $2.45
If you’re leveraging Transformers, you’ll want to have a way to easily access powerful hyperparameter tuning solutions without giving up the customizability of the Transformers framework.In the Transformers 3.1 release, Hugging Face Transformers and Ray Tune teamed up to provide a simple yet powerful integration. Ray Tune is a popular Python library for hyperparameter tuning that provides many state-of-the-art algorithms out of the box, along with integrations with the best-of-class tooling, such as Weights and Biases and tensorboard.To demonstrate this new Hugging Face + Ray Tune integration, we leverage the Hugging Face Datasets library to fine-tune BERT on MRPC.To run this example, please first run:pip install "ray[tune]" transformers datasets scipy sklearn torchSimply plug in one of Ray’s standard tuning algorithms by just adding a few lines of code.from datasets import load_dataset, load_metricfrom transformers import (AutoModelForSequenceClassification, AutoTokenizer,Trainer, TrainingArguments)tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')dataset = load_dataset('glue', 'mrpc')metric = load_metric('glue', 'mrpc')def encode(examples):outputs = tokenizer(examples['sentence1'], examples['sentence2'], truncation=True)return outputsencoded_dataset = dataset.map(encode, batched=True)def model_init():return AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased', return_dict=True)def compute_metrics(eval_pred):predictions, labels = eval_predpredictions = predictions.argmax(axis=-1)return metric.compute(predictions=predictions, references=labels)# Evaluate during training and a bit more often# than the default to be able to prune bad trials early.# Disabling tqdm is a matter of preference.training_args = TrainingArguments("test", evaluation_strategy="steps", eval_steps=500, disable_tqdm=True)trainer = Trainer(args=training_args,tokenizer=tokenizer,train_dataset=encoded_dataset["train"],eval_dataset=encoded_dataset["validation"],model_init=model_init,compute_metrics=compute_metrics,)# Default objective is the sum of all metrics# when metrics are provided, so we have to maximize it.trainer.hyperparameter_search(direction="maximize", backend="ray", n_trials=10 # number of trials)By default, each trial will utilize 1 CPU, and optionally 1 GPU if available.You can leverage multiple GPUs for a parallel hyperparameter search by passing in a resources_per_trial argument.You can also easily swap different parameter tuning algorithms such as HyperBand, Bayesian Optimization,
Population-Based Training:To run this example, first run: pip install hyperoptfrom ray.tune.suggest.hyperopt import HyperOptSearchfrom ray.tune.schedulers import ASHASchedulertrainer = Trainer(args=training_args,tokenizer=tokenizer,train_dataset=encoded_dataset["train"],eval_dataset=encoded_dataset["validation"],model_init=model_init,compute_metrics=compute_metrics,)best_trial = trainer.hyperparameter_search(direction="maximize",backend="ray",# Choose among many libraries:# https://docs.ray.io/en/latest/tune/api_docs/suggestion.htmlsearch_alg=HyperOptSearch(metric="objective", mode="max"),# Choose among schedulers:# https://docs.ray.io/en/latest/tune/api_docs/schedulers.htmlscheduler=ASHAScheduler(metric="objective", mode="max"))It also works with Weights and Biases out of the box!Try it out today:pip install -U raypip install -U transformers datasetsCheck out the Hugging Face documentation and Discussion threadEnd-to-end example of using Hugging Face hyperparameter search for text classificationIf you liked this blog post, be sure to check out:Transformers + GLUE + Ray Tune exampleOur Weights and Biases report on Hyperparameter Optimization for TransformersThe simplest way to serve your NLP model from scratch
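Since the table above highlights Population-based Training as the strongest option, here is a hedged sketch of plugging it in as the trial scheduler, reusing the trainer defined earlier; the mutation ranges are placeholders and the scheduler keyword is assumed to be forwarded to Ray Tune by the ray backend.

from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

# Population Based Training as the scheduler; mutation ranges are illustrative only.
pbt_scheduler = PopulationBasedTraining(
    metric="objective",
    mode="max",
    hyperparam_mutations={
        "learning_rate": tune.loguniform(1e-5, 5e-4),
        "per_device_train_batch_size": [8, 16, 32, 64],
    },
)

best_trial = trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",
    n_trials=8,
    scheduler=pbt_scheduler,
)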
https://huggingface.co/blog/pytorch_block_sparse
Block Sparse Matrices for Smaller and Faster Language Models
François Lagunas
September 10, 2020
Saving space and time, one zero at a time In previous blogposts we introduced sparse matrices and what they could do to improve neural networks.The basic assumption is that full dense layers are often overkill and can be pruned without a significant loss in precision.In some cases sparse linear layers can even improve precision or/and generalization.The main issue is that currently available code that supports sparse algebra computation is severely lacking efficiency.We are also still waiting for official PyTorch support.That's why we ran out of patience and took some time this summer to address this "lacuna".Today, we are excited to release the extension pytorch_block_sparse.By itself, or even better combined with other methods likedistillationand quantization,this library enables networks which are both smaller and faster,something Hugging Face considers crucial to let anybody useneural networks in production at low cost, and to improve the experience for the end user. Usage The provided BlockSparseLinear module is a drop in replacement for torch.nn.Linear, and it is trivial to use it in your models:# from torch.nn import Linearfrom pytorch_block_sparse import BlockSparseLinear...# self.fc = nn.Linear(1024, 256)self.fc = BlockSparseLinear(1024, 256, density=0.1)The extension also provides a BlockSparseModelPatcher that allows to modify an existing model "on the fly",which is shown in this example notebook.Such a model can then be trained as usual, without any change in your model source code. NVIDIA CUTLASS This extension is based on the cutlass tilesparse proof of concept by Yulhwa Kim.It is using C++ CUDA templates for block-sparse matrix multiplicationbased on CUTLASS.CUTLASS is a collection of CUDA C++ templates for implementing high-performance CUDA kernels.With CUTLASS, approching cuBLAS performance on custom kernels is possible without resorting to assembly language code.The latest versions include all the Ampere Tensor Core primitives, providing x10 or more speedups with a limited loss of precision.Next versions of pytorch_block_sparse will make use of these primitives,as block sparsity is 100% compatible with Tensor Cores requirements. Performance At the current stage of the library, the performances for sparse matrices are roughlytwo times slower than their cuBLAS optimized dense counterpart, and we are confidentthat we can improve this in the future.This is a huge improvement on PyTorch sparse matrices: their current implementation is an order of magnitude slowerthan the dense one.But the more important point is that the performance gain of using sparse matrices grows with the sparsity,so a 75% sparse matrix is roughly 2x faster than the dense equivalent.The memory savings are even more significant: for 75% sparsity, memory consumption is reduced by 4xas you would expect. Future work Being able to efficiently train block-sparse linear layers was just the first step.The sparsity pattern is currenly fixed at initialization, and of course optimizing it during learning will yield largeimprovements.So in future versions, you can expect tools to measure the "usefulness" of parameters to be able to optimize the sparsity pattern.NVIDIA Ampere 50% sparse pattern within blocks will probably yield another significant performance gain, just as upgradingto more recent versions of CUTLASS does.So, stay tuned for more sparsity goodness in a near future!
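To make the memory claim above concrete, here is a small, hedged sketch comparing the parameter count of a dense layer with a 75% sparse BlockSparseLinear replacement (density=0.25); exact numbers depend on the block layout, but the weight storage shrinks roughly in proportion to the density.

import torch.nn as nn
from pytorch_block_sparse import BlockSparseLinear

def num_parameters(module):
    # Total number of stored parameters in a module.
    return sum(p.numel() for p in module.parameters())

dense = nn.Linear(1024, 1024)
sparse = BlockSparseLinear(1024, 1024, density=0.25)  # keep only 25% of the blocks

print(f"dense layer:  {num_parameters(dense):,} parameters")
print(f"sparse layer: {num_parameters(sparse):,} parameters")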
https://huggingface.co/blog/how-to-generate
How to generate text: using different decoding methods for language generation with Transformers
Patrick von Platen
March 1, 2020
Note: Edited on July 2023 with up-to-date references and examples.IntroductionIn recent years, there has been an increasing interest in open-ended language generation thanks to the rise of large transformer-based language models trained on millions of webpages, including OpenAI's ChatGPT and Meta's LLaMA.The results on conditioned open-ended language generation are impressive, having shown to generalize to new tasks, handle code, or take non-text data as input.Besides the improved transformer architecture and massive unsupervised training data, better decoding methods have also played an important role.This blog post gives a brief overview of different decoding strategies and more importantly shows how you can implement them with very little effort using the popular transformers library!All of the following functionalities can be used for auto-regressive language generation (here a refresher). In short, auto-regressive language generation is based on the assumption that the probability distribution of a word sequence can be decomposed into the product of conditional next word distributions: P(w_{1:T} | W_0) = \prod_{t=1}^T P(w_{t} | w_{1:t-1}, W_0), with w_{1:0} = \emptyset, and W_0 being the initial context word sequence. The length T of the word sequence is usually determined on-the-fly and corresponds to the timestep t=T the EOS token is generated from P(w_{t} | w_{1:t-1}, W_0).We will give a tour of the currently most prominent decoding methods, mainly Greedy search, Beam search, and Sampling.Let's quickly install transformers and load the model. We will use GPT2 in PyTorch for demonstration, but the API is 1-to-1 the same for TensorFlow and JAX.!pip install -q transformersfrom transformers import AutoModelForCausalLM, AutoTokenizerimport torchtorch_device = "cuda" if torch.cuda.is_available() else "cpu"tokenizer = AutoTokenizer.from_pretrained("gpt2")# add the EOS token as PAD token to avoid warningsmodel = AutoModelForCausalLM.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id).to(torch_device)Greedy SearchGreedy search is the simplest decoding method.It selects the word with the highest probability as its next word: w_t = argmax_{w} P(w | w_{1:t-1}) at each timestep t. The following sketch shows greedy search.Starting from the word "The", the algorithm greedily chooses the next word of highest probability "nice" and so on, so that the final generated word sequence is ("The", "nice", "woman") having an overall probability of 0.5 × 0.4 = 0.2.In the following we will generate word sequences using GPT2 on the context ("I", "enjoy", "walking", "with", "my", "cute", "dog").
Let'ssee how greedy search can be used in transformers:# encode context the generation is conditioned onmodel_inputs = tokenizer('I enjoy walking with my cute dog', return_tensors='pt').to(torch_device)# generate 40 new tokensgreedy_output = model.generate(**model_inputs, max_new_tokens=40)print("Output:" + 100 * '-')print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))Output:----------------------------------------------------------------------------------------------------I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with my dog. I'm not sure if I'll ever be able to walk with my dog.I'm not sureAlright! We have generated our first short text with GPT2 😊. Thegenerated words following the context are reasonable, but the modelquickly starts repeating itself! This is a very common problem inlanguage generation in general and seems to be even more so in greedyand beam search - check out Vijayakumar etal., 2016 and Shao etal., 2017.The major drawback of greedy search though is that it misses highprobability words hidden behind a low probability word as can be seen inour sketch above:The word "has"\text{"has"}"has"with its high conditional probability of 0.90.90.9is hidden behind the word "dog"\text{"dog"}"dog", which has only thesecond-highest conditional probability, so that greedy search misses theword sequence "The","dog","has"\text{"The"}, \text{"dog"}, \text{"has"}"The","dog","has" .Thankfully, we have beam search to alleviate this problem!Beam searchBeam search reduces the risk of missing hidden high probability wordsequences by keeping the most likely num_beams of hypotheses at eachtime step and eventually choosing the hypothesis that has the overallhighest probability. Let's illustrate with num_beams=2:At time step 1, besides the most likely hypothesis ("The","nice")(\text{"The"}, \text{"nice"})("The","nice"),beam search also keeps track of the secondmost likely one ("The","dog")(\text{"The"}, \text{"dog"})("The","dog").At time step 2, beam search finds that the word sequence ("The","dog","has")(\text{"The"}, \text{"dog"}, \text{"has"})("The","dog","has"),has with 0.360.360.36a higher probability than ("The","nice","woman")(\text{"The"}, \text{"nice"}, \text{"woman"})("The","nice","woman"),which has 0.20.20.2 . Great, it has found the most likely word sequence inour toy example!Beam search will always find an output sequence with higher probabilitythan greedy search, but is not guaranteed to find the most likelyoutput.Let's see how beam search can be used in transformers. We setnum_beams > 1 and early_stopping=True so that generation is finishedwhen all beam hypotheses reached the EOS token.# activate beam search and early_stoppingbeam_output = model.generate(**model_inputs,max_new_tokens=40,num_beams=5,early_stopping=True)print("Output:" + 100 * '-')print(tokenizer.decode(beam_output[0], skip_special_tokens=True))Output:----------------------------------------------------------------------------------------------------I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.I'm not sure if I'll ever be able to walk with him again. I'm not sureWhile the result is arguably more fluent, the output still includesrepetitions of the same word sequences.One of the available remedies is to introduce n-grams (a.k.a word sequences ofn words) penalties as introduced by Paulus et al.(2017) and Klein et al.(2017). 
The most common n-gramspenalty makes sure that no n-gram appears twice by manually settingthe probability of next words that could create an already seen n-gramto 0.Let's try it out by setting no_repeat_ngram_size=2 so that no 2-gramappears twice:# set no_repeat_ngram_size to 2beam_output = model.generate(**model_inputs,max_new_tokens=40,num_beams=5,no_repeat_ngram_size=2,early_stopping=True)print("Output:" + 100 * '-')print(tokenizer.decode(beam_output[0], skip_special_tokens=True))Output:----------------------------------------------------------------------------------------------------I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.I've been thinking about this for a while now, and I think it's time for me toNice, that looks much better! We can see that the repetition does notappear anymore. Nevertheless, n-gram penalties have to be used withcare. An article generated about the city New York should not use a2-gram penalty or otherwise, the name of the city would only appearonce in the whole text!Another important feature about beam search is that we can compare thetop beams after generation and choose the generated beam that fits ourpurpose best.In transformers, we simply set the parameter num_return_sequences tothe number of highest scoring beams that should be returned. Make surethough that num_return_sequences <= num_beams!# set return_num_sequences > 1beam_outputs = model.generate(**model_inputs,max_new_tokens=40,num_beams=5,no_repeat_ngram_size=2,num_return_sequences=5,early_stopping=True)# now we have 3 output sequencesprint("Output:" + 100 * '-')for i, beam_output in enumerate(beam_outputs):print("{}: {}".format(i, tokenizer.decode(beam_output, skip_special_tokens=True)))Output:----------------------------------------------------------------------------------------------------0: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.I've been thinking about this for a while now, and I think it's time for me to1: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with her again.I've been thinking about this for a while now, and I think it's time for me to2: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.I've been thinking about this for a while now, and I think it's a good idea to3: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.I've been thinking about this for a while now, and I think it's time to take a4: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.I've been thinking about this for a while now, and I think it's a good idea.As can be seen, the five beam hypotheses are only marginally differentto each other - which should not be too surprising when using only 5beams.In open-ended generation, a couple of reasons have been broughtforward why beam search might not be the best possible option:Beam search can work very well in tasks where the length of thedesired generation is more or less predictable as in machinetranslation or summarization - see Murray et al.(2018) and Yang et al.(2018). But this is not the casefor open-ended generation where the desired output length can varygreatly, e.g. dialog and story generation.We have seen that beam search heavily suffers from repetitivegeneration. 
This is especially hard to control with n-gram or other penalties in story generation, since finding a good trade-off between inhibiting repetition and repeating cycles of identical n-grams requires a lot of finetuning. As argued in Ari Holtzman et al. (2019), high quality human language does not follow a distribution of high probability next words. In other words, as humans, we want generated text to surprise us and not to be boring/predictable. The authors show this nicely by plotting the probability a model would assign to human text vs. what beam search produces. So let's stop being boring and introduce some randomness 🤪. Sampling: In its most basic form, sampling means randomly picking the next word w_t according to its conditional probability distribution: w_t ~ P(w | w_{1:t-1}). Taking the example from above, the following graphic visualizes language generation when sampling. It becomes obvious that language generation using sampling is not deterministic anymore. The word "car" is sampled from the conditioned probability distribution P(w | "The"), followed by sampling "drives" from P(w | "The", "car"). In transformers, we set do_sample=True and deactivate Top-K sampling (more on this later) via top_k=0. In the following, we will fix the random seed for illustration purposes. Feel free to change the set_seed argument to obtain different results, or to remove it for non-determinism.# set seed to reproduce results. Feel free to change the seed though to get different resultsfrom transformers import set_seedset_seed(42)# activate sampling and deactivate top_k by setting top_k sampling to 0sample_output = model.generate(**model_inputs,max_new_tokens=40,do_sample=True,top_k=0)print("Output:" + 100 * '-')print(tokenizer.decode(sample_output[0], skip_special_tokens=True))Output:----------------------------------------------------------------------------------------------------I enjoy walking with my cute dog for the rest of the day, but this had me staying in an unusual room and not going on nights out with friends (which will always be wondered for a mere minute or so at this point). Interesting! The text seems alright - but when taking a closer look, it is not very coherent and doesn't sound like it was written by a human. That is the big problem when sampling word sequences: the models often generate incoherent gibberish, cf. Ari Holtzman et al. (2019). A trick is to make the distribution P(w | w_{1:t-1}) sharper (increasing the likelihood of high probability words and decreasing the likelihood of low probability words) by lowering the so-called temperature of the softmax. An illustration of applying temperature to our example from above could look as follows. The conditional next word distribution of step t=1 becomes much sharper, leaving almost no chance for the word "car" to be selected. Let's see how we can cool down the distribution in the library by setting temperature=0.6:# set seed to reproduce results. 
Feel free to change the seed though to get different resultsset_seed(42)# use temperature to decrease the sensitivity to low probability candidatessample_output = model.generate(**model_inputs,max_new_tokens=40,do_sample=True,top_k=0,temperature=0.6,)print("Output:" + 100 * '-')print(tokenizer.decode(sample_output[0], skip_special_tokens=True))Output:----------------------------------------------------------------------------------------------------I enjoy walking with my cute dog, but I don't like to chew on it. I like to eat it and not chew on it. I like to be able to walk with my dog."So how did you decide OK. There are fewer weird n-grams and the output is a bit more coherent now! While applying temperature can make a distribution less random, in its limit, when setting temperature → 0, temperature scaled sampling becomes equal to greedy decoding and will suffer from the same problems as before. Top-K Sampling: Fan et al. (2018) introduced a simple but very powerful sampling scheme, called Top-K sampling. In Top-K sampling, the K most likely next words are filtered and the probability mass is redistributed among only those K next words. GPT2 adopted this sampling scheme, which was one of the reasons for its success in story generation. We extend the range of words used for both sampling steps in the example above from 3 words to 10 words to better illustrate Top-K sampling. Having set K = 6, in both sampling steps we limit our sampling pool to 6 words. While the 6 most likely words, defined as V_top-K, encompass only ca. two-thirds of the whole probability mass in the first step, they include almost all of the probability mass in the second step. Nevertheless, we see that it successfully eliminates the rather weird candidates ("not", "the", "small", "told") in the second sampling step. Let's see how Top-K can be used in the library by setting top_k=50:# set seed to reproduce results. Feel free to change the seed though to get different resultsset_seed(42)# set top_k to 50sample_output = model.generate(**model_inputs,max_new_tokens=40,do_sample=True,top_k=50)print("Output:" + 100 * '-')print(tokenizer.decode(sample_output[0], skip_special_tokens=True))Output:----------------------------------------------------------------------------------------------------I enjoy walking with my cute dog for the rest of the day, but this time it was hard for me to figure out what to do with it. (One reason I asked this for a few months back is that I had a Not bad at all! The text is arguably the most human-sounding text so far. One concern with Top-K sampling, though, is that it does not dynamically adapt the number of words that are filtered from the next word probability distribution P(w | w_{1:t-1}). This can be problematic as some words might be sampled from a very sharp distribution (distribution on the right in the graph above), whereas others from a much flatter distribution (distribution on the left in the graph above). In step t=1, Top-K eliminates the possibility of sampling ("people", "big", "house", "cat"), which seem like reasonable candidates. On the other hand, in step t=2 the method includes the arguably ill-fitted words ("down", "a") in the sample pool of words. 
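Before turning to that limitation, here is a minimal sketch of the Top-K filtering step itself (illustration only, not the transformers implementation; next_token_logits is assumed to be the logits tensor for a single generation step):

import torch

def top_k_filter(next_token_logits, k=6):
    # keep the k largest logits, push everything else to -inf, then renormalize
    topk_values, _ = torch.topk(next_token_logits, k)
    cutoff = topk_values[..., -1, None]  # smallest logit still inside the top k
    filtered = next_token_logits.masked_fill(next_token_logits < cutoff, float("-inf"))
    return torch.softmax(filtered, dim=-1)  # probability mass redistributed over the k words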
Thus, limiting the sample pool to a fixed size K risks having the model produce gibberish for sharp distributions and limits the model's creativity for flat distributions. This intuition led Ari Holtzman et al. (2019) to create Top-p, or nucleus, sampling. Top-p (nucleus) sampling: Instead of sampling only from the most likely K words, Top-p sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability p. The probability mass is then redistributed among this set of words. This way, the size of the set of words (a.k.a. the number of words in the set) can dynamically increase and decrease according to the next word's probability distribution. Ok, that was very wordy, let's visualize. Having set p=0.92, Top-p sampling picks the minimum number of words that together exceed p=92% of the probability mass, defined as V_top-p. In the first example, this included the 9 most likely words, whereas it only has to pick the top 3 words in the second example to exceed 92%. Quite simple actually! It can be seen that it keeps a wide range of words where the next word is arguably less predictable, e.g. P(w | "The"), and only a few words when the next word seems more predictable, e.g. P(w | "The", "car"). Alright, time to check it out in transformers! We activate Top-p sampling by setting 0 < top_p < 1:# set seed to reproduce results. Feel free to change the seed though to get different resultsset_seed(42)# deactivate top_k and sample only from the 92% most likely wordssample_output = model.generate(**model_inputs,max_new_tokens=40,do_sample=True,top_p=0.92,top_k=0)print("Output:" + 100 * '-')print(tokenizer.decode(sample_output[0], skip_special_tokens=True))Output:----------------------------------------------------------------------------------------------------I enjoy walking with my cute dog for the rest of the day, but this had me staying in an unusual room and not going on nights out with friends (which will always be my yearning for such a spacious screen on my desk Great, that sounds like it could have been written by a human. Well, maybe not quite yet. While, in theory, Top-p seems more elegant than Top-K, both methods work well in practice. Top-p can also be used in combination with Top-K, which can avoid very low ranked words while allowing for some dynamic selection. Finally, to get multiple independently sampled outputs, we can again set the parameter num_return_sequences > 1:# set seed to reproduce results. Feel free to change the seed though to get different resultsset_seed(42)# set top_k = 50 and set top_p = 0.95 and num_return_sequences = 3sample_outputs = model.generate(**model_inputs,max_new_tokens=40,do_sample=True,top_k=50,top_p=0.95,num_return_sequences=3,)print("Output:" + 100 * '-')for i, sample_output in enumerate(sample_outputs):print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))Output:----------------------------------------------------------------------------------------------------0: I enjoy walking with my cute dog for the rest of the day, but this time it was hard for me to figure out what to do with it. When I finally looked at this for a few moments, I immediately thought, "1: I enjoy walking with my cute dog. The only time I felt like walking was when I was working, so it was awesome for me. I didn't want to walk for days. I am really curious how she can walk with me2: I enjoy walking with my cute dog (Chama-I-I-I-I-I), and I really enjoy running. 
I play in a little game I play with my brother in which I take pictures of our houses. Cool, now you should have all the tools to let your model write your stories with transformers! Conclusion: As ad-hoc decoding methods, top-p and top-K sampling seem to produce more fluent text than traditional greedy and beam search on open-ended language generation. There is evidence that the apparent flaws of greedy and beam search - mainly generating repetitive word sequences - are caused by the model (especially the way the model is trained), rather than by the decoding method, cf. Welleck et al. (2019). Also, as demonstrated in Welleck et al. (2020), it looks as if top-K and top-p sampling also suffer from generating repetitive word sequences. In Welleck et al. (2019), the authors show that, according to human evaluations, beam search can generate more fluent text than Top-p sampling when adapting the model's training objective. Open-ended language generation is a rapidly evolving field of research, and as is often the case there is no one-size-fits-all method here, so one has to see what works best in one's specific use case. Fortunately, you can try out all the different decoding methods in transformers 🤗 -- you can have an overview of the available methods here. Thanks to everybody who has contributed to the blog post: Alexander Rush, Julien Chaumond, Thomas Wolf, Victor Sanh, Sam Shleifer, Clément Delangue, Yacine Jernite, Oliver Åstrand and John de Wasseige. Appendix: generate has evolved into a highly composable method, with flags to manipulate the resulting text in many directions that were not covered in this blog post. Here are a few helpful pages to guide you: How to parameterize generate, How to stream the output, Full list of decoding options, generate API reference, LLM score leaderboard. If you find that navigating our docs is challenging and you can't easily find what you're looking for, drop us a message in this GitHub issue. Your feedback is critical to set our future direction! 🤗
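As a closing illustration (a sketch only, reusing the model, tokenizer and model_inputs objects defined earlier in this post), you could compare several of the decoding strategies discussed above side by side:

decoding_configs = {
    "greedy": dict(do_sample=False),
    "beam search": dict(num_beams=5, no_repeat_ngram_size=2, early_stopping=True),
    "top-k sampling": dict(do_sample=True, top_k=50),
    "top-p sampling": dict(do_sample=True, top_p=0.92, top_k=0),
}
for name, kwargs in decoding_configs.items():
    # generate 40 new tokens with each strategy and print them next to its name
    output = model.generate(**model_inputs, max_new_tokens=40, **kwargs)
    print(name, "->", tokenizer.decode(output[0], skip_special_tokens=True))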
https://huggingface.co/blog/how-to-train
How to train a new language model from scratch using Transformers and Tokenizers
Julien Chaumond
February 14, 2020
Over the past few months, we made several improvements to our transformers and tokenizers libraries, with the goal of making it easier than ever to train a new language model from scratch. In this post we'll demo how to train a "small" model (84M parameters = 6 layers, 768 hidden size, 12 attention heads) – that's the same number of layers & heads as DistilBERT – on Esperanto. We'll then fine-tune the model on a downstream task of part-of-speech tagging. Esperanto is a constructed language with a goal of being easy to learn. We pick it for this demo for several reasons: it is a relatively low-resource language (even though it's spoken by ~2 million people), so this demo is less boring than training one more English model 😁; its grammar is highly regular (e.g. all common nouns end in -o, all adjectives in -a), so we should get interesting linguistic results even on a small dataset; finally, the overarching goal at the foundation of the language is to bring people closer (fostering world peace and international understanding), which one could argue is aligned with the goal of the NLP community 💚. N.B. You won't need to understand Esperanto to understand this post, but if you do want to learn it, Duolingo has a nice course with 280k active learners. Our model is going to be called… wait for it… EsperBERTo 😂 1. Find a dataset: First, let us find a corpus of text in Esperanto. Here we'll use the Esperanto portion of the OSCAR corpus from INRIA. OSCAR is a huge multilingual corpus obtained by language classification and filtering of Common Crawl dumps of the Web. The Esperanto portion of the dataset is only 299M, so we'll concatenate it with the Esperanto sub-corpus of the Leipzig Corpora Collection, which comprises text from diverse sources like news, literature, and Wikipedia. The final training corpus has a size of 3 GB, which is still small – for your model, you will get better results the more data you can get to pretrain on. 2. Train a tokenizer: We choose to train a byte-level Byte-pair encoding tokenizer (the same as GPT-2), with the same special tokens as RoBERTa. Let's arbitrarily pick its size to be 52,000. We recommend training a byte-level BPE (rather than, let's say, a WordPiece tokenizer like BERT's) because it will start building its vocabulary from an alphabet of single bytes, so all words will be decomposable into tokens (no more <unk> tokens!).#! pip install tokenizersfrom pathlib import Pathfrom tokenizers import ByteLevelBPETokenizerpaths = [str(x) for x in Path("./eo_data/").glob("**/*.txt")]# Initialize a tokenizertokenizer = ByteLevelBPETokenizer()# Customize trainingtokenizer.train(files=paths, vocab_size=52_000, min_frequency=2, special_tokens=["<s>","<pad>","</s>","<unk>","<mask>",])# Save files to disktokenizer.save_model(".", "esperberto")And here's a slightly accelerated capture of the output: on our dataset, training took about 5 minutes.🔥🔥 Wow, that was fast! ⚡️🔥We now have both a vocab.json, which is a list of the most frequent tokens ranked by frequency, and a merges.txt list of merges.{"<s>": 0,"<pad>": 1,"</s>": 2,"<unk>": 3,"<mask>": 4,"!": 5,"\"": 6,"#": 7,"$": 8,"%": 9,"&": 10,"'": 11,"(": 12,")": 13,# ...}# merges.txtl aĠ ko nĠ lat aĠ eĠ dĠ p# ...What is great is that our tokenizer is optimized for Esperanto. Compared to a generic tokenizer trained for English, more native words are represented by a single, unsplit token. Diacritics, i.e. the accented characters used in Esperanto – ĉ, ĝ, ĥ, ĵ, ŝ, and ŭ – are encoded natively. 
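As a quick, illustrative sanity check (the sentence is made up, and the file names are assumed to match what the save_model(".", "esperberto") call above produced), you could compare how the new tokenizer and the pretrained GPT-2 tokenizer split a sentence containing diacritics:

from tokenizers import ByteLevelBPETokenizer
from transformers import GPT2TokenizerFast

esperberto_tokenizer = ByteLevelBPETokenizer("./esperberto-vocab.json", "./esperberto-merges.txt")
gpt2_tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

sentence = "Ĉu vi ŝatas la ĉielarkon?"
print(esperberto_tokenizer.encode(sentence).tokens)  # few tokens: native words and diacritics stay intact
print(gpt2_tokenizer.tokenize(sentence))             # many more tokens: accented characters split into byte pieces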
We also represent sequences in a more efficient manner. Here on this corpus, the average length of encoded sequences is ~30% smaller than when using the pretrained GPT-2 tokenizer. Here's how you can use it in tokenizers, including handling the RoBERTa special tokens – of course, you'll also be able to use it directly from transformers.from tokenizers.implementations import ByteLevelBPETokenizerfrom tokenizers.processors import BertProcessingtokenizer = ByteLevelBPETokenizer("./models/EsperBERTo-small/vocab.json","./models/EsperBERTo-small/merges.txt",)tokenizer._tokenizer.post_processor = BertProcessing(("</s>", tokenizer.token_to_id("</s>")),("<s>", tokenizer.token_to_id("<s>")),)tokenizer.enable_truncation(max_length=512)print(tokenizer.encode("Mi estas Julien."))# Encoding(num_tokens=7, ...)# tokens: ['<s>', 'Mi', 'Ġestas', 'ĠJuli', 'en', '.', '</s>']3. Train a language model from scratch: Update: The associated Colab notebook uses our new Trainer directly, instead of through a script. Feel free to pick the approach you like best. We will now train our language model using the run_language_modeling.py script from transformers (newly renamed from run_lm_finetuning.py as it now supports training from scratch more seamlessly). Just remember to leave --model_name_or_path set to None to train from scratch rather than from an existing model or checkpoint. We'll train a RoBERTa-like model, which is BERT-like with a couple of changes (check the documentation for more details). As the model is BERT-like, we'll train it on a task of masked language modeling, i.e. predicting how to fill arbitrary tokens that we randomly mask in the dataset. This is taken care of by the example script. We just need to do two things: implement a simple subclass of Dataset that loads data from our text files (depending on your use case, you might not even need to write your own subclass of Dataset if one of the provided examples, TextDataset and LineByLineTextDataset, works – but there are lots of custom tweaks that you might want to add based on what your corpus looks like), and choose and experiment with different sets of hyperparameters. Here's a simple version of our EsperantoDataset.from torch.utils.data import Datasetclass EsperantoDataset(Dataset):def __init__(self, evaluate: bool = False):tokenizer = ByteLevelBPETokenizer("./models/EsperBERTo-small/vocab.json","./models/EsperBERTo-small/merges.txt",)tokenizer._tokenizer.post_processor = BertProcessing(("</s>", tokenizer.token_to_id("</s>")),("<s>", tokenizer.token_to_id("<s>")),)tokenizer.enable_truncation(max_length=512)# or use the RobertaTokenizer from `transformers` directly.self.examples = []src_files = Path("./data/").glob("*-eval.txt") if evaluate else Path("./data/").glob("*-train.txt")for src_file in src_files:print("🔥", src_file)lines = src_file.read_text(encoding="utf-8").splitlines()self.examples += [x.ids for x in tokenizer.encode_batch(lines)]def __len__(self):return len(self.examples)def __getitem__(self, i):# We’ll pad at the batch level.return torch.tensor(self.examples[i])If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step. Here is one specific set of hyper-parameters and arguments we pass to the script:--output_dir ./models/EsperBERTo-small-v1--model_type roberta--mlm--config_name ./models/EsperBERTo-small--tokenizer_name ./models/EsperBERTo-small--do_train--do_eval--learning_rate 1e-4--num_train_epochs 5--save_total_limit 2--save_steps 2000--per_gpu_train_batch_size 
16--evaluate_during_training--seed 42As usual, pick the largest batch size you can fit on your GPU(s). 🔥🔥🔥 Let's start training!! 🔥🔥🔥Here you can check our Tensorboard for one particular set of hyper-parameters: our example scripts log into the Tensorboard format by default, under runs/. Then to view your board, just run tensorboard dev upload --logdir runs – this will set up tensorboard.dev, a Google-managed hosted version that lets you share your ML experiment with anyone.4. Check that the LM actually trained: Aside from looking at the training and eval losses going down, the easiest way to check whether our language model is learning anything interesting is via the FillMaskPipeline. Pipelines are simple wrappers around tokenizers and models, and the 'fill-mask' one will let you input a sequence containing a masked token (here, <mask>) and return a list of the most probable filled sequences, with their probabilities.from transformers import pipelinefill_mask = pipeline("fill-mask",model="./models/EsperBERTo-small",tokenizer="./models/EsperBERTo-small")# The sun <mask>.# =>result = fill_mask("La suno <mask>.")# {'score': 0.2526160776615143, 'sequence': '<s> La suno brilis.</s>', 'token': 10820}# {'score': 0.0999930202960968, 'sequence': '<s> La suno lumis.</s>', 'token': 23833}# {'score': 0.04382849484682083, 'sequence': '<s> La suno brilas.</s>', 'token': 15006}# {'score': 0.026011141017079353, 'sequence': '<s> La suno falas.</s>', 'token': 7392}# {'score': 0.016859788447618484, 'sequence': '<s> La suno pasis.</s>', 'token': 4552}Ok, simple syntax/grammar works. Let's try a slightly more interesting prompt:fill_mask("Jen la komenco de bela <mask>.")# This is the beginning of a beautiful <mask>.# =># {# 'score':0.06502299010753632# 'sequence':'<s> Jen la komenco de bela vivo.</s>'# 'token':1099# }# {# 'score':0.0421181358397007# 'sequence':'<s> Jen la komenco de bela vespero.</s>'# 'token':5100# }# {# 'score':0.024884626269340515# 'sequence':'<s> Jen la komenco de bela laboro.</s>'# 'token':1570# }# {# 'score':0.02324388362467289# 'sequence':'<s> Jen la komenco de bela tago.</s>'# 'token':1688# }# {# 'score':0.020378097891807556# 'sequence':'<s> Jen la komenco de bela festo.</s>'# 'token':4580# }“Jen la komenco de bela tago”, indeed! With more complex prompts, you can probe whether your language model captured more semantic knowledge or even some sort of (statistical) common sense reasoning.5. Fine-tune your LM on a downstream task: We can now fine-tune our new Esperanto language model on a downstream task of part-of-speech tagging. As mentioned before, Esperanto is a highly regular language where word endings typically condition the grammatical part of speech. Using a dataset of annotated Esperanto POS tags formatted in the CoNLL-2003 format (see the illustrative example below), we can use the run_ner.py script from transformers. POS tagging is a token classification task just like NER, so we can use the exact same script. Again, here's the hosted Tensorboard for this fine-tuning. 
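To give a rough idea of the layout referenced above (the tokens and tags here are illustrative and not taken from the actual dataset), each line holds a whitespace-separated token and its POS tag, with blank lines separating sentences:

Mi PRON
estas VERB
viro NOUN
. PUNCT

La DET
suno NOUN
brilas VERB
. PUNCT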
We train for 3 epochs using a batch size of 64 per GPU. Training and eval losses converge to small residual values as the task is rather easy (the language is regular) – it's still fun to be able to train it end-to-end 😃. This time, let's use a TokenClassificationPipeline:from transformers import TokenClassificationPipeline, pipelineMODEL_PATH = "./models/EsperBERTo-small-pos/"nlp = pipeline("ner",model=MODEL_PATH,tokenizer=MODEL_PATH,)# or instantiate a TokenClassificationPipeline directly.nlp("Mi estas viro kej estas tago varma.")# {'entity': 'PRON', 'score': 0.9979867339134216, 'word': ' Mi'}# {'entity': 'VERB', 'score': 0.9683094620704651, 'word': ' estas'}# {'entity': 'VERB', 'score': 0.9797462821006775, 'word': ' estas'}# {'entity': 'NOUN', 'score': 0.8509314060211182, 'word': ' tago'}# {'entity': 'ADJ', 'score': 0.9996201395988464, 'word': ' varma'}Looks like it worked! 🔥 For a more challenging NER dataset, @stefan-it recommended that we could train on the silver standard dataset from WikiANN. 6. Share your model 🎉: Finally, when you have a nice model, please think about sharing it with the community: upload your model using the CLI: transformers-cli upload; write a README.md model card and add it to the repository under model_cards/. Your model card should ideally include a model description, training params (dataset, preprocessing, hyperparameters), evaluation results, intended uses & limitations, and whatever else is helpful! 🤓TADA! ➡️ Your model has a page on https://huggingface.co/models and everyone can load it using AutoModel.from_pretrained("username/model_name"). If you want to take a look at models in different languages, check https://huggingface.co/models. Thank you!
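And as a final, illustrative sketch (the repository name username/EsperBERTo-small is hypothetical), anyone could then load the shared checkpoint straight from the hub and query it:

from transformers import pipeline

# load the uploaded model and tokenizer by their hub id and run masked-word prediction
fill_mask = pipeline("fill-mask", model="username/EsperBERTo-small")  # hypothetical repo id
print(fill_mask("La suno <mask>."))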