|
--- |
|
title: Hugging Face Machine Learning Optimization Team |
|
emoji: 🤗 |
|
colorFrom: yellow |
|
colorTo: yellow |
|
sdk: static |
|
pinned: false |
|
short_description: Hugging Face ML Opt Team Page |
|
--- |
|
|
|
# Hugging Face Machine Learning Optimization Team |
|
|
|
## About Hugging Face's mission |
|
|
|
Our mission is to democratize good machine learning. |
|
|
|
We want to build the platform for AI builders, empowering all communities to build collaborative technology. |
|
|
|
Hugging Face is a decentralized, highly impact-oriented, autonomy-driven company. |
|
|
|
## What does it mean to be part of the Machine Learning Optimization Team at Hugging Face? |
|
|
|
Being part of the Machine Learning Optimization Team usually means a new hire jumps into a program with one (or multiple) partner(s) as their main project, supporting Hugging Face's overall monetization strategy. |
|
|
|
There is no single definition of what projects look like; every partner has a different maturity, targets, and scope. |
 |
We tend to surf on what we observe from the community and from Hugging Face product usage to drive feature development with our partners. |
|
|
|
While most of the work usually happens for a partner, we also encourage members of the team to set aside time for personal projects they believe would help drive more revenue for Hugging Face. |
|
|
|
Last but not least, while we belong to the monetization side of the company, we sit at a very central position and are open-source builders at heart. There are many opportunities to collaborate with other teams and projects across OSS / Community, the Hugging Face Hub, and Infrastructure. |
|
|
|
## References |
|
|
|
Looking for real use cases of what we are driving for Hugging Face? Here is a non-exhaustive list of projects, achievements, and sprints we have done in the past: |
|
- [Hugging Face on AMD Instinct MI300 GPU](https://huggingface.co/blog/huggingface-amd-mi300) |
|
- [Hugging Face Text Generation Inference available for AWS Inferentia2](https://huggingface.co/blog/text-generation-inference-on-inferentia2) |
|
- [Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon](https://huggingface.co/blog/cost-efficient-rag-applications-with-intel) |
|
- [Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator](https://huggingface.co/blog/habana-gaudi-2-bloom) |
|
- [Scaling up BERT-like model Inference on modern CPU](https://huggingface.co/blog/bert-cpu-scaling-part-1) |