Help & Support
We have a variety of Inference Endpoints blog posts at https://huggingface.co/blog to help you:
- Getting Started with Hugging Face Inference Endpoints
- Why we’re switching to Hugging Face Inference Endpoints, and maybe you should too
- Deploy LLMs with Hugging Face Inference Endpoints
- 🤗 LLM suggestions in Argilla with HuggingFace Inference Endpoints
- Deploy MusicGen in no time with Inference Endpoints
- Programmatically manage Inference Endpoints (see the code sketch after this list)
- New! TGI Multi-LoRA: Deploy Once, Serve 30 models
- New! Deploy open LLMs with vLLM on Inference Endpoints
- New! Llama 3.1 - 405B, 70B & 8B with multilinguality and long context
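If you want to follow the programmatic route from the "Programmatically manage Inference Endpoints" post, below is a minimal sketch using the huggingface_hub library. The endpoint name, model, vendor, region, and instance values are illustrative placeholders, not recommendations; replace them with options available in the Inference Endpoints catalog for your account.

```python
# Minimal sketch: create, call, and pause an Inference Endpoint with huggingface_hub.
# All names and instance values below are placeholders; check the Inference Endpoints
# catalog for the vendors, regions, and instance types available to your account.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-demo-endpoint",            # hypothetical endpoint name
    repository="gpt2",             # model repository to deploy
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
)

endpoint.wait()                    # block until the endpoint is running
print(endpoint.url)

# Query the deployed model, then pause the endpoint to stop billing.
print(endpoint.client.text_generation("The answer to the universe is"))
endpoint.pause()
```

The other management helpers (listing, updating, resuming, and deleting endpoints) are covered in the blog post above and in the huggingface_hub documentation.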
Need more help?
Feel free to ask questions on the forum so the community can also benefit from the answers: https://discuss.huggingface.co/. If you have any other questions or issues, please contact us at api-enterprise@huggingface.co.