## Getting started
Let's get started with a text-to-image task:

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")  # 'image' is a PIL.Image object
```
In the example above, we initialized an [`InferenceClient`] with the default parameters. The only thing you need to know is the [task](#supported-tasks) you want to perform. By default, the client will connect to the Inference API and select a model to complete the task. In our example, we generated an image from a text prompt. The returned value is a `PIL.Image` object that can be saved to a file. For more details, check out the [`~InferenceClient.text_to_image`] documentation.
Let's now see an example using the [`~InferenceClient.chat_completion`] API. This task uses an LLM to generate a response from a list of messages:

```python
>>> from huggingface_hub import InferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
>>> client.chat_completion(messages, max_tokens=100)
ChatCompletionOutput(
    choices=[
        ChatCompletionOutputComplete(
            finish_reason='eos_token',
            index=0,
            message=ChatCompletionOutputMessage(
                role='assistant',
                content='The capital of France is Paris.',
                name=None,
                tool_calls=None
            ),
            logprobs=None
        )
    ],
    created=1719907176,
    id='',
    model='meta-llama/Meta-Llama-3-8B-Instruct',
    object='text_completion',
    system_fingerprint='2.0.4-sha-f426a33',
    usage=ChatCompletionOutputUsage(
        completion_tokens=8,
        prompt_tokens=17,
        total_tokens=25
    )
)
```
In this example, we specified which model we want to use (`"meta-llama/Meta-Llama-3-8B-Instruct"`). You can find a list of compatible models [on this page](https://huggingface.co/models?other=conversational&sort=likes). We then gave a list of messages to complete (here, a single question) and passed an additional parameter to the API (`max_tokens=100`).
The output is a `ChatCompletionOutput` object that follows the OpenAI specification. The generated content can be accessed with `output.choices[0].message.content`. For more details, check out the [`~InferenceClient.chat_completion`] documentation.
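For instance, reusing the `client` and `messages` from the example above, the generated text can be read directly from the returned object:

```python
>>> output = client.chat_completion(messages, max_tokens=100)
>>> output.choices[0].message.content
'The capital of France is Paris.'
```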
<Tip warning={true}>

The API is designed to be simple. Not all parameters and options are available or described for the end user. Check out [this page](https://huggingface.co/docs/api-inference/detailed_parameters) if you are interested in learning more about all the parameters available for each task.

</Tip>
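As a taste of what task methods accept, here is a minimal sketch passing extra keyword arguments to [`~InferenceClient.text_to_image`]; which parameters are honored depends on the underlying model, so treat the names below as illustrative:

```python
>>> image = client.text_to_image(
...     "An astronaut riding a horse on the moon.",
...     negative_prompt="blurry, low quality",  # model-dependent parameter
...     guidance_scale=7.5,  # model-dependent parameter
... )
```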
## Using a specific model
What if you want to use a specific model? You can specify it either as a parameter or directly at an instance level:

```python
>>> from huggingface_hub import InferenceClient

# Initialize client for a specific model
>>> client = InferenceClient(model="prompthero/openjourney-v4")
>>> client.text_to_image(...)

# Or use a generic client but pass your model as an argument
>>> client = InferenceClient()
>>> client.text_to_image(..., model="prompthero/openjourney-v4")
```
<Tip>

There are more than 200k models on the Hugging Face Hub! Each task in the [`InferenceClient`] comes with a recommended model. Be aware that the HF recommendation can change over time without prior notice. Therefore it is best to explicitly set a model once you have decided. Also, in most cases you'll be interested in finding a model specific to _your_ needs.
Visit the [Models](https://huggingface.co/models) page on the Hub to explore your possibilities.

</Tip>
## Using a specific URL
The examples we saw above use the Serverless Inference API. This proves to be very useful for prototyping and testing things quickly. Once you're ready to deploy your model to production, you'll need to use dedicated infrastructure. That's where [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index) comes into play.
It allows you to deploy any model and expose it as a private API. Once deployed, you'll get a URL that you can connect to using exactly the same code as before, changing only the `model` parameter:

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(model="https://uu149rez6gw9ehej.eu-west-1.aws.endpoints.huggingface.cloud/deepfloyd-if")
# or
>>> client = InferenceClient()
>>> client.text_to_image(..., model="https://uu149rez6gw9ehej.eu-west-1.aws.endpoints.huggingface.cloud/deepfloyd-if")
```
## Authentication
Calls made with the [`InferenceClient`] can be authenticated using a [User Access Token](https://huggingface.co/docs/hub/security-tokens). By default, it will use the token saved on your machine if you are logged in (check out [how to authenticate](https://huggingface.co/docs/huggingface_hub/quick-start#authentication)). If you are not logged in, you can pass your token as an instance parameter:

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(token="hf_***")
```
<Tip>

Authentication is NOT mandatory when using the Inference API. However, authenticated users get a higher free tier to play with the service. A token is also mandatory if you want to run inference on your private models or on private endpoints.

</Tip>
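If you prefer not to hardcode a token in your scripts, you can also log in once on your machine; a minimal sketch using the `login` helper from `huggingface_hub` (it prompts for a token interactively):

```python
from huggingface_hub import login

login()  # subsequent InferenceClient() calls pick up the saved token automatically
```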
## OpenAI compatibility
The `chat_completion` task follows [OpenAI's Python client](https://github.com/openai/openai-python) syntax. What does this mean for you? It means that if you are used to playing with `OpenAI`'s APIs, you will be able to switch to `huggingface_hub.InferenceClient` to work with open-source models by updating just two lines of code!

```diff
- from openai import OpenAI
+ from huggingface_hub import InferenceClient
- client = OpenAI(
+ client = InferenceClient(
    base_url=...,
    api_key=...,
)

output = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    stream=True,
    max_tokens=1024,
)
for chunk in output:
    print(chunk.choices[0].delta.content)
```
And that's it! The only required changes are to replace `from openai import OpenAI` with `from huggingface_hub import InferenceClient` and `client = OpenAI(...)` with `client = InferenceClient(...)`. You can choose any LLM model from the Hugging Face Hub by passing its model id as the `model` parameter. [Here is a list](https://huggingface.co/models?pipeline_tag=text-generation&other=conversational,text-generation-inference&sort=trending)
of supported models. For authentication, you should pass a valid [User Access Token](https://huggingface.co/settings/tokens) as `api_key` or authenticate using `huggingface_hub` (see the [authentication guide](https://huggingface.co/docs/huggingface_hub/quick-start#authentication)).
All input parameters and output formats are strictly the same. In particular, you can pass `stream=True` to receive tokens as they are generated. You can also use the [`AsyncInferenceClient`] to run inference using `asyncio`:

```diff
import asyncio
- from openai import AsyncOpenAI
+ from huggingface_hub import AsyncInferenceClient
- client = AsyncOpenAI()
+ client = AsyncInferenceClient()

async def main():
    stream = await client.chat.completions.create(
        model="meta-llama/Meta-Llama-3-8B-Instruct",
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")
asyncio.run(main())
```

You might wonder why you should use [`InferenceClient`] instead of OpenAI's client. There are a few reasons:

1. [`InferenceClient`] is configured for Hugging Face services. You don't need to provide a `base_url` to run models on the serverless Inference API. You also don't need to provide a `token` or `api_key` if your machine is already correctly logged in.
2. [`InferenceClient`] is tailored for both Text-Generation-Inference (TGI) and `transformers` frameworks, meaning you are assured it will always be on par with the latest updates.
3. [`InferenceClient`] is integrated with our Inference Endpoints service, making it easier to launch an Inference Endpoint, check its status, and run inference on it. Check out the [Inference Endpoints](./inference_endpoints.md) guide for more details.
<Tip>

`InferenceClient.chat.completions.create` is simply an alias for `InferenceClient.chat_completion`. Check out the package reference of [`~InferenceClient.chat_completion`] for more details. The `base_url` and `api_key` parameters when instantiating the client are also aliases for `model` and `token`. These aliases have been defined to reduce friction when switching from `OpenAI` to `InferenceClient`.

</Tip>
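As a concrete illustration of these aliases, the two instantiations below are equivalent; a minimal sketch with placeholder URL and token values:

```python
from huggingface_hub import InferenceClient

# OpenAI-style aliases...
client = InferenceClient(base_url="https://my-endpoint.example", api_key="hf_***")
# ...behave the same as the native parameters
client = InferenceClient(model="https://my-endpoint.example", token="hf_***")
```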
## Supported tasks
[`InferenceClient`]'s goal is to provide the easiest interface to run inference on Hugging Face models. It has a simple API that supports the most common tasks. Here is a list of the currently supported tasks:

| Domain | Task | Supported | Documentation |
|--------|------|-----------|---------------|
| Audio | [Audio Classification](https://huggingface.co/tasks/audio-classification) | βœ… | [`~InferenceClient.audio_classification`] |
| | [Audio-to-Audio](https://huggingface.co/tasks/audio-to-audio) | βœ… | [`~InferenceClient.audio_to_audio`] |
| | [Automatic Speech Recognition](https://huggingface.co/tasks/automatic-speech-recognition) | βœ… | [`~InferenceClient.automatic_speech_recognition`] |
| | [Text-to-Speech](https://huggingface.co/tasks/text-to-speech) | βœ… | [`~InferenceClient.text_to_speech`] |
| Computer Vision | [Image Classification](https://huggingface.co/tasks/image-classification) | βœ… | [`~InferenceClient.image_classification`] |
| | [Image Segmentation](https://huggingface.co/tasks/image-segmentation) | βœ… | [`~InferenceClient.image_segmentation`] |
| | [Image-to-Image](https://huggingface.co/tasks/image-to-image) | βœ… | [`~InferenceClient.image_to_image`] |
| | [Image-to-Text](https://huggingface.co/tasks/image-to-text) | βœ… | [`~InferenceClient.image_to_text`] |
| | [Object Detection](https://huggingface.co/tasks/object-detection) | βœ… | [`~InferenceClient.object_detection`] |
| | [Text-to-Image](https://huggingface.co/tasks/text-to-image) | βœ… | [`~InferenceClient.text_to_image`] |
| | [Zero-Shot-Image-Classification](https://huggingface.co/tasks/zero-shot-image-classification) | βœ… | [`~InferenceClient.zero_shot_image_classification`] |
| Multimodal | [Document Question Answering](https://huggingface.co/tasks/document-question-answering) | βœ… | [`~InferenceClient.document_question_answering`] |
| | [Visual Question Answering](https://huggingface.co/tasks/visual-question-answering) | βœ… | [`~InferenceClient.visual_question_answering`] |
| NLP | Conversational | | *deprecated*, use Chat Completion |
| | [Chat Completion](https://huggingface.co/tasks/text-generation) | βœ… | [`~InferenceClient.chat_completion`] |
| | [Feature Extraction](https://huggingface.co/tasks/feature-extraction) | βœ… | [`~InferenceClient.feature_extraction`] |
| | [Fill Mask](https://huggingface.co/tasks/fill-mask) | βœ… | [`~InferenceClient.fill_mask`] |
| | [Question Answering](https://huggingface.co/tasks/question-answering) | βœ… | [`~InferenceClient.question_answering`] |
| | [Sentence Similarity](https://huggingface.co/tasks/sentence-similarity) | βœ… | [`~InferenceClient.sentence_similarity`] |
| | [Summarization](https://huggingface.co/tasks/summarization) | βœ… | [`~InferenceClient.summarization`] |
| | [Table Question Answering](https://huggingface.co/tasks/table-question-answering) | βœ… | [`~InferenceClient.table_question_answering`] |
| | [Text Classification](https://huggingface.co/tasks/text-classification) | βœ… | [`~InferenceClient.text_classification`] |
| | [Text Generation](https://huggingface.co/tasks/text-generation) | βœ… | [`~InferenceClient.text_generation`] |
| | [Token Classification](https://huggingface.co/tasks/token-classification) | βœ… | [`~InferenceClient.token_classification`] |
| | [Translation](https://huggingface.co/tasks/translation) | βœ… | [`~InferenceClient.translation`] |
| | [Zero Shot Classification](https://huggingface.co/tasks/zero-shot-classification) | βœ… | [`~InferenceClient.zero_shot_classification`] |
| Tabular | [Tabular Classification](https://huggingface.co/tasks/tabular-classification) | βœ… | [`~InferenceClient.tabular_classification`] |
| | [Tabular Regression](https://huggingface.co/tasks/tabular-regression) | βœ… | [`~InferenceClient.tabular_regression`] |

<Tip>

Check out the [Tasks](https://huggingface.co/tasks) page to learn more about each task, how to use them, and the most popular models for each task.

</Tip>
## Custom requests
However, it is not always possible to cover all use cases. For custom requests, the [`InferenceClient.post`] method gives you the flexibility to send any request to the Inference API. For example, you can specify how to parse the inputs and outputs. In the example below, the generated image is returned as raw bytes instead of being parsed as a `PIL.Image`.
This can be helpful if you don't have `Pillow` installed in your setup and just care about the binary content of the image. [`InferenceClient.post`] is also useful to handle tasks that are not yet officially supported.

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> response = client.post(json={"inputs": "An astronaut riding a horse on the moon."}, model="stabilityai/stable-diffusion-2-1")
>>> response.content  # raw bytes
b'...'
```
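If you do want an image object after all, the raw bytes can be decoded with Pillow; a minimal sketch, assuming `Pillow` is installed:

```python
import io

from PIL import Image

image = Image.open(io.BytesIO(response.content))  # decode the raw bytes from client.post
image.save("astronaut.png")
```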
## Async client
An async version of the client is also provided, based on `asyncio` and `aiohttp`. You can either install `aiohttp` directly or use the `[inference]` extra:

```sh
pip install --upgrade huggingface_hub[inference]
# or
# pip install aiohttp
```

After installation, all async API endpoints are available via [`AsyncInferenceClient`]. Its initialization and APIs are strictly the same as the sync-only version.

```py
# Code must be run in an asyncio concurrent context.
# $ python -m asyncio
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")

>>> async for token in await client.text_generation("The Huggingface Hub is", stream=True):
...     print(token, end="")
 a platform for sharing and discussing ML-related content.
```

For more information about the `asyncio` module, please refer to the [official documentation](https://docs.python.org/3/library/asyncio.html).
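A typical reason to reach for the async client is to run several requests concurrently; here is a minimal sketch using `asyncio.gather` (the prompts and filenames are illustrative):

```py
import asyncio

from huggingface_hub import AsyncInferenceClient

async def main():
    client = AsyncInferenceClient()
    prompts = ["An astronaut riding a horse", "A cat playing chess"]
    # Launch both text-to-image calls concurrently and wait for all results
    images = await asyncio.gather(*(client.text_to_image(p) for p in prompts))
    for i, image in enumerate(images):
        image.save(f"result_{i}.png")

asyncio.run(main())
```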
## Advanced tips
In the above section, we saw the main aspects of [`InferenceClient`]. Let's dive into some more advanced tips.
### Timeout
When doing inference, there are two main causes for a timeout:
- The inference process takes a long time to complete.
- The model is not available, for example when the Inference API is loading it for the first time.

[`InferenceClient`] has a global `timeout` parameter to handle those two aspects. By default, it is set to `None`,
meaning that the client will wait indefinitely for the inference to complete. If you want more control in your workflow, you can set it to a specific value in seconds. If the timeout delay expires, an [`InferenceTimeoutError`] is raised. You can catch it and handle it in your code:

```python
>>> from huggingface_hub import InferenceClient, InferenceTimeoutError
>>> client = InferenceClient(timeout=30)
>>> try:
...     client.text_to_image(...)
... except InferenceTimeoutError:
...     print("Inference timed out after 30s.")
```
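Since a timeout can simply mean the model is still loading, a common pattern is to catch the error and retry; a minimal sketch (the retry count and delay are arbitrary):

```python
import time

from huggingface_hub import InferenceClient, InferenceTimeoutError

client = InferenceClient(timeout=30)
image = None
for attempt in range(3):
    try:
        image = client.text_to_image("An astronaut riding a horse on the moon.")
        break  # success, stop retrying
    except InferenceTimeoutError:
        time.sleep(5)  # give the model some time to load before retrying
```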
### Binary inputs
Some tasks require binary inputs, for example, when dealing with images or audio files. In this case, [`InferenceClient`] tries to be as permissive as possible and accepts different types:
- raw `bytes`
- a file-like object, opened as binary (`with open("audio.flac", "rb") as f: ...`)
- a path (`str` or `Path`) pointing to a local file
- a URL (`str`) pointing to a remote file (e.g. `https://...`). In this case, the file will be downloaded locally before being sent to the Inference API.
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...]
```
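The same call also accepts the other input types listed above; a minimal sketch, assuming a local `dog.jpg` file exists:

```py
from pathlib import Path

client.image_classification(Path("dog.jpg"))  # local path

with open("dog.jpg", "rb") as f:
    client.image_classification(f.read())  # raw bytes
```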
## Legacy InferenceAPI client
[`InferenceClient`] acts as a replacement for the legacy [`InferenceApi`] client. It adds specific support for tasks and handles inference on both [Inference API](https://huggingface.co/docs/api-inference/index) and [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index). Here is a short guide to help you migrate from [`InferenceApi`] to [`InferenceClient`].
### Initialization
Change from

```python
>>> from huggingface_hub import InferenceApi
>>> inference = InferenceApi(repo_id="bert-base-uncased", token=API_TOKEN)
```

to

```python
>>> from huggingface_hub import InferenceClient
>>> inference = InferenceClient(model="bert-base-uncased", token=API_TOKEN)
```
### Run on a specific task
Change from

```python
>>> from huggingface_hub import InferenceApi
>>> inference = InferenceApi(repo_id="paraphrase-xlm-r-multilingual-v1", task="feature-extraction")
>>> inference(...)
```

to

```python
>>> from huggingface_hub import InferenceClient
>>> inference = InferenceClient()
>>> inference.feature_extraction(..., model="paraphrase-xlm-r-multilingual-v1")
```
<Tip>

This is the recommended way to adapt your code to [`InferenceClient`]. It lets you benefit from the task-specific methods like `feature_extraction`.

</Tip>
### Run custom request
Change from

```python
>>> from huggingface_hub import InferenceApi
>>> inference = InferenceApi(repo_id="bert-base-uncased")
>>> inference(inputs="The goal of life is [MASK].")
[{'sequence': 'the goal of life is life.', 'score': 0.10933292657136917, 'token': 2166, 'token_str': 'life'}]
```

to

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> response = client.post(json={"inputs": "The goal of life is [MASK]."}, model="bert-base-uncased")
>>> response.json()
[{'sequence': 'the goal of life is life.', 'score': 0.10933292657136917, 'token': 2166, 'token_str': 'life'}]
```
### Run with parameters
Change from

```python
>>> from huggingface_hub import InferenceApi
>>> inference = InferenceApi(repo_id="typeform/distilbert-base-uncased-mnli")
>>> inputs = "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!"
>>> params = {"candidate_labels":["refund", "legal", "faq"]}
>>> inference(inputs, params)
{'sequence': 'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!', 'labels': ['refund', 'faq', 'legal'], 'scores': [0.9378499388694763, 0.04914155602455139, 0.013008488342165947]}
```
to

```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> inputs = "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!"
>>> params = {"candidate_labels":["refund", "legal", "faq"]}
>>> response = client.post(json={"inputs": inputs, "parameters": params}, model="typeform/distilbert-base-uncased-mnli")
>>> response.json() {'sequence': 'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!', 'labels': ['refund', 'faq', 'legal'], 'scores': [0.9378499388694763, 0.04914155602455139, 0.013008488342165947]} ```
# Search the Hub
In this tutorial, you will learn how to search models, datasets and spaces on the Hub using `huggingface_hub`.
## How to list repositories?
The `huggingface_hub` library includes an HTTP client [`HfApi`] to interact with the Hub. Among other things, it can list models, datasets and spaces stored on the Hub:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> models = api.list_models()
```

The output of [`list_models`] is an iterator over the models stored on the Hub. Similarly, you can use [`list_datasets`] to list datasets and [`list_spaces`] to list Spaces.
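Since the result is an iterator, materializing everything at once is rarely what you want; a minimal sketch printing the first three model ids (the `id` attribute is part of the returned `ModelInfo` objects):

```py
from itertools import islice

for model in islice(models, 3):
    print(model.id)
```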
## How to filter repositories?
Listing repositories is great, but now you might want to filter your search. The list helpers have several attributes like:
- `filter`
- `author`
- `search`
- ...

Let's see an example to get all models on the Hub that do image classification, have been trained on the imagenet dataset, and run with PyTorch.

```py
models = api.list_models(
    task="image-classification",
    library="pytorch",
    trained_dataset="imagenet",
)
```
While filtering, you can also sort the models and take only the top results. For example, the following fetches the top 5 most downloaded datasets on the Hub:

```py
>>> list(list_datasets(sort="downloads", direction=-1, limit=5))
[DatasetInfo(
	id='argilla/databricks-dolly-15k-curated-en',
	author='argilla',
	sha='4dcd1dedbe148307a833c931b21ca456a1fc4281',
	last_modified=datetime.datetime(2023, 10, 2, 12, 32, 53, tzinfo=datetime.timezone.utc),
	private=False,
	downloads=8889377,
	(...)
```

To explore available filters on the Hub, visit the [models](https://huggingface.co/models) and [datasets](https://huggingface.co/datasets) pages in your browser, search for some parameters and look at the values in the URL.
# How-to guides
In this section, you will find practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use huggingface_hub to solve real-world problems:

<div class="mt-10">
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-3 md:gap-y-4 md:gap-x-5">

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./repository">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Repository</div>
<p class="text-gray-700">How to create a repository on the Hub? How to configure it? How to interact with it?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./download">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Download files</div>
<p class="text-gray-700">How do I download a file from the Hub? How do I download a repository?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./upload">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Upload files</div>
<p class="text-gray-700">How to upload a file or a folder? How to make changes to an existing repository on the Hub?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./search">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Search</div>
<p class="text-gray-700">How to efficiently search through the 200k+ public models, datasets and spaces?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./hf_file_system">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">HfFileSystem</div>
<p class="text-gray-700">How to interact with the Hub through a convenient interface that mimics Python's file interface?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./inference">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Inference</div>
<p class="text-gray-700">How to make predictions using the accelerated Inference API?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./community">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Community Tab</div>
<p class="text-gray-700">How to interact with the Community tab (Discussions and Pull Requests)?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./collections">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Collections</div>
<p class="text-gray-700">How to programmatically build collections?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./manage-cache">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Cache</div>
<p class="text-gray-700">How does the cache-system work? How to benefit from it?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./model-cards">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Model Cards</div>
<p class="text-gray-700">How to create and share Model Cards?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./manage-spaces">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Manage your Space</div>
<p class="text-gray-700">How to manage your Space hardware and configuration?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./integrations">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Integrate a library</div>
<p class="text-gray-700">What does it mean to integrate a library with the Hub? And how to do it?</p>
</a>

<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./webhooks_server">
<div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Webhooks server</div>
<p class="text-gray-700">How to create a server to receive Webhooks and deploy it as a Space?</p>
</a>

</div>
</div>
# Collections
A collection is a group of related items on the Hub (models, datasets, Spaces, papers) that are organized together on the same page. Collections are useful for creating your own portfolio, bookmarking content in categories, or presenting a curated list of items you want to share. Check out this [guide](https://huggingface.co/docs/hub/collections) to understand in more detail what collections are and how they look on the Hub.
You can directly manage collections in the browser, but in this guide, we will focus on how to manage them programmatically.
## Fetch a collection
Use [`get_collection`] to fetch your collections or any public ones. You must have the collection's *slug* to retrieve a collection. A slug is an identifier for a collection based on the title and a unique ID. You can find the slug in the URL of the collection page.

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hfh_collection_slug.png"/>
</div>
Let's fetch the collection with the slug `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`:

```py
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection
Collection(
  slug='TheBloke/recent-models-64f9a55bb3115b4f513ec026',
  title='Recent models',
  owner='TheBloke',
  items=[...],
  last_updated=datetime.datetime(2023, 10, 2, 22, 56, 48, 632000, tzinfo=datetime.timezone.utc),
  position=1,
  private=False,
  theme='green',
  upvotes=90,
  description="Models I've recently quantized. Please note that currently this list has to be updated manually, and therefore is not guaranteed to be up-to-date."
)
>>> collection.items[0]
CollectionItem(
  item_object_id='651446103cd773a050bf64c2',
  item_id='TheBloke/U-Amethyst-20B-AWQ',
  item_type='model',
  position=88,
  note=None
)
```

The [`Collection`] object returned by [`get_collection`] contains:
- high-level metadata: `slug`, `owner`, `title`, `description`, etc.
- a list of [`CollectionItem`] objects; each item represents a model, a dataset, a Space, or a paper.
All collection items are guaranteed to have:
- a unique `item_object_id`: this is the id of the collection item in the database
- an `item_id`: this is the id on the Hub of the underlying item (model, dataset, Space, paper); it is not necessarily unique, and only the `item_id`/`item_type` pair is unique
- an `item_type`: model, dataset, Space, paper
- the `position` of the item in the collection, which can be updated to reorganize your collection (see [`update_collection_item`] below)
A `note` can also be attached to the item. This is useful to add additional information about the item (a comment, a link to a blog post, etc.). If an item doesn't have a note, the attribute still exists with a `None` value. In addition to these base attributes, returned items can have additional attributes depending on their type: `author`, `private`, `lastModified`, `gated`, `title`, `likes`, `upvotes`, etc. None of these attributes are guaranteed to be returned.
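For example, to print the note attached to each item of a fetched collection; a minimal sketch reusing the `collection` object from above:

```py
for item in collection.items:
    if item.note is not None:
        print(item.item_type, item.item_id, ":", item.note)
```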
## List collections
We can also retrieve collections using [`list_collections`]. Collections can be filtered using some parameters. Let's list all the collections from the user [`teknium`](https://huggingface.co/teknium).

```py
>>> from huggingface_hub import list_collections
>>> collections = list_collections(owner="teknium")
```

This returns an iterable of `Collection` objects. We can iterate over them to print, for example, the number of upvotes for each collection.

```py
>>> for collection in collections:
...     print("Number of upvotes:", collection.upvotes)
Number of upvotes: 1
Number of upvotes: 5
```
... print("Number of upvotes:", collection.upvotes) Number of upvotes: 1 Number of upvotes: 5 ``` <Tip warning={true}> When listing collections, the item list per collection is truncated to 4 items maximum. To retrieve all items from a collection, you must use [`get_collection`]. </Tip>
It is possible to do more advanced filtering. Let's get all collections containing the model [TheBloke/OpenHermes-2.5-Mistral-7B-GGUF](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF), sorted by trending, and limit the count to 5.

```py
>>> collections = list_collections(item="models/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF", sort="trending", limit=5)
>>> for collection in collections:
...     print(collection.slug)
teknium/quantized-models-6544690bb978e0b0f7328748
AmeerH/function-calling-65560a2565d7a6ef568527af
PostArchitekt/7bz-65479bb8c194936469697d8c
gnomealone/need-to-test-652007226c6ce4cdacf9c233
Crataco/favorite-7b-models-651944072b4fffcb41f8b568
```

The `sort` parameter must be one of `"last_modified"`, `"trending"` or `"upvotes"`. The `item` parameter accepts any particular item. For example:
* `"models/teknium/OpenHermes-2.5-Mistral-7B"` * `"spaces/julien-c/open-gpt-rhyming-robot"` * `"datasets/squad"` * `"papers/2311.12983"` For more details, please check out [`list_collections`] reference.
## Create a new collection
Now that we know how to get a [`Collection`], let's create our own! Use [`create_collection`] with a title and description. To create a collection on an organization page, pass `namespace="my-cool-org"` when creating the collection. Finally, you can also create private collections by passing `private=True`.

```py
>>> from huggingface_hub import create_collection
>>> collection = create_collection(
...     title="ICCV 2023",
...     description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )
```

It will return a [`Collection`] object with the high-level metadata (title, description, owner, etc.) and an empty list of items. You will now be able to refer to this collection using its `slug`.

```py
>>> collection.slug
'owner/iccv-2023-15e23b46cb98efca45'
>>> collection.title
"ICCV 2023"
>>> collection.owner
"username"
>>> collection.url
'https://huggingface.co/collections/owner/iccv-2023-15e23b46cb98efca45'
```
## Manage items in a collection
Now that we have a [`Collection`], we want to add items to it and organize them.
### Add items
Items have to be added one by one using [`add_collection_item`]. You only need to know the `collection_slug`, `item_id` and `item_type`. Optionally, you can also add a `note` to the item (500 characters maximum).

```py
>>> from huggingface_hub import create_collection, add_collection_item

>>> collection = create_collection(title="OS Week Highlights - Sept 18 - 24", namespace="osanseviero")
>>> collection.slug
"osanseviero/os-week-highlights-sept-18-24-650bfed7f795a59f491afb80"
>>> add_collection_item(collection.slug, item_id="coqui/xtts", item_type="space")
>>> add_collection_item(
...     collection.slug,
...     item_id="warp-ai/wuerstchen",
...     item_type="model",
...     note="WΓΌrstchen is a new fast and efficient high resolution text-to-image architecture and model"
... )
>>> add_collection_item(collection.slug, item_id="lmsys/lmsys-chat-1m", item_type="dataset")
```