Update README.md
README.md (CHANGED)

@@ -5,94 +5,21 @@ tags:

**Before:**

tags:
- endpoints-template
license: bsd-3-clause
library_name: generic
duplicated_from: florentgbelidji/blip_captioning
---

To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `pipeline.py` file is used. -> _Double check that it is selected._

### Expected request payload

```json
{
  "..."
}
```

Below is an example of how to run a request using Python and `requests`.

## Run Request

1. Prepare an image.

```bash
wget https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
```

2. Run the request.

```python
import base64

import requests as r

ENDPOINT_URL = ""  # your Inference Endpoint URL
HF_TOKEN = ""      # your Hugging Face access token


def predict(path_to_image: str = None):
    # Read the image and base64-encode it, so the payload is JSON-serializable.
    with open(path_to_image, "rb") as i:
        image = base64.b64encode(i.read()).decode("utf-8")
    payload = {
        "inputs": [image],
        "parameters": {
            "do_sample": True,
            "top_p": 0.9,
            "min_length": 5,
            "max_length": 20
        }
    }
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    return response.json()


prediction = predict(path_to_image="palace.jpg")
```

Example parameters depending on the decoding strategy:

2. Nucleus sampling

```
"parameters": {
    "num_beams": 1,
    "max_length": 20,
    "do_sample": true,
    "top_k": 50,
    "top_p": 0.95
}
```

3. Contrastive search

```
"parameters": {
    "penalty_alpha": 0.6,
    "top_k": 4,
    "max_length": 512
}
```

See the [generate()](https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/text_generation#transformers.GenerationMixin.generate) docs for additional details.

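As a hedged illustration (reusing the hypothetical `ENDPOINT_URL`, `HF_TOKEN`, and base64-encoded `image` from the request example above), switching decoding strategies only means swapping the `parameters` object in the payload, e.g. for contrastive search:

```python
# Hypothetical: same request as in the example above, with the decoding strategy
# switched to the contrastive-search parameters listed in this README.
payload = {
    "inputs": [image],  # base64-encoded image string, prepared as in predict() above
    "parameters": {"penalty_alpha": 0.6, "top_k": 4, "max_length": 512},
}
response = r.post(
    ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
)
```
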
Expected output for the request example above:

```python
['buckingham palace with flower beds and red flowers']
```

**After:**

tags:
- endpoints-template
license: bsd-3-clause
library_name: generic
---

# Image captioning

For deployment as an Inference Endpoint, use the `Custom` task type. This is a fixed version of [this repo](https://huggingface.co/florentgbelidji/blip_captioning), updated to decode the base64-encoded image strings.

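A rough sketch of what that base64-decoding step can look like inside such a custom `pipeline.py` is shown below. The `PreTrainedPipeline` class name follows the usual convention for `library_name: generic` endpoints, and the transformers BLIP classes stand in for this repo's actual model code; treat all of it as an assumption rather than the repo's implementation.

```python
# Hypothetical custom pipeline sketch -- NOT this repo's actual pipeline.py.
# It only illustrates decoding base64 image strings before captioning.
import base64
from io import BytesIO
from typing import Any, Dict, List

from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor


class PreTrainedPipeline:
    def __init__(self, path: str = ""):
        # Load the captioning model and its processor from the repository path.
        self.processor = BlipProcessor.from_pretrained(path)
        self.model = BlipForConditionalGeneration.from_pretrained(path)

    def __call__(self, data: Dict[str, Any]) -> Dict[str, List[str]]:
        # "inputs" holds base64-encoded image strings; "parameters" holds generate() kwargs.
        encoded_images = data.get("inputs", [])
        parameters = data.get("parameters", {})
        captions = []
        for encoded in encoded_images:
            image = Image.open(BytesIO(base64.b64decode(encoded))).convert("RGB")
            model_inputs = self.processor(images=image, return_tensors="pt")
            output_ids = self.model.generate(**model_inputs, **parameters)
            captions.append(self.processor.decode(output_ids[0], skip_special_tokens=True))
        return {"captions": captions}
```
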
## Request payload

```json
{
  "inputs": ["/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgICAgMC...."]
}
```

The `inputs` entry is the base64-encoded image string.

## Response payload

```json
{
  "captions": ["inferred caption for image"]
}
```

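For reference, a minimal client sketch against these payloads (placeholder endpoint URL and token; it assumes the response matches the payload documented above):

```python
# Minimal sketch: send a base64-encoded image and read the caption back.
import base64

import requests

ENDPOINT_URL = ""  # your Inference Endpoint URL
HF_TOKEN = ""      # your Hugging Face access token

with open("palace.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={"inputs": [image_b64]},
)
print(response.json()["captions"][0])
```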