Commit 322bbb4 • Parent(s): 696a8e1

Update README.md

README.md:

license: bsd-3-clause
library_name: generic
---

# Fork of [salesforce/BLIP](https://github.com/salesforce/BLIP) for 🤗 Inference Endpoint deployment

This repository implements a `custom` task for `image-captioning` for 🤗 Inference Endpoints. The code for the customized pipeline is in [pipeline.py](https://huggingface.co/florentgbelidji/blip_captioning/blob/main/pipeline.py).

To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `pipeline.py` file is used (_double-check that it is selected_).
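
For orientation, a custom image-captioning pipeline of this kind typically boils down to a class that loads the model once at startup and maps each request payload to a list of captions. The sketch below is illustrative only, using the transformers port of BLIP so that it is self-contained; the class layout is an assumption, not code taken from this repository's `pipeline.py`:

```python
# Illustrative shape of a custom captioning pipeline (assumed layout; see the
# actual pipeline.py in this repository for the real implementation).
import base64
import io
from typing import Any, Dict, List

from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor


class PreTrainedPipeline:
    def __init__(self, path: str = "Salesforce/blip-image-captioning-base"):
        # Load the model and its preprocessing once, when the endpoint starts.
        self.processor = BlipProcessor.from_pretrained(path)
        self.model = BlipForConditionalGeneration.from_pretrained(path)

    def __call__(self, data: Dict[str, Any]) -> List[str]:
        # "inputs" carries base64-encoded images; "parameters" goes to generate().
        images = [
            Image.open(io.BytesIO(base64.b64decode(encoded))).convert("RGB")
            for encoded in data["inputs"]
        ]
        parameters = data.get("parameters", {})
        pixel_values = self.processor(images=images, return_tensors="pt").pixel_values
        output_ids = self.model.generate(pixel_values, **parameters)
        return [self.processor.decode(ids, skip_special_tokens=True) for ids in output_ids]
```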

### Expected request payload

```json
{
  "inputs": ["<base64-encoded image>"],
  "parameters": {
    "do_sample": true,
    "top_p": 0.9,
    "min_length": 5,
    "max_length": 20
  }
}
```

Below is an example of how to run a request using Python and `requests`.

## Run Request

1. Prepare an image.

```bash
wget https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
```

2. Run the request.

```python
import base64

import requests as r

ENDPOINT_URL = ""
HF_TOKEN = ""


def predict(path_to_image: str = None):
    # Base64-encode the image so the bytes survive JSON serialization
    # (the custom pipeline is expected to decode this; see pipeline.py).
    with open(path_to_image, "rb") as i:
        image = i.read()
    payload = {
        "inputs": [base64.b64encode(image).decode("utf-8")],
        "parameters": {
            "do_sample": True,
            "top_p": 0.9,
            "min_length": 5,
            "max_length": 20,
        },
    }
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    return response.json()


prediction = predict(path_to_image="palace.jpg")
```
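
Since `inputs` is a list, the same payload shape can in principle carry several images in one request. That is an assumption about the custom pipeline (it would have to iterate over the list; check `pipeline.py` before relying on it):

```python
# Hypothetical batched request: several images base64-encoded into one payload
# (assumes the custom pipeline captions each element of "inputs").
import base64


def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


payload = {
    "inputs": [encode_image("palace.jpg"), encode_image("car.png")],
    "parameters": {"min_length": 5, "max_length": 20},
}
```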

Example parameters depending on the decoding strategy:

1. Beam search

```
"parameters": {
    "num_beams": 5,
    "max_length": 20
}
```

2. Nucleus sampling

```
"parameters": {
    "num_beams": 1,
    "max_length": 20,
    "do_sample": True,
    "top_k": 50,
    "top_p": 0.95
}
```

3. Contrastive search

```
"parameters": {
    "penalty_alpha": 0.6,
    "top_k": 4,
    "max_length": 512
}
```

See the [generate()](https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/text_generation#transformers.GenerationMixin.generate) docs for additional details.
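
To compare these strategies locally before calling the endpoint, here is a sketch using the transformers port of BLIP (an assumption: the endpoint wraps the original salesforce/BLIP code, but these parameters map onto transformers' `generate()` the same way):

```python
# Local comparison of the three decoding strategies with the transformers
# port of BLIP; each parameter set matches one block above.
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

model_id = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

inputs = processor(images=Image.open("palace.jpg").convert("RGB"), return_tensors="pt")

strategies = {
    "beam search": {"num_beams": 5, "max_length": 20},
    "nucleus sampling": {"num_beams": 1, "max_length": 20, "do_sample": True, "top_k": 50, "top_p": 0.95},
    "contrastive search": {"penalty_alpha": 0.6, "top_k": 4, "max_length": 512},
}
for name, params in strategies.items():
    output_ids = model.generate(**inputs, **params)
    print(f"{name}: {processor.decode(output_ids[0], skip_special_tokens=True)}")
```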

Expected output:

```python
['buckingham palace with flower beds and red flowers']
```

Alternatively, you can send the raw image bytes directly with cURL:

```bash
curl URL \
  -X POST \
  --data-binary @car.png \
  -H "Content-Type: image/png"
```
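
Note that this binary route carries no `parameters` field, so the endpoint presumably falls back to its default generation settings for such requests (again an assumption; `pipeline.py` is the source of truth).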

For the equivalent Python request, see the `requests` example under **Run Request** above.