Commit e547693
Parent(s): d5850ce
Update README.md
README.md CHANGED
@@ -1,118 +1,59 @@
Removed (previous README.md):

---
license: apache-2.0
tags:
- object-detection
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
  example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
  example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
  example_title: Airport
---
# DETR (End-to-End Object Detection) model with ResNet-50 backbone

## Model description

## How to use

Here is how to use this model:

```python
from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests

# sample image from COCO 2017 validation (two cats on a couch with remotes)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the "no_timm" revision loads a variant that does not require the timm library
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", revision="no_timm")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", revision="no_timm")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```
This should output:
```
Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98]
Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66]
Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76]
Detected cat with confidence 0.999 at location [13.24, 52.05, 314.02, 470.93]
Detected cat with confidence 0.999 at location [345.4, 23.85, 640.37, 368.72]
```
## Training data

## Training procedure

### Preprocessing

Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
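This matches the preprocessing that `DetrImageProcessor` applies by default, so the usage snippet above needs no manual resizing or normalization. A minimal sketch for inspecting those defaults, assuming a recent transformers version in which the resize bounds are exposed as a `size` dict (older versions use separate `size`/`max_size` integers):

```python
from transformers import DetrImageProcessor

# Load the processor and inspect its preprocessing defaults.
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", revision="no_timm")

print(processor.size)        # expected: {"shortest_edge": 800, "longest_edge": 1333}
print(processor.image_mean)  # expected: [0.485, 0.456, 0.406]
print(processor.image_std)   # expected: [0.229, 0.224, 0.225]
print(processor.do_resize, processor.do_rescale, processor.do_normalize)
```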
### Training

The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results

This model achieves an AP (average precision) of **42.0** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.
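The card does not include an evaluation script. As a rough illustration (not from the card), a COCO-style AP can be computed by running the model over the COCO 2017 validation set and feeding the post-processed detections into a metric such as `MeanAveragePrecision` from torchmetrics. A toy sketch with hypothetical boxes, assuming torchmetrics is installed:

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# Toy illustration only: a real evaluation would iterate over COCO 2017 val,
# run the model, post-process with the image processor (as in the usage example),
# and accumulate all predictions and ground truth before calling compute().
metric = MeanAveragePrecision(box_format="xyxy")

preds = [{
    "boxes": torch.tensor([[13.24, 52.05, 314.02, 470.93]]),  # xyxy, absolute pixels
    "scores": torch.tensor([0.999]),
    "labels": torch.tensor([17]),                              # 17 = "cat" in COCO category ids
}]
targets = [{
    "boxes": torch.tensor([[10.0, 50.0, 315.0, 471.0]]),
    "labels": torch.tensor([17]),
}]

metric.update(preds, targets)
print(metric.compute()["map"])
```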
### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
  author        = {Nicolas Carion and
                   Francisco Massa and
                   Gabriel Synnaeve and
                   Nicolas Usunier and
                   Alexander Kirillov and
                   Sergey Zagoruyko},
  title         = {End-to-End Object Detection with Transformers},
  journal       = {CoRR},
  volume        = {abs/2005.12872},
  year          = {2020},
  url           = {https://arxiv.org/abs/2005.12872},
  archivePrefix = {arXiv},
  eprint        = {2005.12872},
  timestamp     = {Thu, 28 May 2020 17:38:09 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```
Added (new README.md):

---
tags:
- object-detection
- computer-vision
language:
- en
pipeline_tag: object-detection
---
# AISAK-Visual

## Overview:

AISAK-Visual, part of the AISAK system, is a pretrained image-captioning model based on the BLIP framework. It was adapted by the AISAK team from the https://huggingface.co/Salesforce/blip-image-captioning-large model and uses a ViT base backbone for unified vision-language understanding and generation.

## Model Information:

- **Model Name**: AISAK-Visual
- **Version**: 2.0
- **Model Architecture**: Transformer with ViT base backbone
- **Specialization**: AISAK-Visual is part of the broader AISAK system and specializes in image captioning tasks.

## Intended Use:

AISAK-Visual, as part of AISAK, is designed to provide accurate and contextually relevant captions for images. Whether used for conditional or unconditional image captioning, it offers strong performance across a range of vision-language understanding and generation tasks.
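Since AISAK-Visual is derived from Salesforce/blip-image-captioning-large, inference presumably follows the standard BLIP captioning API in transformers. A minimal sketch, assuming the checkpoint loads with the stock BLIP classes; the repo id below is a placeholder, not one documented in this card:

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Placeholder repo id (assumption): substitute the actual AISAK-Visual repository.
model_id = "aisak-ai/AISAK-Visual"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

# Sample image reused from the widget examples in the previous README.
url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Unconditional captioning: the caption is generated from the image alone.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))

# Conditional captioning: a text prefix steers the generated caption.
inputs = processor(images=image, text="a photograph of", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```

This mirrors the documented usage of the upstream Salesforce BLIP checkpoint; whether the AISAK weights are distributed in a format these classes can load directly is not stated in the card.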
## Performance:

AISAK-Visual, based on the BLIP framework, achieves state-of-the-art results on vision-language tasks, including image-text retrieval, image captioning, and VQA. Its generalization ability is demonstrated by strong zero-shot performance on video-language tasks.
## Ethical Considerations:

- **Bias Mitigation**: Efforts have been made to mitigate bias during training; however, users are encouraged to remain vigilant about potential biases in the model's output.
- **Fair Use**: Users should exercise caution when using AISAK-Visual in sensitive contexts and ensure fair and ethical use of the generated image captions.

## Limitations:
- While AISAK-Visual demonstrates proficiency in image captioning tasks, it may not be suitable for tasks requiring domain-specific knowledge.
- Performance may vary when presented with highly specialized or out-of-domain images.
## Deployment:

Inferencing for AISAK-Visual will be handled as part of the full deployment of the AISAK system in the future. The deployment process is lengthy and resource-intensive, and it prioritizes building the best possible system over the fastest release; work is progressing as quickly as possible, and updates will be provided as frequently as possible.
## Caveats:

- Users should verify important decisions based on AISAK-Visual's image captions, particularly in critical or high-stakes scenarios.
## Model Card Information:

- **Model Card Created**: February 1, 2024
- **Last Updated**: February 19, 2024
- **Contact Information**: For any inquiries or communication regarding AISAK, please contact me at mandelakorilogan@gmail.com.
**© 2024 Mandela Logan. All rights reserved.**

No part of this model may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the copyright holder. Users are expressly prohibited from creating replications or spaces derived from this model, whether in whole or in part, without the explicit authorization of the copyright holder. Unauthorized use or reproduction of this model is strictly prohibited by copyright law.