Model Card for SDSU AI Club Image Captioning Model

Model Details

Model Description

This image captioning model uses the transformers, datasets, and PyTorch libraries to fine-tune a pre-trained BLIP model on a subset of the Flickr30k image dataset. Given an input image, the model generates a caption for it.

  • Developed by: Charisma Meyer, Daniel Aguilar, Erica Lee, Evan Tardiff, Steven Trujillo, Vincent Huynh
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Model type: Image Captioning
  • Language(s) (NLP): [More Information Needed]
  • License: [More Information Needed]
  • Finetuned from model [optional]: Salesforce/blip-image-captioning-base
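
As a minimal sketch of that starting point (standard transformers API; nothing here is specific to this repository), the base checkpoint and its processor load as follows:

```python
# Load the pre-trained BLIP checkpoint this model was fine-tuned from.
from transformers import BlipProcessor, BlipForConditionalGeneration

base_id = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(base_id)
model = BlipForConditionalGeneration.from_pretrained(base_id)
```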

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper: "BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation" by Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

[More Information Needed]

Downstream Use [optional]

[More Information Needed]

Out-of-Scope Use

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
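
Pending an official snippet, here is a minimal captioning sketch. It assumes the fine-tuned weights are published in this repository; the repository id below is a hypothetical placeholder:

```python
# Hedged example: generate a caption for one image with the fine-tuned model.
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

repo_id = "sdsu-ai-club/image-captioning"  # hypothetical id -- replace with the real one
processor = BlipProcessor.from_pretrained(repo_id)
model = BlipForConditionalGeneration.from_pretrained(repo_id)

# Any RGB image works; this URL is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```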

Training Details

Training Data

[More Information Needed]

Training Procedure

We took a large image-caption dataset and a pre-trained model from Hugging Face and fine-tuned the model to perform image captioning on that dataset, so that it generates a tailored caption for a given input image. The biggest problem we encountered was memory: our computers did not have enough RAM to train on the full dataset, and training runs crashed. Our solution was to create a subset of the original dataset and to adjust the batch size to control how many images were loaded at a time. Training on this reduced dataset allowed us to fine-tune the model successfully in a reasonable amount of time (see the sketch below).
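
A sketch of that workaround, loosely following the fine-tuning notebook cited in the Citation section. The dataset id, subset size, batch size, and learning rate are assumptions for illustration, not values from this card:

```python
# Sketch of the RAM workaround: train on a subset of Flickr30k and keep
# per-step memory bounded via the DataLoader batch size.
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Hypothetical dataset id and subset size -- adjust to the actual setup.
dataset = load_dataset("nlphuji/flickr30k", split="test")
subset = dataset.shuffle(seed=42).select(range(2000))

def collate_fn(batch):
    images = [item["image"].convert("RGB") for item in batch]
    captions = [item["caption"][0] for item in batch]  # first of the five captions
    return processor(images=images, text=captions, padding=True,
                     return_tensors="pt")

# A smaller batch size trades training speed for lower peak RAM usage.
loader = DataLoader(subset, batch_size=8, shuffle=True, collate_fn=collate_fn)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for batch in loader:
    # BLIP is trained here with the caption tokens as labels,
    # as in the cited Hugging Face notebook.
    outputs = model(input_ids=batch["input_ids"],
                    pixel_values=batch["pixel_values"],
                    labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```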

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation

@misc{li2022blip,
  title={BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  author={Junnan Li and Dongxu Li and Caiming Xiong and Steven Hoi},
  year={2022},
  eprint={2201.12086},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

@misc{belkada2023blipfinetune,
  author={younesbelkada and RocketKnight1},
  title={Fine-tune BLIP using Hugging Face transformers and datasets},
  year={2023},
  url={https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb}
}

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]
