Model Card for BLIP: Bootstrapping Language-Image Pre-training

BLIP is a unified vision-language model designed for image captioning, visual question answering, and related tasks. This checkpoint starts from the image-captioning pretrained weights and is fine-tuned on a food-specific dataset.

Model Details

Model Description

BLIP (Bootstrapping Language-Image Pre-training) leverages vision transformers (ViT) for feature extraction and connects them with language models for unified vision-language understanding and generation. This particular model is fine-tuned to generate captions for food-related images.

  • Developed by: Salesforce AI Research
  • Funded by: Salesforce
  • Shared by: Official BLIP repository
  • Model type: Vision-language model
  • Language(s): English
  • Finetuned from model: BLIP base, pretrained on the COCO dataset
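
How to Use

The model can be run with the Hugging Face transformers library. The sketch below is a minimal captioning example, assuming the base checkpoint Salesforce/blip-image-captioning-base; to use the food-finetuned weights, substitute this repository's model id, and the example image URL is only a placeholder.

    import requests
    from PIL import Image
    from transformers import BlipProcessor, BlipForConditionalGeneration

    # Assumption: the base captioning checkpoint; swap in this repository's
    # model id to load the food-finetuned weights.
    model_id = "Salesforce/blip-image-captioning-base"
    processor = BlipProcessor.from_pretrained(model_id)
    model = BlipForConditionalGeneration.from_pretrained(model_id)

    # Placeholder image; any RGB food photo works the same way.
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

    # Preprocess the image, generate token ids, and decode the caption.
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    print(processor.decode(out[0], skip_special_tokens=True))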

Model Sources

  • Repository: https://github.com/salesforce/BLIP
  • Paper: BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation (Li et al., 2022)