---
license: apache-2.0
datasets:
- wraps/flux1_dev-small
base_model: vikhyatk/moondream2
pipeline_tag: image-text-to-text
library_name: transformers
---
# Moondream-Caption: Custom Small Vision Model based on Moondream2
Moondream-Caption is a custom small vision model based on [moondream2](https://huggingface.co/vikhyatk/moondream2) by vikhyatk. It has been fine-tuned on the [wraps/flux1_dev-small](https://huggingface.co/datasets/wraps/flux1_dev-small) dataset to improve its image description capabilities.
### Key Features:
- Based on the moondream2 architecture
- Fine-tuned for image caption generation
- Trained on a high-quality custom dataset
## Dataset
The dataset used for training Moondream-Caption is specifically designed for image captioning tasks. It has the following characteristics:
- Images generated with flux1_dev
- Highly accurate and verified descriptive captions
- Wide variety of visual content
## Usage
You can use Moondream-Caption for image captioning tasks by leveraging the Hugging Face Transformers library. Here's a quick example of how to generate captions for an image:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from PIL import Image

# Load the model and tokenizer. trust_remote_code=True is required
# because moondream uses a custom model architecture.
moondream = AutoModelForCausalLM.from_pretrained(
    "wraps/moondream-caption", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("wraps/moondream-caption")

# Encode the image once, then prompt the model for a caption.
image = Image.open("path/to/your/image.jpg")
enc_image = moondream.encode_image(image)
caption = moondream.answer_question(
    enc_image, "Write a long caption for this image", tokenizer
)
print(caption)
```
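If you want to caption many images, it can help to load the model once onto a GPU and reuse it in a loop. The sketch below is not part of the original card: it assumes the model exposes the same `encode_image`/`answer_question` interface as upstream moondream2, and the image file names are placeholders.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use a GPU (and half precision) when available; fall back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

moondream = AutoModelForCausalLM.from_pretrained(
    "wraps/moondream-caption", trust_remote_code=True, torch_dtype=dtype
).to(device)
tokenizer = AutoTokenizer.from_pretrained("wraps/moondream-caption")

# Hypothetical file names; replace with your own images.
for path in ["image1.jpg", "image2.jpg"]:
    enc_image = moondream.encode_image(Image.open(path))
    caption = moondream.answer_question(
        enc_image, "Write a long caption for this image", tokenizer
    )
    print(f"{path}: {caption}")
```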
## Example
![image/png](https://cdn-uploads.huggingface.co/production/uploads/643fd05fdc984afcbbbb47d0/0o8Ev_eB69A-2uCqT3QV2.png)
**Output Caption**: A close-up portrait of a green alien with a large oval head, enormous black almond-shaped eyes, small nostrils, and a tiny mouth. The alien has a long, thin neck and is wearing a black t-shirt with white text that reads 'humans scare me'. The background shows a pale blue sky with soft, wispy clouds.
## Limitations
While Moondream-Caption is designed to generate accurate and relevant image captions, it may underperform on images that differ significantly from its training data, such as complex or abstract scenes outside the dataset's distribution. If you encounter limitations or issues, please open an issue on the model's repository.