# BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
Session by [johko](https://github.com/johko)
## Recording 📺
[YouTube](https://www.youtube.com/watch?v=k0DAtZCCl1w)
## Session Slides 🖥️
[Google Drive](https://docs.google.com/presentation/d/1Y_8Qu0CMlt7jvCd8Jw0c_ILh8LHB0XgnlrvXObe5FYs/edit?usp=sharing)
## Original Paper 📄
[Hugging Face](https://huggingface.co/papers/2301.12597) /
[arxiv](https://arxiv.org/abs/2301.12597)
## GitHub Repo 🧑🏽‍💻
[salesforce/LAVIS](https://github.com/salesforce/lavis)
## Additional Resources 📚
- [BLIP-2 Demo Space](https://huggingface.co/spaces/hysts/BLIP2-with-transformers)
- [BLIP-2 Transformers Example Notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/BLIP-2) by Niels Rogge
- [BLIP-2 Transformers Docs](https://huggingface.co/docs/transformers/model_doc/blip-2)
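
As a quick-start companion to the docs above, here is a minimal sketch of BLIP-2 image captioning with the `transformers` `Blip2Processor` / `Blip2ForConditionalGeneration` classes. The checkpoint name and example image URL are illustrative choices, not prescribed by this session:

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Illustrative checkpoint; other BLIP-2 checkpoints (e.g. the flan-t5 variants)
# follow the same interface.
checkpoint = "Salesforce/blip2-opt-2.7b"
processor = Blip2Processor.from_pretrained(checkpoint)
model = Blip2ForConditionalGeneration.from_pretrained(checkpoint)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Any RGB image works; this COCO validation image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Image-only input -> the frozen LLM generates an unconditional caption.
inputs = processor(images=image, return_tensors="pt").to(device)
generated_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(caption)
```

Passing a text prompt alongside the image (e.g. `processor(images=image, text="Question: what is in the photo? Answer:", return_tensors="pt")`) switches the same model to visual question answering, which the session slides cover in more detail.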