# Masked Autoencoders Are Scalable Vision Learners

Session by [johko](https://github.com/johko)

## Recording 📺
[YouTube](https://www.youtube.com/watch?v=AC6flxUFLrg&pp=ygUdaHVnZ2luZyBmYWNlIHN0dWR5IGdyb3VwIHN3aW4%3D)

## Session Slides 🖥️
[Google Drive](https://docs.google.com/presentation/d/10ZZ-Rl1D57VX005a58OmqNeOB6gPnE54/edit?usp=sharing&ouid=107717747412022342990&rtpof=true&sd=true)

## Original Paper 📄
[Hugging Face](https://huggingface.co/papers/2111.06377) / [arXiv](https://arxiv.org/abs/2111.06377)

## GitHub Repo 🧑🏽‍💻
https://github.com/facebookresearch/mae

## Additional Resources 📚
- [Transformers Docs ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)
- [Transformers ViTMAE Demo Notebook](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ViTMAE) by Niels Rogge
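The core idea of the paper is to randomly mask a high fraction (75%) of image patches and reconstruct them, so the encoder only ever sees the small visible subset. Below is a minimal NumPy sketch of that random patch masking step; the function name and shapes are illustrative, not taken from the official repo, though the shuffle-via-noise trick mirrors the approach described in the paper.

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, rng=None):
    """Randomly mask a fraction of patch tokens (MAE-style sketch).

    patches: array of shape (num_patches, dim)
    Returns (visible, mask, restore_ids), where mask is 1 at masked
    positions (original order) and restore_ids undoes the shuffle.
    """
    rng = np.random.default_rng() if rng is None else rng
    num_patches = patches.shape[0]
    len_keep = int(num_patches * (1 - mask_ratio))

    noise = rng.random(num_patches)        # one random score per patch
    shuffle_ids = np.argsort(noise)        # ascending: first len_keep are kept
    restore_ids = np.argsort(shuffle_ids)  # inverse permutation

    visible = patches[shuffle_ids[:len_keep]]  # only these reach the encoder

    mask = np.ones(num_patches)
    mask[:len_keep] = 0
    mask = mask[restore_ids]               # map back to original patch order
    return visible, mask, restore_ids

# Example: a 224px image with 16px patches gives a 14x14 = 196-patch grid
patches = np.random.randn(196, 768)
visible, mask, _ = random_masking(patches, mask_ratio=0.75)
# 49 visible patches remain; 147 are masked out
```

Because the encoder processes only the ~25% visible patches, pre-training is substantially cheaper than running a full ViT over every token, which is what makes the method scale.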