---
license: mit
---

# Transformers from Scratch

This project contains from-scratch implementations of a Transformer block, single-head attention, multi-head attention, and a causal mask.

## Model Details

### Model Description

Written to solidify knowledge and serve as a reference; the attention block follows the paper "Attention Is All You Need".

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6319030647a84df2a5dd106c/DtZER9tQF37i2vSKXCS8k.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6319030647a84df2a5dd106c/mSDuN8zci2QiZEvpQwIeM.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6319030647a84df2a5dd106c/vHY84pugVJnx10TNTPAaz.png)

- **Developed by:** Michael Peres
- **Model type:** Vanilla Transformer from scratch
- **Language(s) (NLP):** English
- **License:** MIT

### Model Sources

- **Paper [Attention Is All You Need]:** https://arxiv.org/abs/1706.03762

## Uses

[More Information Needed]

## How to Get Started with the Model

Use the code below to get started with the model (see the minimal sketch in the example section at the end of this card).

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** RTX 3070 Ti
- **Hours used:** 0.1

### Model Architecture and Objective

The objective of this project was to understand Transformers and the basic self-attention module: self-attention, multi-head attention, a causal mask, and a Transformer block.

## Model Card Contact

- michaelperes1@gmail.com
- ec20433@qmul.ac.uk
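
## Example

The repository's own API is not documented here, so the following is a minimal, illustrative sketch of the components named above (multi-head self-attention with a causal mask inside a Transformer block), assuming a PyTorch implementation; the class and parameter names are placeholders, not the project's actual code.

```python
# Illustrative sketch only: causal multi-head self-attention and a Transformer
# block in the style of "Attention Is All You Need". Names are hypothetical.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)  # joint Q, K, V projection
        self.out = nn.Linear(d_model, d_model)      # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split into heads: (batch, heads, seq, d_head)
        q = q.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        # Scaled dot-product attention with a causal (lower-triangular) mask
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        causal = torch.tril(torch.ones(t, t, dtype=torch.bool, device=x.device))
        scores = scores.masked_fill(~causal, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).contiguous().view(b, t, d)
        return self.out(out)


class TransformerBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.attn = MultiHeadSelfAttention(d_model, n_heads)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connections with post-norm, as in the original paper
        x = self.norm1(x + self.attn(x))
        x = self.norm2(x + self.ff(x))
        return x


if __name__ == "__main__":
    block = TransformerBlock(d_model=64, n_heads=8, d_ff=256)
    tokens = torch.randn(2, 10, 64)  # (batch, seq_len, d_model)
    print(block(tokens).shape)       # torch.Size([2, 10, 64])
```

Single-head attention is the special case `n_heads=1`, and the causal mask simply prevents each position from attending to later positions.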