OratioAI
Sequence-to-sequence language translation, implementing the methods outlined in "Attention Is All You Need".
- Input Tokenization:
The source and target sentences are tokenized with custom WordPiece tokenizers. Token ids are mapped to embeddings by the InputEmbeddings module and scaled by the square root of the model dimension (see the embedding sketch after this list).
- Positional Encoding:
Positional information is added to the token embeddings using a fixed sinusoidal encoding (see the positional-encoding sketch below).
- Encoding Phase:
The encoder transforms the source token embeddings into contextual representations by passing them through a stack of EncoderBlock modules (see the encoder sketch below).
- Decoding Phase:
The decoder generates target tokens autoregressively, attending to previously generated tokens through masked self-attention and to the encoder outputs through cross-attention, which aligns the source and target sequences (a causal-mask sketch follows the list).
- Projection:
Final decoder outputs are projected into the target vocabulary space for token prediction (the projection appears in the decoding sketch below).
- Output Generation:
Decoding uses either beam search or a greedy strategy to produce the final translated sentence (a greedy-decoding sketch follows).
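
A minimal sketch of the embedding step, assuming a PyTorch implementation; the class name InputEmbeddings matches the module named above, but the constructor arguments are assumptions:

```python
import math
import torch.nn as nn

class InputEmbeddings(nn.Module):
    """Token-id -> embedding lookup, scaled by sqrt(d_model) as in the paper."""
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.d_model = d_model
        self.embedding = nn.Embedding(vocab_size, d_model)

    def forward(self, x):
        # (batch, seq_len) -> (batch, seq_len, d_model), scaled by sqrt(d_model)
        return self.embedding(x) * math.sqrt(self.d_model)
```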
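
The fixed sinusoidal encoding can be precomputed once and added to each embedding. A standard sketch, with max_len and the dropout rate as assumed hyperparameters:

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Fixed sinusoidal positional encoding added to the token embeddings."""
    def __init__(self, d_model: int, max_len: int = 5000, dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float()
                             * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
        self.register_buffer("pe", pe.unsqueeze(0))   # (1, max_len, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model); add the matching slice of encodings
        x = x + self.pe[:, : x.size(1)]
        return self.dropout(x)
```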
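
A sketch of how the stacked blocks might be wired together; EncoderBlock is the repo's own module, but its call signature here (taking a source mask) is an assumption:

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Stack of identical encoder blocks followed by a final normalization."""
    def __init__(self, blocks: nn.ModuleList, d_model: int):
        super().__init__()
        self.blocks = blocks          # assumed to hold EncoderBlock instances
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x, src_mask):
        # Each block applies self-attention + feed-forward with residuals.
        for block in self.blocks:
            x = block(x, src_mask)
        return self.norm(x)
```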
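
Masked self-attention in the decoder relies on a causal (lower-triangular) mask, so position i can only attend to positions at or before i. A minimal helper:

```python
import torch

def causal_mask(size: int) -> torch.Tensor:
    """Lower-triangular boolean mask; True = attention allowed."""
    # Shape (1, size, size) so it broadcasts over the batch dimension.
    return torch.tril(torch.ones(1, size, size, dtype=torch.bool))
```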
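
A greedy-decoding sketch covering both the projection and output-generation steps. The model.encode / model.decode / model.project methods are assumptions about the model's interface, and sos_id / eos_id stand in for the tokenizer's start- and end-of-sequence ids:

```python
import torch

@torch.no_grad()
def greedy_decode(model, src, src_mask, sos_id, eos_id, max_len=100):
    """Greedy decoding: always pick the most probable next token.

    Uses causal_mask() from the sketch above; the model methods are
    assumed, not confirmed from the repo.
    """
    device = src.device
    memory = model.encode(src, src_mask)                    # run the encoder once
    ys = torch.tensor([[sos_id]], dtype=torch.long, device=device)
    for _ in range(max_len - 1):
        tgt_mask = causal_mask(ys.size(1)).to(device)
        out = model.decode(memory, src_mask, ys, tgt_mask)  # decoder forward pass
        logits = model.project(out[:, -1])                  # project into vocab space
        next_id = logits.argmax(dim=-1, keepdim=True)       # greedy choice
        ys = torch.cat([ys, next_id], dim=1)
        if next_id.item() == eos_id:                        # stop at end-of-sequence
            break
    return ys.squeeze(0)
```

Beam search would instead keep the k highest-scoring partial hypotheses at each step; greedy decoding is the k = 1 special case.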

| Dataset | Description |
| --- | --- |
| Dataset | Dataset used for main model training. |