---
tags:
- text-classification
- switch-transformers
- mixture-of-experts

---
## TensorFlow Keras Implementation of Switch Transformers for Text Classification

This repo contains the model from the Keras example [Switch Transformers for Text Classification](https://keras.io/examples/nlp/text_classification_with_switch_transformer/).

Credits: [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/) - Original Author

HF Contribution: [Rishav Chandra Varma](https://huggingface.co/reichenbach)  

## Background Information

### Introduction

In this example, we demonstrate an implementation of the [Switch Transformer](https://arxiv.org/abs/2101.03961) model for text classification. For the purposes of this example, we use the IMDB dataset available in the Keras datasets module.
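
As a quick orientation, the sketch below shows how the IMDB data is typically loaded in Keras; the vocabulary size and sequence length are illustrative choices, not values fixed by this model card.

```python
from tensorflow import keras

vocab_size = 20000  # keep only the most frequent 20k words (illustrative)
max_len = 200       # pad/truncate each review to 200 tokens (illustrative)

# Load the IMDB sentiment-classification dataset bundled with Keras.
(x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(num_words=vocab_size)

# Pad sequences so every example has the same length.
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)
x_val = keras.preprocessing.sequence.pad_sequences(x_val, maxlen=max_len)
```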

### What makes the Switch Transformer special?

The Switch Transformer replaces the feed-forward network (FFN) layer in the standard Transformer with a Mixture of Experts (MoE) routing layer, where each expert operates independently on the tokens in the sequence. This allows increasing the model size without increasing the computation needed to process each example.
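
To make the routing idea concrete, here is a minimal, simplified sketch of a top-1 ("switch") routing layer in TensorFlow/Keras. It is not this repository's exact implementation: the layer name, sizes, and the dense dispatch loop are illustrative, and the full Keras example additionally applies a per-expert token capacity and a load-balancing auxiliary loss.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class SwitchLayer(layers.Layer):
    """Illustrative top-1 MoE routing layer (a hypothetical simplification)."""

    def __init__(self, num_experts, embed_dim, ff_dim):
        super().__init__()
        # One independent FFN per expert.
        self.experts = [
            keras.Sequential(
                [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim)]
            )
            for _ in range(num_experts)
        ]
        # The router scores each token against every expert.
        self.router = layers.Dense(num_experts)

    def call(self, inputs):
        # inputs: (batch, seq_len, embed_dim)
        route_probs = tf.nn.softmax(self.router(inputs), axis=-1)
        expert_index = tf.argmax(route_probs, axis=-1)                 # top-1 choice
        expert_gate = tf.reduce_max(route_probs, axis=-1, keepdims=True)

        # Dense dispatch for clarity: each token's output comes from exactly
        # one expert, selected by the router (no capacity limits here).
        outputs = tf.zeros_like(inputs)
        for i, expert in enumerate(self.experts):
            mask = tf.cast(tf.equal(expert_index, i), inputs.dtype)[..., None]
            outputs += mask * expert(inputs)
        return expert_gate * outputs  # scale by the router's confidence
```

Because each token activates only one expert's FFN, adding experts grows the parameter count while the per-token computation stays roughly constant.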

Note that for efficient training of the Switch Transformer, data and model parallelism need to be applied so that the expert modules can run simultaneously, each on its own accelerator. While the implementation described in the paper uses the [TensorFlow Mesh](https://github.com/tensorflow/mesh) framework for distributed training, this example presents a simple, non-distributed implementation of the Switch Transformer model for demonstration purposes.