Upload folder using huggingface_hub
README.md CHANGED
@@ -15,7 +15,7 @@ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-
 
 # BERT Attention Visualizer API
 
-This is the backend API for the BERT Attention Visualizer, a tool that allows you to visualize attention patterns in BERT and RoBERTa models.
+This is the backend API for the BERT Attention Visualizer, a tool that allows you to visualize attention patterns in BERT, RoBERTa, DistilBERT, and TinyBERT models.
 
 ## API Endpoints
 
@@ -157,7 +158,8 @@ Request body:
 ```json
 {
   "text": "The cat sat on the mat",
-  "model_name": "bert-base-uncased"
+  "model_name": "bert-base-uncased",
+  "visualization_method": "raw"
 }
 ```
 
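For reference, a minimal Python sketch of sending the request body above. It assumes this body belongs to the `/attention` endpoint named later in this diff, and the base URL is hypothetical (point it at your running Space or local server):

```python
import requests

# Base URL is hypothetical; adjust to wherever the API is actually served.
API_BASE = "http://localhost:7860"

# The request body shown in the hunk above.
payload = {
    "text": "The cat sat on the mat",
    "model_name": "bert-base-uncased",
    "visualization_method": "raw",
}

resp = requests.post(f"{API_BASE}/attention", json=payload, timeout=60)
resp.raise_for_status()

# The response schema is not shown in this hunk, so just inspect the keys.
print(list(resp.json().keys()))
```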
@@ -203,7 +204,8 @@ Request body:
   "text": "The cat sat on the mat",
   "masked_index": 2,
   "replacement_word": "dog",
-  "model_name": "bert-base-uncased"
+  "model_name": "bert-base-uncased",
+  "visualization_method": "raw"
 }
 ```
 
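Likewise, a sketch for the masked-word comparison body above, assuming it targets the `/attention_comparison` endpoint mentioned in the new Attention Visualization Methods section; whether `masked_index` counts words or model tokens is not specified in this diff:

```python
import requests

API_BASE = "http://localhost:7860"  # hypothetical, as above

# Compare attention before and after replacing one word. Whether
# masked_index is word-based or token-based (with special tokens at the
# start) is not shown here; adjust to match the backend's convention.
payload = {
    "text": "The cat sat on the mat",
    "masked_index": 2,
    "replacement_word": "dog",
    "model_name": "bert-base-uncased",
    "visualization_method": "raw",
}

resp = requests.post(f"{API_BASE}/attention_comparison", json=payload, timeout=60)
resp.raise_for_status()
print(list(resp.json().keys()))
```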
@@ -222,8 +224,20 @@ Response:
 
 ## Available Models
 
-- `bert-base-uncased`: BERT Base Uncased model
-- `roberta-base`: RoBERTa Base model
+- `bert-base-uncased`: BERT Base Uncased model (12 layers, 768 hidden dimensions)
+- `roberta-base`: RoBERTa Base model (12 layers, 768 hidden dimensions)
+- `distilbert-base-uncased`: DistilBERT Base Uncased model (6 layers, 768 hidden dimensions)
+- `EdwinXhen/TinyBert_6Layer_MLM`: TinyBERT 6 Layer model (6 layers, knowledge distilled from BERT)
+
+## Attention Visualization Methods
+
+The API supports three attention visualization methods, which can be specified using the `visualization_method` parameter in the `/attention` and `/attention_comparison` endpoints:
+
+- `raw`: Shows the raw attention weights from each attention head. This is the direct output from the model's attention mechanism.
+
+- `rollout`: Implements Attention Rollout, which recursively combines attention weights across all layers through matrix multiplication. This accounts for how attention propagates through the network and incorporates the effect of residual connections, providing a more holistic view of token relationships.
+
+- `flow`: Implements Attention Flow, which treats the multi-layer attention weights as a graph network and uses maximum flow algorithms to measure information flow between tokens. This method accounts for all possible paths through the network, revealing important connections that might not be apparent in raw attention weights.
 
 ## RoBERTa Token Handling
 
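The `rollout` bullet added above describes Attention Rollout (Abnar & Zuidema, 2020) in prose only; the repository's own implementation is not part of this diff. A minimal sketch of the published technique, assuming per-layer attention tensors like those a Hugging Face transformer returns with `output_attentions=True` (batch dimension removed, converted to NumPy):

```python
import numpy as np

def attention_rollout(attentions):
    """Attention Rollout (Abnar & Zuidema, 2020).

    attentions: list of per-layer arrays, each (num_heads, seq_len, seq_len).
    Returns a (seq_len, seq_len) matrix; entry [i, j] estimates how much
    input token j feeds into output position i after all layers.
    """
    rollout = None
    for layer_attn in attentions:
        attn = layer_attn.mean(axis=0)                  # average over heads
        attn = attn + np.eye(attn.shape[0])             # identity models the residual
        attn = attn / attn.sum(axis=-1, keepdims=True)  # re-normalize rows
        # Recursive combination across layers via matrix multiplication.
        rollout = attn if rollout is None else attn @ rollout
    return rollout
```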
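The `flow` bullet maps to the same paper's Attention Flow: the layered attention maps become a capacity graph, and token-to-token influence is a maximum-flow value. A sketch under the same assumptions, using `networkx` for the max-flow computation; the backend's actual implementation may differ:

```python
import networkx as nx
import numpy as np

def attention_flow(attentions, source, target):
    """Attention Flow (Abnar & Zuidema, 2020) via max flow.

    attentions: list of per-layer (num_heads, seq_len, seq_len) arrays.
    source: input-token index (layer 0); target: token index at the top layer.
    Returns the maximum-flow value from source to target.
    """
    seq_len = attentions[0].shape[-1]
    g = nx.DiGraph()
    for layer, layer_attn in enumerate(attentions):
        # Same head-averaging and residual treatment as in rollout.
        attn = layer_attn.mean(axis=0) + np.eye(seq_len)
        attn = attn / attn.sum(axis=-1, keepdims=True)
        for i in range(seq_len):          # position i at layer+1 ...
            for j in range(seq_len):      # ... attends to position j at layer
                g.add_edge((layer, j), (layer + 1, i), capacity=float(attn[i, j]))
    flow_value, _ = nx.maximum_flow(g, (0, source), (len(attentions), target))
    return flow_value
```

Exact max flow is costly (the graph has seq_len² edges per layer), which is consistent with the README offering `raw` and `rollout` as cheaper alternatives.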