---
title: ExplaiNER
emoji: 🏷️
colorFrom: blue
colorTo: indigo
python_version: 3.9
sdk: streamlit
sdk_version: 1.10.0
app_file: src/app.py
pinned: true
---

# 🏷️ ExplaiNER: Error Analysis for NER models & datasets

Error analysis is an important but often overlooked part of the data science project lifecycle, and there is still very little tooling available for it. Practitioners tend to write throwaway code or, worse, skip this crucial step of understanding their models' errors altogether. This project aims to provide an extensive toolkit for probing any NER model/dataset combination, finding labeling errors, and understanding the model's and dataset's limitations, guiding the user toward further improvements.

## Sections


### Activations

Some groups of neurons tend to fire in response to commas and other punctuation; others fire in response to pronouns. Use this visualization to factorize neuron activity in individual FFNN layers or across the entire model.
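
As a rough illustration of what "factorizing" means here, the sketch below decomposes one layer's activations with non-negative matrix factorization. The model name, layer index, and NMF itself are illustrative assumptions, not necessarily what the app uses internally.

```python
# Sketch: factorize FFNN activations with NMF (illustrative; the app's
# exact method and layer choice may differ).
import torch
from sklearn.decomposition import NMF
from transformers import AutoModel, AutoTokenizer

model_name = "distilbert-base-uncased"  # assumption: any HF encoder works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Hello, world! She said so.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Activations of one layer: (num_tokens, hidden_size); clamp to be
# non-negative, which NMF requires.
acts = outputs.hidden_states[4][0].clamp(min=0).numpy()

# Factorize into a few components; each component groups neurons that co-fire.
nmf = NMF(n_components=4, init="nndsvd", max_iter=500)
token_weights = nmf.fit_transform(acts)  # per-token strength of each component
print(token_weights.round(2))
```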


### Embeddings

For every token in the dataset, we take its hidden state and project it onto a two-dimensional plane. Data points are colored by label/prediction, with disagreements marked by a small black border.
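
A minimal sketch of this kind of plot, using PCA as the projection and random arrays as stand-ins for real hidden states, labels, and predictions (the app's actual projection method may differ):

```python
# Sketch: project per-token hidden states onto a 2D plane and mark
# label/prediction disagreements with a black border. PCA and the random
# stand-in data are illustrative.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

hidden = np.random.randn(500, 768)      # (num_tokens, hidden_size)
labels = np.random.randint(0, 5, 500)   # gold label ids (assumed 5 classes)
preds = np.where(np.random.rand(500) < 0.9, labels, (labels + 1) % 5)

coords = PCA(n_components=2).fit_transform(hidden)
disagree = labels != preds

# Shared color scale so both scatter calls map label ids identically.
common = dict(cmap="tab10", vmin=0, vmax=9, s=12)
plt.scatter(*coords[~disagree].T, c=labels[~disagree], **common)
plt.scatter(*coords[disagree].T, c=labels[disagree],
            edgecolors="black", linewidths=0.8, **common)
plt.show()
```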


### Probing

A very direct and interactive way to test your model is to provide it with a list of text inputs and inspect the model outputs. The application features a multiline text field so the user can enter multiple texts separated by newlines. For each text, the app shows a data frame containing the tokenized string, token predictions, probabilities, and a visual indicator for low-probability predictions: these are the ones you should inspect first for prediction errors.
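
A stripped-down sketch of that flow in Streamlit; the model name and the 0.5 probability threshold are assumptions for illustration:

```python
# Sketch: probe a token-classification model from a multiline text field and
# highlight low-probability predictions (threshold is an assumption).
import pandas as pd
import streamlit as st
from transformers import pipeline

ner = pipeline("token-classification", model="dslim/bert-base-NER")

texts = st.text_area("One text per line").splitlines()
for text in filter(None, texts):
    rows = [{"token": t["word"], "prediction": t["entity"],
             "probability": t["score"]} for t in ner(text)]
    df = pd.DataFrame(rows)
    # Flag low-probability rows -- the ones to inspect first.
    st.dataframe(df.style.apply(
        lambda r: ["background-color: #ffcccc"
                   if r["probability"] < 0.5 else ""] * len(r),
        axis=1))
```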


### Metrics

The metrics page contains precision, recall, and F-score metrics as well as a confusion matrix over all the classes. By default, the confusion matrix is normalized. There's an option to zero out the diagonal, leaving only prediction errors (here it makes sense to turn off normalization, so you get raw error counts).
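
For reference, a small sketch of how such numbers can be computed with scikit-learn, including zeroing the diagonal to keep only errors (the labels and data here are made up):

```python
# Sketch: per-class metrics plus a confusion matrix with the diagonal
# zeroed out so only prediction errors remain.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

y_true = ["O", "B-PER", "I-PER", "B-LOC", "O", "B-LOC"]
y_pred = ["O", "B-PER", "B-PER", "B-LOC", "B-LOC", "B-LOC"]

print(classification_report(y_true, y_pred, zero_division=0))

# Raw counts (normalization off), then zero the diagonal.
cm = confusion_matrix(y_true, y_pred)
np.fill_diagonal(cm, 0)
print(cm)
```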


### Misclassified

This page contains all misclassified examples and allows filtering by specific error types.
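Conceptually, this boils down to a filter like the following; the DataFrame columns are assumptions about how predictions might be stored:

```python
# Sketch: collect misclassified tokens and filter by one error type
# (column names are illustrative).
import pandas as pd

df = pd.DataFrame({
    "token": ["Paris", "Smith", "the"],
    "label": ["B-LOC", "B-PER", "O"],
    "prediction": ["B-ORG", "B-PER", "B-LOC"],
})

errors = df[df["label"] != df["prediction"]]
# Filter for a specific error type, e.g. gold B-LOC predicted as B-ORG:
print(errors[(errors["label"] == "B-LOC") & (errors["prediction"] == "B-ORG")])
```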


### Loss by Token/Label

Show count, mean and median loss per token and label.
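In pandas terms, this is a groupby aggregation; the column names below are illustrative:

```python
# Sketch: aggregate per-token losses by token and by label.
import pandas as pd

df = pd.DataFrame({
    "token": ["the", "Paris", "the", "Smith"],
    "label": ["O", "B-LOC", "O", "B-PER"],
    "loss": [0.01, 1.20, 0.02, 0.85],
})

print(df.groupby("token")["loss"].agg(["count", "mean", "median"]))
print(df.groupby("label")["loss"].agg(["count", "mean", "median"]))
```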


### Samples by Loss

Show every example sorted by loss (descending) for close inspection.


### Random Samples

Show random samples. A simple method, but it often turns up interesting things.


### Find Duplicates

Find potential duplicates in the data using cosine similarity.
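A minimal sketch of the idea, using TF-IDF vectors and a 0.95 threshold as stand-ins (the app may embed and threshold differently):

```python
# Sketch: flag near-duplicate examples via pairwise cosine similarity.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

examples = ["John lives in Paris.", "John lives in Paris!",
            "Mary works in Berlin."]
sim = cosine_similarity(TfidfVectorizer().fit_transform(examples))

# Report pairs above the threshold (upper triangle only, skip self-similarity).
for i, j in zip(*np.where(np.triu(sim, k=1) > 0.95)):
    print(f"possible duplicates: {i} <-> {j} (sim={sim[i, j]:.2f})")
```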


### Inspect

Inspect your whole dataset, either unfiltered or by ID.


### Raw Data

See the data as seen by your model.


### Debug

Debug info.