nazneen committed
Commit 2443328
Parent(s): app

Files changed:
- LICENSE          +201 -0
- README.md        +43  -0
- app.py           +403 -0
- requirements.txt +139 -0
LICENSE
ADDED
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
README.md
ADDED
@@ -0,0 +1,43 @@
---
title: Systematic Error Analysis and Labeling
emoji: 🦭
colorFrom: yellow
colorTo: pink
sdk: streamlit
sdk_version: 1.10.0
app_file: app.py
pinned: false
license: apache-2.0
---
# SEAL
Systematic Error Analysis and Labeling (SEAL) is an interactive tool for discovering systematic errors in NLP models: it clusters high-loss evaluation examples and assigns semantic labels to the resulting error groups to make them interpretable. It provides fine-grained visualizations for interactively zooming into potential systematic bugs, along with features for crafting prompts to label those bugs semantically.
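
At its core, the pipeline embeds evaluation examples, splits off the high-loss tail, clusters it, and turns each cluster into a labeling prompt. The sketch below illustrates that loop; the random embeddings, losses, and sentences are toy stand-ins for real model outputs (the app's actual implementation is the `kmeans` and `craft_prompt` functions in `app.py`):

```python
import numpy as np
import pandas as pd
from nltk.cluster import KMeansClusterer
from nltk.cluster.util import cosine_distance

# Toy stand-ins: in the app, losses and embeddings come from model inference.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "content": [f"example sentence {i}" for i in range(200)],
    "loss": rng.exponential(size=200),
})
embeddings = rng.normal(size=(200, 16))

# Split off the high-loss tail (the app exposes this quantile as a slider).
df["slice"] = np.where(df["loss"] >= df["loss"].quantile(0.95),
                       "high-loss", "low-loss")
hl = (df["slice"] == "high-loss").to_numpy()

# Cluster only the high-loss examples, using cosine distance as app.py does.
clusterer = KMeansClusterer(3, distance=cosine_distance,
                            repeats=5, avoid_empty_clusters=True)
df.loc[hl, "cluster"] = clusterer.cluster(embeddings[hl], assign_clusters=True)

# Turn one error group into a labeling prompt, mirroring craft_prompt().
group = df[df["cluster"] == 0]
prompt = ("In this task, we'll assign a short and precise label to a cluster "
          "of documents.\n- " + "\n- ".join(group["content"].str[:600])
          + "\n Cluster label:")
print(prompt[:200])
```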

🎥 [Demo screencast](https://vimeo.com/736659216)

<p>
<img src="./assets/website/seal.gif" alt="Demo gif"/>
</p>

## Table of Contents
- [Installation](#installation)
- [Quickstart](#quickstart)
- [Running Locally](#running-locally)
- [Citation](#citation)

## Installation
Please use Python >= 3.8, since some dependencies require it for installation.
```shell
git clone https://huggingface.co/spaces/nazneen/seal
cd seal
pip install --upgrade pip
pip install -r requirements.txt
```

## Quickstart
```shell
streamlit run app.py
```

## Running Locally
The app loads precached model outputs from `./assets/data/` (one parquet file per dataset/model pair); comments in `app.py` mark the lines to uncomment to recompute them dynamically instead. A quick way to confirm the expected assets are in place is sketched below.
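
A minimal sketch, assuming the default sidebar selection (`yelp_polarity` with `distilbert-base-uncased-finetuned-sst-2-english`); the paths follow the string concatenation in `app.py`, and the check itself is a hypothetical helper, not part of the repo:

```python
# Hypothetical pre-flight check for the precached assets app.py reads.
from pathlib import Path

dataset = "yelp_polarity"
model = "distilbert-base-uncased-finetuned-sst-2-english"
expected = [
    f"assets/data/{dataset}_{model}.parquet",               # main data frame
    f"assets/data/{dataset}_{model}_commontokens.parquet",  # token table
]
for path in expected:
    print(path, "->", "found" if Path(path).exists() else "missing")
```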

## Citation
app.py
ADDED
@@ -0,0 +1,403 @@
### LIBRARIES ###
# Data
import numpy as np
import pandas as pd
import torch
import pickle
from tqdm import tqdm
from math import floor
from collections import defaultdict
from transformers import AutoTokenizer
#pd.set_option('precision', 2)
#pd.options.display.float_format = '${:,.2f}'.format

# Analysis
# from gensim.models.doc2vec import Doc2Vec
# from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
import nltk
from nltk.cluster import KMeansClusterer
import scipy.spatial.distance as sdist
from scipy.spatial import distance_matrix
# nltk.download('punkt')  # make sure that punkt is downloaded

# App & Visualization
import streamlit as st
import altair as alt
import plotly.graph_objects as go
from streamlit_vega_lite import altair_component


# utils
from random import sample
from seal import utils as ut


def down_samp(embedding):
    """Down-sample a data frame for Altair visualization."""
    # total number of positive and negative sentiments in the class
    #embedding = embedding.groupby('slice').apply(lambda x: x.sample(frac=0.3))
    total_size = embedding.groupby(['slice', 'label'], as_index=False).count()

    user_data = 0
    # if 'Your Sentences' in str(total_size['slice']):
    #     tmp = embedding.groupby(['slice'], as_index=False).count()
    #     val = int(tmp[tmp['slice'] == "Your Sentences"]['source'])
    #     user_data = val

    max_sample = total_size.groupby('slice').max()['content']

    # down sample to meet Altair's max values,
    # but keep the proportional representation of groups
    down_samp_frac = 1 / (sum(max_sample.astype(float)) / (1000 - user_data))

    max_samp = max_sample.apply(lambda x: floor(
        x * down_samp_frac)).astype(int).to_dict()
    max_samp['Your Sentences'] = user_data

    # sample down for each group in the data frame
    embedding = embedding.groupby('slice').apply(
        lambda x: x.sample(n=max_samp.get(x.name))).reset_index(drop=True)

    return embedding


# down sample low-loss points only so misclassified examples are not down-sampled in the viz
def down_samp_ll(embedding):
    df_ll = embedding[embedding['slice'] == 'low-loss']
    #if(len(df_ll)<5000):
    #    return embedding
    #else:
    df_hl = embedding[embedding['slice'] == 'high-loss']
    # drop enough low-loss rows that roughly 1000 points remain in total
    n_drop = len(df_ll) - (1000 - len(df_hl))
    to_drop = df_ll.sample(n=n_drop)
    embedding = embedding.drop(to_drop.index)
    return embedding


def data_comparison(df):
    selection = alt.selection_multi(fields=['cluster', 'label'])
    color = alt.condition(alt.datum.slice == 'high-loss', alt.Color('cluster:N', scale=alt.Scale(
        domain=df.cluster.unique().tolist()), legend=None), alt.value("lightgray"))
    opacity = alt.condition(selection, alt.value(0.7), alt.value(0.25))

    # basic chart
    scatter = alt.Chart(df).mark_point(size=100, filled=True).encode(
        x=alt.X('x:Q', axis=None),
        y=alt.Y('y:Q', axis=None),
        color=color,
        shape=alt.Shape('label:N', scale=alt.Scale(
            range=['circle', 'diamond'])),
        tooltip=['cluster:N', 'slice:N', 'content:N', 'label:N', 'pred:N'],
        opacity=opacity
    ).properties(
        width=1000,
        height=800
    ).interactive()

    legend = alt.Chart(df).mark_point(size=100, filled=True).encode(
        x=alt.X("label:N"),
        y=alt.Y('cluster:N', axis=alt.Axis(
            orient='right'), sort='ascending', title=''),
        shape=alt.Shape('label:N', scale=alt.Scale(
            range=['circle', 'diamond']), legend=None),
        color=color,
    ).add_selection(
        selection
    )
    layered = scatter | legend
    layered = layered.configure_axis(
        grid=False
    ).configure_view(
        strokeOpacity=0
    )

    return layered


def viz_panel(embedding_df):
    """Visualization panel layout."""
    st.warning("**Error group visualization**")
    with st.expander("How to read this chart:"):
        st.markdown("* Each **point** is an input example.")
        st.markdown("* Gray points have low loss and colored points have high loss. High-loss instances are clustered using **kmeans** and each color represents a cluster.")
        st.markdown(
            "* The **shape** of each point reflects the label category -- positive (diamond) or negative sentiment (circle).")
    #st.altair_chart(data_comparison(down_samp(embedding_df)), use_container_width=True)
    viz = data_comparison(embedding_df)
    st.altair_chart(viz, use_container_width=True)


@st.cache()
def frequent_tokens(data, tokenizer, loss_quantile=0.95, top_k=200, smoothing=0.005):
    unique_tokens = []
    tokens = []
    for row in tqdm(data['content']):
        tokenized = tokenizer(row, padding=True, truncation=True, return_tensors='pt')
        tokens.append(tokenized['input_ids'].flatten())
        unique_tokens.append(torch.unique(tokenized['input_ids']))
    losses = data['loss'].astype(float)
    high_loss = losses.quantile(loss_quantile)
    # weight each example by its loss if it is above the quantile, else ignore it
    loss_weights = np.where(losses > high_loss, losses, 0.0)
    loss_weights = loss_weights / loss_weights.sum()

    token_frequencies = defaultdict(float)
    token_frequencies_error = defaultdict(float)
    weights_uniform = np.full_like(loss_weights, 1 / len(loss_weights))

    for i in tqdm(range(len(data))):
        for token in unique_tokens[i]:
            token_frequencies[token.item()] += weights_uniform[i]
            token_frequencies_error[token.item()] += loss_weights[i]

    # smoothed ratio of a token's frequency in high-loss examples vs. overall
    token_lrs = {k: (smoothing + token_frequencies_error[k]) / (
        smoothing + token_frequencies[k]) for k in token_frequencies}
    tokens_sorted = list(map(lambda x: x[0], sorted(
        token_lrs.items(), key=lambda x: x[1])[::-1]))

    top_tokens = []
    for token in tokens_sorted[:top_k]:
        top_tokens.append(['%10s' % (tokenizer.decode(token)), '%.4f' % (token_frequencies[token]), '%.4f' % (
            token_frequencies_error[token]), '%4.2f' % (token_lrs[token])])
    return pd.DataFrame(top_tokens, columns=['token', 'freq', 'error-freq', 'ratio'])


def load_precached_groups(data_ll, df_list, num_clusters, group_dict_path, group_idx_path, num_points=1000):
    merged = dynamic_groups(df_list, num_clusters)
    # drop enough low-loss rows that roughly num_points points remain in total
    n_drop = len(data_ll) - (num_points - len(merged))
    sample_idx = data_ll.sample(n=n_drop)
    data_ll = data_ll.drop(sample_idx.index)
    # put all the low-loss data in one bigger cluster
    data_ll['cluster'] = merged.loc[merged['cluster'].idxmax()].cluster + 1
    merged = pd.concat([merged, data_ll])
    # merged['cluster'] = merged['cluster'].astype('str')
    # with open(group_dict_path, 'rb') as f:
    #     group_dict = pickle.load(f)
    # with open(group_idx_path, 'rb') as f:
    #     group_idx_dict = pickle.load(f)
    # for k, v in group_idx_dict.items():
    #     label = group_dict.get(k)
    #     merged.loc[merged.index.isin(v), ['cluster']] = label
    return merged


def dynamic_groups(df_list, num_clusters):
    # cluster each error type separately, offsetting the cluster ids
    # so they remain distinct after merging
    merged = pd.DataFrame()
    ind = 0
    for df in df_list:
        kmeans_df, assigned_clusters = kmeans(df, num_clusters=num_clusters)
        kmeans_df['cluster'] = kmeans_df['cluster'] + ind * num_clusters
        ind = ind + 1
        merged = pd.concat([merged, kmeans_df])
    return merged


@st.cache(ttl=600)
def get_data(inference, emb):
    preds = inference.outputs.numpy()
    losses = inference.losses.numpy()
    embeddings = pd.DataFrame(emb, columns=['x', 'y'])
    num_examples = len(losses)
    # relies on the module-level `dataset` selected in the sidebar (see __main__)
    # dataset_labels = [dataset[i]['label'] for i in range(num_examples)]
    return pd.concat([pd.DataFrame(np.transpose(np.vstack([dataset[:num_examples]['content'],
                                                           dataset[:num_examples]['label'], preds, losses])),
                                   columns=['content', 'label', 'pred', 'loss']), embeddings], axis=1)


def kmeans(data, num_clusters=3):
    X = np.array(data['embedding'].to_list())
    kclusterer = KMeansClusterer(
        num_clusters, distance=nltk.cluster.util.cosine_distance,
        repeats=25, avoid_empty_clusters=True)
    assigned_clusters = kclusterer.cluster(X, assign_clusters=True)
    data['cluster'] = pd.Series(
        assigned_clusters, index=data.index).astype('int')
    data['centroid'] = data['cluster'].apply(lambda x: kclusterer.means()[x])
    return data, assigned_clusters


def distance_from_centroid(row):
    return np.linalg.norm(np.array(row['embedding']) - np.array(row['centroid']))


@st.cache(ttl=600)
def craft_prompt(cluster_df):
    instruction = "In this task, we'll assign a short and precise label to a cluster of documents based on the topics or concepts most relevant to these documents. The documents are all subsets of a sentiment classification dataset.\n"
    # truncate examples more aggressively when the cluster is large
    if len(cluster_df) > 10:
        content = cluster_df['content'].str[:600].tolist()
    else:
        content = cluster_df['content'].str[:1000].tolist()
    examples = '\n - '.join(content)
    text = instruction + '- ' + examples + '\n Cluster label:'
    return text.strip()


@st.cache(ttl=600)
def topic_distribution(weights, smoothing=0.01):
    # relies on the module-level `dataset` selected in the sidebar (see __main__)
    topic_frequencies = defaultdict(float)
    topic_frequencies_error = defaultdict(float)
    weights_uniform = np.full_like(weights, 1 / len(weights))
    num_examples = len(weights)
    for i in range(num_examples):
        example = dataset[i]
        category = example['title']
        topic_frequencies[category] += weights_uniform[i]
        topic_frequencies_error[category] += weights[i]

    topic_ratios = {c: (smoothing + topic_frequencies_error[c]) / (
        smoothing + topic_frequencies[c]) for c in topic_frequencies}

    categories_sorted = map(lambda x: x[0], sorted(
        topic_ratios.items(), key=lambda x: x[1], reverse=True))

    topic_distr = []
    for category in categories_sorted:
        topic_distr.append(['%.3f' % topic_frequencies[category], '%.3f' %
                            topic_frequencies_error[category], '%.2f' % topic_ratios[category], '%s' % category])

    return pd.DataFrame(topic_distr, columns=['Overall frequency', 'Error frequency', 'Ratio', 'Category'])


def populate_session(dataset, model):
    data_df = read_file_to_df(
        './assets/data/' + dataset + '_' + model + '.parquet')
    if model == 'albert-base-v2-yelp-polarity':
        tokenizer = AutoTokenizer.from_pretrained('textattack/' + model)
    else:
        tokenizer = AutoTokenizer.from_pretrained(model)
    # if "user_data" not in st.session_state:
    #     st.session_state["user_data"] = data_df
    # if "selected_slice" not in st.session_state:
    #     st.session_state["selected_slice"] = None
    return tokenizer


@st.cache(allow_output_mutation=True)
def read_file_to_df(file):
    return pd.read_parquet(file)


if __name__ == "__main__":
    ### STREAMLIT APP CONFIG ###
    st.set_page_config(layout="wide", page_title="Interactive Error Analysis")

    ut.init_style()

    lcol, rcol = st.columns([5, 2])
    # ******* loading the model and the data
    #st.sidebar.markdown("<h4>Interactive Error Analysis</h4>", unsafe_allow_html=True)

    dataset = st.sidebar.selectbox(
        "Dataset",
        ["amazon_polarity", "yelp_polarity", "imdb"],
        index=1
    )

    model = st.sidebar.selectbox(
        "Model",
        ["distilbert-base-uncased-finetuned-sst-2-english",
         "albert-base-v2-yelp-polarity", "distilbert-imdb"],
    )

    ### LOAD DATA AND TOKENIZER VARIABLES ###
    # uncomment the next line to run dynamically and not from file
    #tokenizer = populate_session(dataset, model)
    if dataset == 'imdb':
        data_df = read_file_to_df('./assets/data/imdb_distilbert.parquet')
    else:
        data_df = read_file_to_df(
            './assets/data/' + dataset + '_' + model + '.parquet')
    data_df = data_df[:20000]

    loss_quantile = st.sidebar.slider(
        "Loss Quantile", min_value=0.9, max_value=1.0, step=0.01, value=0.98
    )

    data_df['loss'] = data_df['loss'].astype(float)
    data_df['pred'] = data_df['pred'].astype(int)
    losses = data_df['loss']
    high_loss = losses.quantile(loss_quantile)
    data_df['slice'] = np.where(data_df['loss'] >= high_loss, 'high-loss', 'low-loss')
    # keep only the high-loss rows
    data_hl = pd.DataFrame(data_df[data_df['slice'] == 'high-loss'])
    #data_hl = data_hl.drop(data_hl[data_hl.pred==data_hl.label].index)
    data_ll = pd.DataFrame(data_df[data_df['slice'] == 'low-loss'])
    # this is to allow clustering over each error type: fp, fn for binary classification
    df_list = [d for _, d in data_hl.groupby(['label'])]

    run_kmeans = st.sidebar.radio(
        "Cluster error group?", ('True', 'False'), index=0)

    num_clusters = st.sidebar.slider(
        "# clusters", min_value=1, max_value=60, step=1, value=3)

    num_points = st.sidebar.slider(
        "# data points to visualize", min_value=1000, max_value=5000, step=100, value=1000)

    selected_cluster = st.sidebar.number_input(
        label='Cluster #:', max_value=num_clusters - 1, min_value=0)

    if run_kmeans == 'True':
        with st.spinner(text='running kmeans...'):
            group_dict_path = './assets/data/cluster-labels/' + dataset + '.pkl'
            group_idx_path = './assets/data/cluster-labels/' + dataset + '_idx.pkl'
            #data_hl_path = './assets/data/high-loss/'+dataset+'.parquet'
            # half the clusters per error type since clustering runs once per label
            merged = load_precached_groups(data_ll, df_list, int(
                num_clusters / 2), group_dict_path, group_idx_path, num_points=num_points)
            #dynamic_groups(df_list,)
            #tmp = pd.concat([data_ll, merged], axis=0, ignore_index=True)

    # NOTE: the rest of the app assumes `merged` exists, so clustering defaults to on
    cluster_content = craft_prompt(
        merged.loc[merged['cluster'] == selected_cluster])

    with lcol:
        st.markdown('<h5>Error Groups</h5>', unsafe_allow_html=True)
        with st.expander("How to read this table:"):
            st.markdown(
                "* *Error groups* refers to the subset of the evaluation dataset the model performs poorly on.")
            st.markdown(
                "* The table displays model error groups on the evaluation dataset, sorted by loss.")
            st.markdown(
                "* Each row is an input example that includes the label, model pred, loss, and error group.")
        with st.spinner(text='loading error groups...'):
            # uncomment the next line to load from file and not run dynamically
            #dataframe = read_file_to_df('./assets/data/'+dataset+'_'+model+'_error-slices.parquet')
            dataframe = merged[['content', 'label', 'pred', 'loss', 'cluster']].sort_values(
                by=['loss'], ascending=False)
            #table_html = dataframe.to_html(columns=['content', 'label', 'pred', 'loss', 'cluster'], max_rows=50)
            #table_html = table_html.replace("<th>", '<th align="left">')  # left-align the headers
            st.write(dataframe.style.format(
                {'loss': '{:.2f}'}), width=1000, height=300)

    with rcol:
        with st.spinner(text='loading...'):
            st.markdown('<h5>Word Distribution in Error Groups</h5>',
                        unsafe_allow_html=True)
            # uncomment the next lines to run dynamically and not from file
            # if model == 'albert-base-v2-yelp-polarity':
            #     tokenizer = AutoTokenizer.from_pretrained('textattack/'+model)
            # else:
            #     tokenizer = AutoTokenizer.from_pretrained(model)
            # commontokens = frequent_tokens(data_df, tokenizer, loss_quantile=loss_quantile)
            if dataset == 'imdb':
                commontokens = read_file_to_df('./assets/data/imdb_distilbert_commontokens.parquet')
            else:
                commontokens = read_file_to_df(
                    './assets/data/' + dataset + '_' + model + '_commontokens.parquet')
            with st.expander("How to read this table:"):
                st.markdown(
                    "* The table displays the most frequent tokens in error groups, relative to their frequencies in the val set.")

            st.write(commontokens)

    with st.spinner(text='loading visualization...'):
        viz_panel(merged)

    st.sidebar.download_button(
        data=cluster_content,
        label="Build prompt from data",
        file_name='prompt'
    )
requirements.txt
ADDED
@@ -0,0 +1,139 @@
# Pinned pip dependencies; install with:
# $ pip install -r requirements.txt
# platform: osx-arm64
absl-py==1.0.0; python_version >= '3.6'
aiohttp==3.8.0
aiosignal==1.2.0; python_version >= '3.6'
altair==4.1.0
antlr4-python3-runtime==4.8
appnope==0.1.2; sys_platform == 'darwin' and platform_system == 'Darwin'
argon2-cffi==21.1.0; python_version >= '3.5'
astor==0.8.1; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
async-timeout==4.0.1; python_version >= '3.6'
attrs==21.2.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
backcall==0.2.0
backports.zoneinfo==0.2.1; python_version >= '3.6' and python_version < '3.9'
base58==2.1.1; python_version >= '3.5'
bleach==4.1.0; python_version >= '3.6'
blinker==1.4
cachetools==4.2.4; python_version ~= '3.5'
certifi==2021.10.8
cffi==1.15.0
charset-normalizer==2.0.7; python_version >= '3'
click==7.1.2; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
cython==0.29.24; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
cytoolz==0.11.2; python_version >= '3.5'
dataclasses==0.6
datasets==1.15.1
debugpy==1.5.1; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
decorator==5.1.0; python_version >= '3.5'
defusedxml==0.7.1; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
dill==0.3.4; python_version >= '2.7' and python_version != '3.0'
entrypoints==0.3; python_version >= '2.7'
fastbpe==0.1.0
filelock==3.3.2; python_version >= '3.6'
frozenlist==1.2.0; python_version >= '3.6'
fsspec[http]==2021.11.0; python_version >= '3.6'
future==0.18.2; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
fuzzywuzzy==0.18.0
gitdb==4.0.9; python_version >= '3.6'
gitpython==3.1.24; python_version >= '3.7'
google-auth-oauthlib==0.4.6; python_version >= '3.6'
google-auth==2.3.3; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'
grpcio==1.41.1
idna==3.3; python_version >= '3'
importlib-resources==5.4.0; python_version < '3.9'
kaleido==0.2.1
markdown==3.3.4; python_version >= '3.6'
markupsafe==2.0.1; python_version >= '3.6'
matplotlib-inline==0.1.3; python_version >= '3.5'
meerkat-ml==0.1.2; python_version >= '3.7'
mistune==0.8.4
multidict==5.2.0; python_version >= '3.6'
multiprocess==0.70.12.2
nbclient==0.5.8; python_full_version >= '3.6.1'
nbconvert==6.3.0; python_version >= '3.7'
nbformat==5.1.3; python_version >= '3.5'
nest-asyncio==1.5.1; python_version >= '3.5'
nltk==3.6.5
notebook==6.4.5; python_version >= '3.6'
numpy==1.21.4
oauthlib==3.1.1; python_version >= '3.6'
omegaconf==2.1.1; python_version >= '3.6'
packaging==21.2; python_version >= '3.6'
pandas==1.3.4
pandocfilters==1.5.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
parso==0.8.2; python_version >= '3.6'
pexpect==4.8.0; sys_platform != 'win32'
pickleshare==0.7.5
pillow==8.4.0; python_version >= '3.6'
plotly==5.3.1
progressbar==2.5
prometheus-client==0.12.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
prompt-toolkit==3.0.22; python_full_version >= '3.6.2'
protobuf==3.19.1; python_version >= '3.5'
ptyprocess==0.7.0; os_name != 'nt'
pyahocorasick==1.4.2
pyarrow==6.0.0; python_version >= '3.6'
pyasn1-modules==0.2.8
pyasn1==0.4.8
pycparser==2.21; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
pydeck==0.7.1; python_version >= '3.7'
pydeprecate==0.3.1; python_version >= '3.6'
pygments==2.10.0; python_version >= '3.5'
pympler==0.9
pyparsing==2.4.7; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
pyrsistent==0.18.0; python_version >= '3.6'
python-dateutil==2.8.2; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
python-levenshtein==0.12.2
pytorch-lightning==1.5.1; python_version >= '3.6'
pytz-deprecation-shim==0.1.0.post0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'
pytz==2021.3
pyyaml==6.0; python_version >= '3.6'
pyzmq==22.3.0; python_version >= '3.6'
regex==2021.11.10
requests-oauthlib==1.3.0
requests==2.26.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'
robustnessgym==0.1.3
rsa==4.7.2; python_version >= '3.6'
sacremoses==0.0.46
scikit-learn==1.0.1; python_version >= '3.7'
scipy==1.7.2; python_version < '3.11' and python_version >= '3.7'
semver==2.13.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
send2trash==1.8.0
six==1.16.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
sklearn==0.0
smart-open==5.2.1; python_version >= '3.6' and python_version < '4'
smmap==5.0.0; python_version >= '3.6'
streamlit-vega-lite==0.1.0
streamlit==1.2.0
tenacity==8.0.1; python_version >= '3.6'
tensorboard-data-server==0.6.1; python_version >= '3.6'
tensorboard-plugin-wit==1.8.0
tensorboard==2.7.0; python_version >= '3.6'
terminado==0.12.1; python_version >= '3.6'
testpath==0.5.0; python_version >= '3.5'
threadpoolctl==3.0.0; python_version >= '3.6'
tokenizers==0.10.3
toml==0.10.2; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
toolz==0.11.2; python_version >= '3.5'
torch==1.10.0; python_full_version >= '3.6.2'
torchmetrics==0.6.0; python_version >= '3.6'
tornado==6.1; python_version >= '3.5'
tqdm==4.62.3; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
traitlets==5.1.1; python_version >= '3.7'
transformers==4.12.3; python_version >= '3.6'
typing-extensions==3.10.0.2; python_version < '3.10'
tzdata==2021.5; python_version >= '3.6'
tzlocal==4.1; python_version >= '3.6'
ujson==4.2.0; python_version >= '3.6'
urllib3==1.26.7; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4' and python_version < '4'
validators==0.18.2; python_version >= '3.4'
wcwidth==0.2.5
webencodings==0.5.1
werkzeug==2.0.2; python_version >= '3.6'
wheel==0.37.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
widgetsnbextension==3.5.2
xxhash==2.0.2; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
yarl==1.7.2; python_version >= '3.6'
zipp==3.6.0; python_version < '3.10'