aliasgerovs committed on
Commit ca39c04
1 Parent(s): 9fc992f
Files changed (2):
  1. analysis.py +4 -2
  2. nohup.out +409 -0
analysis.py CHANGED
@@ -50,8 +50,10 @@ def normalize(value, min_value, max_value):
     return max(0, min(100, normalized_value))
 
 
-def depth_analysis(input_text):
-
+def depth_analysis(input_text, bias_buster_selected):
+    if bias_buster_selected:
+        text = update(text)
+
     usual_ranges = {
         "estimated_slightly_difficult_words_ratio": (
             0.2273693623058005,
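For context, a minimal sketch of the signature this commit introduces. The `update` helper is assumed to be the bias-buster rewriter defined elsewhere in the repo; an identity stub stands in for it here. Note that the committed hunk reads `text = update(text)`, where `text` is unbound at that point in the function; `input_text` was presumably intended:

```python
def update(text):
    # Hypothetical stand-in for the repo's bias-buster rewriter; the real
    # implementation lives elsewhere in copyright_checker.
    return text

def depth_analysis(input_text, bias_buster_selected):
    # The committed hunk assigns `text = update(text)`, but `text` is never
    # bound before that line; rewriting `input_text` was presumably intended.
    if bias_buster_selected:
        input_text = update(input_text)
    # ... the rest of the function (usual_ranges scoring) continues unchanged
    return input_text
```

With the identity stub both calls pass the text through unchanged; the point is only that a two-argument call now matches the signature.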
nohup.out CHANGED
@@ -166,3 +166,412 @@ Traceback (most recent call last):
   File "/home/aliasgarov/copyright_checker/predictors.py", line 390, in predict_mc_scores
     for key in models
 NameError: name 'models' is not defined
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ 2024-05-16 12:30:06.614564: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+ To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
+ [nltk_data] Downloading package punkt to /root/nltk_data...
+ [nltk_data] Package punkt is already up-to-date!
+ [nltk_data] Downloading package stopwords to /root/nltk_data...
+ [nltk_data] Package stopwords is already up-to-date!
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
+ - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
+ - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ Framework not specified. Using pt to export the model.
+ Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
+ - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
+ - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
+ Using the export variant default. Available variants are:
+ - default: The default ONNX variant.
+
+ ***** Exporting submodel 1/1: RobertaForSequenceClassification *****
+ Using framework PyTorch: 2.3.0+cu121
+ Overriding 1 configuration item(s)
+ - use_cache -> False
+ Framework not specified. Using pt to export the model.
+ Using the export variant default. Available variants are:
+ - default: The default ONNX variant.
+ Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
+ Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+
+ ***** Exporting submodel 1/3: T5Stack *****
+ Using framework PyTorch: 2.3.0+cu121
+ Overriding 1 configuration item(s)
+ - use_cache -> False
+
+ ***** Exporting submodel 2/3: T5ForConditionalGeneration *****
+ Using framework PyTorch: 2.3.0+cu121
+ Overriding 1 configuration item(s)
+ - use_cache -> True
+ /usr/local/lib/python3.9/dist-packages/transformers/modeling_utils.py:1017: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
+ if causal_mask.shape[1] < attention_mask.shape[1]:
+
+ ***** Exporting submodel 3/3: T5ForConditionalGeneration *****
+ Using framework PyTorch: 2.3.0+cu121
+ Overriding 1 configuration item(s)
+ - use_cache -> True
+ /usr/local/lib/python3.9/dist-packages/transformers/models/t5/modeling_t5.py:503: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
+ elif past_key_value.shape[2] != key_value_states.shape[1]:
+ In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
+ In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
+ Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
+ Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
+ /usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
+ warnings.warn("Can't initialize NVML")
+ [nltk_data] Downloading package cmudict to /root/nltk_data...
+ [nltk_data] Package cmudict is already up-to-date!
+ [nltk_data] Downloading package punkt to /root/nltk_data...
+ [nltk_data] Package punkt is already up-to-date!
+ [nltk_data] Downloading package stopwords to /root/nltk_data...
+ [nltk_data] Package stopwords is already up-to-date!
+ [nltk_data] Downloading package wordnet to /root/nltk_data...
+ [nltk_data] Package wordnet is already up-to-date!
+ /usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
+ warnings.warn("Can't initialize NVML")
+ Collecting en-core-web-sm==3.7.1
+ Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl (12.8 MB)
+ Requirement already satisfied: spacy<3.8.0,>=3.7.2 in /usr/local/lib/python3.9/dist-packages (from en-core-web-sm==3.7.1) (3.7.2)
+ Requirement already satisfied: thinc<8.3.0,>=8.1.8 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (8.2.3)
+ Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (4.66.4)
+ Requirement already satisfied: langcodes<4.0.0,>=3.2.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.4.0)
+ Requirement already satisfied: jinja2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.1.4)
+ Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.0.8)
+ Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (52.0.0)
+ Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/lib/python3/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.25.1)
+ Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.0.10)
+ Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (24.0)
+ Requirement already satisfied: spacy-loggers<2.0.0,>=1.0.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.0.5)
+ Requirement already satisfied: catalogue<2.1.0,>=2.0.6 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.0.10)
+ Requirement already satisfied: smart-open<7.0.0,>=5.2.1 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (6.4.0)
+ Requirement already satisfied: wasabi<1.2.0,>=0.9.1 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.1.2)
+ Requirement already satisfied: srsly<3.0.0,>=2.4.3 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.4.8)
+ Requirement already satisfied: numpy>=1.19.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.26.4)
+ Requirement already satisfied: spacy-legacy<3.1.0,>=3.0.11 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.0.12)
+ Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.0.9)
+ Requirement already satisfied: weasel<0.4.0,>=0.1.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.3.4)
+ Requirement already satisfied: pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.7.1)
+ Requirement already satisfied: typer<0.10.0,>=0.3.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.9.4)
+ Requirement already satisfied: language-data>=1.2 in /usr/local/lib/python3.9/dist-packages (from langcodes<4.0.0,>=3.2.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.2.0)
+ Requirement already satisfied: marisa-trie>=0.7.7 in /usr/local/lib/python3.9/dist-packages (from language-data>=1.2->langcodes<4.0.0,>=3.2.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.1.1)
+ Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.6.0)
+ Requirement already satisfied: pydantic-core==2.18.2 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.18.2)
+ Requirement already satisfied: typing-extensions>=4.6.1 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (4.11.0)
+ Requirement already satisfied: blis<0.8.0,>=0.7.8 in /usr/local/lib/python3.9/dist-packages (from thinc<8.3.0,>=8.1.8->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.7.11)
+ Requirement already satisfied: confection<1.0.0,>=0.0.1 in /usr/local/lib/python3.9/dist-packages (from thinc<8.3.0,>=8.1.8->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.1.4)
+ Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.9/dist-packages (from typer<0.10.0,>=0.3.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (8.1.7)
+ Requirement already satisfied: cloudpathlib<0.17.0,>=0.7.0 in /usr/local/lib/python3.9/dist-packages (from weasel<0.4.0,>=0.1.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.16.0)
+ Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.9/dist-packages (from jinja2->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.1.5)
+ ✔ Download and installation successful
+ You can now load the package via spacy.load('en_core_web_sm')
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ /usr/local/lib/python3.9/dist-packages/gradio/utils.py:924: UserWarning: Expected 1 arguments for function <function depth_analysis at 0x7f41bcff3f70>, received 2.
+ warnings.warn(
+ /usr/local/lib/python3.9/dist-packages/gradio/utils.py:932: UserWarning: Expected maximum 1 arguments for function <function depth_analysis at 0x7f41bcff3f70>, received 2.
+ warnings.warn(
+ WARNING: Invalid HTTP request received.
+ /usr/local/lib/python3.9/dist-packages/optimum/bettertransformer/models/encoder_models.py:301: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at ../aten/src/ATen/NestedTensorImpl.cpp:178.)
+ hidden_states = torch._nested_tensor_from_mask(hidden_states, ~attention_mask)
+ IMPORTANT: You are using gradio version 4.26.0, however version 4.29.0 is available, please upgrade.
+ --------
+ Running on local URL: http://0.0.0.0:80
+ Running on public URL: https://0cdfd3e29fbd88d78e.gradio.live
+
+ This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
+ Original BC scores: AI: 1.0, HUMAN: 5.14828402131684e-09
+ Calibration BC scores: AI: 0.9995505136986301, HUMAN: 0.00044948630136987244
+ Input Text: sPredicting Financial Market Trends using Time Series Analysis and Natural Language Processing/s
+ Original BC scores: AI: 1.0, HUMAN: 5.14828402131684e-09
+ Calibration BC scores: AI: 0.9995505136986301, HUMAN: 0.00044948630136987244
+ Input Text: sPredicting Financial Market Trends using Time Series Analysis and Natural Language Processing/s
+ Original BC scores: AI: 1.0, HUMAN: 5.14828402131684e-09
+ Calibration BC scores: AI: 0.9995505136986301, HUMAN: 0.00044948630136987244
+ MC Score: {'OPENAI GPT': 0.9995232800159636, 'MISTRAL': 2.5594878925034267e-09, 'CLAUDE': 9.711950776533429e-08, 'GEMINI': 2.9362617399423087e-07, 'GRAMMAR ENHANCER': 2.6840377496865067e-05}
+
+ Original BC scores: AI: 1.0, HUMAN: 5.14828402131684e-09
+ Calibration BC scores: AI: 0.9995505136986301, HUMAN: 0.00044948630136987244
+ Input Text: sPredicting Financial Market Trends using Time Series Analysis and Natural Language Processing/s
+ Original BC scores: AI: 1.0, HUMAN: 5.14828402131684e-09
+ Calibration BC scores: AI: 0.9995505136986301, HUMAN: 0.00044948630136987244
+ MC Score: {'OPENAI GPT': 0.9995232800159636, 'MISTRAL': 2.5594878925034267e-09, 'CLAUDE': 9.711950776533429e-08, 'GEMINI': 2.9362617399423087e-07, 'GRAMMAR ENHANCER': 2.6840377496865067e-05}
+
+
+ /home/aliasgarov/copyright_checker/predictors.py:247: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
+ probas = F.softmax(tensor_logits).detach().cpu().numpy()
+ {'Predicting Financial Market Trends using Time Series Analysis and Natural Language Processing': -0.01068235683389165} quillbot
+
+ Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.
+ ["Hö'elün (fl.", '1162–1210) was a Mongolian noblewoman and the mother of Temüjin, better known as Genghis Khan.', 'She played a major role in his rise to power.', "Born into the Olkhonud clan of the Onggirat tribe, Hö'elün was originally married to Chiledu, but was captured shortly after her wedding by Yesügei, an important member of the Mongols, becoming his primary wife.", 'She and Yesügei had three sons and one daughter, as well as Temüjin.', "After Yesügei was fatally poisoned and the Mongols abandoned her family, Hö'elün shepherded all her children through poverty to adulthood—her resilience and organisational skills have been remarked upon by historians.", "She continued to play an important role after Temüjin's marriage to Börte.", "Hö'elün married Münglig, an old retainer of Yesügei, in thanks for his support after a damaging defeat.", "During the next decades, she arranged marriages, maintained alliances, and was heavily involved in disputes between Genghis, his brothers, and Münglig's sons."]
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ Traceback (most recent call last):
+ File "/usr/local/lib/python3.9/dist-packages/gradio/queueing.py", line 527, in process_events
+ response = await route_utils.call_process_api(
+ File "/usr/local/lib/python3.9/dist-packages/gradio/route_utils.py", line 261, in call_process_api
+ output = await app.get_blocks().process_api(
+ File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1786, in process_api
+ result = await self.call_function(
+ File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1338, in call_function
+ prediction = await anyio.to_thread.run_sync(
+ File "/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py", line 56, in run_sync
+ return await get_async_backend().run_sync_in_worker_thread(
+ File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
+ return await future
+ File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 851, in run
+ result = context.run(func, *args)
+ File "/usr/local/lib/python3.9/dist-packages/gradio/utils.py", line 759, in wrapper
+ response = f(*args, **kwargs)
+ File "/home/aliasgarov/copyright_checker/app.py", line 66, in main
+ depth_analysis_plot = depth_analysis(bias_buster_selected, input)
+ TypeError: depth_analysis() takes 1 positional argument but 2 were given
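This traceback is the failure the analysis.py change addresses: `app.py` passes two positional arguments while the old `depth_analysis(input_text)` accepted only one. A minimal repro (names illustrative):

```python
# Minimal repro of the TypeError logged above: a one-parameter function,
# like the old depth_analysis, called with two positional arguments.
def depth_analysis_old(input_text):
    return input_text

try:
    depth_analysis_old("bias flag", "some input")
except TypeError as err:
    # e.g. "depth_analysis_old() takes 1 positional argument but 2 were given"
    print(err)
```

Note also that the caller passes `(bias_buster_selected, input)` flag-first, while the new signature is `(input_text, bias_buster_selected)`; if the call site stays unchanged, the two arguments would arrive swapped.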
+ PLAGIARISM PROCESSING TIME: 10.25221395799963
+ ['Predicting Financial Market Trends using Time Series Analysis and Natural\nLanguage Processing']
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
+ To disable this warning, you can either:
+ - Avoid using `tokenizers` before the fork if possible
+ - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
+ Traceback (most recent call last):
+ File "/usr/local/lib/python3.9/dist-packages/gradio/queueing.py", line 527, in process_events
+ response = await route_utils.call_process_api(
+ File "/usr/local/lib/python3.9/dist-packages/gradio/route_utils.py", line 261, in call_process_api
+ output = await app.get_blocks().process_api(
+ File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1786, in process_api
+ result = await self.call_function(
+ File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1338, in call_function
+ prediction = await anyio.to_thread.run_sync(
+ File "/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py", line 56, in run_sync
+ return await get_async_backend().run_sync_in_worker_thread(
+ File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
+ return await future
+ File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 851, in run
+ result = context.run(func, *args)
+ File "/usr/local/lib/python3.9/dist-packages/gradio/utils.py", line 759, in wrapper
+ response = f(*args, **kwargs)
+ File "/home/aliasgarov/copyright_checker/app.py", line 66, in main
+ depth_analysis_plot = depth_analysis(bias_buster_selected, input)
+ TypeError: depth_analysis() takes 1 positional argument but 2 were given
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ 2024-05-16 13:20:02.406179: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
+ To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
+ [nltk_data] Downloading package punkt to /root/nltk_data...
+ [nltk_data] Package punkt is already up-to-date!
+ [nltk_data] Downloading package stopwords to /root/nltk_data...
+ [nltk_data] Package stopwords is already up-to-date!
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
+ - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
+ - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
+ Framework not specified. Using pt to export the model.
+ Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
+ - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
+ - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
448
+ Using the export variant default. Available variants are:
449
+ - default: The default ONNX variant.
450
+
451
+ ***** Exporting submodel 1/1: RobertaForSequenceClassification *****
452
+ Using framework PyTorch: 2.3.0+cu121
453
+ Overriding 1 configuration item(s)
454
+ - use_cache -> False
455
+ Framework not specified. Using pt to export the model.
456
+ Using the export variant default. Available variants are:
457
+ - default: The default ONNX variant.
458
+ Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
459
+ Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
460
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
461
+ warnings.warn(
462
+
463
+ ***** Exporting submodel 1/3: T5Stack *****
464
+ Using framework PyTorch: 2.3.0+cu121
465
+ Overriding 1 configuration item(s)
466
+ - use_cache -> False
467
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
468
+ warnings.warn(
469
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
470
+ warnings.warn(
471
+ 2024-05-16 13:34:26.723234: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
472
+ To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
473
+ [nltk_data] Downloading package punkt to /root/nltk_data...
474
+ [nltk_data] Package punkt is already up-to-date!
475
+ [nltk_data] Downloading package stopwords to /root/nltk_data...
476
+ [nltk_data] Package stopwords is already up-to-date!
477
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
478
+ warnings.warn(
479
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
480
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
481
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
482
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
483
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
484
+ Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
485
+ - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
486
+ - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
487
+ The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
488
+ Framework not specified. Using pt to export the model.
489
+ Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
490
+ - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
491
+ - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
492
+ Using the export variant default. Available variants are:
493
+ - default: The default ONNX variant.
494
+
495
+ ***** Exporting submodel 1/1: RobertaForSequenceClassification *****
496
+ Using framework PyTorch: 2.3.0+cu121
497
+ Overriding 1 configuration item(s)
498
+ - use_cache -> False
499
+ Framework not specified. Using pt to export the model.
500
+ Using the export variant default. Available variants are:
501
+ - default: The default ONNX variant.
502
+ Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
503
+ Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
504
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
505
+ warnings.warn(
506
+
507
+ ***** Exporting submodel 1/3: T5Stack *****
508
+ Using framework PyTorch: 2.3.0+cu121
509
+ Overriding 1 configuration item(s)
510
+ - use_cache -> False
511
+
512
+ ***** Exporting submodel 2/3: T5ForConditionalGeneration *****
513
+ Using framework PyTorch: 2.3.0+cu121
514
+ Overriding 1 configuration item(s)
515
+ - use_cache -> True
516
+ /usr/local/lib/python3.9/dist-packages/transformers/modeling_utils.py:1017: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
517
+ if causal_mask.shape[1] < attention_mask.shape[1]:
518
+
519
+ ***** Exporting submodel 3/3: T5ForConditionalGeneration *****
520
+ Using framework PyTorch: 2.3.0+cu121
521
+ Overriding 1 configuration item(s)
522
+ - use_cache -> True
523
+ /usr/local/lib/python3.9/dist-packages/transformers/models/t5/modeling_t5.py:503: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
524
+ elif past_key_value.shape[2] != key_value_states.shape[1]:
525
+ In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
526
+ In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
527
+ Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
528
+ Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
529
+ /usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
530
+ warnings.warn("Can't initialize NVML")
531
+ [nltk_data] Downloading package cmudict to /root/nltk_data...
532
+ [nltk_data] Package cmudict is already up-to-date!
533
+ [nltk_data] Downloading package punkt to /root/nltk_data...
534
+ [nltk_data] Package punkt is already up-to-date!
535
+ [nltk_data] Downloading package stopwords to /root/nltk_data...
536
+ [nltk_data] Package stopwords is already up-to-date!
537
+ [nltk_data] Downloading package wordnet to /root/nltk_data...
538
+ [nltk_data] Package wordnet is already up-to-date!
539
+ /usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
540
+ warnings.warn("Can't initialize NVML")
541
+ Collecting en-core-web-sm==3.7.1
542
+ Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl (12.8 MB)
543
+ Requirement already satisfied: spacy<3.8.0,>=3.7.2 in /usr/local/lib/python3.9/dist-packages (from en-core-web-sm==3.7.1) (3.7.2)
544
+ Requirement already satisfied: smart-open<7.0.0,>=5.2.1 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (6.4.0)
545
+ Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (52.0.0)
546
+ Requirement already satisfied: spacy-legacy<3.1.0,>=3.0.11 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.0.12)
547
+ Requirement already satisfied: catalogue<2.1.0,>=2.0.6 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.0.10)
548
+ Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (24.0)
549
+ Requirement already satisfied: typer<0.10.0,>=0.3.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.9.4)
550
+ Requirement already satisfied: numpy>=1.19.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.26.4)
551
+ Requirement already satisfied: srsly<3.0.0,>=2.4.3 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.4.8)
552
+ Requirement already satisfied: jinja2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.1.4)
553
+ Requirement already satisfied: langcodes<4.0.0,>=3.2.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.4.0)
554
+ Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (4.66.4)
555
+ Requirement already satisfied: weasel<0.4.0,>=0.1.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.3.4)
556
+ Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.0.9)
557
+ Requirement already satisfied: thinc<8.3.0,>=8.1.8 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (8.2.3)
558
+ Requirement already satisfied: wasabi<1.2.0,>=0.9.1 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.1.2)
559
+ Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.0.10)
560
+ Requirement already satisfied: pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.7.1)
561
+ Requirement already satisfied: spacy-loggers<2.0.0,>=1.0.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.0.5)
562
+ Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.0.8)
563
+ Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/lib/python3/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.25.1)
564
+ Requirement already satisfied: language-data>=1.2 in /usr/local/lib/python3.9/dist-packages (from langcodes<4.0.0,>=3.2.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.2.0)
565
+ Requirement already satisfied: marisa-trie>=0.7.7 in /usr/local/lib/python3.9/dist-packages (from language-data>=1.2->langcodes<4.0.0,>=3.2.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.1.1)
566
+ Requirement already satisfied: typing-extensions>=4.6.1 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (4.11.0)
567
+ Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.6.0)
568
+ Requirement already satisfied: pydantic-core==2.18.2 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.18.2)
569
+ Requirement already satisfied: blis<0.8.0,>=0.7.8 in /usr/local/lib/python3.9/dist-packages (from thinc<8.3.0,>=8.1.8->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.7.11)
570
+ Requirement already satisfied: confection<1.0.0,>=0.0.1 in /usr/local/lib/python3.9/dist-packages (from thinc<8.3.0,>=8.1.8->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.1.4)
571
+ Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.9/dist-packages (from typer<0.10.0,>=0.3.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (8.1.7)
572
+ Requirement already satisfied: cloudpathlib<0.17.0,>=0.7.0 in /usr/local/lib/python3.9/dist-packages (from weasel<0.4.0,>=0.1.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.16.0)
573
+ Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.9/dist-packages (from jinja2->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.1.5)
574
+ ✔ Download and installation successful
575
+ You can now load the package via spacy.load('en_core_web_sm')
576
+ /usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
577
+ warnings.warn(