repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---|
modoboa/modoboa | django | 2,738 | management subcommand `manage_dkim_keys` breaks if it fails | # Impacted versions
* Modoboa: 2.0.3
# Steps to reproduce
# Current behavior
Try this `$ python manage.py modo manage_dkim_keys` from an account that lacks write permissions for `dkim_keys_storage_dir`. It crashes, but it adds a `dkim_private_key_path` to the `admin_domain` table all the same, so that on subsequent runs, this command does nothing because the [query for missing values](https://github.com/modoboa/modoboa/blob/572e32f868c24c2f08c0d387620c58ef42ebb714/modoboa/admin/management/commands/subcommands/_manage_dkim_keys.py#L48) does not return this domain any more.
So after fixing the write permissions, or running it from a privileged account, nothing happens unless you manually edit the database and insert an empty string in the domain's `dkim_private_key_path`, after which a new `.pem` file is created as expected.
# Expected behavior
If for any reason the `.pem` file cannot be created, then the associated `dkim_private_key_path` field in the `admin_domain` table should be an empty string, not an invalid path (pointing to a file that does not exist). Data integrity, basically.
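A minimal sketch of the expected behaviour (hypothetical code, not Modoboa's actual implementation; `pem_path`, `private_key` and the final `domain` assignment are illustrative names):
```python
# Hypothetical sketch: persist the path only once the key file has been
# written successfully, so a failed run leaves the domain queryable again.
pem_path = "/var/lib/dkim/example.com.pem"            # illustrative
private_key = "-----BEGIN RSA PRIVATE KEY-----..."    # illustrative

try:
    with open(pem_path, "w") as fp:
        fp.write(private_key)
except OSError:
    dkim_private_key_path = ""        # store an empty string on failure
else:
    dkim_private_key_path = pem_path  # only persist a path that exists
# domain.dkim_private_key_path = dkim_private_key_path
# domain.save(update_fields=["dkim_private_key_path"])
```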
| closed | 2023-01-03T06:17:07Z | 2023-04-25T15:06:13Z | https://github.com/modoboa/modoboa/issues/2738 | [
"enhancement"
] | bernd-wechner | 1 |
dynaconf/dynaconf | fastapi | 824 | [RFC] Support multidoc yaml files | **Is your feature request related to a problem? Please describe.**
Sometimes it can be difficult or impossible to pass multiple files with config fragments. YAML supports multiple documents in one file, and `safe_load_all` from the PyYAML API loads them accordingly. It is a standard YAML feature; it would be nice to support it and make it usable in cases where passing one file (composed from several) is easier.
**Describe the solution you'd like**
Support `safe_load_all` as yaml loader.
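For illustration, a minimal sketch of what multi-document loading looks like with the standard PyYAML API (the shallow merge strategy is an assumption):
```python
import yaml

multidoc = """\
host: localhost
---
port: 8000
"""

merged = {}
for doc in yaml.safe_load_all(multidoc):  # standard multi-document loader
    if doc:
        merged.update(doc)  # shallow merge; later documents win on conflicts
print(merged)  # {'host': 'localhost', 'port': 8000}
```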
**Describe alternatives you've considered**
Passing multiple files will do the job; however, it isn't always straightforward.
**Additional context**
I have prepared a patch
| closed | 2022-10-30T08:14:30Z | 2023-07-18T20:24:08Z | https://github.com/dynaconf/dynaconf/issues/824 | [
"Not a Bug",
"RFC"
] | mangan | 0 |
chatanywhere/GPT_API_free | api | 352 | I suspect it's a fake interface | I suspect it's a fake interface
 | closed | 2025-01-16T11:52:03Z | 2025-01-16T12:44:49Z | https://github.com/chatanywhere/GPT_API_free/issues/352 | [] | w-z-y | 1 |
ymcui/Chinese-BERT-wwm | tensorflow | 187 | I suspect Whole Word Masking (wwm) is really just a more efficient approach; character-level masking with more pre-training time and randomness could reach the same final result, right? | @ymcui Many thanks! | closed | 2021-07-01T02:44:03Z | 2021-07-05T06:38:33Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/187 | [] | guotong1988 | 3 |
Kanaries/pygwalker | pandas | 92 | [Feat] Export grammar of plots | It would be really great to be able to export and import a description of the graph for later reuse, like it is done in Vega.
This also relates to [[Feat] Force data types in code #70 ](https://github.com/Kanaries/pygwalker/issues/70).
Besides not having to set up predefined data types every time, this would enable users of PyGWalker to export the entire predefined setup of their plots. | closed | 2023-03-29T16:57:25Z | 2023-07-25T07:22:26Z | https://github.com/Kanaries/pygwalker/issues/92 | [] | Julius-Plehn | 0 |
yzhao062/pyod | data-science | 432 | Running process killed abruptly | Hi everyone, I was running PyOD, but halfway through, the process was apparently killed; it shows only the word "Killed". Does anyone know why this happened? Thanks a lot. | open | 2022-08-27T16:06:03Z | 2022-08-27T16:07:58Z | https://github.com/yzhao062/pyod/issues/432 | [] | dangmanhtruong1995 | 1 |
ageitgey/face_recognition | python | 851 | Not able to detect face | * face_recognition version: Latest
* Python version: 3.7
* Operating System: Mac
### Description
I am not able to detect a face in the picture below. Can someone tell me why? It's not only this picture; it happens with many photos taken with my OnePlus 6.
 Can someone try and let me know why
| closed | 2019-06-10T05:34:09Z | 2019-06-10T07:05:11Z | https://github.com/ageitgey/face_recognition/issues/851 | [] | akshay-shah | 5 |
lepture/authlib | django | 31 | Make ResourceProtector extensible | Only BearerToken is supported currently, it is required to make the protector extensible so that we can add more token type later. | closed | 2018-02-28T13:46:12Z | 2018-02-28T14:44:07Z | https://github.com/lepture/authlib/issues/31 | [
"break change"
] | lepture | 0 |
google-research/bert | nlp | 1,266 | Can BERT recognize if a sentence fragment follows from a preceding one? | A few introductory questions about BERT:
Is it like GPT-3 where you must query it in natural language or does BERT have some kind of built-in methods?
Is BERT an algorithm that can be trained on any data or is it a pre-trained model?
Most importantly, I read that BERT can assess with high accuracy if a sentence follows from a previous one. Could it do the same with sentence fragments, i.e. if a text is broken up into lines, could it detect if line x+1 is a continuation of the sentence begun in line x, vs. being a separate entity of some kind, like a chapter title following by an author name?
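One way to probe this empirically (a sketch using the Hugging Face port of BERT, not this repository's own code; the example lines are made up):
```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

line_x = "The experiment was repeated three times"
line_x1 = "to control for measurement noise."  # candidate continuation

inputs = tokenizer(line_x, line_x1, return_tensors="pt")
logits = model(**inputs).logits
# In the NSP head, index 0 = "B continues A", index 1 = "B does not".
prob_is_next = torch.softmax(logits, dim=1)[0, 0].item()
print(f"P(continuation) = {prob_is_next:.3f}")
```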
Thank you very much. | open | 2021-10-05T14:30:48Z | 2021-10-05T14:30:48Z | https://github.com/google-research/bert/issues/1266 | [] | julkhami | 0 |
autogluon/autogluon | computer-vision | 4,748 | Discrepancy between specified `5min` frequency and DeepAR model configuration | # Subject: Discrepancy between specified 'freq' and DeepAR config 'freq'
Hi everyone,
I'm encountering an issue where I specified `freq='5min'` during the training of a DeepAR model using AutoGluon, but the final model configuration shows `freq='D'`. I'm trying to understand why this discrepancy exists and if it could be impacting my model.
## Details from Training Logs
Here are some key points from the training logs:
```plaintext
=================== System Info ===================
AutoGluon Version: 1.2
Python Version: 3.12.3
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024
CPU Count: 16
GPU Count: 1
Memory Avail: 27.98 GB / 31.29 GB (89.4%)
Disk Space Avail: 437.20 GB / 953.26 GB (45.9%)
===================================================
Setting presets to: best_quality
Fitting with arguments:
{'enable_ensemble': True,
'eval_metric': MASE,
'excluded_model_types': ['RecursiveTabular', 'DirectTabular', 'TiDE'],
'freq': '5min',
'hyperparameters': 'default',
'known_covariates_names': ['minute_sin',
'minute_cos',
'hour_sin',
'hour_cos',
'day_of_week_sin',
'day_of_week_cos',
'is_weekend'],
'num_val_windows': 5,
'prediction_length': 30,
'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
'random_seed': 123,
'refit_every_n_windows': 2,
'refit_full': False,
'skip_model_selection': False,
'target': 'target',
'time_limit': 20000,
'verbosity': 2}
Data with frequency 'None' has been resampled to frequency '5min'.
The provided data has a large number of rows and several time series.
The data contains a target column and the following known covariates:
categorical: ['is_weekend']
continuous (float): ['minute_sin', 'minute_cos', 'hour_sin', 'hour_cos', 'day_of_week_sin', 'day_of_week_cos']
To learn how to fix incorrectly inferred types, please see documentation for TimeSeriesPredictor.fit
AutoGluon will gauge predictive performance using evaluation metric: 'MASE'
This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value.
===================================================
```
And here's the relevant part of the saved DeepAR model's config:
```json
{
"__kind__": "instance",
"args": [],
"class": "gluonts.torch.model.predictor.PyTorchPredictor",
"kwargs": {
"batch_size": 64,
"device": "auto",
"forecast_generator": {
"__kind__": "instance",
"args": [],
"class": "gluonts.model.forecast_generator.SampleForecastGenerator",
"kwargs": {}
},
"input_names": [
"feat_static_cat",
"feat_static_real",
"past_time_feat",
"past_target",
"past_observed_values",
"future_time_feat"
],
"input_transform": {
"__kind__": "instance",
"class": "gluonts.transform._base.Chain",
"kwargs": {
"transformations": [
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.field.RemoveFields",
"kwargs": {
"field_names": [
"feat_static_real"
]
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.field.SetField",
"kwargs": {
"output_field": "feat_static_cat",
"value": [
0
]
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.field.SetField",
"kwargs": {
"output_field": "feat_static_real",
"value": [
0.0
]
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.convert.AsNumpyArray",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "builtins.int"
},
"expected_ndim": 1,
"field": "feat_static_cat"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.convert.AsNumpyArray",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"expected_ndim": 1,
"field": "feat_static_real"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.convert.AsNumpyArray",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"expected_ndim": 1,
"field": "target"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.feature.AddObservedValuesIndicator",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"imputation_method": {
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.feature.DummyValueImputation",
"kwargs": {
"dummy_value": 0.0
}
},
"output_field": "observed_values",
"target_field": "target"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.feature.AddTimeFeatures",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"output_field": "time_feat",
"pred_length": 30,
"start_field": "start",
"target_field": "target",
"time_features": [
{
"__kind__": "type",
"class": "autogluon.timeseries.utils.datetime.time_features.minute_of_hour"
},
{
"__kind__": "type",
"class": "autogluon.timeseries.utils.datetime.time_features.hour_of_day"
},
{
"__kind__": "type",
"class": "autogluon.timeseries.utils.datetime.time_features.day_of_week"
},
{
"__kind__": "type",
"class": "autogluon.timeseries.utils.datetime.time_features.day_of_month"
},
{
"__kind__": "type",
"class": "autogluon.timeseries.utils.datetime.time_features.day_of_year"
}
]
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.feature.AddAgeFeature",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"log_scale": true,
"output_field": "feat_dynamic_age",
"pred_length": 30,
"target_field": "target"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.convert.VstackFeatures",
"kwargs": {
"drop_inputs": true,
"h_stack": false,
"input_fields": [
"time_feat",
"feat_dynamic_age",
"feat_dynamic_real"
],
"output_field": "time_feat"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.convert.AsNumpyArray",
"kwargs": {
"dtype": {
"__kind__": "type",
"class": "numpy.float32"
},
"expected_ndim": 2,
"field": "time_feat"
}
},
{
"__kind__": "instance",
"args": [],
"class": "gluonts.transform.split.InstanceSplitter",
"kwargs": {
"dummy_value": 0.0,
"forecast_start_field": "forecast_start",
"future_length": 30,
"instance_sampler": {
"__kind__": "instance",
"class": "gluonts.transform.sampler.PredictionSplitSampler",
"kwargs": {
"allow_empty_interval": false,
"axis": -1,
"min_future": 0,
"min_past": 0
}
},
"is_pad_field": "is_pad",
"lead_time": 0,
"output_NTC": true,
"past_length": 1212,
"start_field": "start",
"target_field": "target",
"time_series_fields": [
"time_feat",
"observed_values"
]
}
}
]
}
},
"lead_time": 0,
"output_transform": null,
"prediction_length": 30,
"prediction_net": {
"__kind__": "instance",
"args": [],
"class": "gluonts.torch.model.deepar.lightning_module.DeepARLightningModule",
"kwargs": {
"lr": 0.001,
"model_kwargs": {
"cardinality": [
1
],
"context_length": 60,
"default_scale": null,
"distr_output": {
"__kind__": "instance",
"args": [],
"class": "gluonts.torch.distributions.studentT.StudentTOutput",
"kwargs": {
"beta": 0.0
}
},
"dropout_rate": 0.1,
"embedding_dimension": null,
"freq": "D",
"hidden_size": 40,
"lags_seq": [
1,
2,
3,
4,
5,
6,
7,
10,
11,
12,
13,
14,
22,
23,
24,
25,
26,
34,
35,
36,
37,
38,
287,
288,
289,
575,
576,
577,
863,
864,
865,
1151,
1152,
1153
],
"nonnegative_pred_samples": false,
"num_feat_dynamic_real": 14,
"num_feat_static_cat": 1,
"num_feat_static_real": 1,
"num_layers": 2,
"num_parallel_samples": 100,
"prediction_length": 30,
"scaling": true
},
"patience": 10,
"weight_decay": 1e-08
}
}
}
}
```
As you can see, the training arguments clearly state `freq='5min'`, and the logs confirm that the data was resampled to `'5min'`. However, the `freq` within the `model_kwargs` of the trained DeepAR model is `'D'`.
Could someone shed some light on why this might be happening? Is this expected behavior, or is there something I might be missing in how AutoGluon handles frequency with DeepAR? Could this mismatch potentially affect the model's accuracy or the interpretation of the results?
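One observation and a quick sanity check (the load path below is hypothetical): the `lags_seq` values 287/288/289, 575/576/577, ... are exactly 1-, 2-, 3- and 4-day seasonal lags at 5-minute resolution (288 steps per day), which suggests `'5min'` did drive lag selection and the `'D'` string may be an inert default inside the serialized GluonTS kwargs.
```python
from autogluon.timeseries import TimeSeriesPredictor

# The frequency AutoGluon actually resamples and predicts with lives on
# the predictor object, not in GluonTS's serialized model_kwargs.
predictor = TimeSeriesPredictor.load("AutogluonModels/ag-xxxx")  # hypothetical path
print(predictor.freq)  # expected: '5min'
```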
Any insights would be greatly appreciated. Thanks! | closed | 2024-12-21T08:21:43Z | 2024-12-23T11:29:36Z | https://github.com/autogluon/autogluon/issues/4748 | [] | Killer3048 | 2 |
ageitgey/face_recognition | python | 1,184 | Face recognition | | open | 2020-07-10T10:51:00Z | 2020-07-10T10:51:00Z | https://github.com/ageitgey/face_recognition/issues/1184 | [] | ORsys01 | 0 |
viewflow/viewflow | django | 213 | How to cleanly finish a Process with split-n-join tasks? | Say I have a Process which has two optional View tasks to the finish:
```
--> Split --> optional_view_1 ---> Join ----> finish
      |                              ^
      |                              |
      -----> optional_view_2 --------
```
Let's say both tasks are assigned, but a human logs in and completes optional_view_1. I want to ensure the Process finishes cleanly.
AFAICS, the Process gets into a "stuck" state due to optional_view_2 at the Join. Is that correct? I tried cancelling optional_view_2, but that had no effect. I'm looking for a programmatic approach if that makes any difference. I'm aware of #93, and so I think my question boils down to:
- how to finish the process cleanly (i.e. not cancel it) and without races
- from where (e.g. from inside the Join or a Handler after each View to cancel the other one?)
What is the correct procedure in this case? | closed | 2018-05-19T09:17:15Z | 2018-05-21T00:19:34Z | https://github.com/viewflow/viewflow/issues/213 | [] | ShaheedHaque | 4 |
netbox-community/netbox | django | 17,719 | Add zebra striping to rows in tables | ### NetBox version
4.1.3
### Feature type
Change to existing functionality
### Triage priority
N/A
### Proposed functionality
Add zebra striping to rows in tables.
Add 5-10% difference in background colors between rows.
### Use case
In wide tables with a large number of columns, you can position yourself faster and read the desired line across the entire width.
### Database changes
_No response_
### External dependencies
_No response_ | open | 2024-10-10T00:43:47Z | 2025-01-24T14:40:40Z | https://github.com/netbox-community/netbox/issues/17719 | [
"type: feature",
"status: under review",
"netbox"
] | click0 | 5 |
OpenInterpreter/open-interpreter | python | 739 | UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 | ### Describe the bug
My code:
```python
interpreter.chat("Please summarize this article:https://about.fb.com/news/2023/08/code-llama-ai-for-coding/")
interpreter.chat()
```
It reports this error:
```
Exception in thread Thread-117 (handle_stream_output):
Traceback (most recent call last):
  File "C:\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "C:\Python311\Lib\threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\john\AppData\Roaming\Python\Python311\site-packages\interpreter\code_interpreters\subprocess_code_interpreter.py", line 121, in handle_stream_output
    for line in iter(stream.readline, ""):
UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 870: illegal multibyte sequence
```
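A minimal workaround sketch (an assumption about the cause: the parent process decodes subprocess output with the Windows locale code page, gbk here, and Python's UTF-8 mode switches that default; the wrapper script name is hypothetical):
```python
import os
import subprocess
import sys

# Re-launch the real script with Python's UTF-8 mode enabled, so text
# streams (including subprocess pipes) default to UTF-8 instead of gbk.
env = dict(os.environ, PYTHONUTF8="1", PYTHONIOENCODING="utf-8")
subprocess.run([sys.executable, "-X", "utf8", "your_script.py"], env=env)
```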

### Reproduce
```python
interpreter.chat("Please summarize this article:https://about.fb.com/news/2023/08/code-llama-ai-for-coding/")
interpreter.chat()
```
### Expected behavior
..
### Screenshots
_No response_
### Open Interpreter version
0.1.11
### Python version
Python 3.11.4
### Operating System name and version
window10
### Additional context
_No response_ | closed | 2023-11-08T07:45:39Z | 2023-12-01T04:59:29Z | https://github.com/OpenInterpreter/open-interpreter/issues/739 | [
"Bug"
] | zixingonline | 4 |
hindupuravinash/the-gan-zoo | machine-learning | 41 | reformat to fit awesome list format? | https://github.com/sindresorhus/awesome if you don't know what i'm talking about, seems like this would be quite fitting if reformatted right | open | 2017-11-03T12:49:06Z | 2017-11-15T14:59:15Z | https://github.com/hindupuravinash/the-gan-zoo/issues/41 | [] | pokeball99 | 2 |
slackapi/bolt-python | fastapi | 282 | Question on forcing Link preview in response | I want to be able to preview links in bot's response to user because the Slack workspace seems to have Link preview turn-off by default. Is there any way to do that? | closed | 2021-04-07T23:20:40Z | 2021-04-08T16:57:16Z | https://github.com/slackapi/bolt-python/issues/282 | [
"question"
] | ttback | 1 |
piccolo-orm/piccolo | fastapi | 997 | Fix type warnings in `playground/commands/run.py` | * Missing `id` annotations for tables
* Use new join syntax
* Fix ipython import | closed | 2024-05-28T20:22:53Z | 2024-05-28T20:38:37Z | https://github.com/piccolo-orm/piccolo/issues/997 | [
"enhancement"
] | dantownsend | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,692 | How to train a paired dataset | Hello, author.
My dataset is paired, and I want to train it in pairs. I tried to input the parameter "--dataset_mode aligned", but I got an error message "AssertionError: ./datasets/underwater\train is not a valid directory". The format of my dataset is shown in the following figure. Could you please tell me how to place the paired data for training?

| closed | 2025-03-18T10:02:27Z | 2025-03-20T00:43:02Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1692 | [] | zhangjy328 | 0 |
fastapi/sqlmodel | sqlalchemy | 129 | Is SQLModel naturally async aware ? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
No code available
```
### Description
Can I use SQLModel inside my FastAPI async endpoint implementations without problems? Apparently it works, but I'm not sure whether it breaks the asynchronicity.
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.5
### Additional Context
_No response_ | open | 2021-10-12T13:41:50Z | 2024-08-06T05:35:08Z | https://github.com/fastapi/sqlmodel/issues/129 | [
"question"
] | joaopfg | 7 |
joerick/pyinstrument | django | 345 | [Feature Request] Merge profile sessions from multiple runs to get statistical numbers like Average/p95/mean | Hi,
Firstly I want to thank you for having this tool and I have benefited a lot from it to optimize our codebase!
A feature that has been on my mind for some time: **since, in real cases, different modules in the code may have different timing variance, we need to run multiple times and look at average/mean/p95 numbers to get more accurate profiling results for further optimization.**
Currently I don't see an option to do that in pyinstrument, and it would be great if it could support merging multiple profile sessions from multiple runs to get statistical numbers like average/p95/mean.
Thank you! | open | 2024-10-08T23:11:22Z | 2024-10-09T09:31:37Z | https://github.com/joerick/pyinstrument/issues/345 | [] | teddybearxzh | 1 |
LAION-AI/Open-Assistant | machine-learning | 3,502 | How do I use Open Assistant in my application? | Hi,
I just discovered Open Assistant and would like to use it in a NodeJS application.
The docs seem to suggest using the following URLs:
https://projects.laion.ai/api/v1/tasks/
https://projects.laion.ai/auth/login/discord
but neither exist; both return a 404 error.
Can someone please point me in the right direction?
Thanks! | closed | 2023-06-19T20:41:33Z | 2023-06-20T05:35:16Z | https://github.com/LAION-AI/Open-Assistant/issues/3502 | [] | moonman239 | 2 |
aimhubio/aim | tensorflow | 3,223 | Call basic methods on `hparams` in a query | ## 🚀 Feature : Call basic methods on `hparams` in a query ?
Having tracked runs with aim I can query runs with a term like the following:
```
( run.experiment == "AE_noisy" or run.experiment == "AE_smoothed" )
```
This can be shortened to: `run.experiment.startswith("AE_")`
The same option for tracked hyperparameters, could be really usefull. So instead of having to run:
```
( run.hparams.optimizer == "ADAM" or run.hparams.optimizer == "RADAM" )
```
I'd wish to run
```
run.hparams.optimizer.endswith("ADAM")
```
Currently this results in the error `query failed, 'Undefined' object is not callable`
### Motivation
This would allow more flexible and shorter queries (especially when there are a lot of suffixes). Happy to hear any feedback on whether this could be possible or is out of scope. Cheers!
| open | 2024-09-18T15:08:10Z | 2024-09-18T15:08:10Z | https://github.com/aimhubio/aim/issues/3223 | [
"type / enhancement"
] | Engrammae | 0 |
FactoryBoy/factory_boy | django | 796 | ImageField inside Maybe declaration no longer working since 3.1.0 | #### Description
In a factory that I defined for companies, I'm randomly generating a logo using a `Maybe` declaration. This used to work fine up to and including 3.0.1, but as of 3.1.0 it has different behaviour.
#### To Reproduce
##### Model / Factory code
Leaving out the other fields as they cannot be relevant to the problem.
```python
from factory import Faker, Maybe
from factory.django import DjangoModelFactory, ImageField
from ..models import Company
class CompanyFactory(DjangoModelFactory):
logo_add = Faker("pybool")
logo = Maybe(
"logo_add",
yes_declaration=ImageField(width=500, height=200, color=Faker("color")),
no_declaration=None,
)
class Meta:
model = Company
exclude = ("logo_add",)
```
##### The issue
Up to and including 3.0.1 the behaviour - which is the desired behaviour as far as I'm concerned - was that I could generate companies that either had a logo or did not (about 50/50 since I'm just using "pybool" for the decider field). If they had a logo, the logo would be 500x200 with a random color.
Now that I use 3.1.0, the randomness of about half the companies having logos still works, but _all_ generated logos are now 100x100 and blue, which are simply defaults (although the [documentation](https://factoryboy.readthedocs.io/en/latest/orms.html?highlight=imagefield#factory.django.ImageField) says that "green" is actually the default), which is definitely something to fix :)
Perhaps I was misusing/misunderstanding this feature all along, but then I'd still like to know how to get the desired behaviour described.
| closed | 2020-10-13T13:53:14Z | 2020-12-23T17:21:32Z | https://github.com/FactoryBoy/factory_boy/issues/796 | [] | grondman | 2 |
davidteather/TikTok-Api | api | 1,214 | Enhancing TikTok-Api Integration for TikTok-to-YouTube Automation Projects | **Title:** Enhancing TikTok-Api Integration for TikTok-to-YouTube Automation Projects
**Issue:**
We are developing a project, [tiktok-to-youtube-automation](https://github.com/scottsdevelopment/tiktok-to-youtube-automation), aimed at automating the process of downloading TikTok videos and uploading them to YouTube. In our pursuit of efficient and reliable solutions, we have explored various TikTok API wrappers, including [TikTok-Api](https://github.com/davidteather/TikTok-Api).
**Context:**
During our development, we encountered challenges with existing tools. For instance, the [tiktok-scraper](https://github.com/drawrowfly/tiktok-scraper) project has been discontinued, as noted in [this issue](https://github.com/drawrowfly/tiktok-scraper/issues/834). This has led us to seek alternative solutions for integrating TikTok functionalities into our automation workflow.
**Proposal:**
We are considering integrating [TikTok-Api](https://github.com/davidteather/TikTok-Api) into our project to handle TikTok video retrieval. Before proceeding, we would like to understand the current capabilities and limitations of TikTok-Api, especially concerning:
- **Video Downloading:** The ability to programmatically download TikTok videos without watermarks.
- **Rate Limiting:** Handling TikTok's rate limits to ensure stable operation.
- **Maintenance and Support:** The project's activity level and responsiveness to issues or updates.
**Request for Collaboration:**
We invite the maintainers and community of TikTok-Api to provide insights into these aspects. Additionally, we welcome suggestions for best practices when integrating TikTok-Api into automation projects similar to ours.
**Broader Community Engagement:**
We believe that a robust solution for TikTok to YouTube automation can benefit a wide range of users. By collaborating and sharing knowledge across projects, we can develop more resilient and feature-rich tools for the community.
**References:**
- [tiktok-to-youtube-automation Project](https://github.com/scottsdevelopment/tiktok-to-youtube-automation)
- [Discontinuation of tiktok-scraper](https://github.com/drawrowfly/tiktok-scraper/issues/834)
- [TikTok-Api Repository](https://github.com/davidteather/TikTok-Api)
We look forward to the possibility of integrating TikTok-Api into our project and contributing to the broader community's efforts in this domain. | open | 2025-01-19T22:55:03Z | 2025-01-19T22:56:38Z | https://github.com/davidteather/TikTok-Api/issues/1214 | [
"feature_request"
] | scottsdevelopment | 0 |
keras-team/autokeras | tensorflow | 1,455 | Fitting autokeras with the EarlyStopping baseline parameter does not work | ### Bug Description
When the EarlyStopping `baseline` parameter is triggered, AutoKeras crashes with the following error: `TypeError: object of type 'NoneType' has no len()`
### Bug Reproduction
Here is the colab:
https://colab.research.google.com/drive/1oqxaIaXb51qGaFSRBJtUL5yDZ77Udjx1?usp=sharing
### Setup Details
Include the details about the versions of:
- OS type and version:
- Python:
- autokeras: 1.0.12
- keras-tuner: master
- scikit-learn:
- numpy:
- pandas:
- tensorflow:
| open | 2020-12-02T01:33:41Z | 2021-01-22T16:03:29Z | https://github.com/keras-team/autokeras/issues/1455 | [
"bug report",
"pinned"
] | q-55555 | 0 |
google-research/bert | tensorflow | 1,406 | Internal: Blas GEMM launch failed when running classifier for URLs | System information:
- os: Windows 11
- gpu: Nvidia GeForce RTX 3080 TI (12GB)
- Tensor flow: tensorflow-gpu v 1.14.0
- cuda: v 10.0 (but I have other versions installed, 12.4 and 9.0, which don't have the required .dll file; all of them are in my PATH)
- python: 3.7 (for the purposes of protobuf)
The error:
```
2024-04-22 15:35:14.641625: E tensorflow/stream_executor/cuda/cuda_blas.cc:428] failed to run cuBLAS routine: CUBLAS_STATUS_EXECUTION_FAILED
ERROR:tensorflow:Error recorded from training_loop: 2 root error(s) found.
(0) Internal: Blas GEMM launch failed : a.shape=(4096, 2), b.shape=(2, 768), m=4096, n=768, k=2
[[node bert/embeddings/MatMul (defined at D:\Faks\UM-Mag 23-25\Drugi semester\JT\google-bert\modeling.py:487) ]]
[[loss/Mean/_4861]]
(1) Internal: Blas GEMM launch failed : a.shape=(4096, 2), b.shape=(2, 768), m=4096, n=768, k=2
[[node bert/embeddings/MatMul (defined at D:\Faks\UM-Mag 23-25\Drugi semester\JT\google-bert\modeling.py:487) ]]
0 successful operations.
0 derived errors ignored.
```
What i've tried:
I tried checking `nvidia-smi.exe` to see if I had something running on the GPU while training, but got the following result:
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 552.22 Driver Version: 552.22 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3080 Ti WDDM | 00000000:0A:00.0 Off | N/A |
| 0% 36C P8 24W / 350W | 1598MiB / 12288MiB | 3% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 9072 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 10652 C+G ...al\Discord\app-1.0.9042\Discord.exe N/A |
| 0 N/A N/A 10688 C+G ...ekyb3d8bbwe\PhoneExperienceHost.exe N/A |
| 0 N/A N/A 10820 C+G ...nt.CBS_cw5n1h2txyewy\SearchHost.exe N/A |
| 0 N/A N/A 10844 C+G ...2txyewy\StartMenuExperienceHost.exe N/A |
| 0 N/A N/A 14572 C+G ...\cef\cef.win7x64\steamwebhelper.exe N/A |
| 0 N/A N/A 14744 C+G ...GeForce Experience\NVIDIA Share.exe N/A |
| 0 N/A N/A 14824 C+G ...1.0_x64__8wekyb3d8bbwe\Video.UI.exe N/A |
| 0 N/A N/A 15072 C+G ...t.LockApp_cw5n1h2txyewy\LockApp.exe N/A |
| 0 N/A N/A 17164 C+G ...CBS_cw5n1h2txyewy\TextInputHost.exe N/A |
| 0 N/A N/A 18100 C+G ...les\Microsoft OneDrive\OneDrive.exe N/A |
| 0 N/A N/A 18936 C+G ...5n1h2txyewy\ShellExperienceHost.exe N/A |
| 0 N/A N/A 19308 C+G ...air\Corsair iCUE5 Software\iCUE.exe N/A |
| 0 N/A N/A 19848 C+G ...crosoft\Edge\Application\msedge.exe N/A |
| 0 N/A N/A 23840 C+G ..._x64__kzf8qxf38zg5c\Skype\Skype.exe N/A |
| 0 N/A N/A 24040 C+G ...lf\0.248.120.19\OverwolfBrowser.exe N/A |
| 0 N/A N/A 25276 C+G ...on\123.0.2420.97\msedgewebview2.exe N/A |
| 0 N/A N/A 25604 C+G ...ejd91yc\AdobeNotificationClient.exe N/A |
| 0 N/A N/A 25636 C+G ...509_x64__8wekyb3d8bbwe\ms-teams.exe N/A |
| 0 N/A N/A 25920 C+G ...ktop\EA Desktop\EACefSubProcess.exe N/A |
| 0 N/A N/A 25972 C+G ...\GOG Galaxy\GalaxyClient Helper.exe N/A |
| 0 N/A N/A 26180 C+G ...EA Desktop\EA Desktop\EADesktop.exe N/A |
| 0 N/A N/A 28536 C+G ...cks-services\BlueStacksServices.exe N/A |
| 0 N/A N/A 29332 C+G ...aam7r\AcrobatNotificationClient.exe N/A |
| 0 N/A N/A 31120 C+G ...on\123.0.2420.97\msedgewebview2.exe N/A |
| 0 N/A N/A 31468 C+G ..._x64__kzf8qxf38zg5c\Skype\Skype.exe N/A |
| 0 N/A N/A 32340 C+G ...m Files\Mozilla Firefox\firefox.exe N/A |
| 0 N/A N/A 33104 C+G ...on\HEX\Creative Cloud UI Helper.exe N/A |
| 0 N/A N/A 33208 C+G ...on\123.0.2420.97\msedgewebview2.exe N/A |
| 0 N/A N/A 35396 C+G ...wekyb3d8bbwe\XboxGameBarWidgets.exe N/A |
| 0 N/A N/A 39264 C+G ...m Files\Mozilla Firefox\firefox.exe N/A |
+-----------------------------------------------------------------------------------------+
```
Then I tried googling for other similar issues and found [this](https://stackoverflow.com/questions/43990046/tensorflow-blas-gemm-launch-failed), and when following [this answer's](https://stackoverflow.com/a/65523597) instructions, adding the lines to `modeling.py` after the imports, I received the same error.
I didn't find any other possible solutions and I'm unsure what I'm doing wrong. Did I add the memory-growth lines in the wrong file, or did I go about solving the issue completely wrong? Any help is appreciated.
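For reference: an RTX 3080 Ti is an Ampere GPU (compute capability 8.6), which CUDA 10.0 predates entirely, so TF 1.14 has no GPU kernels for it; that alone can produce exactly this cuBLAS failure regardless of memory settings. For completeness, the usual TF1 memory-growth wiring is a session config passed to the estimator (a sketch; the `model_dir` value is illustrative):
```python
import tensorflow as tf  # TF 1.x API

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate VRAM on demand

# run_classifier.py builds its estimator from a RunConfig; session_config
# is the supported way to inject GPU options there.
run_config = tf.contrib.tpu.RunConfig(
    model_dir="output/",  # illustrative
    session_config=config,
)
```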
I am running the 3.7 kernel in a virtual environment, and the data I am feeding the model is properly formatted. I am using the BERT base uncased model downloaded from this repository. | open | 2024-04-22T14:08:29Z | 2024-04-22T14:08:29Z | https://github.com/google-research/bert/issues/1406 | [] | loginName1 | 0 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 400 | Complete beginner: how do I deploy this program on another server? | I'm using an Amazon Linux server and don't know how to deploy it; neither of the two images works. | open | 2024-05-20T13:04:26Z | 2024-05-22T01:22:33Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/400 | [
"enhancement"
] | wojiadexiaoming | 1 |
dmlc/gluon-cv | computer-vision | 1,590 | model_store: FileExistsError: [Errno 17] File exists: '/root/.mxnet/models' | Upon running the maskrcnn example, I hit the above error.
Following stack trace:
```
[1,22]<stdout>:Downloading /root/.mxnet/models/resnet50_v1b-0ecdba34.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet50_v1b-0ecdba34.zip...
[1,37]<stdout>:Downloading /root/.mxnet/models/resnet50_v1b-0ecdba34.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet50_v1b-0ecdba34.zip...
[1,38]<stdout>:Traceback (most recent call last):
[1,38]<stdout>: File "train_mask_rcnn.py", line 698, in <module>
[1,38]<stdout>: per_device_batch_size=args.batch_size // num_gpus, **kwargs)
[1,38]<stdout>: File "/usr/local/lib/python3.7/site-packages/gluoncv/model_zoo/model_zoo.py", line 403, in get_model
[1,38]<stdout>: net = _models[name](**kwargs)
[1,38]<stdout>: File "/usr/local/lib/python3.7/site-packages/gluoncv/model_zoo/rcnn/mask_rcnn/predefined_models.py", line 97, in mask_rcnn_fpn_resnet50_v1b_coco
[1,38]<stdout>: base_network = resnet50_v1b(pretrained=pretrained_base, dilated=False, use_global_stats=True)
[1,38]<stdout>: File "/usr/local/lib/python3.7/site-packages/gluoncv/model_zoo/resnetv1b.py", line 367, in resnet50_v1b
[1,38]<stdout>: tag=pretrained, root=root), ctx=ctx)
[1,38]<stdout>: File "/usr/local/lib/python3.7/site-packages/gluoncv/model_zoo/model_store.py", line 274, in get_model_file
[1,38]<stdout>: os.makedirs(root)
[1,38]<stdout>: File "/usr/local/lib/python3.7/os.py", line 223, in makedirs
[1,38]<stdout>: mkdir(name, mode)
[1,38]<stdout>:FileExistsError: [Errno 17] File exists: '/root/.mxnet/models'
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 38 in communicator MPI COMMUNICATOR 5 DUP FROM 0
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
```
Looks like multiple ranks are trying to create the folder simultaneously.
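The standard fix for this kind of race is to tolerate the directory already existing (a sketch of the change, not necessarily the patch that landed upstream):
```python
import os

root = os.path.expanduser("~/.mxnet/models")  # path from the traceback
# makedirs raises FileExistsError when two ranks race to create the same
# directory; exist_ok=True makes the call idempotent.
os.makedirs(root, exist_ok=True)
```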
Note: this error is hit intermittently (not reproducible 100% of the time). | closed | 2021-01-20T00:34:21Z | 2021-01-20T18:38:03Z | https://github.com/dmlc/gluon-cv/issues/1590 | [] | ChaiBapchya | 3 |
donnemartin/data-science-ipython-notebooks | machine-learning | 77 | Trax Tutorials | Hey,
I see that there are no tutorial notebooks for **[Trax](https://github.com/google/trax)** implementations in this repository yet. Trax is an _end-to-end_ library for deep learning that focuses on clear code and speed. It is actively used and maintained in the **Google Brain team**.
I would like to add such tutorial notebooks in Trax
| closed | 2020-10-14T02:38:51Z | 2020-11-19T04:33:26Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/77 | [] | SauravMaheshkar | 0 |
serengil/deepface | machine-learning | 1,133 | API Endpoint /represent fails with 400 for array of img URLs | ### Description
Using an array of `img` items via the API endpoint `represent` responds with the following error
```
{
"error": "Exception while representing: img must be numpy array or str but it is <class 'list'>"
}
```
#### Steps to Reproduce
1. Pull repo
2. Build the image
`docker build -t deepface -f Dockerfile .`
3. Run the image
`docker run deepface`
4. Attempt to use the represent endpoint with an array of images i.e.:
```
{
"img": ["imgUrl1", "imgUrl2"]
}
```
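Until the endpoint accepts lists, a client-side loop is a workable sketch (the host/port are illustrative assumptions):
```python
import requests

urls = ["imgUrl1", "imgUrl2"]
embeddings = [
    requests.post(
        "http://localhost:5005/represent",  # illustrative address
        json={"img": url},                  # one image per request
    ).json()
    for url in urls
]
```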
### Expected Behavior
The API is able to parse the list into a numpy array and complete the request. | closed | 2024-03-22T09:43:58Z | 2024-03-22T22:31:40Z | https://github.com/serengil/deepface/issues/1133 | [
"invalid"
] | lounging-lizard | 2 |
pywinauto/pywinauto | automation | 1,394 | andlers | ## Expected Behavior
## Actual Behavior
## Steps to Reproduce the Problem
1.
2.
3.
## Short Example of Code to Demonstrate the Problem
## Specifications
- Pywinauto version:
- Python version and bitness:
- Platform and OS:
| open | 2024-07-03T22:05:01Z | 2024-07-03T22:05:01Z | https://github.com/pywinauto/pywinauto/issues/1394 | [] | Lemur19 | 0 |
fastapi-users/fastapi-users | fastapi | 111 | How to login with cookie? | Hi, I created a login form page and I want to log in with a cookie. How do I do it? I tested it, but it does not work.
```
auth_backends = [
CookieAuthentication(secret=SECRET, lifetime_seconds=3600),
# JWTAuthentication(secret=SECRET, lifetime_seconds=3600),
]
fastapi_users = FastAPIUsers(
user_db, auth_backends, User, UserCreate, UserUpdate, UserDB, SECRET,
)
app.include_router(fastapi_users.router, prefix="/users", tags=["users"])
templates = Jinja2Templates(directory='templates')
...
@app.get("/")
async def read_root(user: User = Depends(fastapi_users.get_current_active_user)):
return {"Hello": f"{user.email}"}
@app.route("/login", methods=['GET'])
async def login(request):
return templates.TemplateResponse('login.html', {'request': request})
```
template
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Login</title>
</head>
<body>
<h1>Login</h1>
<form method="post" action="/users/login/cookie">
<input name="username" autocomplete="off">
<input name="password" autocomplete="off">
<button>submit</button>
</form>
</body>
</html>
```
I access the login page, input username and password, then submit it; the response is null. Then I access the homepage "http://127.0.0.1:8000", but it still displays `{"detail":"Unauthorized"}` | closed | 2020-02-22T02:20:48Z | 2020-02-22T08:05:04Z | https://github.com/fastapi-users/fastapi-users/issues/111 | [
"question"
] | jet10000 | 2 |
blb-ventures/strawberry-django-plus | graphql | 245 | Got an unexpected keyword argument 'filters' | The `filters` argument shows up in GraphiQL, but I still get an error. It worked fine before upgrading to v3.


Related code:
<https://github.com/he0119/smart-home/commit/9922c788a2d79761bd6c100c3bd4b13c31cfb4d6#diff-af3602ede1befa32df28d961add5b48aaf07992465fb6d0f1dbb50e4a0568cdbR79-R85>
```python
@gql.django.type(models.Device, filters=DeviceFilter, order=DeviceOrder)
class Device(relay.Node):
name: auto
device_type: auto
location: auto
created_at: auto
edited_at: auto
is_online: auto
online_at: auto
offline_at: auto
token: auto
# FIXME: Device.autowatering_data() got an unexpected keyword argument 'filters'
@gql.django.connection(
gql.django.ListConnectionWithTotalCount[AutowateringData],
filters=AutowateringDataFilter,
order=AutowateringDataOrder,
)
def autowatering_data(self, info) -> Iterable[models.AutowateringData]:
return models.AutowateringData.objects.all()
``` | open | 2023-06-17T02:50:12Z | 2023-06-18T14:26:58Z | https://github.com/blb-ventures/strawberry-django-plus/issues/245 | [
"enhancement",
"help wanted"
] | he0119 | 2 |
mouredev/Hello-Python | fastapi | 6 | A | | closed | 2022-12-26T00:07:04Z | 2022-12-26T00:07:23Z | https://github.com/mouredev/Hello-Python/issues/6 | [] | kpnicolas | 0 |
mwaskom/seaborn | data-visualization | 3,537 | catplot with redundant hue assignment creates empty legend with title | ```python
sns.catplot(tips, x="day", y="total_bill", hue="day", col="sex", row="smoker", kind="box", height=3)
```

Setting `legend=False` works around it, but with the default `legend='auto'` the legend should be disabled automatically due to the redundancy. | closed | 2023-10-21T18:09:26Z | 2023-11-04T16:03:15Z | https://github.com/mwaskom/seaborn/issues/3537 | [
"bug",
"mod:categorical"
] | mwaskom | 1 |
netbox-community/netbox | django | 18,462 | Add scope_name field when doing bulk ipam.prefixes imports | ### NetBox version
v4.2.2
### Feature type
Change to existing functionality
### Proposed functionality
Add a `scope_name` field, in addition or as an alternative to `scope_id`, when doing bulk `ipam.prefix` (and other) imports.
### Use case
Before updating from v4.1 to v4.2 we noticed the breaking change about the removal of the `site` field, which got replaced by a combination of `scope_type` and `scope_id`. I don't question the need for the multiple scope types, but to allow a smoother migration when doing bulk imports it would be nice to have a `scope_name` field in addition to `scope_id`. Of course only one of the two fields (id or name) could be filled.
Sure, this would cause an id lookup when `scope_name` is used (like in v4.1), but it would help the transition (we would just have to add `scope_type=dcim.site` and rename the CSV field `site` to `scope_name` and continue, instead of using numeric ids during mass imports).
### Database changes
Not that I know of
### External dependencies
None | open | 2025-01-23T11:01:08Z | 2025-02-27T18:14:47Z | https://github.com/netbox-community/netbox/issues/18462 | [
"type: feature",
"needs milestone",
"breaking change",
"status: backlog",
"complexity: medium"
] | dhoffend | 2 |
pydata/xarray | numpy | 9,111 | `xr.open_zarr` is 3x slower than `zarr.open`, even at scale | ### What is your issue?
I'm doing some benchmarks on Xarray + Zarr vs. some other formats, and I get quite a surprising result — in a very simple array, xarray is adding a lot of overhead to reading a Zarr array.
Here's a quick script — super simple, just a single chunk. It's 800MB of data — so not some tiny array where reading a metadata json file or allocating an index is going to throw the results.
```python
import numpy as np
import zarr
import xarray as xr
import dask
print(zarr.__version__, xr.__version__, dask.__version__)
(
xr.DataArray(np.random.rand(10000, 10000), name="foo")
.to_dataset()
.chunk(None)
.to_zarr("test.zarr", mode="w")
)
%timeit xr.open_zarr("test.zarr").compute()
%timeit zarr.open("test.zarr")["foo"][:]
```
```
2.17.2 2024.5.1.dev37+gce196d56 2024.5.2
551 ms ± 15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
183 ms ± 2.93 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
So:
- 551ms for xarray
- 183ms for zarr
Having a quick look with `py-spy` suggests there might be some thread contention, but not sure how much is really contention vs. idle threads waiting.
---
Making the array 10x bigger (with 10 chunks) reduces the relative difference, but it's still fairly large:
```
2.17.2 2024.5.1.dev37+gce196d56 2024.5.2
6.88 s ± 353 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
4.15 s ± 264 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
---
Any thoughts on what might be happening? Is the benchmark at least correct? | closed | 2024-06-12T22:51:04Z | 2024-06-26T18:20:05Z | https://github.com/pydata/xarray/issues/9111 | [
"topic-performance",
"topic-zarr"
] | max-sixty | 7 |
alteryx/featuretools | data-science | 2,361 | Add support for running DFS using multiprocessing | - We support running DFS via multiprocessing (https://docs.python.org/3/library/multiprocessing.html) | open | 2022-11-07T15:54:56Z | 2023-06-26T19:14:49Z | https://github.com/alteryx/featuretools/issues/2361 | [
"2.0 wish list"
] | gsheni | 0 |
microsoft/qlib | deep-learning | 1,851 | Fillna does not work if fields_group is not None | ## 🐛 Bug Description
<!-- A clear and concise description of what the bug is. -->
The Fillna processor does not work if fields_group is not None since assigning values to df.values changes nothing.
## To Reproduce
Use any model and specify fields_group for Fillna processor.
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
No nan after calling Fillna.
## Additional Notes
<!-- Add any other information about the problem here. -->
Same as the issue here: https://github.com/microsoft/qlib/issues/1307#issuecomment-1785284039.
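A tiny demonstration of why writing into `df.values` is lost (for mixed-dtype frames pandas returns a copy; this is my reading of the bug, not code from qlib):
```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan], "b": ["x", None]})
df.values[:] = 0              # writes into a temporary copy, not the frame
print(df.isna().sum().sum())  # -> 2: the NaNs are still there
```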
| open | 2024-09-26T02:59:42Z | 2024-09-26T04:13:00Z | https://github.com/microsoft/qlib/issues/1851 | [
"bug"
] | LeetaH666 | 2 |
axnsan12/drf-yasg | rest-api | 143 | 'NoneType' object has no attribute 'description' | Firstly, thanks for this great library.
Now, in:
https://github.com/axnsan12/drf-yasg/blob/2ef7cfbfe369a55e8b68e574bf20fe32d40cac38/src/drf_yasg/inspectors/query.py#L49
If we can't find out which type it is, we default to string.
However, later, we try to ask the schema description:
https://github.com/axnsan12/drf-yasg/blob/2ef7cfbfe369a55e8b68e574bf20fe32d40cac38/src/drf_yasg/inspectors/query.py#L51
My field that is getting passed in, is from https://github.com/django-money/django-money, has no schema:
``` python
ipdb> field
Field(name=u'currency', required=False, location='query', schema=None, description=u'', type=None, example=None)
ipdb> type(field.schema)
<type 'NoneType'>
ipdb> field.schema.description
*** AttributeError: 'NoneType' object has no attribute 'description'
```
Not sure what to do exactly but it seems that we should make a `field.schema is None` check? | closed | 2018-06-12T11:53:41Z | 2018-06-16T13:31:32Z | https://github.com/axnsan12/drf-yasg/issues/143 | [
"bug"
] | decentral1se | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,588 | relationship_proxy documentation doesn't work with `_: dataclasses.KW_ONLY` | ### Describe the bug
Consider the example [here](https://docs.sqlalchemy.org/en/20/orm/extensions/associationproxy.html#simplifying-association-objects):
If you add `MappedAsDataclass` and `_: dataclasses.KW_ONLY` to `UserKeywordAssociation`, the example stops working because the implicit creator passes the argument positionally.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.31
### DBAPI (i.e. the database driver)
asyncpg
### Database Vendor and Major Version
postgres 16
### Python Version
3.12
### Operating system
Linux
### To Reproduce
```python
from __future__ import annotations
import dataclasses
from typing import List
from typing import Optional
from sqlalchemy import ForeignKey
from sqlalchemy import String
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.associationproxy import AssociationProxy
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import MappedAsDataclass
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
class Base(DeclarativeBase, MappedAsDataclass):
pass
class User(Base):
__tablename__ = "user"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(String(64))
user_keyword_associations: Mapped[List[UserKeywordAssociation]] = relationship(
back_populates="user",
cascade="all, delete-orphan",
)
# association proxy of "user_keyword_associations" collection
# to "keyword" attribute
keywords: AssociationProxy[List[Keyword]] = association_proxy(
"user_keyword_associations",
"keyword",
)
def __init__(self, name: str):
self.name = name
class UserKeywordAssociation(Base):
__tablename__ = "user_keyword"
_: dataclasses.KW_ONLY
user_id: Mapped[int] = mapped_column(ForeignKey("user.id"), primary_key=True)
keyword_id: Mapped[int] = mapped_column(ForeignKey("keyword.id"), primary_key=True)
special_key: Mapped[Optional[str]] = mapped_column(String(50))
user: Mapped[User] = relationship(back_populates="user_keyword_associations")
keyword: Mapped[Keyword] = relationship()
class Keyword(Base):
__tablename__ = "keyword"
id: Mapped[int] = mapped_column(primary_key=True)
keyword: Mapped[str] = mapped_column("keyword", String(64))
def __init__(self, keyword: str):
self.keyword = keyword
def __repr__(self) -> str:
return f"Keyword({self.keyword!r})"
user = User("log")
for kw in (Keyword("new_from_blammo"), Keyword("its_big")):
user.keywords.append(kw) # boom
print(user.keywords)
```
### Error
```
/Users/tamird/Library/Caches/pypoetry/virtualenvs/common-hf-Ms37h-py3.12/lib/python3.12/site-packages/sqlalchemy/ext/associationproxy.py:1505: in append
item = self._create(value)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _create(self, value: _T) -> Any:
> return self.creator(value)
E TypeError: __init__() takes 1 positional argument but 2 were given
```
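A workaround sketch (my suggestion, not an official fix): give the proxy an explicit `creator` so the association object is constructed with a keyword argument, as `KW_ONLY` requires:
```python
# Inside User, replacing the implicit creator:
keywords: AssociationProxy[List[Keyword]] = association_proxy(
    "user_keyword_associations",
    "keyword",
    creator=lambda keyword_obj: UserKeywordAssociation(keyword=keyword_obj),
)
```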
### Additional context
_No response_ | closed | 2024-07-09T13:38:46Z | 2024-07-09T14:28:11Z | https://github.com/sqlalchemy/sqlalchemy/issues/11588 | [] | tamird | 0 |
modelscope/data-juicer | data-visualization | 370 | [Bug]: MODEL_ZOO is not reused in subprocesses | Since `model_key` (the hash of partial function) changes in mapped processes, the preloaded models are never truly reused.
https://github.com/modelscope/data-juicer/blob/aaa404a5b12c9ef87ebf54a7bf38c7b4bcbfd0f1/data_juicer/utils/model_utils.py#L546-L549 | closed | 2024-07-29T05:38:25Z | 2024-08-21T03:06:21Z | https://github.com/modelscope/data-juicer/issues/370 | [
"enhancement"
] | drcege | 1 |
Kinto/kinto | api | 2,982 | id missing in the payload for the endpoint POST /accounts | In the response for GET /\__api\__,
for the endpoint POST /accounts, the ObjectSchema( body) is
<img width="1711" alt="Screenshot 2022-04-15 at 2 07 07 PM" src="https://user-images.githubusercontent.com/26853764/163551763-7f9821ef-0586-4fb6-bc3b-0e84423ffcb1.png">
Using this body gives the following error,
```
{
"code": 400,
"errno": 107,
"error": "Invalid parameters",
"message": "data.id in body: Required",
"details": [
{
"location": "body",
"name": "data.id",
"description": "data.id in body: Required"
}
]
}
```
The "id" field is missing in the body of the endpoint POST /accounts in the OpenAPI Spec.
The correct payload should be:
```
{
"data": {
"password": "string",
"id": "username",
"additionalProp1": "string",
"additionalProp2": "string",
"additionalProp3": "string"
},
"permissions": {
"read": [
"string"
],
"write": [
"string"
]
}
}
``` | open | 2022-04-15T09:20:53Z | 2024-07-23T20:04:50Z | https://github.com/Kinto/kinto/issues/2982 | [
"bug",
"protocol",
"stale"
] | MBhartiya | 7 |
google-research/bert | nlp | 1,214 | In tf1.13.1 version, bert performs downstream tasks, how to run bert on multiple GPUs? | The README states: "Yes, all of the code in this repository works out-of-the-box with CPU, GPU, and Cloud TPU. However, GPU training is single-GPU only."
When BERT is fine-tuned on downstream tasks, can't it train on multiple GPUs? If not, when will multi-GPU fine-tuning of downstream tasks be supported? When a very long text is used to train BERT, an error is reported and efficiency is low. | open | 2021-04-07T14:17:07Z | 2021-04-07T14:17:07Z | https://github.com/google-research/bert/issues/1214 | [] | iamsuarez | 0 |
fbdesignpro/sweetviz | pandas | 149 | Can't find "index" | Hi! There is some trouble when running sweetviz.
My code is `report = sw.analyze(df_a)`
But Python raised the following KeyError:
`"None of ['index'] are in the columns"`
I checked my column names and tried resetting the index as well; however, Python still raises the error.
Hoping for your help, if convenient!
The full error trace:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[57], line 1
----> 1 report = sw.analyze(df_a)
File ~\AppData\Roaming\Python\Python39\site-packages\sweetviz\sv_public.py:12, in analyze(source, target_feat, feat_cfg, pairwise_analysis)
8 def analyze(source: Union[pd.DataFrame, Tuple[pd.DataFrame, str]],
9 target_feat: str = None,
10 feat_cfg: FeatureConfig = None,
11 pairwise_analysis: str = 'auto'):
---> 12 report = sweetviz.DataframeReport(source, target_feat, None,
13 pairwise_analysis, feat_cfg)
14 return report
File ~\AppData\Roaming\Python\Python39\site-packages\sweetviz\dataframe_report.py:256, in DataframeReport.__init__(self, source, target_feature_name, compare, pairwise_analysis, fc)
253 for f in features_to_process:
254 # start = time.perf_counter()
255 self.progress_bar.set_description_str(f"Feature: {f.source.name}")
--> 256 self._features[f.source.name] = sa.analyze_feature_to_dictionary(f)
257 self.progress_bar.update(1)
258 # print(f"DONE FEATURE------> {f.source.name}"
259 # f" {(time.perf_counter() - start):.2f} {self._features[f.source.name]['type']}")
260 # self.progress_bar.set_description_str('[FEATURES DONE]')
261 # self.progress_bar.close()
262
263 # Wrap up summary
File ~\AppData\Roaming\Python\Python39\site-packages\sweetviz\series_analyzer.py:92, in analyze_feature_to_dictionary(to_process)
89
```
| closed | 2023-08-01T16:45:59Z | 2023-11-15T12:13:33Z | https://github.com/fbdesignpro/sweetviz/issues/149 | [] | ZORAYE | 9 |
trevorstephens/gplearn | scikit-learn | 273 | Add conda installation instructions | | open | 2022-06-20T08:37:52Z | 2022-06-20T08:37:52Z | https://github.com/trevorstephens/gplearn/issues/273 | [
"documentation"
] | trevorstephens | 0 |
SciTools/cartopy | matplotlib | 1,659 | Shapefile gives warning with GSHHS feature | Hi,
shapefile gives me a warning when I save a figure which has GSHHS features in it. Thought I'd ask it here:
MWE:
```python
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy

fig = plt.figure()
proj = ccrs.Mercator(central_longitude=0)
ax = plt.axes(projection=proj)
ax.add_feature(cartopy.feature.GSHHSFeature(levels=(1, 2), linewidth=2.0))
print("before")
fig.savefig("test.png")
print("after")
```
output:
before
```
/home/max/.local/lib/python3.8/site-packages/shapefile.py:385: UserWarning: Shapefile shape has invalid polygon: no exterior rings found (must have clockwise orientation); interpreting holes as exteriors.
warnings.warn('Shapefile shape has invalid polygon: no exterior rings found (must have clockwise orientation); interpreting holes as exteriors.')
```
after
What is the cause of this and how can it be resolved?
OS: Ubuntu 20.04
Cartopy: 0.18.0
pyshp: 2.1.2
Thanks in advance. | closed | 2020-09-16T10:36:44Z | 2021-09-04T00:53:28Z | https://github.com/SciTools/cartopy/issues/1659 | [] | MHBalsmeier | 4 |
moshi4/pyCirclize | matplotlib | 85 | adjust font size under heatmap_track.yticks | Great package!
I want to know how to adjust font size under heatmap_track.yticks
I followed the example https://moshi4.github.io/pyCirclize/phylogenetic_tree/#2-3-with-heatmap
```
from pycirclize import Circos
from pycirclize.utils import load_example_tree_file, ColorCycler
import numpy as np
np.random.seed(0)
tree_file = load_example_tree_file("large_example.nwk")
circos, tv = Circos.initialize_from_tree(
tree_file,
start=-350,
end=0,
r_lim=(10, 80),
leaf_label_size=5,
leaf_label_rmargin=21,
line_kws=dict(color="lightgrey", lw=1),
)
# Define group-species dict for tree annotation
# In this example, set minimum species list to specify group's MRCA node
group_name2species_list = dict(
Monotremata=["Tachyglossus_aculeatus", "Ornithorhynchus_anatinus"],
Marsupialia=["Monodelphis_domestica", "Vombatus_ursinus"],
Xenarthra=["Choloepus_didactylus", "Dasypus_novemcinctus"],
Afrotheria=["Trichechus_manatus", "Chrysochloris_asiatica"],
Euarchontes=["Galeopterus_variegatus", "Theropithecus_gelada"],
Glires=["Oryctolagus_cuniculus", "Microtus_oregoni"],
Laurasiatheria=["Talpa_occidentalis", "Mirounga_leonina"],
)
# Set tree line color
ColorCycler.set_cmap("Set2")
for species_list in group_name2species_list.values():
tv.set_node_line_props(species_list, color=ColorCycler())
# Plot heatmap
sector = circos.sectors[0]
heatmap_track = sector.add_track((80, 100))
matrix_data = np.random.randint(0, 100, (5, tv.leaf_num))
heatmap_track.heatmap(matrix_data, cmap="viridis")
heatmap_track.yticks([0.5, 1.5, 2.5, 3.5, 4.5], list("EDCBA"), vmax=5, tick_length=0, fontsize=10)
fig = circos.plotfig()
```
but got
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-56-ce35496d538a>](https://localhost:8080/#) in <cell line: 0>()
39 matrix_data = np.random.randint(0, 100, (5, tv.leaf_num))
40 heatmap_track.heatmap(matrix_data, cmap="viridis")
---> 41 heatmap_track.yticks([0.5, 1.5, 2.5, 3.5, 4.5], list("EDCBA"), vmax=5, tick_length=0, fontsize=10)
42
43 fig = circos.plotfig()
TypeError: Track.yticks() got an unexpected keyword argument 'fontsize'
``` | closed | 2025-03-05T07:22:58Z | 2025-03-05T09:09:06Z | https://github.com/moshi4/pyCirclize/issues/85 | [
"question"
] | johnnytam100 | 2 |
ultrafunkamsterdam/undetected-chromedriver | automation | 887 | Please work on browser fingerprints | Please add bypass options for:
- canvas fingerprinting
- audio fingerprinting
- timezone integration at the browser level

I'd like to work with you to implement these features.
| open | 2022-11-12T01:36:21Z | 2022-11-14T17:57:25Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/887 | [] | NiamurRashid | 5 |
KaiyangZhou/deep-person-reid | computer-vision | 487 | I want a parameter configuration that reproduces the results | Following the operating instructions you provided and the parameter settings in the documentation did not reproduce the results.
The parameter settings in the paper are even less effective.
Could you provide a parameter configuration that achieves the results reported in the paper, and add it to the documentation?
```
Evaluating market1501 (source)
Extracting features from query set ...
Done, obtained 3368-by-512 matrix
Extracting features from gallery set ...
Done, obtained 15913-by-512 matrix
Speed: 0.0299 sec/batch
Computing distance matrix with metric=euclidean ...
Computing CMC and mAP ...
** Results **
mAP: 78.0%
CMC curve
Rank-1 : 91.3%
Rank-5 : 96.8%
Rank-10 : 97.9%
Rank-20 : 98.8%
Checkpoint saved to "log/osnet_x1_0-softmax-market1501/model/model.pth.tar-64"
```
It has converged:
```
Evaluating market1501 (source)
Extracting features from query set ...
Done, obtained 3368-by-512 matrix
Extracting features from gallery set ...
Done, obtained 15913-by-512 matrix
Speed: 0.0299 sec/batch
Computing distance matrix with metric=euclidean ...
Computing CMC and mAP ...
** Results **
mAP: 78.1%
CMC curve
Rank-1 : 91.3%
Rank-5 : 96.8%
Rank-10 : 97.9%
Rank-20 : 98.8%
Checkpoint saved to "log/osnet_x1_0-softmax-market1501/model/model.pth.tar-100"
```
```
Total params: 2,578,879
Trainable params: 2,578,879
Non-trainable params: 0
Input size (MB): 0.38
Forward/backward pass size (MB): 282.45
Params size (MB): 9.84
Estimated Total Size (MB): 292.66
Loading checkpoint from "******************************/model.pth.tar-100"
Loaded model weights
Loaded optimizer
Last epoch = 100
Last rank1 = 91.3%
dist_metric='cosine'
Evaluating dukemtmcreid (target)
Extracting features from query set ...
Done, obtained 2228-by-512 matrix
Extracting features from gallery set ...
Done, obtained 17661-by-512 matrix
Speed: 0.0303 sec/batch
Computing distance matrix with metric=cosine ...
Computing CMC and mAP ...
** Results **
mAP: 24.3%
CMC curve
Rank-1 : 41.7%
Rank-5 : 57.7%
Rank-10 : 63.3%
Rank-20 : 69.0%
``` | closed | 2022-01-17T05:15:34Z | 2022-01-17T05:17:22Z | https://github.com/KaiyangZhou/deep-person-reid/issues/487 | [] | yup1212 | 0 |
pandas-dev/pandas | data-science | 60,604 | BUG: Single Index of Tuples as Output on Tuple Groupings | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas

df = pandas.DataFrame({"A": [1, 2, 3, 1, 2, 3], "B": [4, 5, 6, 4, 5, 6], "C": [7, 8, 9, 7, 8, 9]})
df = df.set_index(['A', 'B'])
df.groupby(lambda x: (x[0], x[1])).aggregate('sum')
```
### Issue Description
This issue appears to be similar to #24786 except that here a lambda is used to create the tuples. The [User Guide for Pandas](https://pandas.pydata.org/docs/user_guide/groupby.html) states that
> The result of the aggregation will have the group names as the new index. In the case of multiple keys, the result is a MultiIndex by default.
Thus I was expecting the tuples to be automatically combined to form a `MultiIndex` as opposed to an `Index` with tuples as indices. Internally, the `Index.map()` [call](https://github.com/pandas-dev/pandas/blob/8a5344742c5165b2595f7ccca9e17d5eff7f7886/pandas/core/groupby/grouper.py#L511) converts the produced list of tuples to a `MultiIndex`, but when it [produces the aggregation index](https://github.com/pandas-dev/pandas/blob/8a5344742c5165b2595f7ccca9e17d5eff7f7886/pandas/core/groupby/ops.py#L753), it appears to produce a single `Index` instead.
I wasn't sure if this was intended behaviour, or a bug; I apologize if it is the former!
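In the meantime, a workaround sketch that rebuilds the expected `MultiIndex` from the tuple-valued index (index names taken from the example above):
```python
result = df.groupby(lambda x: (x[0], x[1])).aggregate("sum")
# Rebuild the MultiIndex from the tuple-valued flat Index
result.index = pandas.MultiIndex.from_tuples(result.index, names=["A", "B"])
```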
### Expected Behavior
```python
import pandas

df = pandas.DataFrame({"A": [1, 2, 3, 1, 2, 3], "B": [4, 5, 6, 4, 5, 6], "C": [7, 8, 9, 7, 8, 9]})
df = df.set_index(['A', 'B'])
df.groupby(['A', 'B']).aggregate('sum')
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.1
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.18363
machine : AMD64
processor : AMD64 Family 21 Model 96 Stepping 1, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_Canada.1252
pandas : 2.2.3
numpy : 2.2.1
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| closed | 2024-12-24T23:37:57Z | 2024-12-26T21:47:42Z | https://github.com/pandas-dev/pandas/issues/60604 | [
"Bug",
"Groupby"
] | RahimD | 1 |
aiortc/aiortc | asyncio | 825 | Quality problems with H264 Codec and Server example | Hey, I have the following setup:
- The [aiortc server](https://github.com/aiortc/aiortc/blob/main/examples/server/server.py) from the aiortc examples is running on a Windows machine
- An Android phone is connecting to the server (Using Google's Java WebRTC implementation)
This works perfectly fine as long as I'm using a VP8 encoder/decoder. Using the H264 codec, the video stream sent to the aiortc server also looks fine (tested this by recording it). However, the video stream sent back to the Android system is very bad quality-wise (low FPS, bad image). Looking at the sender statistics, the server receives millions of bytes, but only sends back a couple thousand.
A similar issue also happens with the original server example, using the following setup:
- [Aiortc server](https://github.com/aiortc/aiortc/blob/main/examples/server/server.py) from original example, runs on Windows
- [Javascript client](https://github.com/aiortc/aiortc/blob/main/examples/server/client.js) from original example (Using Firefox/Chrome on Windows/Android using Virtual Camera/Phone Camera -> all same result)
Here, the quality seems fine originally, but degrades within 30-60 seconds of connection, eventually losing all colors and getting ultra blurry. This also does not happen with VP8, only with H264.
Could it be that the aiortc H264 Codec is not compatible with other implementations? | closed | 2023-01-31T17:23:38Z | 2023-08-15T01:57:07Z | https://github.com/aiortc/aiortc/issues/825 | [
"stale"
] | richardbinder | 2 |
sepandhaghighi/samila | matplotlib | 101 | Add linewidth parameter to plot method | #### Description
It seems that this parameter has a significant effect on the output
## Linewidth: 1.2 | Spot Size: 0.1

## Linewidth: 1.2 | Spot Size: 10

## Linewidth: 12 | Spot Size: 0.1

## Linewidth: 12 | Spot Size: 10

| closed | 2022-02-19T12:57:51Z | 2022-04-13T11:54:44Z | https://github.com/sepandhaghighi/samila/issues/101 | [
"enhancement",
"new feature"
] | sepandhaghighi | 3 |
mwaskom/seaborn | pandas | 2,779 | Barplot | The barplot's maximum bar height is almost half of the data's real maximum, but matplotlib's bar fits the data well.
Try comparing this:
> import seaborn as sns
> tips = sns.load_dataset('tips')
> sns.barplot(y=tips['total_bill'], x=tips['day'])
and
> import matplotlib.pyplot as plt
> plt.bar(height=tips['total_bill'], x=tips['day']) | closed | 2022-04-11T13:43:40Z | 2022-04-11T13:54:30Z | https://github.com/mwaskom/seaborn/issues/2779 | [] | ghiffaryr | 1 |
ydataai/ydata-profiling | data-science | 1,536 | Feature Request | ### Missing functionality
As a frequent user of ydata_profiling, I am encountering the issue below.
In the given dataset, empty cells of numeric columns have to be excluded while calculating 'sum'; otherwise the sum comes out as 'NaN'. On our side, replacing empty cells with '0' corrupts the 'min' value, and replacing them with some other value changes the data type of the corresponding column.
### Proposed feature
Exclude empty or null cells of numeric columns while calculating 'sum', so that the sum does not come out as 'NaN' when empty cells are present.
### Alternatives considered
Below is the logic in describe_numeric_spark.py where 'sum' is calculated. Please correct me if I am wrong.
```python
@describe_numeric_1d.register
def describe_numeric_1d_spark(
    config: Settings, df: DataFrame, summary: dict
) -> Tuple[Settings, DataFrame, dict]:
    """Describe a boolean series.
    Args:
        series: The Series to describe.
        summary: The dict containing the series description so far.
    Returns:
        A dict containing calculated series description values.
    """
    stats = numeric_stats_spark(df, summary)
    summary["min"] = stats["min"]
    summary["max"] = stats["max"]
    summary["mean"] = stats["mean"]
    summary["std"] = stats["std"]
    summary["variance"] = stats["variance"]
    summary["skewness"] = stats["skewness"]
    summary["kurtosis"] = stats["kurtosis"]
    summary["sum"] = stats["sum"]
```
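A sketch of the kind of change that could address this, masking NaN before aggregating (column handling simplified; note that Spark's `sum` already skips nulls but propagates NaN):
```python
from pyspark.sql import functions as F

def nan_safe_sum(df, column):
    # Hypothetical helper: drop NaN (not just null) values before summing one column
    masked = F.when(~F.isnan(F.col(column)), F.col(column))
    return df.agg(F.sum(masked).alias("sum")).first()["sum"]
```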
### Additional context
We build a wheel file from our code, install it on a Databricks cluster, and try to do exploratory data analysis of the given source dataset, which is in CSV file format. | open | 2024-02-07T10:33:59Z | 2024-03-11T10:00:30Z | https://github.com/ydataai/ydata-profiling/issues/1536 | [
"needs-triage"
] | liyaskerj | 1 |
coqui-ai/TTS | pytorch | 3,614 | new version numpy from text-generation-webui | ### Describe the bug
A month ago everything was working fine...
btw, you have the best truly OFFLINE voice extension for oobabooga ;)
### To Reproduce
Load the extension
### Expected behavior
_No response_
### Logs
```shell
12:58:44-519624 INFO Loading the extension "coqui_tts"
12:58:45-869491 ERROR Failed to load the extension "coqui_tts".
Traceback (most recent call last):
File "e:\text-generation-webui\modules\extensions.py", line 37, in load_extensions
extension = importlib.import_module(f"extensions.{name}.script")
File "e:\text-generation-webui\installer_files\env\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "e:\text-generation-webui\extensions\coqui_tts\script.py", line 10, in <module>
from TTS.api import TTS
File "e:\text-generation-webui\installer_files\env\lib\site-packages\TTS\api.py", line 9, in <module>
from TTS.cs_api import CS_API
File "e:\text-generation-webui\installer_files\env\lib\site-packages\TTS\cs_api.py", line 12, in <module>
from TTS.utils.audio.numpy_transforms import save_wav
File "e:\text-generation-webui\installer_files\env\lib\site-packages\TTS\utils\audio\__init__.py", line 1, in <module> from TTS.utils.audio.processor import AudioProcessor
File "e:\text-generation-webui\installer_files\env\lib\site-packages\TTS\utils\audio\processor.py", line 10, in <module>
from TTS.utils.audio.numpy_transforms import (
File "e:\text-generation-webui\installer_files\env\lib\site-packages\TTS\utils\audio\numpy_transforms.py", line 8, in <module>
from librosa import magphase, pyin
File "e:\text-generation-webui\installer_files\env\lib\site-packages\lazy_loader\__init__.py", line 78, in __getattr__ attr = getattr(submod, name)
File "e:\text-generation-webui\installer_files\env\lib\site-packages\lazy_loader\__init__.py", line 77, in __getattr__ submod = importlib.import_module(submod_path)
File "e:\text-generation-webui\installer_files\env\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "e:\text-generation-webui\installer_files\env\lib\site-packages\librosa\core\spectrum.py", line 12, in <module>
from numba import jit
File "e:\text-generation-webui\installer_files\env\lib\site-packages\numba\__init__.py", line 55, in <module>
_ensure_critical_deps()
File "e:\text-generation-webui\installer_files\env\lib\site-packages\numba\__init__.py", line 42, in _ensure_critical_deps
raise ImportError("Numba needs NumPy 1.24 or less")
ImportError: Numba needs NumPy 1.24 or less
Running on local URL: http://127.0.0.1:7861
```
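The last line of the log is the actual constraint (`Numba needs NumPy 1.24 or less`), so a likely workaround (untested here) is pinning NumPy inside the webui environment:
```shell
# run inside text-generation-webui's environment (e.g. via cmd_windows.bat)
pip install "numpy<1.25"
```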
### Environment
```shell
-
```
### Additional context
_No response_ | closed | 2024-02-28T13:23:23Z | 2025-01-03T09:48:00Z | https://github.com/coqui-ai/TTS/issues/3614 | [
"bug",
"wontfix"
] | kalle07 | 2 |
d2l-ai/d2l-en | computer-vision | 2,437 | Add MindSpore support. | Dear maintainers,
We are developing MindSpore adapted version of d2l-zh. However, it was found that the English version and the Chinese version were very different, could we just follow the newest version to develop? If possible, can you create a new repository like 'd2l-jax-colab'? | open | 2023-02-06T01:56:41Z | 2023-05-17T22:26:00Z | https://github.com/d2l-ai/d2l-en/issues/2437 | [
"question"
] | lvyufeng | 5 |
comfyanonymous/ComfyUI | pytorch | 6,313 | Keybind to close Workflow | ### Feature Idea
Dear ComfyUI Team,
I would like to request the addition of a keyboard shortcut to quickly close the current workspace (workflow). This feature would improve efficiency and user experience by providing a faster way to manage workflows without relying solely on the mouse.
Thank you for considering this suggestion!
Best regards,
### Existing Solutions
_No response_
### Other
_No response_ | closed | 2025-01-02T02:52:12Z | 2025-01-06T15:50:51Z | https://github.com/comfyanonymous/ComfyUI/issues/6313 | [
"Feature",
"Frontend"
] | eduvm | 0 |
vllm-project/vllm | pytorch | 14,911 | [Bug]: UserWarning on skipping serialisation of PostGradPassManager | ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.7
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 16 2024, 04:38:48) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5317 CPU @ 3.00GHz
CPU family: 6
Model: 106
Thread(s) per core: 1
Core(s) per socket: 12
Socket(s): 1
Stepping: 6
BogoMIPS: 6002.58
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush acpi mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves umip pku ospke gfni vaes vpclmulqdq rdpid md_clear flush_l1d arch_capabilities
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 576 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 15 MiB (12 instances)
L3 cache: 216 MiB (12 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.2.post1+cu124torch2.5
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.2.1
[pip3] sentence-transformers==3.2.1
[pip3] torch==2.6.0
[pip3] torchao==0.9.0
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.21.0
[pip3] transformers==4.49.0
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==3.2.0
[pip3] tritonclient==2.51.0
[pip3] vector-quantize-pytorch==1.21.2
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.3.dev758+g489b7938
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X 0-11 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
CUDA_PATH=/nix/store/skdw4l72lgrj628l0arj1m3ynlzfksi8-cuda-merged-12.4
LD_LIBRARY_PATH=/workspace/vllm/.venv/lib/python3.11/site-packages/cv2/../../lib64:/nix/store/lmyyfaz2amcs2an1f6m9h263151jiajy-cuda-merged-12.4/lib
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
</details>
### 🐛 Describe the bug
I have started to see this serialization warning from time to time when running `vllm serve` with v1:
```prolog
/workspace/project/.venv/lib/python3.12/site-packages/torch/utils/_config_module.py:189: UserWarning: Skipping serialization of post_grad_custom_post_pass value <vllm.compilation.pass_manager.PostGradPassManager object at 0x7f23314f7c20>
warnings.warn(f"Skipping serialization of {k} value {v}")
```
I'm not sure how impactful this is, or whether we can safely ignore it.
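If it does prove benign, one hedged stopgap is muting that specific warning at launch (the message text is copied from the log above, and `<model>` is a placeholder):
```shell
# stopgap only: ignore warnings whose message starts with this text
PYTHONWARNINGS="ignore:Skipping serialization" vllm serve <model>
```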
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-17T01:16:18Z | 2025-03-17T05:45:56Z | https://github.com/vllm-project/vllm/issues/14911 | [
"bug",
"v1"
] | aarnphm | 1 |
errbotio/errbot | automation | 937 | Hipchat is dropping XMPP, we need to port the backend fully to their API. | In the past it was not possible, but now it should be. If they drop XMPP, reevaluate and replace the XMPP calls with native API ones. | closed | 2016-12-29T13:52:17Z | 2019-01-03T22:22:34Z | https://github.com/errbotio/errbot/issues/937 | [] | gbin | 9 |
labmlai/annotated_deep_learning_paper_implementations | deep-learning | 263 | LORA | An implementation of LoRA and other fine-tuning techniques would be nice. | open | 2024-07-13T17:29:05Z | 2024-07-31T13:42:12Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/263 | [] | erlebach | 2 |
falconry/falcon | api | 2,387 | falcon.testing.* is not (re-)exported | Now that falcon 4 with type hints has been released 🎉 , we've enabled type checking using mypy as well for the falcon namespace for our codebase.
Unfortunately, `falcon.testing.__init__` does not reexport the names it imports, which causes complaints by mypy, for example:
```
tests/test_subscription_data.py:13: error: Name "falcon.testing.TestClient" is not defined [name-defined]
```
(PR will follow)
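For reference, the usual pattern that satisfies mypy's implicit-reexport check is redundant aliasing in the package `__init__`; a sketch (the exact source modules inside `falcon.testing` may differ):
```python
# falcon/testing/__init__.py (sketch, not the actual upstream file)
from falcon.testing.client import TestClient as TestClient
from falcon.testing.helpers import create_environ as create_environ
```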
| closed | 2024-10-24T11:41:23Z | 2024-11-06T18:44:22Z | https://github.com/falconry/falcon/issues/2387 | [
"bug",
"typing"
] | jap | 4 |
mwaskom/seaborn | matplotlib | 3,674 | errorbar wont be plotted if using 'col' | Using the following command:
sns.catplot(data=df, x=x, y=y, hue=hue, kind='bar', height=6, aspect=1)
**will display error bars**
sns.catplot(data=df, x=x, y=y, hue=hue, col=col, kind='bar', height=6, aspect=1) (the only change being **col=col**), or with some variations like adding **errorbar=('ci', 98)**,
**will not display error bars** | closed | 2024-04-10T06:15:57Z | 2024-04-11T16:51:54Z | https://github.com/mwaskom/seaborn/issues/3674 | [] | ohadOrbach | 1 |
Johnserf-Seed/TikTokDownload | api | 232 | [BUG] | **Describe the bug**
Could someone help check whether this is a Python version problem? Module not found.
**Bug reproduction**
Run build.bat

After adding --exclude-module _bootlocale to build.bat, the build succeeds

But it raises an error at runtime

**Desktop (please complete the following information):**
- OS: Windows 10 64-bit
- VPN proxy: off
- Python version: Python 3.10.8
- pyinstaller 3.6
**Additional context**
Add any other context about the problem here.
| closed | 2022-10-14T00:48:13Z | 2022-10-14T16:25:13Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/232 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | klook-tech-alvinsun | 2 |
GibbsConsulting/django-plotly-dash | plotly | 350 | Restrict dash version to be less than 1.21.0 | For current codebase, restrict dash version to address #349 | closed | 2021-08-23T13:04:41Z | 2021-08-23T23:16:44Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/350 | [
"enhancement"
] | GibbsConsulting | 1 |
miguelgrinberg/python-socketio | asyncio | 564 | Error during WebSocket handshake: Unexpected response code: 400 | I have a Django Socket.IO server and a React.js client.
When I run both client and server, I get the error WebSocket connection to 'ws://localhost:8000/socket.io/?EIO=3&transport=websocket' failed: Error during WebSocket handshake: Unexpected response code: 400 on the client side, and "GET /socket.io/?EIO=3&transport=websocket HTTP/1.1" 400 11 on the server side.
socket.io-client version: 2.3.0
django version: 3.1.3 | closed | 2020-11-12T08:51:53Z | 2020-11-12T11:16:16Z | https://github.com/miguelgrinberg/python-socketio/issues/564 | [
"question"
] | maneesha-reddy | 7 |
waditu/tushare | pandas | 917 | Is historical data available for options? | Is there hourly and minute-level intraday data for SSE 50 options?
Thanks | closed | 2019-02-08T14:55:38Z | 2019-02-10T04:14:13Z | https://github.com/waditu/tushare/issues/917 | [] | Jack012a | 1 |
plotly/plotly.py | plotly | 4,170 | Using a timestamp column for facet_col or facet_row gives a KeyError | Using a column which is a `datetime64[ns]` (e.g. what `pd.to_datetime()` or `pd.date_range()` returns) for `facet_col` or `facet_row` gives a key error.
```python
import pandas as pd
import numpy as np
import plotly.express as px
df = pd.DataFrame({'Cost': np.random.normal(size=20), 'Date': pd.date_range(start='2023-04-20', periods=20)})
px.histogram(df, x='Cost', facet_col='Date')
```
This gives `KeyError: Timestamp('2023-04-20 00:00:00')`, as does `facet_row='Date'`.
Converting the timestamp column to a string or a `datetime.date` (e.g. via `df['Date'].dt.date`) is a temporary workaround. | closed | 2023-04-20T20:21:08Z | 2023-04-21T18:42:27Z | https://github.com/plotly/plotly.py/issues/4170 | [] | trevor-pope | 1 |
MaartenGr/BERTopic | nlp | 1,717 | How to link data inside topic model to original training data without preprocessed? | I use BERTopic with Chinese text.
So I must split the text into tokens, add a space between every two tokens, and remove stopwords and some special symbols; then I feed the processed text into BERTopic.
The problem is: how can I link the documents back to the original, unpreprocessed training data? If I use the documents inside the model, the data are tokens with stopwords removed etc., which is weird.
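One index-alignment workaround sketch, since `fit_transform` preserves input order (helper names like `load_raw_texts` and `preprocess` are illustrative, and `get_document_info` is available in recent BERTopic versions):
```python
from bertopic import BERTopic

original_docs = load_raw_texts()                         # hypothetical loader
processed_docs = [preprocess(d) for d in original_docs]  # tokenize, join with spaces

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(processed_docs)

# topics[i] belongs to original_docs[i], so attach the raw texts by position
doc_info = topic_model.get_document_info(processed_docs)
doc_info["Original"] = original_docs
```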
Other than transforming every text again, or the index-mapping sketch above, does anybody have a better idea? | open | 2023-12-28T09:01:52Z | 2023-12-28T10:04:14Z | https://github.com/MaartenGr/BERTopic/issues/1717 | [] | wushixian | 1 |
apify/crawlee-python | automation | 630 | Add a custom URL type instead of rely on `httpx.URL` | - We use the `httpx.URL` in the [ProxyConfiguration](https://github.com/apify/crawlee-python/blob/master/src/crawlee/proxy_configuration.py), in the tests and maybe in other places as well.
- It kind of shot us into the foot in #618.
- ~I suggest replacing it with our custom data type (either the Pydantic model or data class).~
- As Honza said, we could probably utilize some 3rd party library, [yarl](https://pypi.org/project/yarl/) seems like a good option.
- Also please find other occurrences of the `httpx.URL` in the Crawlee and replace them with our custom type.
- This applies also to tests: `httpbin: URL`
- Keep in mind, that URLs in the `Request` model need to be serialized as strings. | closed | 2024-10-29T12:30:29Z | 2024-11-26T14:09:17Z | https://github.com/apify/crawlee-python/issues/630 | [
"t-tooling",
"debt"
] | vdusek | 6 |
ipython/ipython | jupyter | 14,825 | IPython 9 `logfile` causes crash | Running `ipython` with `--logfile` or `--logappend` causes a crash in `ipython>=9`
e.g. `ipython --logfile=log.txt`
This is failing due to the following error:
```py
File "/usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py", line 817, in init_logstart
self.magic('logstart %s' % self.logfile)
^^^^^^^^^^
AttributeError: 'TerminalInteractiveShell' object has no attribute 'magic'
```
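`InteractiveShell.magic()` was removed in IPython 9 in favour of `run_line_magic`, so a likely fix for `init_logstart` is a sketch along these lines (untested):
```py
def init_logstart(self):
    """Initialize logging in case it was requested at the command line."""
    if self.logappend:
        self.run_line_magic("logstart", "%s append" % self.logappend)
    elif self.logfile:
        self.run_line_magic("logstart", self.logfile)
    elif self.logstart:
        self.run_line_magic("logstart", "")
```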
---
Using docker for reproducibility
`docker pull python@sha256:385ccb8304f6330738a6d9e6fa0bd7608e006da7e15bc52b33b0398e1ba4a15b`
(digest matches current `latest` tag)
Installing `ipython==9.0.1` and running with `--logfile=log.txt` with verbose crash:
```sh
docker run --rm python@sha256:385ccb8304f6330738a6d9e6fa0bd7608e006da7e15bc52b33b0398e1ba4a15b \
sh -c \
'pip install -qq ipython
ipython --logfile=log.txt --BaseIPythonApplication.verbose_crash=True
cat /root/.ipython/Crash_report_ipython.txt'
```
<details>
```
---------------------------------------------------------------------------
---------------------------------------------------------------------------
AttributeError Python 3.13.2: /usr/local/bin/python3.13
Thu Mar 6 19:37:39 2025
A problem occurred executing Python code. Here is the sequence of function
calls leading up to the error, with the most recent (innermost) call last.
File /usr/local/bin/ipython:8
1 #!/usr/local/bin/python3.13
2 # -*- coding: utf-8 -*-
3 import re
4 import sys
5 from IPython import start_ipython
6 if __name__ == '__main__':
7 sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
----> 8 sys.exit(start_ipython())
File /usr/local/lib/python3.13/site-packages/IPython/__init__.py:139, in start_ipython(argv=None, **kwargs={})
113 def start_ipython(argv=None, **kwargs):
114 """Launch a normal IPython instance (as opposed to embedded)
115
116 `IPython.embed()` puts a shell in a particular calling scope,
(...) 136 allowing configuration of the instance (see :ref:`terminal_options`).
137 """
138 from IPython.terminal.ipapp import launch_new_instance
--> 139 return launch_new_instance(argv=argv, **kwargs)
launch_new_instance = <bound method Application.launch_instance of <class 'IPython.terminal.ipapp.TerminalIPythonApp'>>
argv = None
kwargs = {}
File /usr/local/lib/python3.13/site-packages/traitlets/config/application.py:1074, in Application.launch_instance(cls=<class 'IPython.terminal.ipapp.TerminalIPythonApp'>, argv=None, **kwargs={})
1067 @classmethod
1068 def launch_instance(cls, argv: ArgvType = None, **kwargs: t.Any) -> None:
1069 """Launch a global instance of this Application
1070
1071 If a global instance already exists, this reinitializes and starts it
1072 """
1073 app = cls.instance(**kwargs)
-> 1074 app.initialize(argv)
app = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>
argv = None 1075 app.start()
File /usr/local/lib/python3.13/site-packages/traitlets/config/application.py:118, in catch_config_error.<locals>.inner(app=<IPython.terminal.ipapp.TerminalIPythonApp object>, *args=(None,), **kwargs={})
115 @functools.wraps(method)
116 def inner(app: Application, *args: t.Any, **kwargs: t.Any) -> t.Any:
117 try:
--> 118 return method(app, *args, **kwargs)
method = <function TerminalIPythonApp.initialize at 0xffffa318f1a0>
app = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>
args = (None,)
kwargs = {} 119 except (TraitError, ArgumentError) as e:
120 app.log.fatal("Bad config encountered during initialization: %s", e)
121 app.log.debug("Config at the time: %s", app.config)
122 app.exit(1)
File /usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py:286, in TerminalIPythonApp.initialize(self=<IPython.terminal.ipapp.TerminalIPythonApp object>, argv=None)
274 @catch_config_error
275 def initialize(self, argv=None):
276 """Do actions after construct, but before starting the app."""
277 super(TerminalIPythonApp, self).initialize(argv)
278 if self.subapp is not None:
279 # don't bother initializing further, starting subapp
280 return
281 # print(self.extra_args)
282 if self.extra_args and not self.something_to_run:
283 self.file_to_run = self.extra_args[0]
284 self.init_path()
285 # create the shell
--> 286 self.init_shell()
self = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620> 287 # and draw the banner
288 self.init_banner()
289 # Now a variety of things that happen after the banner is printed.
290 self.init_gui_pylab()
291 self.init_extensions()
292 self.init_code()
File /usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py:300, in TerminalIPythonApp.init_shell(self=<IPython.terminal.ipapp.TerminalIPythonApp object>)
294 def init_shell(self):
295 """initialize the InteractiveShell instance"""
296 # Create an InteractiveShell instance.
297 # shell.display_banner should always be False for the terminal
298 # based app, because we call shell.show_banner() by hand below
299 # so the banner shows *before* all extension loading stuff.
--> 300 self.shell = self.interactive_shell_class.instance(parent=self,
self = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620> 301 profile_dir=self.profile_dir,
302 ipython_dir=self.ipython_dir, user_ns=self.user_ns)
303 self.shell.configurables.append(self)
File /usr/local/lib/python3.13/site-packages/traitlets/config/configurable.py:583, in SingletonConfigurable.instance(cls=<class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'>, *args=(), **kwargs={'ipython_dir': '/root/.ipython', 'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>, 'profile_dir': <IPython.core.profiledir.ProfileDir object>, 'user_ns': None})
553 @classmethod
554 def instance(cls: type[CT], *args: t.Any, **kwargs: t.Any) -> CT:
555 """Returns a global instance of this class.
556
557 This method create a new instance if none have previously been created
(...) 579 True
580 """
581 # Create and save the instance
582 if cls._instance is None:
--> 583 inst = cls(*args, **kwargs)
cls = <class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'>
args = ()
kwargs = {'parent': <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>, 'profile_dir': <IPython.core.profiledir.ProfileDir object at 0xffffa354bcb0>, 'ipython_dir': '/root/.ipython', 'user_ns': None} 584 # Now make sure that the instance will also be returned by
585 # parent classes' _instance attribute.
586 for subclass in cls._walk_mro():
587 subclass._instance = inst
589 if isinstance(cls._instance, cls):
590 return cls._instance
591 else:
592 raise MultipleInstanceError(
593 f"An incompatible sibling of '{cls.__name__}' is already instantiated"
594 f" as singleton: {type(cls._instance).__name__}"
595 )
File /usr/local/lib/python3.13/site-packages/IPython/terminal/interactiveshell.py:977, in TerminalInteractiveShell.__init__(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>, *args=(), **kwargs={'ipython_dir': '/root/.ipython', 'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>, 'profile_dir': <IPython.core.profiledir.ProfileDir object>, 'user_ns': None})
976 def __init__(self, *args, **kwargs) -> None:
--> 977 super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590>
args = ()
kwargs = {'parent': <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>, 'profile_dir': <IPython.core.profiledir.ProfileDir object at 0xffffa354bcb0>, 'ipython_dir': '/root/.ipython', 'user_ns': None}
TerminalInteractiveShell = <class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'> 978 self._set_autosuggestions(self.autosuggestions_provider)
979 self.init_prompt_toolkit_cli()
980 self.init_term_title()
981 self.keep_running = True
982 self._set_formatter(self.autoformatter)
File /usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py:650, in InteractiveShell.__init__(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>, ipython_dir='/root/.ipython', profile_dir=<IPython.core.profiledir.ProfileDir object>, user_module=None, user_ns=None, custom_exceptions=((), None), **kwargs={'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>})
632 self.init_logger()
633 self.init_builtins()
635 # The following was in post_config_initialization
636 self.raw_input_original = input
637 self.init_completer()
638 # TODO: init_io() needs to happen before init_traceback handlers
639 # because the traceback handlers hardcode the stdout/stderr streams.
640 # This logic in in debugger.Pdb and should eventually be changed.
641 self.init_io()
642 self.init_traceback_handlers(custom_exceptions)
643 self.init_prompts()
644 self.init_display_formatter()
645 self.init_display_pub()
646 self.init_data_pub()
647 self.init_displayhook()
648 self.init_magics()
649 self.init_alias()
--> 650 self.init_logstart()
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590> 651 self.init_pdb()
652 self.init_extension_manager()
653 self.init_payload()
654 self.events.trigger('shell_initialized', self)
655 atexit.register(self.atexit_operations)
657 # The trio runner is used for running Trio in the foreground thread. It
658 # is different from `_trio_runner(async_fn)` in `async_helpers.py`
659 # which calls `trio.run()` for every cell. This runner runs all cells
660 # inside a single Trio event loop. If used, it is set from
661 # `ipykernel.kernelapp`.
662 self.trio_runner = None
File /usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py:817, in InteractiveShell.init_logstart(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>)
811 def init_logstart(self):
812 """Initialize logging in case it was requested at the command line.
813 """
814 if self.logappend:
815 self.magic('logstart %s append' % self.logappend)
816 elif self.logfile:
--> 817 self.magic('logstart %s' % self.logfile)
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590> 818 elif self.logstart:
819 self.magic('logstart')
AttributeError: 'TerminalInteractiveShell' object has no attribute 'magic'
**********************************************************************
Oops, ipython crashed. We do our best to make it stable, but...
A crash report was automatically generated with the following information:
- A verbatim copy of the crash traceback.
- A copy of your input history during this session.
- Data on your current ipython configuration.
It was left in the file named:
'/root/.ipython/Crash_report_ipython.txt'
If you can email this file to the developers, the information in it will help
them in understanding and correcting the problem.
You can mail it to: The IPython Development Team at ipython-dev@python.org
with the subject 'ipython Crash Report'.
If you want to do it now, the following command will work (under Unix):
mail -s 'ipython Crash Report' ipython-dev@python.org < /root/.ipython/Crash_report_ipython.txt
In your email, please also include information about:
- The operating system under which the crash happened: Linux, macOS, Windows,
other, and which exact version (for example: Ubuntu 16.04.3, macOS 10.13.2,
Windows 10 Pro), and whether it is 32-bit or 64-bit;
- How ipython was installed: using pip or conda, from GitHub, as part of
a Docker container, or other, providing more detail if possible;
- How to reproduce the crash: what exact sequence of instructions can one
input to get the same crash? Ideally, find a minimal yet complete sequence
of instructions that yields the crash.
To ensure accurate tracking of this issue, please file a report about it at:
https://github.com/ipython/ipython/issues
Hit <Enter> to quit (your terminal may close):Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/IPython/core/application.py", line 288, in excepthook
return self.crash_handler(etype, evalue, tb)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/IPython/core/crashhandler.py", line 206, in __call__
builtin_mod.input("Hit <Enter> to quit (your terminal may close):")
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
EOFError: EOF when reading a line
Original exception was:
Traceback (most recent call last):
File "/usr/local/bin/ipython", line 8, in <module>
sys.exit(start_ipython())
~~~~~~~~~~~~~^^
File "/usr/local/lib/python3.13/site-packages/IPython/__init__.py", line 139, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/usr/local/lib/python3.13/site-packages/traitlets/config/application.py", line 1074, in launch_instance
app.initialize(argv)
~~~~~~~~~~~~~~^^^^^^
File "/usr/local/lib/python3.13/site-packages/traitlets/config/application.py", line 118, in inner
return method(app, *args, **kwargs)
File "/usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py", line 286, in initialize
self.init_shell()
~~~~~~~~~~~~~~~^^
File "/usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py", line 300, in init_shell
self.shell = self.interactive_shell_class.instance(parent=self,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
profile_dir=self.profile_dir,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ipython_dir=self.ipython_dir, user_ns=self.user_ns)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/traitlets/config/configurable.py", line 583, in instance
inst = cls(*args, **kwargs)
File "/usr/local/lib/python3.13/site-packages/IPython/terminal/interactiveshell.py", line 977, in __init__
super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py", line 650, in __init__
self.init_logstart()
~~~~~~~~~~~~~~~~~~^^
File "/usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py", line 817, in init_logstart
self.magic('logstart %s' % self.logfile)
^^^^^^^^^^
AttributeError: 'TerminalInteractiveShell' object has no attribute 'magic'
***************************************************************************
IPython post-mortem report
{'commit_hash': 'd64897cf0',
'commit_source': 'installation',
'default_encoding': 'utf-8',
'ipython_path': '/usr/local/lib/python3.13/site-packages/IPython',
'ipython_version': '9.0.1',
'os_name': 'posix',
'platform': 'Linux-6.10.14-linuxkit-aarch64-with-glibc2.36',
'sys_executable': '/usr/local/bin/python3.13',
'sys_platform': 'linux',
'sys_version': '3.13.2 (main, Feb 25 2025, 21:31:02) [GCC 12.2.0]'}
***************************************************************************
Application name: ipython
Current user configuration structure:
{'BaseIPythonApplication': {'verbose_crash': True},
'TerminalInteractiveShell': {'logfile': 'log.txt'}}
***************************************************************************
Crash traceback:
---------------------------------------------------------------------------
---------------------------------------------------------------------------
AttributeError Python 3.13.2: /usr/local/bin/python3.13
Thu Mar 6 19:37:39 2025
A problem occurred executing Python code. Here is the sequence of function
calls leading up to the error, with the most recent (innermost) call last.
File /usr/local/bin/ipython:8
1 #!/usr/local/bin/python3.13
2 # -*- coding: utf-8 -*-
3 import re
4 import sys
5 from IPython import start_ipython
6 if __name__ == '__main__':
7 sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
----> 8 sys.exit(start_ipython())
File /usr/local/lib/python3.13/site-packages/IPython/__init__.py:139, in start_ipython(argv=None, **kwargs={})
113 def start_ipython(argv=None, **kwargs):
114 """Launch a normal IPython instance (as opposed to embedded)
115
116 `IPython.embed()` puts a shell in a particular calling scope,
(...) 136 allowing configuration of the instance (see :ref:`terminal_options`).
137 """
138 from IPython.terminal.ipapp import launch_new_instance
--> 139 return launch_new_instance(argv=argv, **kwargs)
launch_new_instance = <bound method Application.launch_instance of <class 'IPython.terminal.ipapp.TerminalIPythonApp'>>
argv = None
kwargs = {}
File /usr/local/lib/python3.13/site-packages/traitlets/config/application.py:1074, in Application.launch_instance(cls=<class 'IPython.terminal.ipapp.TerminalIPythonApp'>, argv=None, **kwargs={})
1067 @classmethod
1068 def launch_instance(cls, argv: ArgvType = None, **kwargs: t.Any) -> None:
1069 """Launch a global instance of this Application
1070
1071 If a global instance already exists, this reinitializes and starts it
1072 """
1073 app = cls.instance(**kwargs)
-> 1074 app.initialize(argv)
app = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>
argv = None 1075 app.start()
File /usr/local/lib/python3.13/site-packages/traitlets/config/application.py:118, in catch_config_error.<locals>.inner(app=<IPython.terminal.ipapp.TerminalIPythonApp object>, *args=(None,), **kwargs={})
115 @functools.wraps(method)
116 def inner(app: Application, *args: t.Any, **kwargs: t.Any) -> t.Any:
117 try:
--> 118 return method(app, *args, **kwargs)
method = <function TerminalIPythonApp.initialize at 0xffffa318f1a0>
app = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>
args = (None,)
kwargs = {} 119 except (TraitError, ArgumentError) as e:
120 app.log.fatal("Bad config encountered during initialization: %s", e)
121 app.log.debug("Config at the time: %s", app.config)
122 app.exit(1)
File /usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py:286, in TerminalIPythonApp.initialize(self=<IPython.terminal.ipapp.TerminalIPythonApp object>, argv=None)
274 @catch_config_error
275 def initialize(self, argv=None):
276 """Do actions after construct, but before starting the app."""
277 super(TerminalIPythonApp, self).initialize(argv)
278 if self.subapp is not None:
279 # don't bother initializing further, starting subapp
280 return
281 # print(self.extra_args)
282 if self.extra_args and not self.something_to_run:
283 self.file_to_run = self.extra_args[0]
284 self.init_path()
285 # create the shell
--> 286 self.init_shell()
self = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620> 287 # and draw the banner
288 self.init_banner()
289 # Now a variety of things that happen after the banner is printed.
290 self.init_gui_pylab()
291 self.init_extensions()
292 self.init_code()
File /usr/local/lib/python3.13/site-packages/IPython/terminal/ipapp.py:300, in TerminalIPythonApp.init_shell(self=<IPython.terminal.ipapp.TerminalIPythonApp object>)
294 def init_shell(self):
295 """initialize the InteractiveShell instance"""
296 # Create an InteractiveShell instance.
297 # shell.display_banner should always be False for the terminal
298 # based app, because we call shell.show_banner() by hand below
299 # so the banner shows *before* all extension loading stuff.
--> 300 self.shell = self.interactive_shell_class.instance(parent=self,
self = <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620> 301 profile_dir=self.profile_dir,
302 ipython_dir=self.ipython_dir, user_ns=self.user_ns)
303 self.shell.configurables.append(self)
File /usr/local/lib/python3.13/site-packages/traitlets/config/configurable.py:583, in SingletonConfigurable.instance(cls=<class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'>, *args=(), **kwargs={'ipython_dir': '/root/.ipython', 'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>, 'profile_dir': <IPython.core.profiledir.ProfileDir object>, 'user_ns': None})
553 @classmethod
554 def instance(cls: type[CT], *args: t.Any, **kwargs: t.Any) -> CT:
555 """Returns a global instance of this class.
556
557 This method create a new instance if none have previously been created
(...) 579 True
580 """
581 # Create and save the instance
582 if cls._instance is None:
--> 583 inst = cls(*args, **kwargs)
cls = <class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'>
args = ()
kwargs = {'parent': <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>, 'profile_dir': <IPython.core.profiledir.ProfileDir object at 0xffffa354bcb0>, 'ipython_dir': '/root/.ipython', 'user_ns': None} 584 # Now make sure that the instance will also be returned by
585 # parent classes' _instance attribute.
586 for subclass in cls._walk_mro():
587 subclass._instance = inst
589 if isinstance(cls._instance, cls):
590 return cls._instance
591 else:
592 raise MultipleInstanceError(
593 f"An incompatible sibling of '{cls.__name__}' is already instantiated"
594 f" as singleton: {type(cls._instance).__name__}"
595 )
File /usr/local/lib/python3.13/site-packages/IPython/terminal/interactiveshell.py:977, in TerminalInteractiveShell.__init__(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>, *args=(), **kwargs={'ipython_dir': '/root/.ipython', 'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>, 'profile_dir': <IPython.core.profiledir.ProfileDir object>, 'user_ns': None})
976 def __init__(self, *args, **kwargs) -> None:
--> 977 super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590>
args = ()
kwargs = {'parent': <IPython.terminal.ipapp.TerminalIPythonApp object at 0xffffa354b620>, 'profile_dir': <IPython.core.profiledir.ProfileDir object at 0xffffa354bcb0>, 'ipython_dir': '/root/.ipython', 'user_ns': None}
TerminalInteractiveShell = <class 'IPython.terminal.interactiveshell.TerminalInteractiveShell'> 978 self._set_autosuggestions(self.autosuggestions_provider)
979 self.init_prompt_toolkit_cli()
980 self.init_term_title()
981 self.keep_running = True
982 self._set_formatter(self.autoformatter)
File /usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py:650, in InteractiveShell.__init__(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>, ipython_dir='/root/.ipython', profile_dir=<IPython.core.profiledir.ProfileDir object>, user_module=None, user_ns=None, custom_exceptions=((), None), **kwargs={'parent': <IPython.terminal.ipapp.TerminalIPythonApp object>})
632 self.init_logger()
633 self.init_builtins()
635 # The following was in post_config_initialization
636 self.raw_input_original = input
637 self.init_completer()
638 # TODO: init_io() needs to happen before init_traceback handlers
639 # because the traceback handlers hardcode the stdout/stderr streams.
640 # This logic in in debugger.Pdb and should eventually be changed.
641 self.init_io()
642 self.init_traceback_handlers(custom_exceptions)
643 self.init_prompts()
644 self.init_display_formatter()
645 self.init_display_pub()
646 self.init_data_pub()
647 self.init_displayhook()
648 self.init_magics()
649 self.init_alias()
--> 650 self.init_logstart()
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590> 651 self.init_pdb()
652 self.init_extension_manager()
653 self.init_payload()
654 self.events.trigger('shell_initialized', self)
655 atexit.register(self.atexit_operations)
657 # The trio runner is used for running Trio in the foreground thread. It
658 # is different from `_trio_runner(async_fn)` in `async_helpers.py`
659 # which calls `trio.run()` for every cell. This runner runs all cells
660 # inside a single Trio event loop. If used, it is set from
661 # `ipykernel.kernelapp`.
662 self.trio_runner = None
File /usr/local/lib/python3.13/site-packages/IPython/core/interactiveshell.py:817, in InteractiveShell.init_logstart(self=<IPython.terminal.interactiveshell.TerminalInteractiveShell object>)
811 def init_logstart(self):
812 """Initialize logging in case it was requested at the command line.
813 """
814 if self.logappend:
815 self.magic('logstart %s append' % self.logappend)
816 elif self.logfile:
--> 817 self.magic('logstart %s' % self.logfile)
self = <IPython.terminal.interactiveshell.TerminalInteractiveShell object at 0xffffa31bc590> 818 elif self.logstart:
819 self.magic('logstart')
AttributeError: 'TerminalInteractiveShell' object has no attribute 'magic'
***************************************************************************
History of session input:
```
</details> | closed | 2025-03-06T19:42:50Z | 2025-03-08T13:10:14Z | https://github.com/ipython/ipython/issues/14825 | [] | adavis444 | 2 |
smarie/python-pytest-cases | pytest | 339 | parametrize_with_cases: map case variables by name rather than position (dict vs list) | Is it possible to use `parametrize_with_cases` so that the parameters of the decorated function are mapped from a dictionary of case data, rather than a list of case data? I'm using the `cases` parameter to generate the case data.
Below is an example that doesn't work (but wish it did):
```python
import pytest_cases
def _get_cases():
'''
Return a list of test cases to ensure that the addition operator works as expected
'''
return [
# test case 1
{
"value1": 1,
"value2": 2,
"expected": 3,
},
# test case 2
{
"value1": 10,
"value2": 20,
"expected": 30,
},
]
@pytest_cases.parametrize(case=_get_cases())
def get_cases(case):
return case
@pytest_cases.parametrize_with_cases('value1, value2, expected', cases=get_cases)
def test_addition(value1, value2, expected):
assert value1 + value2 == expected
```
Resulting errors...
```
FAILED debug_pytest_cases.py::test_addition[get_cases-case={'value1': 1, 'value2': 2, 'expected': 3}] - Exception: Unable to unpack parameter value to a tuple: dict_values([1, 2, 3])
FAILED debug_pytest_cases.py::test_addition[get_cases-case={'value1': 10, 'value2': 20, 'expected': 30}] - Exception: Unable to unpack parameter value to a tuple: dict_values([10, 20, 30])
```
This can be "fixed", by having each case be a list (rather than a dict)....
```python
@pytest_cases.parametrize(case=_get_cases())
def get_cases(case):
return case.values() # return the case as list of values only, e.g. [1,2,3]
```
... but obviously we lose the name mapping, resulting in the case variables getting improperly mapped into the test function...
```
========================================= FAILURES =========================================
__________ test_addition[get_cases-case={'expected': 3, 'value2': 2, 'value1': 1}] __________
value1 = 3, value2 = 2, expected = 1
@pytest_cases.parametrize_with_cases('value1, value2, expected', cases=get_cases)
def test_addition(value1, value2, expected):
> assert value1 + value2 == expected
E assert (3 + 2) == 1
```
So instead of simply converting the case data into a list of values ( via `case.values()`), we could convert it to a list of tuples via `case.items()` , e.g. `[("expected": 3), ("value1": 1), ("value2": 2)]`
But now we need to unpack these values in our test function...
```python
@pytest_cases.parametrize_with_cases('case_data', cases=get_cases)
def test_addition(case_data):
# convert case_data back into it's original dictionary form
case_data = dict(case_data)
# unpack variables from dict. brittle and ugly :(
value1, value2, expected = [case_data[k] for k in ['value1', 'value2', 'expected']]
assert value1 + value2 == expected
```
This certainly works (I've been using it for a couple years), but it has a few drawbacks:
- Every test function will always need this same boilerplate code (unpacking the case data into discrete variables)
- The unpacking logic is brittle. It requires parameters on both sides of `=` to mirror one another (including their order).
In conclusion, I think it could be notably simpler/convenient to map case data (a dictionary of values) directly to the function parameters (by name) so that no unpacking is necessary (as shown in the top/original example).
I realize that there's certainly some caveats with this approach (e.g. parameter name clashes, illegal parameter names, etc), but I imagine this wouldn't be the first time such issues/compromises needed to be considered.
... OR perhaps this functionality already exists and I just need to RTFM :)
Thank you! | closed | 2024-04-02T20:13:17Z | 2024-04-05T11:44:21Z | https://github.com/smarie/python-pytest-cases/issues/339 | [] | lawschlosser | 3 |
robinhood/faust | asyncio | 86 | Document table callbacks and configs | ## Checklist
- [ ] on_changelog_event
- [ ] on_recover
| closed | 2018-03-12T23:03:01Z | 2018-07-31T14:39:15Z | https://github.com/robinhood/faust/issues/86 | [
"Issue Type: Documentation"
] | vineetgoel | 0 |
kynan/nbstripout | jupyter | 78 | Option to keep output style | It would be nice to have an option to keep the way output is displayed (standard, scrolled, or collapsed), since it is more a part of formatting than of outputs (I enable scrolling when I know the output will be large but still want to display it). | closed | 2018-04-24T13:11:46Z | 2024-02-04T07:06:47Z | https://github.com/kynan/nbstripout/issues/78 | [
"type:enhancement",
"help wanted"
] | melsophos | 7 |
donnemartin/data-science-ipython-notebooks | data-science | 109 | Data science | open | 2024-07-23T08:49:29Z | 2025-01-11T04:53:21Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/109 | [] | yacoubadiomande | 1 |
|
mljar/mercury | data-visualization | 146 | how to deploy mercury to k8s cluster? | closed | 2022-07-26T11:36:59Z | 2022-07-27T09:53:15Z | https://github.com/mljar/mercury/issues/146 | [] | markqiu | 3 |
|
biolab/orange3 | pandas | 6,588 | orange associate can't be loaded | **What's wrong?**
I have added an add-on called Associate.
Although all seems well after installation (meaning that all the necessary source files seem in place),
the log shows a problem loading Associate and some other things:
```
2023-09-29 01:29:44,099:INFO:orangecanvas.registry.discovery: Could not import 'orangecontrib.associate.widgets.owassociate'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/associate/widgets/owassociate.py", line 9, in <module>
from AnyQt.QtWidgets import QTableView, qApp, QGridLayout, QLabel
ImportError: cannot import name 'qApp' from 'AnyQt.QtWidgets' (/home/frankc/orange/lib/python3.11/site-packages/AnyQt/QtWidgets.py)
2023-09-29 01:29:44,100:INFO:orangecanvas.registry.discovery: Could not import 'orangecontrib.associate.widgets.owitemsets'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/associate/widgets/owitemsets.py", line 9, in <module>
from AnyQt.QtWidgets import QTreeWidget, QTreeWidgetItem, qApp
ImportError: cannot import name 'qApp' from 'AnyQt.QtWidgets' (/home/frankc/orange/lib/python3.11/site-packages/AnyQt/QtWidgets.py)
2023-09-29 01:29:44,144:INFO:orangecanvas.registry.discovery: Could not import 'orangecontrib.educational.widgets.ow1ka'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/educational/widgets/ow1ka.py", line 24, in <module>
from Orange.widgets.utils.webview import wait
ImportError: cannot import name 'wait' from 'Orange.widgets.utils.webview' (/home/frankc/orange/lib/python3.11/site-packages/Orange/widgets/utils/webview.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/educational/widgets/ow1ka.py", line 27, in <module>
from AnyQt.QtWidgets import qApp
ImportError: cannot import name 'qApp' from 'AnyQt.QtWidgets' (/home/frankc/orange/lib/python3.11/site-packages/AnyQt/QtWidgets.py)
2023-09-29 01:29:44,146:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/explain/widgets/owexplainfeaturebase.py'.
2023-09-29 01:29:44,150:WARNING:root: No module named 'tempeh': LawSchoolGPADataset will be unavailable. To install, run:
pip install 'aif360[LawSchoolGPA]'
2023-09-29 01:29:45,819:WARNING:root: No module named 'fairlearn': ExponentiatedGradientReduction will be unavailable. To install, run:
pip install 'aif360[Reductions]'
2023-09-29 01:29:45,820:WARNING:root: No module named 'fairlearn': GridSearchReduction will be unavailable. To install, run:
pip install 'aif360[Reductions]'
2023-09-29 01:29:45,820:WARNING:root: No module named 'fairlearn': GridSearchReduction will be unavailable. To install, run:
pip install 'aif360[Reductions]'
2023-09-29 01:29:45,841:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/fairness/widgets/utils.py'.
2023-09-29 01:29:45,842:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/geo/widgets/plotutils.py'.
2023-09-29 01:29:45,891:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/survival_analysis/widgets/data.py'.
2023-09-29 01:29:46,036:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableCategory'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableCategory.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,037:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableContext'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableContext.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,038:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableConvert'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableConvert.py", line 27, in <module>
from PyQt5.QtWidgets import QMessageBox, QApplication, QFileDialog
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,039:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableCooccurrence'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableCooccurrence.py", line 30, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,040:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableCount'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableCount.py", line 28, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,041:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableDisplay'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableDisplay.py", line 28, in <module>
from PyQt5.QtWidgets import QTextBrowser, QFileDialog, QMessageBox, QApplication
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,042:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableExtractXML'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableExtractXML.py", line 25, in <module>
from PyQt5.QtGui import QFont
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,044:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableInterchange'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableInterchange.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,045:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableIntersect'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableIntersect.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,046:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableLength'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableLength.py", line 28, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,047:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableMerge'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableMerge.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,048:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableMessage'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableMessage.py", line 26, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,049:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextablePreprocess'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextablePreprocess.py", line 27, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,050:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableRecode'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableRecode.py", line 25, in <module>
from PyQt5.QtGui import QFont
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,051:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableSegment'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableSegment.py", line 25, in <module>
from PyQt5.QtGui import QFont
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,052:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableSelect'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableSelect.py", line 31, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,053:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableTextField'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableTextField.py", line 28, in <module>
from PyQt5.QtWidgets import QPlainTextEdit
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,053:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableTextFiles'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableTextFiles.py", line 30, in <module>
from PyQt5.QtCore import QTimer
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,054:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableTreetagger'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableTreetagger.py", line 32, in <module>
from PyQt5.QtWidgets import QFileDialog, QMessageBox
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,055:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableURLs'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableURLs.py", line 31, in <module>
from PyQt5.QtCore import QTimer
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,056:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.OWTextableVariety'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/OWTextableVariety.py", line 28, in <module>
from .TextableUtils import (
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,058:INFO:orangecanvas.registry.discovery: Could not import '_textable.widgets.TextableUtils'.
Traceback (most recent call last):
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 309, in iter_widget_descriptions
module = asmodule(name)
^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/registry/discovery.py", line 552, in asmodule
return __import__(module, fromlist=[""])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frankc/orange/lib/python3.11/site-packages/_textable/widgets/TextableUtils.py", line 909, in <module>
from PyQt5.QtCore import QTimer, QEventLoop
ModuleNotFoundError: No module named 'PyQt5'
2023-09-29 01:29:46,328:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/timeseries/widgets/_owmodel.py'.
2023-09-29 01:29:46,328:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/timeseries/widgets/_rangeslider.py'.
2023-09-29 01:29:46,328:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/timeseries/widgets/owperiodbase.py'.
2023-09-29 01:29:46,329:INFO:orangecanvas.registry.discovery: Ignoring '/home/frankc/orange/lib/python3.11/site-packages/orangecontrib/timeseries/widgets/utils.py'.
2023-09-29 01:29:46,334:INFO:orangecanvas.main: Adding search path '/home/frankc/orange/lib/python3.11/site-packages/orangecanvas/styles/orange' for prefix, 'canvas_icons'
```
The log repeatedly shows that `qApp` can't be imported.
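For what it's worth, a hedged sketch of the compatibility fix suggested by the failing import (`qApp` was dropped in Qt6-based bindings, which AnyQt tracks; this is an assumption about the cause, not a confirmed fix for the add-on):
```python
# instead of the removed module-level qApp ...
from AnyQt.QtWidgets import QApplication

app = QApplication.instance()  # the instance accessor still works on Qt5 and Qt6
```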
**How can we reproduce the problem?**
<!-- Upload a zip with the .ows file and data. -->
<!-- Describe the steps (open this widget, click there, then add this...) -->
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system: Manjaro
- Orange version: 3.36.1
- How you installed Orange: through pip in a virtual environment.
| closed | 2023-09-28T23:38:22Z | 2023-10-03T10:34:30Z | https://github.com/biolab/orange3/issues/6588 | [
"bug"
] | frankclaessen | 9 |
jupyter-incubator/sparkmagic | jupyter | 254 | User might think that HiveContext creation is taking longer than it is | Last cell output says "Creating HiveContext as 'sqlContext'" but does not tell the user when the context has been created and their code is now running.
| closed | 2016-06-14T23:40:23Z | 2016-06-22T22:57:42Z | https://github.com/jupyter-incubator/sparkmagic/issues/254 | [
"kind:bug"
] | aggFTW | 1 |
unionai-oss/pandera | pandas | 1,781 | Failed Index Uniqueness Validation for Dask Series | **Describe the bug**
A clear and concise description of what the bug is.
- [X ] I have checked that this issue has not already been reported.
- [X ] I have confirmed this bug exists on the latest version of pandera.
- [X ] (optional) I have confirmed this bug exists on the main branch of pandera.
#### Code Sample
```python
import pandera as pa
import dask.dataframe as dd
import pandas as pd
example_schema = pa.SeriesSchema(
float,
index=pa.Index(pa.dtypes.Timestamp, unique=True),
nullable=False,
unique=False,
)
example_series = dd.from_pandas(
pd.Series([1.333, 2.22, 3.333, 4.311, 5.222], index=[1, 2, 3, 5, 5]),
npartitions=1,
)
# Skips validation despite having a duplicate
example_schema.validate(example_series).compute()
# Validates correctly and throws an exception.
# Note that when we execute .compute(), we are dealing with a Pandas Series
example_schema.validate(example_series.compute())
```
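For reference, a hedged sketch of the DataFrame workaround mentioned under Additional Information below (the column name `value` is assumed, not from the report):
```python
# same data, validated as a one-column DataFrame instead of a Series
df_schema = pa.DataFrameSchema(
    columns={"value": pa.Column(float, nullable=False)},
    index=pa.Index(pa.dtypes.Timestamp, unique=True),
)
example_ddf = example_series.to_frame(name="value")
# with the DataFrame path, the duplicate index raises a SchemaError as expected
df_schema.validate(example_ddf).compute()
```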
#### Expected behavior
pandera should be able to validate a Dask Series directly and throw an exception if it fails the validation.
#### Desktop (please complete the following information):
- OS: Windows 10
- Browser: Chrome
- Version: pandera 0.20.3
### Additional Information
I have validated that this issue only occurs with a Dask Series. If we convert the Dask Series into a DataFrame, pandera throws an exception:
> SchemaError: series 'None' contains duplicate values:
> 3 5
> 4 5
> dtype: int64 | open | 2024-08-12T16:30:51Z | 2024-08-12T16:34:48Z | https://github.com/unionai-oss/pandera/issues/1781 | [
"bug"
] | vladjohnson | 1 |
numba/numba | numpy | 9,817 | debuginfo output doesn't seem to work with gdb v15 | This is true across Numba releases; I tried 0.58 through 0.60. I am setting `NUMBA_DEBUGINFO=1` for profiling (https://github.com/pythonspeed/profila/).
1. On Ubuntu 24.04, gdb v15 doesn't see the symbols generated. lldb _does_ see the symbols, so they are being generated.
2. On Ubuntu 22.04, gdb v12 does see the symbols.
So something about how NUMBA_DEBUGINFO works doesn't work with the latest gdb when running on Ubuntu 24.04.
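A minimal way to reproduce (a sketch; the profila setup is not required):
```python
# run with NUMBA_DEBUGINFO=1 set, then attach gdb/lldb and look for symbols
import numba

@numba.njit(debug=True)  # debug=True also forces debug info for this function
def add(a, b):
    return a + b

add(1, 2)
input("attach a debugger now and check whether symbols for 'add' resolve")
```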
It's possible this is a gdb bug, of course, but if, say, gdb got a little stricter, it seems easier to fix in Numba than to get new gdb bug fixes into stable LTS Linux distros... | open | 2024-11-27T20:55:51Z | 2025-01-28T13:44:49Z | https://github.com/numba/numba/issues/9817 | [
"no action required"
] | itamarst | 16 |
Gerapy/Gerapy | django | 137 | Error on deploy when using configparser to read an .ini config file | Using configparser in the Scrapy settings file to read an .ini config file:
```
import os
from configparser import RawConfigParser

config = RawConfigParser()
config_file = os.getcwd() + '/config.ini'
config.read(config_file)
env = config['env']['env']
```
An error is raised when deploying in Gerapy; the error message is as follows:
```
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
File "/tmp/douyin-1581320692-ek4kqmnm.egg/douyin/settings.py", line 38, in <module>
File "/root/.pyenv/versions/3.6.7/lib/python3.6/configparser.py", line 959, in __getitem__
raise KeyError(key)
KeyError: 'env'
```
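A hedged guess at the cause: once the project is packed into an egg and run by scrapyd, `os.getcwd()` no longer points at the project directory, so `config.read()` silently reads nothing and the `env` section is missing. A sketch that fails loudly instead (paths assumed; a zipped egg may additionally need pkgutil to load data files):
```python
import os
from configparser import RawConfigParser

config = RawConfigParser()
# resolve relative to settings.py rather than the process working directory
config_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'config.ini')
if not config.read(config_file):
    raise FileNotFoundError('config.ini not found at %s' % config_file)
env = config.get('env', 'env')
```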
| closed | 2020-02-10T07:54:28Z | 2020-02-10T08:16:26Z | https://github.com/Gerapy/Gerapy/issues/137 | [
"bug"
] | Rosscqu | 1 |
noirbizarre/flask-restplus | api | 141 | Error handlers on namespaces causes error | Flask-restplus=0.9.0
When error handlers are registered on a namespace and afterwards transferred to the api, an error occurs. I got it working on my side by adding `.items()` on api.py:398:
`for exception, handler in ns.error_handlers.items():`
Due to the lack of time I cannot go through the contribution process myself.
More info:
For example when registering my error handler for a specific exception
```
from flask_restplus import Namespace  # import location may differ in 0.9.0

namespace = Namespace('endpoint')
@namespace.errorhandler(ConflictExceptionClass)
def handle_conflict_exception(error):
...
```
When starting the program you will get:
``` ...
api.add_namespace(ns_engine)
File "/home/bevandeba/tools/python/local/lib/python2.7/site-packages/flask_restplus/api.py", line 398, in add_namespace
for exception, handler in ns.error_handlers:
TypeError: 'type' object is not iterable
```
This issue is located at
api.py:397-399:
```
# Register error handlers
for exception, handler in ns.error_handlers:
self.error_handlers[exception] = handler
```
Fixed by changing to:
```
# Register error handlers
for exception, handler in ns.error_handlers.items():
self.error_handlers[exception] = handler
```
| closed | 2016-03-01T10:05:15Z | 2016-04-21T16:30:57Z | https://github.com/noirbizarre/flask-restplus/issues/141 | [
"bug"
] | 3art | 2 |
yvann-ba/Robby-chatbot | streamlit | 18 | Learn how to analyse the traffic on chatbot-csv.com for improvements | closed | 2023-04-24T20:57:20Z | 2023-06-12T08:06:12Z | https://github.com/yvann-ba/Robby-chatbot/issues/18 | [] | yvann-ba | 0 |
|
stanfordnlp/stanza | nlp | 1,015 | Where is Semantic Dependency Parsing? | I am looking for the code for Semantic Dependency Parsing. https://github.com/tdozat/Parser-v3 said I should be able to find it in the https://github.com/stanfordnlp/stanfordnlp/ repo. But that repo is marked obsolete, replaced by this stanza repo.
Is the relevant code in this repo? If so, where can I get instructions for getting it up and running? If not, where should I look next?
Thank you
| open | 2022-04-23T23:33:59Z | 2022-04-24T15:33:30Z | https://github.com/stanfordnlp/stanza/issues/1015 | [
"enhancement",
"question"
] | xoffey | 2 |
pallets/flask | flask | 4,734 | Version 2.2.0 crashes when using app.test_client() as a context-manager in multi-threaded tests | The following code defines a simple app, and queries it from a test that uses threads:
```py
import threading
from flask import Flask
import pytest
def create_app():
app = Flask(__name__)
@app.route("/hi", methods=["POST"])
def hello():
return "Hello, World!"
return app
@pytest.fixture()
def app():
app = create_app()
yield app
@pytest.fixture()
def client(app):
with app.test_client() as client:
yield client
def test_request_example(client):
successes = 0
def f():
nonlocal successes
response = client.post("/hi")
assert "Hello, World!" == response.text
successes += 1
thread = threading.Thread(target=f)
thread.start()
thread.join()
f()
assert successes == 2
```
When running with pytest on Flask 2.1.3, this test passes:
```
========================================================================= test session starts ==========================================================================
platform linux -- Python 3.9.2, pytest-7.1.2, pluggy-0.13.0
rootdir: /home/dev/swh-environment/swh-indexer, configfile: pytest.ini
plugins: asyncio-0.18.3, requests-mock-1.9.3, django-4.5.2, xdist-2.5.0, swh.core-2.13, mock-3.8.2, django-test-migrations-1.2.0, flask-1.2.0, redis-2.4.0, hypothesis-6.49.1, cov-3.0.0, subtesthack-0.1.2, forked-1.3.0, postgresql-3.1.3, swh.journal-0.8.1.dev3+gf92d4ac
asyncio: mode=strict
collected 1 item
test_flask_threads.py . [100%]
===================================================================== 1 passed, 1 warning in 0.02s =====================================================================
```
However, when running with Flask 2.2.0:
```
========================================================================= test session starts ==========================================================================
platform linux -- Python 3.9.2, pytest-7.1.2, pluggy-0.13.0
rootdir: /home/dev/swh-environment/swh-indexer, configfile: pytest.ini
plugins: asyncio-0.18.3, requests-mock-1.9.3, django-4.5.2, xdist-2.5.0, swh.core-2.13, mock-3.8.2, django-test-migrations-1.2.0, flask-1.2.0, redis-2.4.0, hypothesis-6.49.1, cov-3.0.0, subtesthack-0.1.2, forked-1.3.0, postgresql-3.1.3, swh.journal-0.8.1.dev3+gf92d4ac
asyncio: mode=strict
collected 1 item
test_flask_threads.py F [100%]
=============================================================================== FAILURES ===============================================================================
_________________________________________________________________________ test_request_example _________________________________________________________________________
self = <contextlib.ExitStack object at 0x7fc444b63b20>
exc_details = (<class 'ValueError'>, ValueError("<Token var=<ContextVar name='flask.request_ctx' at 0x7fc44844a1d0> at 0x7fc444a18a80> was created in a different Context"), <traceback object at 0x7fc444a18d40>)
received_exc = False, _fix_exception_context = <function ExitStack.__exit__.<locals>._fix_exception_context at 0x7fc444a7edc0>, suppressed_exc = False
def __exit__(self, *exc_details):
received_exc = exc_details[0] is not None
# We manipulate the exception state so it behaves as though
# we were actually nesting multiple with statements
frame_exc = sys.exc_info()[1]
def _fix_exception_context(new_exc, old_exc):
# Context may not be correct, so find the end of the chain
while 1:
exc_context = new_exc.__context__
if exc_context is old_exc:
# Context is already set correctly (see issue 20317)
return
if exc_context is None or exc_context is frame_exc:
break
new_exc = exc_context
# Change the end of the chain to point to the exception
# we expect it to reference
new_exc.__context__ = old_exc
# Callbacks are invoked in LIFO order to match the behaviour of
# nested context managers
suppressed_exc = False
pending_raise = False
while self._exit_callbacks:
is_sync, cb = self._exit_callbacks.pop()
assert is_sync
try:
> if cb(*exc_details):
/usr/lib/python3.9/contextlib.py:498:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <flask.ctx.AppContext object at 0x7fc444b75730>, exc_type = None, exc_value = None, tb = None
def __exit__(
self,
exc_type: t.Optional[type],
exc_value: t.Optional[BaseException],
tb: t.Optional[TracebackType],
) -> None:
> self.pop(exc_value)
../../.local/lib/python3.9/site-packages/flask/ctx.py:275:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <flask.ctx.AppContext object at 0x7fc444b75730>, exc = None
def pop(self, exc: t.Optional[BaseException] = _sentinel) -> None: # type: ignore
"""Pops the app context."""
try:
if len(self._cv_tokens) == 1:
if exc is _sentinel:
exc = sys.exc_info()[1]
self.app.do_teardown_appcontext(exc)
finally:
ctx = _cv_app.get()
> _cv_app.reset(self._cv_tokens.pop())
E ValueError: <Token var=<ContextVar name='flask.app_ctx' at 0x7fc44844f090> at 0x7fc444a18ac0> was created in a different Context
../../.local/lib/python3.9/site-packages/flask/ctx.py:256: ValueError
During handling of the above exception, another exception occurred:
client = <FlaskClient <Flask 'test_flask_threads'>>
def test_request_example(client):
successes = 0
def f():
nonlocal successes
response = client.post("/hi")
assert "Hello, World!" == response.text
successes += 1
thread = threading.Thread(target=f)
thread.start()
thread.join()
> f()
test_flask_threads.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_flask_threads.py:34: in f
response = client.post("/hi")
../../.local/lib/python3.9/site-packages/werkzeug/test.py:1140: in post
return self.open(*args, **kw)
../../.local/lib/python3.9/site-packages/flask/testing.py:221: in open
self._context_stack.close()
/usr/lib/python3.9/contextlib.py:521: in close
self.__exit__(None, None, None)
/usr/lib/python3.9/contextlib.py:513: in __exit__
raise exc_details[1]
/usr/lib/python3.9/contextlib.py:498: in __exit__
if cb(*exc_details):
../../.local/lib/python3.9/site-packages/flask/ctx.py:432: in __exit__
self.pop(exc_value)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <RequestContext 'http://localhost/hi' [POST] of test_flask_threads>
exc = ValueError("<Token var=<ContextVar name='flask.app_ctx' at 0x7fc44844f090> at 0x7fc444a18ac0> was created in a different Context")
def pop(self, exc: t.Optional[BaseException] = _sentinel) -> None: # type: ignore
"""Pops the request context and unbinds it by doing that. This will
also trigger the execution of functions registered by the
:meth:`~flask.Flask.teardown_request` decorator.
.. versionchanged:: 0.9
Added the `exc` argument.
"""
clear_request = len(self._cv_tokens) == 1
try:
if clear_request:
if exc is _sentinel:
exc = sys.exc_info()[1]
self.app.do_teardown_request(exc)
request_close = getattr(self.request, "close", None)
if request_close is not None:
request_close()
finally:
ctx = _cv_request.get()
token, app_ctx = self._cv_tokens.pop()
> _cv_request.reset(token)
E ValueError: <Token var=<ContextVar name='flask.request_ctx' at 0x7fc44844a1d0> at 0x7fc444a18a80> was created in a different Context
../../.local/lib/python3.9/site-packages/flask/ctx.py:407: ValueError
======================================================================= short test summary info ========================================================================
FAILED test_flask_threads.py::test_request_example - ValueError: <Token var=<ContextVar name='flask.request_ctx' at 0x7fc44844a1d0> at 0x7fc444a18a80> was created in...
===================================================================== 1 failed, 1 warning in 0.11s =====================================================================
```
Environment:
- Python version: 3.9
- Flask version: 2.2.0
| closed | 2022-08-03T15:30:10Z | 2022-08-18T00:06:07Z | https://github.com/pallets/flask/issues/4734 | [] | progval | 1 |
pytest-dev/pytest-xdist | pytest | 749 | pytest-xdist rtd documentation site problem | It seems that pytest-xdist documentation rtd website exists but never container docs on it, but searching for xdist docs brings https://readthedocs.org/projects/pytest-xdist amont top results which is confusing.
Apparently @hpk42 @nicoddemus and @RonnyPfannschmidt the the ones listed as admins on RTD. I suspect that removing the project from RTD can sort the problem. As I guess that if nobody bothered to add a sphinx config in so many years it will never happen.
To improve the situation I would also add extra metadata to package to link directly the readme as the Documentation, avoiding further confusions. | closed | 2022-01-29T10:37:16Z | 2022-01-29T16:29:36Z | https://github.com/pytest-dev/pytest-xdist/issues/749 | [] | ssbarnea | 1 |
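A rough sketch of the metadata suggestion in the pytest-xdist issue above (README URL assumed; equivalent keys exist in setup.cfg/pyproject.toml):
```python
from setuptools import setup

setup(
    name="pytest-xdist",
    project_urls={
        # surfaces a "Documentation" link on PyPI pointing at the README
        "Documentation": "https://github.com/pytest-dev/pytest-xdist#readme",
    },
)
```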
Anjok07/ultimatevocalremovergui | pytorch | 688 | ERROR | Last Error Received:
Process: VR Architecture
If this error persists, please contact the developers with the error details.
Raw Error Details:
ParameterError: "Audio buffer is not finite everywhere"
Traceback Error: "
File "UVR.py", line 4719, in process_start
File "separate.py", line 683, in seperate
File "separate.py", line 827, in spec_to_wav
File "lib_v5/spec_utils.py", line 332, in cmb_spectrogram_to_wave
File "librosa/util/decorators.py", line 104, in inner_f
File "librosa/core/audio.py", line 606, in resample
File "librosa/util/decorators.py", line 88, in inner_f
File "librosa/util/utils.py", line 294, in valid_audio
"
Full Application Settings:
vr_model: 4_HP-Vocal-UVR
aggression_setting: 20
window_size: 320
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: True
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems | open | 2023-07-23T02:27:39Z | 2023-07-23T02:27:39Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/688 | [] | Vepoe | 0 |
miguelgrinberg/flasky | flask | 406 | Cyclic imports of db objcect | I've run flasky with pylint (I can post the specific pylintrc file, if required) and discovered a lot of cyclic imports
```
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.auth -> app.auth.views -> app.models) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app.auth -> app.auth.views) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app.main -> app.main.views) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app.api -> app.api.users) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.auth -> app.auth.views) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.main -> app.main.views -> app.decorators -> app.models) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.auth -> app.auth.views -> app.email) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.main -> app.models) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app.main -> app.main.errors) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.main -> app.main.views -> app.main.forms -> app.models) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.main -> app.main.views -> app.models) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.api -> app.api.users -> app.models) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app.api -> app.api.posts -> app.api.decorators -> app.api.errors) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.api -> app.api.comments -> app.models) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app.api -> app.api.posts -> app.api.errors) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.api -> app.api.authentication -> app.models) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.api -> app.api.posts -> app.models) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.auth -> app.auth.views -> app.auth.forms -> app.models) (cyclic-import)
app/auth/__init__.py:1:0: R0401: Cyclic import (app -> app.main -> app.main.views) (cyclic-import)
```
Most of them are related to the `app.models` module, and that doesn't seem to be good practice.
"question"
] | montreal91 | 3 |
geex-arts/django-jet | django | 325 | forms.MultipleChoiceField doesn't work as it does in default admin site | When I use the default Django admin site, I get the desired MultipleChoiceField

but when I switch to the django-jet admin site I get something like the below; none of the options are selectable

| open | 2018-04-19T18:24:56Z | 2019-04-20T14:42:35Z | https://github.com/geex-arts/django-jet/issues/325 | [] | DevAbhinav2073 | 12 |
miguelgrinberg/flasky | flask | 11 | db.session.add(..) but without commit(() | Hi, i'm reading your book and at this point in Chapter 10. In model.py the "User" class has several db.session.add(self) but no db.session.commit(). I thought that every time an update in database there should be an commit() in the end but your code works fine. So i'm a little confused. Thank you for your patient.
| closed | 2014-05-16T10:02:28Z | 2015-02-06T08:09:01Z | https://github.com/miguelgrinberg/flasky/issues/11 | [] | jeffjzx | 7 |
django-import-export/django-import-export | django | 1,053 | Django 3 support | closed | 2019-12-29T11:33:25Z | 2019-12-31T05:53:42Z | https://github.com/django-import-export/django-import-export/issues/1053 | [] | MuslimBeibytuly | 5 |
|
viewflow/viewflow | django | 134 | Python 3.5 on CI tests | Refs #133
As I said earlier in my mail, this is absolutely vital to us.
We can't have a big part of our production code not run on our tech stack.
I'm happy to make the changes, if you tell me how you want it.
| closed | 2016-03-03T10:07:53Z | 2016-03-05T08:59:45Z | https://github.com/viewflow/viewflow/issues/134 | [
"request/enhancement"
] | codingjoe | 3 |
ipython/ipython | data-science | 14,024 | Allowing `UsageError` subclasses | Hi,
I'm developing a package that exposes magic commands to users. In many cases, the Python traceback is long and uninformative so I only want to display an appropriate error message.
I noticed that this can be done with [`UsageError`](https://github.com/ipython/ipython/blob/a3d5a06d9948260a6b08c622e86248f48afd66c3/IPython/core/interactiveshell.py#L2086) since the shell will hide the traceback. However, some errors we want to display are not really "usage errors" so we want to provide alternative names.
Unfortunately, subclassing `UsageError` doesn't work:

Because the code is checking for the `UsageError` type: https://github.com/ipython/ipython/blob/a3d5a06d9948260a6b08c622e86248f48afd66c3/IPython/core/interactiveshell.py#L2086
Checking for subclasses instead would fix the issue. Another approach would be defining a `HideTracebackError` and making `UsageError` a subclass of it; the check could then verify whether the exception is a `HideTracebackError` or a subclass.
I believe this is a common scenario among packages that expose magics to users and can greatly improve user experience.
I'm happy to open a PR if the maintainers accept this change.
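A minimal repro sketch of the subclassing attempt described above (the subclass name is illustrative):
```python
from IPython.core.error import UsageError

class MagicArgumentError(UsageError):
    """Same hide-the-traceback behavior is wanted, under a clearer name."""

# inside a magic: today this prints a full traceback instead of one line
raise MagicArgumentError("expected --table NAME")
```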
| open | 2023-04-20T17:28:34Z | 2023-04-20T17:28:34Z | https://github.com/ipython/ipython/issues/14024 | [] | edublancas | 0 |
flairNLP/flair | pytorch | 3,614 | [Bug]: `identify_dynamic_embeddings` does not work for `DataPair` | ### Describe the bug
The training util `identify_dynamic_embeddings` does not look properly at the embeddings for the `first` and `second`. It probably makes more sense to have a member function for each `DataPoint` class, instead of checking `isinstance` within an external function. There could still be an outer function that iterates over data points in a batch, but it should itself call `identify_dynamic_embeddings_point` for each data point it iterates over. I would say that `identify_dynamic_embeddings` should be called `identify_dynamic_embeddings_batch`, but that would change the API.
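A hedged sketch of that split (the method name comes from the proposal above; the `_embeddings` dict is an assumption about `DataPoint` internals):
```python
# on DataPoint: inspect only this point's own embedding tensors
def identify_dynamic_embeddings_point(self):
    return [name for name, tensor in self._embeddings.items()
            if tensor.requires_grad]

# on DataPair: override to recurse into both halves
def identify_dynamic_embeddings_point(self):
    names = self.first.identify_dynamic_embeddings_point()
    names += self.second.identify_dynamic_embeddings_point()
    return names
```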
### To Reproduce
```python
import torch
import flair.data
import flair.training_utils

first = flair.data.DataPoint()
second = flair.data.DataPoint()
dynamic_tensor1 = torch.tensor([1., 2., 3.], requires_grad=True)
dynamic_tensor2 = torch.tensor([1., 2., 3.], requires_grad=True)
first.set_embedding('dynamic', dynamic_tensor1)
second.set_embedding('dynamic', dynamic_tensor2)
dynamic_data_pair = flair.data.DataPair(first, second)
print(flair.training_utils.identify_dynamic_embeddings([dynamic_data_pair]))
```
### Expected behavior
This should return `['dynamic']`
### Logs and Stack traces
```stacktrace
```
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
#### Versions:
##### Flair
0.15.1
##### Pytorch
2.6.0+cu124
##### Transformers
4.48.2
#### GPU
False | open | 2025-02-07T19:04:14Z | 2025-02-07T19:04:14Z | https://github.com/flairNLP/flair/issues/3614 | [
"bug"
] | MattGPT-ai | 0 |
plotly/dash-table | dash | 740 | [Duplicate] With fixed rows, columns are as wide as the data and not the headers | If the column headers are wider than the data (which is often the case with numerical tables & text column headers), then the width of the columns is too narrow and the column headers are cut off.
Without fixed rows, the column width expands to account for the width of the headers. I would expect the behavior to be the same with fixed rows.
Expected behavior:

Default behavior with `fixed_rows`

Still undesirable behavior with header overflow:
 | closed | 2020-04-13T22:12:26Z | 2020-04-14T00:59:24Z | https://github.com/plotly/dash-table/issues/740 | [] | chriddyp | 6 |
roboflow/supervision | computer-vision | 834 | TypeError: BoxAnnotator.annotate() got an unexpected keyword argument 'custom_colors_lookup' | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
First of all, I really like how supervision provides so many features to integrate with YOLOv8 detection.
I am trying to assign different colors to some of the bounding boxes by using "custom_color_lookup", but this error showed up:
TypeError: BoxAnnotator.annotate() got an unexpected keyword argument 'custom_colors_lookup'
Here is my code calling annotate():
"colors" is initialized as an array:
<img width="439" alt="image" src="https://github.com/roboflow/supervision/assets/62897998/b928a289-5541-4379-b99f-506c92b3d57a">
Not sure if I am doing anything wrong here. If there is another way of achieving this, please do let me know! Thank you.
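For reference, a hedged sketch of the documented keyword (note the singular `custom_color_lookup` — the plural `custom_colors_lookup` in the error would be rejected — and whether the legacy `BoxAnnotator` accepts it depends on the installed version; the newer `BoundingBoxAnnotator` does):
```python
import numpy as np
import supervision as sv

annotator = sv.BoundingBoxAnnotator()
colors = np.array([0, 1, 0, 2])  # one palette index per detection (values assumed)
annotated = annotator.annotate(
    scene=frame,                  # 'frame' and 'detections' assumed from context
    detections=detections,
    custom_color_lookup=colors,   # singular 'color'
)
```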
### Additional
_No response_ | closed | 2024-02-01T12:36:41Z | 2024-02-01T13:24:13Z | https://github.com/roboflow/supervision/issues/834 | [
"question"
] | hoongyuan | 2 |
quokkaproject/quokka | flask | 292 | Inspector view error | ``` python
TypeError
TypeError: 'Database' object is not callable. If you meant to call the '__html__' method on a 'MongoClient' object it is failing because no such method exists.
Traceback (most recent call last)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask_admin/base.py", line 68, in inner
return self._run_view(f, *args, **kwargs)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask_admin/base.py", line 359, in _run_view
return fn(self, *args, **kwargs)
File "/home/rochacbruno/www/quokka/quokka/core/admin/views.py", line 37, in index
return self.render('admin/inspector.html', **context)
File "/home/rochacbruno/www/quokka/quokka/core/admin/models.py", line 43, in render
return render_template(template, theme=theme, **kwargs)
File "/home/rochacbruno/www/quokka/quokka/core/templates.py", line 16, in render_template
return render_theme_template(theme, template, **context)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/quokka_themes/__init__.py", line 184, in render_theme_template
return render_template('_themes/%s/%s' % (theme, last), **context)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask/templating.py", line 128, in render_template
context, ctx.app)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/flask/templating.py", line 110, in _render
rv = template.render(context)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/jinja2/environment.py", line 989, in render
return self.environment.handle_exception(exc_info, True)
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/jinja2/environment.py", line 754, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/inspector.html", line 2, in top-level template code
{% from theme('admin/lib.html') import format_value, render_table with context %}
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/master.html", line 1, in top-level template code
{% extends theme(admin_base_template) %}
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/base.html", line 29, in top-level template code
{% block page_body %}
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/base.html", line 73, in block "page_body"
{% block body %}{% endblock %}
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/inspector.html", line 42, in block "body"
{{render_table(headers=('name', 'extension'), values=app.extensions.items())}}
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/lib.html", line 258, in template
<td>{{format_value(subitem)}}</td>
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/lib.html", line 214, in template
{{ render_table(headers=('Key', 'Value'), values=value.items()) }}
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/lib.html", line 258, in template
<td>{{format_value(subitem)}}</td>
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/lib.html", line 214, in template
{{ render_table(headers=('Key', 'Value'), values=value.items()) }}
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/lib.html", line 258, in template
<td>{{format_value(subitem)}}</td>
File "/home/rochacbruno/www/quokka/quokka/themes/admin/templates/admin/lib.html", line 238, in template
{{value}}
File "/home/rochacbruno/.virtualenvs/quokkaclean/lib/python2.7/site-packages/pymongo/database.py", line 1054, in __call__
self.__name, self.__connection.__class__.__name__))
TypeError: 'Database' object is not callable. If you meant to call the '__html__' method on a 'MongoClient' object it is failing because no such method exists.
The debugger caught an exception in your WSGI application. You can now look at the traceback which led to the error.
To switch between the interactive traceback and the plaintext one, you can click on the "Traceback" headline. From the text traceback you can also create a paste of it. For code execution mouse-over the frame you want to debug and click on the console icon on the right side.
You can execute arbitrary Python code in the stack frames and there are some extra helpers available for introspection:
dump() shows all variables in the frame
dump(obj) dumps all that's known about the object
Brought to you by DON'T PANIC, your friendly Werkzeug powered traceback interpreter.
```
| closed | 2015-08-24T02:59:58Z | 2015-08-25T22:22:17Z | https://github.com/quokkaproject/quokka/issues/292 | [
"bug"
] | rochacbruno | 0 |
home-assistant/core | asyncio | 141,174 | Tado new api authentication | ### The problem
From March 21, Tado requires MFA on the API.
https://support.tado.com/en/articles/8565472-how-do-i-authenticate-to-access-the-rest-api
The Tado integration is unable to log in.
### What version of Home Assistant Core has the issue?
All
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Tado
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-23T08:23:22Z | 2025-03-23T08:41:49Z | https://github.com/home-assistant/core/issues/141174 | [
"integration: tado"
] | topoxy | 2 |
jeffknupp/sandman2 | rest-api | 208 | How to use this with a ForeignKey table | How to use this with a ForeignKey table, just like a join or inner SQL? | open | 2021-03-24T14:44:45Z | 2021-03-24T14:44:45Z | https://github.com/jeffknupp/sandman2/issues/208 | [] | hzjux001 | 0 |
aio-libs/aiomysql | sqlalchemy | 45 | Update documentation with python3.5 features | 1) document `async with` for pool/connection/cursor
2) document `async for` iteration over cursor
3) change examples to async/await
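A hedged sketch of the Python 3.5 usage to be documented (connection parameters assumed):
```python
import asyncio
import aiomysql

async def main():
    pool = await aiomysql.create_pool(host='127.0.0.1', user='root',
                                      password='', db='test')
    async with pool.acquire() as conn:        # 1) async with on pool/connection
        async with conn.cursor() as cur:      #    ... and on the cursor
            await cur.execute("SELECT 42")
            async for row in cur:             # 2) async for iteration over cursor
                print(row)
    pool.close()
    await pool.wait_closed()

asyncio.get_event_loop().run_until_complete(main())  # 3) async/await style
```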
| open | 2015-11-26T22:23:59Z | 2022-01-13T00:06:00Z | https://github.com/aio-libs/aiomysql/issues/45 | [
"docs"
] | jettify | 3 |