repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
PablocFonseca/streamlit-aggrid | streamlit | 14 | Getting AggGrid's state | Hey @PablocFonseca, thank you for such an amazing job integrating AgGrid into Streamlit. I highly appreciate the effort and am happy to contribute if there are any known issues that need help from the Python side of things.
One task I am having a hard time wrapping my head around is getting the AgGrid state after a user interacts with it.
For example, I have a multi-select to keep grouping consistent between page reloads or a new data push.
```
# let the user pick the columns from the dataframe she wants to group by
if st.sidebar.checkbox("Enable default grouping"):
default_group_col = st.sidebar.selectbox("Default group by: ", cols, 1)
# if any of the columns are selected, apply it to AgGrid and persist on page reload,
# as default_group_col state is persisted under the hood by Streamlit
try:
gb.configure_column(default_group_col, rowGroup=True)
except:
pass
```
Now say a user groups by an additional column using the AgGrid group-by feature, collapses some of the resulting groups and keeps the others expanded. I would assume AgGrid itself stores this state somewhere in client-side JS. Is there a way to get that state back to Python in order to save it somewhere in a dict and persist it between page reloads when the AgGrid component is redrawn or populated with new data?
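For reference, here is a minimal sketch of reading the component's return value back in Python. This assumes the `AgGrid(...)` call returns a dict-like response; which keys it exposes, and whether the group/expand state is included, depends on the streamlit-aggrid version, so treat it as an illustration rather than a confirmed API.
```python
import pandas as pd
from st_aggrid import AgGrid, GridOptionsBuilder, GridUpdateMode

df = pd.DataFrame({"group": ["a", "a", "b"], "value": [1, 2, 3]})  # placeholder data

gb = GridOptionsBuilder.from_dataframe(df)
gb.configure_column("group", rowGroup=True)

# the component's return value carries the current grid contents back to Python
grid_response = AgGrid(
    df,
    gridOptions=gb.build(),
    update_mode=GridUpdateMode.MODEL_CHANGED,  # push updates back on user interaction
)

current_data = grid_response["data"]            # the dataframe as currently shown
selected_rows = grid_response["selected_rows"]  # rows the user selected, if any
```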
Thanks! | closed | 2021-03-04T10:07:40Z | 2022-06-21T09:36:26Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/14 | [] | adamlansky | 5 |
zappa/Zappa | flask | 467 | [Migrated] Package custom modules | Originally from: https://github.com/Miserlou/Zappa/issues/1242 by [tista3](https://github.com/tista3)
## Description
new setting: a list of module names that will be packaged into the zip
## Motivation
I have many Flask apps in my repository that share a common module. This module is available via the PYTHONPATH env variable, which points to the directory where my common module lives. I had to copy the common module into the Flask app directory before every `zappa update` so that it would be packed into the distribution zip. With this feature I just list the common modules and they are packaged into the zip automatically, as long as they can be imported in my environment.
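For context, the manual workaround described above looks roughly like this (a sketch with placeholder paths, not part of the proposed feature itself):
```python
import shutil
from pathlib import Path

# placeholder locations -- adjust to your repository layout
common_module = Path("../shared/common_module")
target = Path(".") / common_module.name

# copy the shared module next to the Flask app before packaging
if target.exists():
    shutil.rmtree(str(target))
shutil.copytree(str(common_module), str(target))
# ... then run `zappa update` so the copy ends up in the distribution zip
```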
Now I use my custom PyPI package `timo-zappa`, but I would be happy if it could be part of the official zappa package. I tested it on Linux, py2.7, py3.6 and as a 3.6 lambda.
This is my first pull request, be gentle :) | closed | 2021-02-20T08:35:15Z | 2022-07-16T07:30:12Z | https://github.com/zappa/Zappa/issues/467 | [
"needs-user-testing"
] | jneves | 1 |
harry0703/MoneyPrinterTurbo | automation | 245 | When compositing long videos, could pycuda be added to accelerate compositing on the GPU? It would be much more efficient | I tested a 14-minute video. It took more than 20 minutes, and that was with memory nearly maxed out. | closed | 2024-04-12T03:49:51Z | 2024-04-12T14:27:50Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/245 | [] | Test-Jim | 1 |
django-oscar/django-oscar | django | 3,822 | Add SECURITY.md | Hey there!
I belong to an open source security research community, and a member (@ktg9) has found an issue, but doesn’t know the best way to disclose it.
If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.
Thank you for your consideration, and I look forward to hearing from you!
(cc @huntr-helper) | closed | 2021-12-03T00:18:27Z | 2022-10-20T21:32:23Z | https://github.com/django-oscar/django-oscar/issues/3822 | [] | JamieSlome | 6 |
SCIR-HI/Huatuo-Llama-Med-Chinese | nlp | 75 | NotImplementedError: Cannot copy out of meta tensor; no data! | │ │
│ 403 │ │ │ │ device_map = infer_auto_device_map( │
│ 404 │ │ │ │ │ self, max_memory=max_memory, no_split_module_classes=no_split_module │
│ 405 │ │ │ │ ) │
│ ❱ 406 │ │ │ dispatch_model( │
│ 407 │ │ │ │ self, │
│ 408 │ │ │ │ device_map=device_map, │
│ 409 │ │ │ │ offload_dir=offload_dir, │
│ │
│ C:\Users\zhaoxianghui\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\big_m │
│ odeling.py:355 in dispatch_model │
│ │
│ 352 │ │ and (not os.path.isdir(offload_dir) or not os.path.isfile(os.path.join(offload_d │
│ 353 │ ): │
│ 354 │ │ disk_state_dict = extract_submodules_state_dict(model.state_dict(), disk_modules │
│ ❱ 355 │ │ offload_state_dict(offload_dir, disk_state_dict) │
│ 356 │ │
│ 357 │ execution_device = { │
│ 358 │ │ name: main_device if device in ["cpu", "disk"] else device for name, device in d │
│ │
│ C:\Users\zhaoxianghui\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\utils │
│ \offload.py:103 in offload_state_dict │
│ │
│ 100 │ os.makedirs(save_dir, exist_ok=True) │
│ 101 │ index = {} │
│ 102 │ for name, parameter in state_dict.items(): │
│ │
│ 34 │ │ # Need to reinterpret the underlined data as int16 since NumPy does not handle b │
│ 35 │ │ weight = weight.view(torch.int16) │
│ 36 │ │ dtype = "bfloat16" │
│ ❱ 37 │ array = weight.cpu().numpy() │
│ 38 │ tensor_file = os.path.join(offload_folder, f"{weight_name}.dat") │
│ 39 │ if index is not None: │
│ 40 │ │ if dtype is None: │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
NotImplementedError: Cannot copy out of meta tensor; no data!
| open | 2023-07-13T13:24:19Z | 2023-07-13T13:24:50Z | https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese/issues/75 | [] | sjtuzhaoxh | 1 |
sinaptik-ai/pandas-ai | data-visualization | 1,126 | Encountered Bug After Using PandasAI (Agent) Inside My App and Standalone with PyInstaller | ### System Info
OS version: MacBook Pro, M1
Interpreter: 3.11.4
pandas-ai version: 2.0.32
### 🐛 Describe the bug
I'm implementing an app using PyQt5. To analyze my data, I integrated PandasAI. While running the program directly, everything works fine. However, upon standalone packaging with PyInstaller, I encounter the following error:
```
Traceback (most recent call last):
File "pandasai/pipelines/chat/generate_chat_pipeline.py", line 283, in run
output = (self.code_generation_pipeline | self.code_execution_pipeline).run(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pandasai/pipelines/pipeline.py", line 137, in run
raise e
File "pandasai/pipelines/pipeline.py", line 101, in run
step_output = logic.execute(
^^^^^^^^^^^^^^
File "pandasai/pipelines/chat/code_execution.py", line 126, in execute
code_to_run = self._retry_run_code(
^^^^^^^^^^^^^^^^^^^^^
File "pandasai/pipelines/chat/code_execution.py", line 346, in _retry_run_code
return self.on_retry(code, e)
^^^^^^^^^^^^^^^^^^^^^^
File "pandasai/pipelines/chat/generate_chat_pipeline.py", line 128, in on_code_retry
return self.code_exec_error_pipeline.run(correction_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pandasai/pipelines/chat/error_correction_pipeline/error_correction_pipeline.py", line 47, in run
return self.pipeline.run(input)
^^^^^^^^^^^^^^^^^^^^^^^^
File "pandasai/pipelines/pipeline.py", line 137, in run
raise e
File "pandasai/pipelines/pipeline.py", line 101, in run
step_output = logic.execute(
^^^^^^^^^^^^^^
File "pandasai/pipelines/chat/code_cleaning.py", line 91, in execute
code_to_run = self.get_code_to_run(input, code_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pandasai/pipelines/chat/code_cleaning.py", line 137, in get_code_to_run
code_to_run = self._clean_code(code, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pandasai/pipelines/chat/code_cleaning.py", line 481, in _clean_code
self._extract_fix_dataframe_redeclarations(node, clean_code_lines)
File "pandasai/pipelines/chat/code_cleaning.py", line 384, in _extract_fix_dataframe_redeclarations
env = get_environment(self._additional_dependencies)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pandasai/helpers/optional.py", line 68, in get_environment
**{builtin: __builtins__[builtin] for builtin in WHITELISTED_BUILTINS},
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pandasai/helpers/optional.py", line 68, in <dictcomp>
**{builtin: __builtins__[builtin] for builtin in WHITELISTED_BUILTINS},
~~~~~~~~~~~~^^^^^^^^^
KeyError: 'help'
```
I'm using the following command to package my program as a standalone build:
`pyinstaller --collect-all pandasai --noconsole --copy-metadata pandasai --add-data "resources:resources" --add-data "finance-db.db:." --icon="./resources/icons/logo.png" --name money-management --pat /opt/homebrew/bin/python3.11 money-management.py
` | closed | 2024-04-21T09:11:15Z | 2024-07-28T16:05:54Z | https://github.com/sinaptik-ai/pandas-ai/issues/1126 | [] | PVZMF | 0 |
pyro-ppl/numpyro | numpy | 1,100 | Checkpointing during MCMC | Hi devs, thanks for your contributions to this tool!
Is it possible to save the MCMC chains while the chains are running? I'm using numpyro on multiple GPUs in an HPC environment and would like to checkpoint my jobs in case of preemption. | closed | 2021-07-17T05:17:31Z | 2021-07-18T09:21:42Z | https://github.com/pyro-ppl/numpyro/issues/1100 | [
"question"
] | bmorris3 | 2 |
unit8co/darts | data-science | 2,731 | Investigate allowing `predict_likelihood_parameters` for auto-regression with quantile likelihoods | Check whether we could allow `predict_likelihood_parameters=True` for auto-regression with quantile likelihoods.
It would be interesting to see how the quantiles compare if we just feed the model the last predicted quantiles, compared to feeding it the samples.
If the results are similar, this could speed things up quite a bit, especially for torch models, since we wouldn't have to call the forward pass with all samples. | closed | 2025-03-13T08:16:42Z | 2025-03-21T08:44:39Z | https://github.com/unit8co/darts/issues/2731 | [
"feature request",
"improvement"
] | dennisbader | 1 |
home-assistant/core | python | 141,166 | Reopen Shelly Ble Problem | ### The problem
It is the same as
https://github.com/home-assistant/core/issues/140889
### What version of Home Assistant Core has the issue?
2025.3.x
### What was the last working version of Home Assistant Core?
2024.x
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
15.0
### Link to integration documentation on our website
_No response_
### Diagnostics information
Logger: habluetooth.base_scanner
Source: runner.py:154
First occurred: 22 March 2025 at 10:02:14 (4 occurrences)
Last logged: 22 March 2025 at 18:31:45
shellyplus1pm-keller (08:3A:F2:02:2D:A0): Bluetooth scanner has gone quiet for 99.69573211669922s, check logs on the scanner device for more information
shellyplus1pm-keller (08:3A:F2:02:2D:A0): Bluetooth scanner has gone quiet for 100.61075592041016s, check logs on the scanner device for more information
shellyplus1pm-keller (08:3A:F2:02:2D:A0): Bluetooth scanner has gone quiet for 119.21070861816406s, check logs on the scanner device for more information
shellyplus1pm-keller (08:3A:F2:02:2D:A0): Bluetooth scanner has gone quiet for 94.63068389892578s, check logs on the scanner device for more information
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-23T05:55:31Z | 2025-03-23T09:03:53Z | https://github.com/home-assistant/core/issues/141166 | [
"integration: shelly"
] | CrazyUs3r | 5 |
MaartenGr/BERTopic | nlp | 1,406 | 'BERTopic' object has no attribute 'reduce_outliers' | I am not able to use reduce_outliers; it shows an error, as shown below:

Can you guide me on how to solve this? | open | 2023-07-12T19:25:59Z | 2023-07-12T20:29:31Z | https://github.com/MaartenGr/BERTopic/issues/1406 | [] | dhruvilm28 | 3 |
aws/aws-sdk-pandas | pandas | 2,383 | [Support Us]: LogicalCube | Thank you for letting us use your organization's name on the repository read.me page and letting other customers know that you support the project! If you would like us to also display your organization's logo. please raise a pull request to provide an image file for the logo.
Please add any files to *docs/source/_static/*
Organization Name: LogicalCube (https://www.logicalcube.com)
Your Name: Bryan Kelly
Your Position: Manager
I have included a logo: n
*By raising a Support Us issue (and related pull request), you are granting AWS permission to use your company’s name (and logo) for the limited purpose described here and you are confirming that you have authority to grant such permission.*
| closed | 2023-07-06T00:37:59Z | 2023-07-06T08:31:25Z | https://github.com/aws/aws-sdk-pandas/issues/2383 | [] | zolabud | 1 |
Lightning-AI/pytorch-lightning | machine-learning | 20,033 | Can't save models via the ModelCheckpoint() when using custom optimizer | ### Bug description
Dear all,
I want to use a [Hessian-Free LM optimizer](https://github.com/ltatzel/PyTorchHessianFree) to replace the PyTorch L-BFGS optimizer. However, the model can't be saved normally if I use ModelCheckpoint(), while torch.save() and Trainer.save_checkpoint() still work. You can find my test Python file below. Could you give me some suggestions on how to handle this problem?
Thanks!
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
import numpy as np
import pandas as pd
import time
import torch
from torch import nn
from torch.utils.data import DataLoader,TensorDataset
import matplotlib.pyplot as plt
import lightning as L
from lightning.pytorch import LightningModule
from lightning.pytorch.loggers import CSVLogger
from lightning.pytorch.callbacks.model_checkpoint import ModelCheckpoint
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks.early_stopping import EarlyStopping
from hessianfree.optimizer import HessianFree
class LitModel(LightningModule):
def __init__(self,loss):
super().__init__()
self.tanh_linear= nn.Sequential(
nn.Linear(1,20),
nn.Tanh(),
nn.Linear(20,20),
nn.Tanh(),
nn.Linear(20,1),
)
self.loss_fn = nn.MSELoss()
self.automatic_optimization = False
return
def forward(self, x):
out = self.tanh_linear(x)
return out
def configure_optimizers(self):
optimizer = HessianFree(
self.parameters(),
cg_tol=1e-6,
cg_max_iter=1000,
lr=1e0,
LS_max_iter=1000,
LS_c=1e-3
)
return optimizer
def training_step(self, batch, batch_idx):
x, y = batch
opt = self.optimizers()
def forward_fn():
y_pred = self(x)
loss=self.loss_fn(y_pred,y)
return loss,y_pred
opt.optimizer.step( forward=forward_fn)
loss,y_pred=forward_fn()
self.log("train_loss", loss, on_epoch=True, on_step=False)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
val_loss = self.loss_fn(y_hat, y)
# passing to early_stoping
self.log("val_loss", val_loss, on_epoch=True, on_step=False)
return val_loss
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = self.loss_fn(y_hat, y)
return loss
def main():
input_size = 20000
train_size = int(input_size*0.9)
test_size = input_size-train_size
batch_size = 1000
x_total = np.linspace(-1.0, 1.0, input_size, dtype=np.float32)
x_total = np.random.choice(x_total,size=input_size,replace=False) #random sampling
x_train = x_total[0:train_size]
x_train= x_train.reshape((train_size,1))
x_test = x_total[train_size:input_size]
x_test= x_test.reshape((test_size,1))
x_train=torch.from_numpy(x_train)
x_test=torch.from_numpy(x_test)
y_train = torch.from_numpy(np.sinc(10.0 * x_train))
y_test = torch.from_numpy(np.sinc(10.0 * x_test))
training_data = TensorDataset(x_train,y_train)
test_data = TensorDataset(x_test,y_test)
# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size
#,num_workers=2
)
test_dataloader = DataLoader(test_data, batch_size=batch_size
#,num_workers=2
)
for X, y in test_dataloader:
print("Shape of X: ", X.shape)
print("Shape of y: ", y.shape, y.dtype)
break
for X, y in train_dataloader:
print("Shape of X: ", X.shape)
print("Shape of y: ", y.shape, y.dtype)
break
loss_fn = nn.MSELoss()
model=LitModel(loss_fn)
# prepare trainer
opt_label=f'lm_HF_t20'
logger = CSVLogger(f"./{opt_label}", name=f"test-{opt_label}",flush_logs_every_n_steps=1)
epochs = 1e1
print(f"test for {opt_label}")
early_stop_callback = EarlyStopping(
monitor="val_loss"
, min_delta=1e-9
, patience=10
, verbose=False, mode="min"
, stopping_threshold = 1e-8 #stop if reaching accuracy
)
modelck=ModelCheckpoint(
dirpath = f"./{opt_label}"
, monitor="val_loss"
,save_last = True
#, save_top_k = 2
#, mode ='min'
#, every_n_epochs = 1
#, save_on_train_epoch_end=True
#,save_weights_only=True,
)
Train_model=Trainer(
accelerator="cpu"
, max_epochs = int(epochs)
, enable_progress_bar = True #using progress bar
#, callbacks=[modelck,early_stop_callback] # using earlystopping
, callbacks=[modelck] #do not using earlystopping
, logger=logger
#, num_processes = 16
)
t1=time.time()
Train_model.fit(model,train_dataloaders=train_dataloader, val_dataloaders=test_dataloader)
t2=time.time()
print('total time')
print(t2-t1)
# torch.save() and Trainer.save_checkpoint() can save the model, but ModelCheckpoint() can't.
#torch.save(model.state_dict(), f"model{opt_label}.pth")
#print(f"Saved PyTorch Model State to model{opt_label}.pth")
#Train_model.save_checkpoint(f"model{opt_label}.ckpt")
#print(f"Saved PL Model State to model{opt_label}.ckpt")
exit()
return
if __name__=='__main__':
main()
```
### Error messages and logs
```
# Error messages and logs here please
```
The program does not report an error, but ModelCheckpoint() doesn't save models when I use a custom optimizer.
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: 12.1
* Lightning:
- backpack-for-pytorch: 1.6.0
- lightning: 2.2.0
- lightning-utilities: 0.11.3.post0
- pytorch-lightning: 2.2.3
- torch: 2.2.0
- torchaudio: 2.0.1
- torchmetrics: 0.11.4
- torchvision: 0.15.1
* Packages:
- aiohttp: 3.9.1
- aiosignal: 1.3.1
- async-timeout: 4.0.3
- attrs: 23.2.0
- backpack-for-pytorch: 1.6.0
- bottleneck: 1.3.5
- certifi: 2022.12.7
- charset-normalizer: 3.1.0
- cmake: 3.26.0
- colorama: 0.4.6
- contourpy: 1.2.1
- cycler: 0.12.1
- einops: 0.8.0
- filelock: 3.10.0
- fonttools: 4.51.0
- frozenlist: 1.4.1
- fsspec: 2023.3.0
- hessianfree: 0.1
- idna: 3.4
- jinja2: 3.1.2
- kiwisolver: 1.4.5
- lightning: 2.2.0
- lightning-utilities: 0.11.3.post0
- lit: 15.0.7
- markupsafe: 2.1.2
- matplotlib: 3.8.4
- mpmath: 1.3.0
- multidict: 6.0.4
- networkx: 3.0
- numexpr: 2.8.4
- numpy: 1.24.2
- nvidia-cublas-cu11: 11.10.3.66
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu11: 11.7.101
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu11: 11.7.99
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu11: 11.7.99
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu11: 8.5.0.96
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu11: 10.9.0.58
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu11: 10.2.10.91
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu11: 11.4.0.1
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu11: 11.7.4.91
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu11: 2.14.3
- nvidia-nccl-cu12: 2.19.3
- nvidia-nvjitlink-cu12: 12.3.101
- nvidia-nvtx-cu11: 11.7.91
- nvidia-nvtx-cu12: 12.1.105
- packaging: 23.0
- pandas: 1.5.3
- pillow: 9.4.0
- pip: 24.1.1
- pyparsing: 3.1.2
- python-dateutil: 2.8.2
- pytorch-lightning: 2.2.3
- pytz: 2022.7
- pyyaml: 6.0
- requests: 2.28.2
- setuptools: 67.6.0
- six: 1.16.0
- sympy: 1.11.1
- torch: 2.2.0
- torchaudio: 2.0.1
- torchmetrics: 0.11.4
- torchvision: 0.15.1
- tqdm: 4.65.0
- triton: 2.2.0
- typing-extensions: 4.11.0
- unfoldnd: 0.2.1
- urllib3: 1.26.15
- wheel: 0.40.0
- yarl: 1.9.4
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.9
- release: 3.10.0-862.el7.x86_64
- version: #1 SMP Fri Apr 20 16:44:24 UTC 2018
</details>
### More info
_No response_ | open | 2024-07-01T08:54:11Z | 2024-07-01T08:54:11Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20033 | [
"bug",
"needs triage"
] | youli-jlu | 0 |
scrapy/scrapy | web-scraping | 6,481 | Allow passing parameters to signal receiver | ## Summary
Allow passing parameters to a signal receiver (when self is not available)
I.e.
```
crawler.signals.connect(receiver=cls.engine_stopped, signal=signals.engine_stopped, cb_kwargs={"lazy": True})
@classmethod
def engine_stopped(cls, lazy: bool) -> None:
...
```
## Motivation
Passing parameters to the receiver method would allow more context-dependent logic/behavior in it | closed | 2024-09-26T09:08:35Z | 2024-10-16T09:07:31Z | https://github.com/scrapy/scrapy/issues/6481 | [] | genismoreno | 4 |
huggingface/pytorch-image-models | pytorch | 1,344 | [BUG] ViT models can't load pretrained weights from models with different `cls_token`/`no_embed_class` settings | **Describe the bug**
The title says it all. ViT models currently support changing some hyperparameters when loading pretrained weights (such as `img_size`). This is useful when the loaded weights are intended to be used for further fine-tuning with different hyperparameters. However, `_load_weights` currently assumes that the default config was used.
**To Reproduce**
```python
timm.create_model("vit_large_patch16_384", pretrained=True, class_token=False, global_pool="avg")
# AttributeError: 'NoneType' object has no attribute 'copy_'
```
```python
timm.create_model("vit_large_patch16_384", pretrained=True, no_embed_class=True)
# RuntimeError: The size of tensor a (576) must match the size of tensor b (577) at non-singleton dimension 1
```
**Expected behavior**
Return ViT models with `class_token=False` and `no_embed_class=True`.
I don't have the time to fill out a proper PR, but the short version is that `_load_weights` should check if `model.cls_token` is `None` before attempting to copy it from the pretrained weights and `resize_pos_embed` should just drop the extra prefix tokens from the embeddings before doing the interpolation. | closed | 2022-07-11T20:02:54Z | 2022-07-13T07:15:46Z | https://github.com/huggingface/pytorch-image-models/issues/1344 | [
"bug"
] | ruro | 4 |
gradio-app/gradio | machine-learning | 10,502 | gradio demo don't work in huggingface space | ### Describe the bug
When deploying the demo code from the docs on a Hugging Face Space, it produces the error "gradio.exceptions.error: 'data incompatible with the messages format'". Because the version of gradio is 5.0.1 when deploying and cannot be changed, I cannot fix it by updating the version. After some trial and error, I found that if I change the code 【chatbot=gr.Chatbot(height=300)】 -> 【chatbot=gr.Chatbot(height=300, type="messages")】, the bug is fixed.
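For clarity, the workaround described above amounts to constructing the chatbot like this (it mirrors the `type="messages"` already passed to `gr.ChatInterface` in the reproduction below):
```python
import gradio as gr

# pass type="messages" to the Chatbot as well, matching the ChatInterface setting
chatbot = gr.Chatbot(height=300, type="messages")
```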
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def slow_echo(message, history):
for i in range(len(message)):
time.sleep(0.3)
yield "You typed: " + message[: i+1]
gr.ChatInterface(
slow_echo,
type="messages",
chatbot=gr.Chatbot(height=300),
textbox=gr.Textbox(placeholder="Ask me a yes or no question", container=False, scale=7),
title="Yes Man",
description="Ask Yes Man any question",
theme="ocean",
examples=["Hello", "Am I cool?", "Are tomatoes vegetables?"],
cache_examples=True,
).launch()
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
gradio==5.0.1
```
### Severity
I can work around it | closed | 2025-02-05T02:49:16Z | 2025-02-05T05:54:37Z | https://github.com/gradio-app/gradio/issues/10502 | [
"bug"
] | Shuryne | 1 |
tqdm/tqdm | pandas | 1,322 | UnicodeDecodeError when using subprocess.getstatusoutput | I've made this script for finding corrupt images using ImageMagick. Full code:
```
from pathlib import Path
import time
import subprocess
import concurrent.futures
from tqdm import tqdm
_err_set = set()
def _imgerr(_img):
global _err_set
output = subprocess.getstatusoutput("magick identify -regard-warnings \"" + str(_img) + "\"")
if(int(output[0]) == 1):
_err_set.add(str(_img))
_root = input("Input directory path: ")
file_set = set(Path(_root).rglob("*.jpg"))
print("Scanning...")
start1 = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as executor:
list(tqdm(executor.map(_imgerr, file_set),total=int(len(file_set))))
finish1 = time.perf_counter()
with open('bad_img.txt', 'w', encoding="utf-8") as f:
for item in sorted(_err_set):
f.write('"' + item + '"' + "\n")
f.close()
print(f'Total execution time [mt] = {round(finish1 - start1, 3)}s')
print(f'Average time per image = {round((finish1 - start1)/len(file_set), 10)}s')
print(f'Corrupt images = {len(_err_set)}')
```
I'm using tqdm for progress tracking. The problem is with this line:
```
list(tqdm(executor.map(_imgerr, file_set),total=int(len(file_set))))
```
If there are no non-ASCII characters in the image path then everything works fine, but if any Unicode character appears I get:
>Exception has occurred: UnicodeDecodeError 'charmap' codec can't decode byte 0x81 in position 37: character maps to <undefined>
If I instead just use
```
executor.map(_imgerr, file_set)
```
everything works just fine, regardless of whether there are Unicode characters present or not. I've been scratching my head for a couple of hours now but still can't figure out what causes the error. Any suggestions are welcome! By the way, maybe it's relevant, but when debugging, the error pops up in the function at the following line:
```
output = subprocess.getstatusoutput("magick identify -regard-warnings \"" + str(_img) + "\"")
```
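One way to make the decoding explicit (a sketch, assuming the failure comes from `getstatusoutput` decoding the command output with the default locale codec on Windows; `errors="replace"` is just one possible policy):
```python
import subprocess

def identify_status(img_path):
    # run ImageMagick and decode its output explicitly instead of relying on the locale codec
    result = subprocess.run(
        ["magick", "identify", "-regard-warnings", str(img_path)],
        capture_output=True,
        text=True,
        encoding="utf-8",
        errors="replace",
    )
    return result.returncode, result.stdout + result.stderr
```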
| closed | 2022-04-25T19:23:04Z | 2022-04-25T21:07:32Z | https://github.com/tqdm/tqdm/issues/1322 | [] | gotr3k | 1 |
scikit-learn/scikit-learn | data-science | 30,180 | DOC grammar issue in the governance page | ### Describe the issue linked to the documentation
In the governance page at line: https://github.com/scikit-learn/scikit-learn/blob/59dd128d4d26fff2ff197b8c1e801647a22e0158/doc/governance.rst?plain=1#L161
there is a reference attached to "Enhancement proposals (SLEPs)."
However, after compiling, it is displayed as "a Enhancement proposals (SLEPs)" which is grammatically incorrect.
Page at: https://scikit-learn.org/stable/governance.html
### Suggest a potential alternative/fix
Fix it by updating the line with
```
an :ref:`slep`
``` | closed | 2024-10-30T19:49:04Z | 2024-11-05T07:31:05Z | https://github.com/scikit-learn/scikit-learn/issues/30180 | [
"Documentation"
] | AdityaInnovates | 2 |
qubvel-org/segmentation_models.pytorch | computer-vision | 220 | Image preprocessing parameters | The function `preprocess_input()`, in encoders/_preprocessing.py, takes 'mean' and 'std' as parameters and applies the normalization to the data in the following way:
```
if mean is not None:
mean = np.array(mean)
x = x - mean
if std is not None:
std = np.array(std)
x = x / std
```
In my opinion, the mean/std here should be the statistics for the training dataset (data-specific). However, according to `get_preprocessing_params()`, in encoders/\_\_init\_\_.py, the mean/std are determined by the pretrained model, which depends on the training data used in the pretrained model.
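For reference, these per-encoder parameters are what the preprocessing helper returns; a minimal usage sketch (the encoder name and weights are just examples):
```python
import numpy as np
from segmentation_models_pytorch.encoders import get_preprocessing_fn

# returns preprocess_input() with the mean/std registered for this encoder's pretrained weights
preprocess = get_preprocessing_fn("resnet34", pretrained="imagenet")

image = np.random.randint(0, 255, size=(224, 224, 3)).astype("float32")
normalized = preprocess(image)
```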
I just wonder: is there any reason why we do it based on the pretrained model? | closed | 2020-06-03T20:54:49Z | 2020-06-13T18:43:14Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/220 | [] | lkforward | 2 |
collerek/ormar | pydantic | 983 | select_related with ManyToMany through | **Describe the bug**
Trying to query a ManyToMany through relationship and getting this error:
```bash
ormar.exceptions.RelationshipInstanceError: Relationship error - ForeignKey EffortResource is of type <class 'int'> while <class 'weakref.ProxyType'> passed as a parameter.
```
after running `await EffortStep.objects.select_related("users").all()`.
Models:
```python
class User(PublicIdMixin, ormar.Model):
id = ormar.Integer(primary_key=True)
class Meta(BaseMeta):
tablename = "users"
class EffortStepUser(ormar.Model):
id = ormar.Integer(primary_key=True)
class Meta(BaseMeta):
tablename = "effort_step_x_user"
class EffortStep(PublicIdMixin, DateFieldsMixin, TenantAwareModel, ormar.Model):
id = ormar.Integer(primary_key=True)
users = ormar.ManyToMany(
User,
through=EffortStepUser,
through_relation_name="step_id",
through_reverse_relation_name="user_id",
)
class Meta(BaseMeta):
tablename = "effort_step"
class EffortResource(DateFieldsMixin, TenantAwareModel, ormar.Model):
id = ormar.Integer(primary_key=True)
step: EffortStep = ormar.ForeignKey(EffortStep, name="step_id", nullable=False)
class Meta(BaseMeta):
tablename = "effort_resource"
```
### Full traceback
```bash
Traceback (most recent call last):
File "/.venvs/core/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/fastapi/applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "/venvs/core/lib/python3.11/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/cors.py", line 92, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/cors.py", line 147, in simple_response
await self.app(scope, receive, send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/.venvs/core/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/.venvs/core/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/.venvs/core/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/.venvs/core/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/fastapi/routing.py", line 235, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/efforts/router.py", line 95, in add_effort_collaborators
return await service.add_step_zero_collaborators(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/efforts/service.py", line 165, in add_step_zero_collaborators
await EffortStep.objects.select_related("users")
File "/.venvs/core/lib/python3.11/site-packages/ormar/queryset/queryset.py", line 982, in get
processed_rows = self._process_query_result_rows(rows)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/queryset/queryset.py", line 196, in _process_query_result_rows
return self.model.merge_instances_list(result_rows) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/mixins/merge_mixin.py", line 43, in merge_instances_list
model = cls.merge_two_instances(next_model, model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/mixins/merge_mixin.py", line 91, in merge_two_instances
cls.merge_two_instances(
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/mixins/merge_mixin.py", line 91, in merge_two_instances
cls.merge_two_instances(
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/mixins/merge_mixin.py", line 82, in merge_two_instances
setattr(other, field_name, value_to_set)
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/newbasemodel.py", line 175, in __setattr__
object.__setattr__(self, name, value)
File "/.venvs/core/lib/python3.11/site-packages/ormar/models/descriptors/descriptors.py", line 110, in __set__
model = instance.Meta.model_fields[self.name].expand_relationship(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/fields/foreign_key.py", line 541, in expand_relationship
model = constructors.get( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/fields/foreign_key.py", line 399, in _extract_model_from_sequence
return [
^
File "/.venvs/core/lib/python3.11/site-packages/ormar/fields/foreign_key.py", line 400, in <listcomp>
self.expand_relationship( # type: ignore
File "/.venvs/core/lib/python3.11/site-packages/ormar/fields/foreign_key.py", line 541, in expand_relationship
model = constructors.get( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venvs/core/lib/python3.11/site-packages/ormar/fields/foreign_key.py", line 475, in _construct_model_from_pk
raise RelationshipInstanceError(
ormar.exceptions.RelationshipInstanceError: Relationship error - ForeignKey EffortResource is of type <class 'int'> while <class 'weakref.ProxyType'> passed as a parameter.
```
**Versions (please complete the following information):**
- Database backend used (mysql/sqlite/postgress): **postgres 14.1**
- Python version: 3.11
- `ormar` version: 0.12.0
- if applicable `fastapi` version 0.88
Thanks!
| open | 2023-01-10T17:49:25Z | 2023-04-10T09:32:26Z | https://github.com/collerek/ormar/issues/983 | [
"bug"
] | AdamGold | 1 |
dpgaspar/Flask-AppBuilder | rest-api | 1,913 | Incorrect(?) use of db.session in flask_appbuilder.AppBuilder | ### Environment
Flask-Appbuilder version: 4.1.3
### Describe the expected results
We have an Azure SQL database that we use with flask-appbuilder. This database requires that we request a new token every hour or so (the token expires after 3600 seconds). To do this, we use an event listener of the form `@event.listens_for(engine, "do_connect")` that requests a new token and sets it in the connection parameters when the engine creates a new connection. The expected behaviour would be that once the token has expired and a new connection to the database is needed, the event above runs and acquires a new token that can be used for the connection.
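For reference, such a listener looks roughly like this (a sketch assuming pyodbc with an Azure AD access token; the connection string is a placeholder, and the credential call and token-packing details follow the commonly documented pattern, so treat them as assumptions):
```python
import struct

from azure.identity import DefaultAzureCredential
from sqlalchemy import create_engine, event

# placeholder connection string -- server/database names are illustrative
engine = create_engine(
    "mssql+pyodbc://@my-server.database.windows.net/my-db"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

SQL_COPT_SS_ACCESS_TOKEN = 1256  # pyodbc connection attribute used to pass an access token

@event.listens_for(engine, "do_connect")
def provide_token(dialect, conn_rec, cargs, cparams):
    # fetch a fresh token for every new DBAPI connection
    raw = DefaultAzureCredential().get_token(
        "https://database.windows.net/.default"
    ).token.encode("utf-16-le")
    cparams["attrs_before"] = {
        SQL_COPT_SS_ACCESS_TOKEN: struct.pack(f"<I{len(raw)}s", len(raw), raw)
    }
```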
### Describe the actual results
We're facing an issue where, after an hour (when the token expires), performing a request to the application returns an Internal Server Error with an error like this: `sqlalchemy.exc.OperationalError: (pyodbc.OperationalError) ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x20 (32) (SQLExecDirectW)')`.
Subsequent requests after this will be fine, until the token expires again at which point it'll happen again.
My suspicion is that there is a problem with the use of the `db.session` when initializing the appbuilder object, `flask_appbuilder.AppBuilder(app, db.session, ...)`, because the `db.session`, as I understand it, is not meant to be long-lived; it's meant to be a short-lived object that you use for a transaction and then close afterwards: see [here](https://docs.sqlalchemy.org/en/14/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it).
I further don't know whether engine events are triggered for sessions at all (and whether this may be the cause of the token expiry -> connection failure issue that I'm seeing).
```pytb
app|ERROR|Exception on / [GET]
Traceback (most recent call last):
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
pyodbc.OperationalError: ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x20 (32) (SQLExecDirectW)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/.venv/lib/python3.9/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/app/.venv/lib/python3.9/site-packages/flask/app.py", line 1519, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/app/.venv/lib/python3.9/site-packages/flask/app.py", line 1517, in full_dispatch_request
rv = self.dispatch_request()
File "/app/.venv/lib/python3.9/site-packages/flask/app.py", line 1503, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/app/app.py", line 72, in index
return self.render_template(index, appbuilder=self.appbuilder)
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/baseviews.py", line 322, in render_template
return render_template(
File "/app/.venv/lib/python3.9/site-packages/flask/templating.py", line 154, in render_template
return _render(
File "/app/.venv/lib/python3.9/site-packages/flask/templating.py", line 128, in _render
rv = template.render(context)
File "/app/.venv/lib/python3.9/site-packages/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/app/.venv/lib/python3.9/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "/app/webapp/templates/index_not_auth.html", line 1, in top-level template code
{% extends "appbuilder/base.html" %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/base.html", line 1, in top-level template code
{% extends base_template %}
File "/app/webapp/templates/custom_base.html", line 1, in top-level template code
{% extends 'appbuilder/baselayout.html' %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 2, in top-level template code
{% import 'appbuilder/baselib.html' as baselib %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/init.html", line 37, in top-level template code
{% block body %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 8, in block 'body'
{% block navbar %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 10, in block 'navbar'
{% include 'appbuilder/navbar.html' %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/navbar.html", line 29, in top-level template code
{% include 'appbuilder/navbar_menu.html' %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/templates/appbuilder/navbar_menu.html", line 11, in top-level template code
{% if item1 | is_menu_visible %}
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/filters.py", line 136, in is_menu_visible
return self.security_manager.has_access("menu_access", item.name)
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/security/manager.py", line 1526, in has_access
return self.is_item_public(permission_name, view_name)
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/security/manager.py", line 1406, in is_item_public
permissions = self.get_public_permissions()
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/security/sqla/manager.py", line 322, in get_public_permissions
role = self.get_public_role()
File "/app/.venv/lib/python3.9/site-packages/flask_appbuilder/security/sqla/manager.py", line 316, in get_public_role
self.get_session.query(self.role_model)
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2845, in one_or_none
return self._iter().one_or_none()
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2903, in _iter
result = self.session.execute(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 1696, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
return connection._execute_clauseelement(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1862, in _execute_context
self._handle_dbapi_exception(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2043, in _handle_dbapi_exception
util.raise_(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/app/.venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (pyodbc.OperationalError) ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x20 (32) (SQLExecDirectW)')
...
(Background on this error at: https://sqlalche.me/e/14/e3q8)
```
### Steps to reproduce
Unclear. | open | 2022-08-19T11:09:31Z | 2022-09-05T09:35:02Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1913 | [
"pending"
] | Atheuz | 3 |
kizniche/Mycodo | automation | 1,011 | API not capturing conversion_id property update | ### Describe the problem/bug
When converting an input measurement the 'conversion_id' prop of "device measurement settings" in the API stays null while the db reflects the update
### Versions:
- Mycodo Version: 8.10.1
- Raspberry Pi Version: 3B+
- Raspbian OS Version: Raspberry Pi OS Lite
### Reproducibility
Convert an input measurement via UI
GET api/settings/device_measurements/by_device_id/[unique_id]
Bug: DB has conversion_id ref but API does not
thank you
| closed | 2021-05-25T12:37:08Z | 2021-11-01T01:25:01Z | https://github.com/kizniche/Mycodo/issues/1011 | [
"bug",
"Fixed and Committed"
] | tilersmyth | 9 |
skypilot-org/skypilot | data-science | 4,021 | [Storage] Bump GCSFuse to 2.4.0+ | [GCSFuse 2.4.0](https://github.com/GoogleCloudPlatform/gcsfuse/releases/tag/v2.4.0) introduces parallel downloads which helps with loading large files (e.g., model checkpoints).
We should bump the GCSFuse version and investigate the tradeoffs of enabling parallel downloads (by setting `file-cache:enable-parallel-downloads:true` in the gcsfuse config file). | open | 2024-09-30T17:49:32Z | 2024-12-19T23:08:59Z | https://github.com/skypilot-org/skypilot/issues/4021 | [] | romilbhardwaj | 0 |
custom-components/pyscript | jupyter | 74 | Feature Request: Automatic reloading for existing scripts? | While Jupyter notebooks are a great place for doing the initial development, at some point you want to move your automations out of them. It would be great if scripts (at least the existing ones) would be automatically reloaded on change, e.g., by using inotify listeners on the existing script files and calling `reload()` when a modification is detected.
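A rough sketch of the kind of watcher this describes, using the `watchdog` package (the reload call itself is left as a placeholder callback, since the exact service hook depends on the integration):
```python
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


class ReloadOnChange(FileSystemEventHandler):
    def __init__(self, reload_cb):
        self._reload_cb = reload_cb

    def on_modified(self, event):
        # only react to pyscript source files
        if not event.is_directory and event.src_path.endswith(".py"):
            self._reload_cb()


def watch(path, reload_cb):
    observer = Observer()
    observer.schedule(ReloadOnChange(reload_cb), path, recursive=True)
    observer.start()
    return observer
```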
My personal development setup looks like this: I'm using notebooks to do the initial development, and as soon as I'm confident that things are mostly working, I'll move the code over into a new script. For development, I'm using PyCharm, which I have configured to auto-deploy to the production instance whenever a file gets saved. In such cases, the ability to avoid doing a manual reload service call would simplify the workflow. | closed | 2020-11-02T23:14:55Z | 2021-01-01T00:59:06Z | https://github.com/custom-components/pyscript/issues/74 | [] | rytilahti | 25 |
plotly/plotly.py | plotly | 5,109 | Plotly min.js import missing .js suffix | When running this cell in a JupyterLab instance:
```python
import plotly.io as pio
pio.renderers.default = "notebook_connected"
import plotly.graph_objects as go
go.Figure()
```
The first time Plotly is loaded, the figure is blank and the following errors appear in the dev console:
```
GET https://cdn.plot.ly/plotly-3.0.1.min net::ERR_ABORTED 403 (Forbidden)
Uncaught ReferenceError: Plotly is not defined
at <anonymous>:1:179
at P.attachWidget (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1859455)
at P.insertWidget (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1858919)
at M._insertOutput (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1282454)
at M.onModelChanged (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1278810)
at m (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1832098)
at Object.l [as emit] (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1831774)
at a.emit (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1829611)
at d._onListChanged (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1273587)
at m (jlab_core.a4c5e1f5bac9ba5dc7f6.js?v=a4c5e1f5bac9ba5dc7f6:1:1832098)
```
When running the cell for a second time, the figure appears. If the page is refreshed, the figure disappears again.
It looks like the issue can be traced back to this line, where `.js` is removed from the CDN path:
https://github.com/plotly/plotly.py/blob/ae0fbedce7ba3be6450aba350f12c1fb043e8eb8/plotly/io/_base_renderers.py#L286
This is with Python 3.11.11, Plotly 6.0.1 and Jupyterlab 4.3.6, and occurs in both Chrome and Edge.
```
anyio==4.9.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==3.0.0
async-lru==2.0.5
attrs==25.3.0
babel==2.17.0
beautifulsoup4==4.13.3
bleach==6.2.0
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
comm==0.2.2
debugpy==1.8.13
decorator==5.2.1
defusedxml==0.7.1
executing==2.2.0
fastjsonschema==2.21.1
fqdn==1.5.1
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
ipykernel==6.29.5
ipython==9.0.2
ipython_pygments_lexers==1.1.1
isoduration==20.11.0
jedi==0.19.2
Jinja2==3.1.6
json5==0.10.0
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
jupyter-events==0.12.0
jupyter-lsp==2.2.5
jupyter_client==8.6.3
jupyter_core==5.7.2
jupyter_server==2.15.0
jupyter_server_terminals==0.5.3
jupyterlab==4.3.6
jupyterlab_pygments==0.3.0
jupyterlab_server==2.27.3
MarkupSafe==3.0.2
matplotlib-inline==0.1.7
mistune==3.1.3
narwhals==1.31.0
nbclient==0.10.2
nbconvert==7.16.6
nbformat==5.10.4
nest-asyncio==1.6.0
notebook_shim==0.2.4
overrides==7.7.0
packaging==24.2
pandocfilters==1.5.1
parso==0.8.4
pexpect==4.9.0
platformdirs==4.3.7
plotly==6.0.1
prometheus_client==0.21.1
prompt_toolkit==3.0.50
psutil==7.0.0
ptyprocess==0.7.0
pure_eval==0.2.3
pycparser==2.22
Pygments==2.19.1
python-dateutil==2.9.0.post0
python-json-logger==3.3.0
PyYAML==6.0.2
pyzmq==26.3.0
referencing==0.36.2
requests==2.32.3
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rpds-py==0.23.1
Send2Trash==1.8.3
six==1.17.0
sniffio==1.3.1
soupsieve==2.6
stack-data==0.6.3
terminado==0.18.1
tinycss2==1.4.0
tornado==6.4.2
traitlets==5.14.3
types-python-dateutil==2.9.0.20241206
typing_extensions==4.12.2
uri-template==1.3.0
urllib3==2.3.0
wcwidth==0.2.13
webcolors==24.11.1
webencodings==0.5.1
websocket-client==1.8.0
```
It still doesn't work when downgrading to Plotly 5.24.1, although the console error is slightly different:
```
Uncaught ReferenceError: require is not defined
at <anonymous>:1:17
at P.attachWidget (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1859455)
at P.insertWidget (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1858919)
at M._insertOutput (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1282454)
at M.onModelChanged (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1278810)
at m (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1832098)
at Object.l [as emit] (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1831774)
at a.emit (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1829611)
at d._onListChanged (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1273587)
at m (jlab_core.a4c5e1f5ba…9ba5dc7f6:1:1832098)
```
When exporting to html with `nbconvert`, the 403 error still appears but the plot displays. Changing line 7568 of the attached html file to `<script type="module">import "https://cdn.plot.ly/plotly-3.0.1.min.js"</script>` removes the error from the console.
[Plotly min.js issue.zip](https://github.com/user-attachments/files/19428204/Plotly.min.js.issue.zip) | closed | 2025-03-24T10:51:15Z | 2025-03-24T15:06:10Z | https://github.com/plotly/plotly.py/issues/5109 | [] | slishak-PX | 2 |
ivy-llc/ivy | pytorch | 28,557 | Fix Frontend Failing Test: paddle - creation.paddle.assign | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-03-12T11:52:24Z | 2024-03-21T12:04:07Z | https://github.com/ivy-llc/ivy/issues/28557 | [
"Sub Task"
] | ZJay07 | 0 |
AirtestProject/Airtest | automation | 852 | The swipe operation is not smooth enough #884 | I'm using Airtest to test automatic slide-to-unlock captchas, but many pages' slide-to-unlock logic uses JS to judge whether the swipe was performed by a human or a machine, and a stiff swipe gets classified as machine input.
Currently I'm using the image-recognition-based swipe operation on Windows; the specific swipe code is as follows:
swipe({image}, vector=[0.1776,0.0056], duration=0.2, steps=randint(2,5))
No matter how I adjust the parameters, the swipe operation essentially performs a segmented swipe along the specified distance vector, so the motion is quite stiff and easily detected.
Could you provide a more human-like swipe solution? | closed | 2021-01-14T11:50:25Z | 2021-01-15T09:18:44Z | https://github.com/AirtestProject/Airtest/issues/852 | [] | tiexinyang | 1 |
errbotio/errbot | automation | 1,103 | Errbot 4.2.2 python 2.7. Error with command errbot. | In order to let us help you better, please fill out the following fields as best you can:
### I am...
* [ ] Reporting a bug
* [ ] Suggesting a new feature
* [x ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 4.2.2
* OS version: ubuntu 16.04 lts
* Python version: 2.7
* Using a virtual environment: yes
### Issue description
Please describe your bug/feature/problem here.
The more information you can provide, the better.
I can only use Python 2.7. I'm trying to install errbot version 4.2.2. I installed it, but I got an error when running the errbot command. Can you help me run errbot without using Python 3?
> 17:42:12 DEBUG errbot.specific_plugin_ma Load the one remaining...
17:42:12 ERROR yapsy Unable to import plugin: /home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/backends/text
Traceback (most recent call last):
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/yapsy/PluginManager.py", line 488, in loadPlugins
candidate_module = imp.load_module(plugin_module_name,plugin_file,candidate_filepath+".py",("py","r",imp.PY_SOURCE))
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/backends/text.py", line 14, in <module>
from errbot.rendering import ansi, text, xhtml, imtext
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/rendering/__init__.py", line 10, in <module>
MD_ESCAPE_RE = re.compile(u'|'.join(re.escape(c) for c in Markdown.ESCAPED_CHARS))
AttributeError: type object 'Markdown' has no attribute 'ESCAPED_CHARS'
17:42:12 ERROR errbot.bootstrap Unable to load or configure the backend.
Traceback (most recent call last):
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/bootstrap.py", line 126, in setup_bot
bot = backendpm.get_plugin_by_name(backend_name)
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/specific_plugin_manager.py", line 87, in get_plugin_by_name
raise Exception(u'Error loading plugin %s:\nError:\n%s\n' % (name, formatted_error))
Exception: Error loading plugin Text:
Error:
<type 'exceptions.AttributeError'>:
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/yapsy/PluginManager.py", line 488, in loadPlugins
candidate_module = imp.load_module(plugin_module_name,plugin_file,candidate_filepath+".py",("py","r",imp.PY_SOURCE))
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/backends/text.py", line 14, in <module>
from errbot.rendering import ansi, text, xhtml, imtext
File "/home/thienloc/working/errbot/.venv/local/lib/python2.7/site-packages/errbot/rendering/__init__.py", line 10, in <module>
MD_ESCAPE_RE = re.compile(u'|'.join(re.escape(c) for c in Markdown.ESCAPED_CHARS))
### Steps to reproduce
I created a folder and a virtual environment.
I downloaded errbot 4.2.2 here: https://pypi.python.org/pypi/errbot/4.2.2
I installed python-telegram-bot
I created a requirements.txt file
I installed it with pip install errbot, and I also clicked the installation suggestion in PyCharm.
When I check errbot --version, it's 4.2.2.
Then I ran errbot and got the error.
In case of a bug, please describe the steps we need to take in order to reproduce your issue.
If you cannot easily reproduce the issue please let us know and provide as much information as you can which might help us pinpoint the problem.
### Additional info
If you have any more information, please specify it here.
| closed | 2017-09-20T11:00:22Z | 2017-10-07T08:16:42Z | https://github.com/errbotio/errbot/issues/1103 | [] | locdoan12121997 | 3 |
sczhou/CodeFormer | pytorch | 40 | Background upscale isn't working / Real-ESRGAN ignored? | Hello, Thank you for this great project! 💙
I'm running this on Windows 10 with Anaconda. Installation was very easy and simple to follow thanks to your step-by-step instructions; I appreciate it.
### Problem Description:
I've added the argument: --bg_upsampler realesrgan
But it seems to ignore it and just upscales the face without the background; I get this warning:
```
inference_codeformer.py:22: RuntimeWarning: The unoptimized RealESRGAN is slow on CPU. We do not use it. If you really want to use it, please modify the corresponding codes.
warnings.warn('The unoptimized RealESRGAN is slow on CPU. We do not use it. '
Face detection model: retinaface_resnet50
Background upsampling: False, Face upsampling: False
Processing: 5a.png
detect 1 faces
All results are saved in results/_SOURCE__0.7
```
Since I'm not a programmer, I don't know how to fix or mess with code in general. Can you please tell me how to make it work?
Thanks ahead! | open | 2022-10-04T17:38:28Z | 2024-04-06T15:39:48Z | https://github.com/sczhou/CodeFormer/issues/40 | [] | AlonDan | 6 |
ading2210/poe-api | graphql | 70 | Cannot send message? Object of type Message is not JSON serializable | Whenever I try to send a message using this function:
```py
async def generate_response(message):
with open("message.txt", "w") as f:
for chunk in client.send_message("capybara", message):
f.write(chunk["text_new"], end="", flush=True)
with open("message.txt", "r+") as f:
return f.read()
```
I just get this error:
```
Ignoring exception in on_message
Traceback (most recent call last):
File "/home/runner/Bowkii/venv/lib/python3.9/site-packages/nextcord/client.py", line 512, in _run_event
await coro(*args, **kwargs)
File "main.py", line 370, in on_message
response = await generate_response(message)
File "main.py", line 14, in generate_response
for chunk in client.send_message("capybara", message):
File "/home/runner/Bowkii/venv/lib/python3.9/site-packages/poe.py", line 329, in send_message
message_data = self.send_query("SendMessageMutation", {
File "/home/runner/Bowkii/venv/lib/python3.9/site-packages/poe.py", line 202, in send_query
payload = json.dumps(json_data, separators=(",", ":"))
File "/nix/store/p21fdyxqb3yqflpim7g8s1mymgpnqiv7-python3-3.8.12/lib/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/nix/store/p21fdyxqb3yqflpim7g8s1mymgpnqiv7-python3-3.8.12/lib/python3.8/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/nix/store/p21fdyxqb3yqflpim7g8s1mymgpnqiv7-python3-3.8.12/lib/python3.8/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/nix/store/p21fdyxqb3yqflpim7g8s1mymgpnqiv7-python3-3.8.12/lib/python3.8/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Message is not JSON serializable
```
I'm pretty sure I did everything correctly and I have not seen anybody else with this error | closed | 2023-05-16T18:43:02Z | 2023-05-17T13:57:07Z | https://github.com/ading2210/poe-api/issues/70 | [
"invalid"
] | BingusCoOfficial | 2 |
pyro-ppl/numpyro | numpy | 1,238 | Cannot install numpyro for GPU | Hi,
Not sure if this is a numpyro issue or an operator error on my side:
I installed numpyro with the following statement, in a clean miniconda environment:
1) pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
```
....
Successfully built jax
Installing collected packages: six, numpy, typing-extensions, scipy, opt-einsum, flatbuffers, absl-py, jaxlib, jax
Successfully installed absl-py-1.0.0 flatbuffers-2.0 jax-0.2.25 jaxlib-0.1.73+cuda11.cudnn82 numpy-1.21.4 opt-einsum-3.3.0 scipy-1.7.3 six-1.16.0 typing-extensions-4.0.0
```
2) I tried to install numpyro for cuda
```
pip install numpyro[cuda]
Collecting numpyro[cuda]
Downloading numpyro-0.8.0-py3-none-any.whl (264 kB)
|████████████████████████████████| 264 kB 1.5 MB/s
WARNING: numpyro 0.8.0 does not provide the extra 'cuda'
Requirement already satisfied: jax>=0.2.13 in ./miniconda3/envs/jax/lib/python3.9/site-packages (from numpyro[cuda]) (0.2.25)
Requirement already satisfied: jaxlib>=0.1.65 in ./miniconda3/envs/jax/lib/python3.9/site-packages (from numpyro[cuda]) (0.1.73+cuda11.cudnn82)
Collecting tqdm
Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB)
Requirement already satisfied: numpy>=1.18 in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jax>=0.2.13->numpyro[cuda]) (1.21.4)
Requirement already satisfied: opt-einsum in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jax>=0.2.13->numpyro[cuda]) (3.3.0)
Requirement already satisfied: typing-extensions in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jax>=0.2.13->numpyro[cuda]) (4.0.0)
Requirement already satisfied: absl-py in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jax>=0.2.13->numpyro[cuda]) (1.0.0)
Requirement already satisfied: scipy>=1.2.1 in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jax>=0.2.13->numpyro[cuda]) (1.7.3)
Requirement already satisfied: flatbuffers<3.0,>=1.12 in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jaxlib>=0.1.65->numpyro[cuda]) (2.0)
Requirement already satisfied: six in ./miniconda3/envs/jax/lib/python3.9/site-packages (from absl-py->jax>=0.2.13->numpyro[cuda]) (1.16.0)
Installing collected packages: tqdm, numpyro
Successfully installed numpyro-0.8.0 tqdm-4.62.3
```
3) I uninstalled it, so I could install numpyro cuda in a different way (see step 4)
```
$ pip uninstall numpyro
Found existing installation: numpyro 0.8.0
Uninstalling numpyro-0.8.0:
Would remove:
/home/roger/miniconda3/envs/jax/lib/python3.9/site-packages/numpyro-0.8.0.dist-info/*
/home/roger/miniconda3/envs/jax/lib/python3.9/site-packages/numpyro/*
Proceed (Y/n)? y
Successfully uninstalled numpyro-0.8.0
```
4) pip install numpyro[cuda] -f https://storage.googleapis.com/jax-releases/jax_releases.html
```
Looking in links: https://storage.googleapis.com/jax-releases/jax_releases.html
Collecting numpyro[cuda]
Using cached numpyro-0.8.0-py3-none-any.whl (264 kB)
WARNING: numpyro 0.8.0 does not provide the extra 'cuda'
Requirement already satisfied: jax>=0.2.13 in ./miniconda3/envs/jax/lib/python3.9/site-packages (from numpyro[cuda]) (0.2.25)
Requirement already satisfied: jaxlib>=0.1.65 in ./miniconda3/envs/jax/lib/python3.9/site-packages (from numpyro[cuda]) (0.1.73+cuda11.cudnn82)
Collecting tqdm
Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB)
Requirement already satisfied: numpy>=1.18 in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jax>=0.2.13->numpyro[cuda]) (1.21.4)
Requirement already satisfied: opt-einsum in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jax>=0.2.13->numpyro[cuda]) (3.3.0)
Requirement already satisfied: typing-extensions in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jax>=0.2.13->numpyro[cuda]) (4.0.0)
Requirement already satisfied: absl-py in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jax>=0.2.13->numpyro[cuda]) (1.0.0)
Requirement already satisfied: scipy>=1.2.1 in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jax>=0.2.13->numpyro[cuda]) (1.7.3)
Requirement already satisfied: flatbuffers<3.0,>=1.12 in ./miniconda3/envs/jax/lib/python3.9/site-packages (from jaxlib>=0.1.65->numpyro[cuda]) (2.0)
Requirement already satisfied: six in ./miniconda3/envs/jax/lib/python3.9/site-packages (from absl-py->jax>=0.2.13->numpyro[cuda]) (1.16.0)
Installing collected packages: tqdm, numpyro
Successfully installed numpyro-0.8.0 tqdm-4.62.3
```
My question: is numpyro installed so that it will work with CUDA? If not, how do I install a numpyro version that uses CUDA?
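For what it's worth, this is how I have been trying to check whether the GPU is actually picked up (I'm assuming numpyro simply uses whatever backend jax/jaxlib exposes):
```python
import jax
import numpyro

print(jax.devices())           # expect a GPU device here if the CUDA jaxlib is active
print(jax.default_backend())   # "gpu" or "cpu"
numpyro.set_platform("gpu")    # ask numpyro/jax to place computations on the GPU
```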
Thanks,
Petrarca | closed | 2021-11-25T01:56:25Z | 2021-11-27T04:30:15Z | https://github.com/pyro-ppl/numpyro/issues/1238 | [] | PetrarcaBruto | 2 |
huggingface/datasets | machine-learning | 6,585 | losing DatasetInfo in Dataset.map when num_proc > 1 | ### Describe the bug
Hello and thanks for developing this package!
When I process a Dataset with the map function using multiple processes, some attributes that were set on the DatasetInfo get lost and are None in the resulting Dataset.
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetInfo
def run_map(num_proc):
dataset = Dataset.from_dict(
{"col1": [0, 1], "col2": [3, 4]},
info=DatasetInfo(
dataset_name="my_dataset",
),
)
ds = dataset.map(lambda x: x, num_proc=num_proc)
print(ds.info.dataset_name)
run_map(1)
run_map(2)
```
This prints:
```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
None
```
### Expected behavior
I expect the DatasetInfo to be kept as it was and there should be no difference in the output of running map with num_proc=1 and num_proc=2.
Expected output:
```bash
Map: 100%|██████████| 2/2 [00:00<00:00, 724.66 examples/s]
my_dataset
Map (num_proc=2): 100%|██████████| 2/2 [00:00<00:00, 18.25 examples/s]
my_dataset
```
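As a temporary measure I assume the info could be re-attached by hand after the call, although this touches what looks like a private attribute, so it is probably not a supported fix:
```python
ds = dataset.map(lambda x: x, num_proc=2)
ds._info = dataset.info  # hack: relies on a private attribute, likely not a supported API
print(ds.info.dataset_name)
```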
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
- Python version: 3.8.18
- `huggingface_hub` version: 0.20.2
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2 | open | 2024-01-12T13:39:19Z | 2024-01-12T14:08:24Z | https://github.com/huggingface/datasets/issues/6585 | [] | JochenSiegWork | 2 |
pydata/xarray | pandas | 10,115 | DataArray.rolling fails with chunk size of 1 or 2 (reemergence of issue #9862) | ### What happened?
The problem is exactly as written in closed issue #9862, but I'm using:
- xarray: 2025.1.2
- dask: 2025.2.0
Since everything is the same (including the traceback and the behavior when pasted into a console or Binder), please refer to the original issue for the complete description.
I didn't click "new issue" since this is an old issue that was closed but is not fixed.
### What did you expect to happen?
We would expect the rolling mean to calculate correctly.
### Minimal Complete Verifiable Example
```Python
import dask.array as da
import xarray as xr
import numpy as np
# Dimensions and sizes
nx, ny, nt = 100, 200, 50 # size of x, y, and time dimensions
x = np.linspace(0, 10, nx) # x-coordinates
y = np.linspace(0, 20, ny) # y-coordinates
time = np.linspace(0, 1, nt) # time coordinates
# Generate a random Dask array with lazy computation
data = da.random.random(size=(nx, ny, nt), chunks=(100, 200, 1))
# Create an xarray DataArray with coordinates and attributes
data_array = xr.DataArray(
data,
dims=["x", "y", "time"],
coords={"x": x, "y": y, "time": time},
name="dummy_data",
attrs={"units": "arbitrary", "description": "Dummy 3D dataset"}
)
d_rolling = data_array.rolling(time=5).mean()
d_rolling.compute()
```
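(I assume a possible workaround is to rechunk so that the whole `time` dimension sits in a single chunk before rolling, as sketched below, though I have not verified this across versions and it partly defeats the point of chunking.)
```python
d_rolling = data_array.chunk({"time": -1}).rolling(time=5).mean()
d_rolling.compute()
```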
### MVCE confirmation
- [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [x] Complete example — the example is self-contained, including all data and the text of any traceback.
- [x] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [x] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
Traceback (most recent call last):
Cell In[6], line 24
d_rolling.compute()
File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/core/dataarray.py:1206 in compute
return new.load(**kwargs)
File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/core/dataarray.py:1174 in load
ds = self._to_temp_dataset().load(**kwargs)
File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/core/dataset.py:900 in load
evaluated_data: tuple[np.ndarray[Any, Any], ...] = chunkmanager.compute(
File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/namedarray/daskmanager.py:85 in compute
return compute(*data, **kwargs) # type: ignore[no-untyped-call, no-any-return]
File /srv/conda/envs/notebook/lib/python3.10/site-packages/dask/base.py:662 in compute
results = schedule(dsk, keys, **kwargs)
File /srv/conda/envs/notebook/lib/python3.10/site-packages/dask/_task_spec.py:740 in __call__
return self.func(*new_argspec, **kwargs)
ValueError: Moving window (=5) must between 1 and 4, inclusive
```
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0]
python-bits: 64
OS: Linux
OS-release: 6.8.0-52-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2025.1.2
pandas: 2.2.3
numpy: 2.1.3
scipy: 1.15.2
netCDF4: 1.7.2
pydap: 3.5.3
h5netcdf: 1.5.0
h5py: 3.13.0
zarr: 2.18.3
cftime: 1.6.4
nc_time_axis: 1.4.1
iris: 3.11.0
bottleneck: 1.4.2
dask: 2025.2.0
distributed: 2025.2.0
matplotlib: 3.10.1
cartopy: 0.24.0
seaborn: 0.13.2
numbagg: 0.9.0
fsspec: 2025.2.0
cupy: None
pint: 0.24.4
sparse: 0.15.5
flox: None
numpy_groupies: None
setuptools: 75.8.0
pip: 25.0
conda: None
pytest: None
mypy: None
IPython: 8.32.0
sphinx: None
</details>
| open | 2025-03-11T20:16:13Z | 2025-03-11T20:16:17Z | https://github.com/pydata/xarray/issues/10115 | [
"bug",
"needs triage"
] | pittwolfe | 1 |
profusion/sgqlc | graphql | 72 | Union + __fields__() has misleading error message | While the error is misleading (I should fix that), in GraphQL you can't select fields of a union type directly; you must use fragments to select fields depending on each concrete type you want to handle.
In your case, could you try:
```py
import sgqlc
from sgqlc.operation import Operation
from sgqlc.types import String, Type, Union, Field, non_null
class TypeA(Type):
i = int
class TypeB(Type):
s = str
class TypeU(Union):
__types__ = (TypeA, TypeB)
class Query(sgqlc.types.Type):
some_query = Field(non_null(TypeU), graphql_name='someQuery')
op = Operation(Query, name="op_name")
q = op.some_query()
q.__fields__() # this line throws 'AttributeError: TypeA has no field name'
# correct behavior would be to:
q.__as__(TypeA).i()
```
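If it helps, once the `__fields__()` call is replaced by the `__as__(TypeA)` selection, printing the operation should show the inline fragment it generates (written from memory, so the exact formatting may differ):
```py
print(op)
# query op_name {
#   someQuery {
#     ... on TypeA {
#       i
#     }
#   }
# }
```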
_Originally posted by @barbieri in https://github.com/profusion/sgqlc/issues/71#issuecomment-555237354_ | open | 2019-11-19T16:03:12Z | 2019-11-19T16:03:55Z | https://github.com/profusion/sgqlc/issues/72 | [
"bug"
] | barbieri | 0 |
dask/dask | pandas | 10,979 | dask-expr is now a hard dependency | dask-expr is now a hard dependency of dask[dataframe].
We still need to update
- `distributed/continuous_integration/recipes` (see conda build workflow on distributed, currently failing)
- https://github.com/conda-forge/dask-feedstock
- distributed gpuci failing
- other?
CC @phofl @jrbourbeau
XREFs
- dask/dask#10967
- dask/dask#10976
- dask/distributed#8552
| closed | 2024-03-05T13:11:22Z | 2024-03-12T10:20:17Z | https://github.com/dask/dask/issues/10979 | [
"needs triage"
] | crusaderky | 3 |
slackapi/bolt-python | fastapi | 492 | "Add to Slack" Button throws OAuth error | The "Add to Slack" button here https://api.slack.com/docs/slack-button fails with the error
Oops, Something Went Wrong!
Please try again from here or contact the app owner (reason: invalid_browser: This can occur due to page reload, not beginning the OAuth flow from the valid starting URL, or the /slack/install URL not using https://)
This is repeatable on Firefox & Chrome.
The https://XXX.com/slack/install button works. I can also get the "Add to Slack" button to temporarily work if I first go to the https://XXX.com/slack/install page even if I do not click anything there.
Is there a function that https://XXX.com/slack/install is calling when it loads or a setting I am missing to get this to work?
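For context, my Bolt OAuth configuration looks roughly like this (simplified and with placeholder values, so treat it as a sketch rather than my exact code):
```python
from slack_bolt import App
from slack_bolt.oauth.oauth_settings import OAuthSettings

oauth_settings = OAuthSettings(
    client_id="XXX",
    client_secret="XXX",
    scopes=["chat:write", "commands"],
    install_path="/slack/install",             # the page that does work
    redirect_uri_path="/slack/oauth_redirect",
)

app = App(signing_secret="XXX", oauth_settings=oauth_settings)
```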
I am using the URL https://slack.com/oauth/v2/authorize?client_id={{ client_id }}&scope={{ scopes }}&state={{ unique_user_code }} for the button | closed | 2021-10-09T08:40:57Z | 2023-08-22T03:29:08Z | https://github.com/slackapi/bolt-python/issues/492 | [
"question"
] | DareFail | 9 |
kizniche/Mycodo | automation | 653 | Display Addition: Generic 20x4 LCD | Hi Kyle,
Can you add support for generic 20x4 LCD displays?
I was able to set one up and get it to display data using the 16x4 option, but it’s limited to 16 characters wide.

| closed | 2019-04-29T03:01:09Z | 2019-05-07T01:59:57Z | https://github.com/kizniche/Mycodo/issues/653 | [] | Magnum-Pl | 9 |
pennersr/django-allauth | django | 4,133 | the issue with next parameter not accessible anywhere, in the custom adapter | # 🛑 Stop
The issue tracker has been moved to https://codeberg.org/allauth/django-allauth/issues.
Please submit your issue there.
The `next` parameter after the google/login/callback is empty or None, and there seems to be no reliable way to access `next` or the other URL parameters from the original request in a custom adapter before the social login completes.
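A minimal sketch of where I expected to be able to read it, assuming `pre_social_login` on a custom `SocialAccountAdapter` is the right hook (it may not be):
```python
from allauth.socialaccount.adapter import DefaultSocialAccountAdapter

class MySocialAccountAdapter(DefaultSocialAccountAdapter):
    def pre_social_login(self, request, sociallogin):
        # By this point the request is the provider callback (google/login/callback/),
        # which only carries code/state, so the original ?next=... is already gone.
        next_url = request.GET.get("next")
        print("next url seen in adapter:", next_url)  # None
```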
As far as I can tell this has been a persistent bug in this project for a long time, and it still has not been fixed.
Fix it | closed | 2024-10-28T16:56:13Z | 2024-10-28T17:16:38Z | https://github.com/pennersr/django-allauth/issues/4133 | [] | iamunadike | 1 |
piskvorky/gensim | machine-learning | 3,495 | How to open doc2vec trained on an older version of gensim? |
I have a large number of models trained on an older version of gensim. I recently updated my Python environment, and gensim was bumped to the latest version. The problem is that Doc2Vec.load refuses to load the older models. Is there a compatibility mode available? Or what's the cleanest way to load old models and save them in the new format? I am getting the following error:
```
AttributeError                            Traceback (most recent call last)
Cell In[3], line 1
----> 1 model = Doc2Vec.load(r'Z:\process\edgar\business_doc2vec\20230731.model')
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\doc2vec.py:815, in Doc2Vec.load(cls, *args, **kwargs)
810 except AttributeError as ae:
811 logger.error(
812 "Model load error. Was model saved using code from an older Gensim version? "
813 "Try loading older model using gensim-3.8.3, then re-saving, to restore "
814 "compatibility with current code.")
--> 815 raise ae
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\doc2vec.py:809, in Doc2Vec.load(cls, *args, **kwargs)
786 """Load a previously saved :class:`~gensim.models.doc2vec.Doc2Vec` model.
787
788 Parameters
(...)
806
807 """
808 try:
--> 809 return super(Doc2Vec, cls).load(*args, rethrow=True, **kwargs)
810 except AttributeError as ae:
811 logger.error(
812 "Model load error. Was model saved using code from an older Gensim version? "
813 "Try loading older model using gensim-3.8.3, then re-saving, to restore "
814 "compatibility with current code.")
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\word2vec.py:1949, in Word2Vec.load(cls, rethrow, *args, **kwargs)
1947 except AttributeError as ae:
1948 if rethrow:
-> 1949 raise ae
1950 logger.error(
1951 "Model load error. Was model saved using code from an older Gensim Version? "
1952 "Try loading older model using gensim-3.8.3, then re-saving, to restore "
1953 "compatibility with current code.")
1954 raise ae
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\word2vec.py:1942, in Word2Vec.load(cls, rethrow, *args, **kwargs)
1923 """Load a previously saved :class:`~gensim.models.word2vec.Word2Vec` model.
1924
1925 See Also
(...)
1939
1940 """
1941 try:
-> 1942 model = super(Word2Vec, cls).load(*args, **kwargs)
1943 if not isinstance(model, Word2Vec):
1944 rethrow = True
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\utils.py:487, in SaveLoad.load(cls, fname, mmap)
484 compress, subname = SaveLoad._adapt_by_suffix(fname)
486 obj = unpickle(fname)
--> 487 obj._load_specials(fname, mmap, compress, subname)
488 obj.add_lifecycle_event("loaded", fname=fname)
489 return obj
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\word2vec.py:1958, in Word2Vec._load_specials(self, *args, **kwargs)
1956 def _load_specials(self, *args, **kwargs):
1957 """Handle special requirements of `.load()` protocol, usually up-converting older versions."""
-> 1958 super(Word2Vec, self)._load_specials(*args, **kwargs)
1959 # for backward compatibility, add/rearrange properties from prior versions
1960 if not hasattr(self, 'ns_exponent'):
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\utils.py:518, in SaveLoad._load_specials(self, fname, mmap, compress, subname)
516 logger.info("loading %s recursively from %s.* with mmap=%s", attrib, cfname, mmap)
517 with ignore_deprecation_warning():
--> 518 getattr(self, attrib)._load_specials(cfname, mmap, compress, subname)
520 for attrib in getattr(self, '__numpys', []):
521 logger.info("loading %s from %s with mmap=%s", attrib, subname(fname, attrib), mmap)
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\utils.py:1522, in deprecated.<locals>.decorator.<locals>.new_func1(*args, **kwargs)
1515 @wraps(func)
1516 def new_func1(*args, **kwargs):
1517 warnings.warn(
1518 fmt.format(name=func.__name__, reason=reason),
1519 category=DeprecationWarning,
1520 stacklevel=2
1521 )
-> 1522 return func(*args, **kwargs)
File C:\Anaconda3\envs\base_small\lib\site-packages\gensim\models\doc2vec.py:328, in Doc2Vec.docvecs(self)
325 @property
326 @deprecated("The `docvecs` property has been renamed `dv`.")
327 def docvecs(self):
--> 328 return self.dv
AttributeError: 'Doc2Vec' object has no attribute 'dv'
```
| closed | 2023-09-07T13:37:28Z | 2023-09-17T18:56:41Z | https://github.com/piskvorky/gensim/issues/3495 | [] | Nirvana2211 | 3 |
ExpDev07/coronavirus-tracker-api | fastapi | 129 | See country total instead of province. | I don't know if I'm just missing it, but I can't seem to find a query parameter that returns the latest cases per country, instead of per province, as is currently the case. Is the only possibility at the moment to go through each province belonging to a country and to add the total cases together?
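Right now the only approach I can think of is summing the provinces client-side, roughly like this (I'm guessing at the endpoint, parameters, and response shape, so treat it as a sketch):
```python
import requests

BASE_URL = "https://coronavirus-tracker-api.herokuapp.com"  # adjust to wherever the API is hosted

# Guess at the response shape: {"locations": [{"latest": {"confirmed": ...}, ...}, ...]}
resp = requests.get(f"{BASE_URL}/v2/locations", params={"country_code": "US"})
locations = resp.json()["locations"]
country_total = sum(loc["latest"]["confirmed"] for loc in locations)
print(country_total)
```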
Thanks | closed | 2020-03-21T17:52:03Z | 2020-03-21T22:01:37Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/129 | [
"question"
] | lburch02 | 10 |
browser-use/browser-use | python | 1,118 | Sitting with "about:blank" in Chrome Ubuntu 24.10 | ### Bug Description
Fails to run with Ubuntu 24.10 on x86_64
Code:
`from langchain_ollama import ChatOllama`
`from browser_use import Agent, Browser, BrowserConfig`
`from pydantic import SecretStr`
`import asyncio`
`from dotenv import load_dotenv`
`load_dotenv()`
`async def main():`
` llm=ChatOllama(model="qwen2.5", num_ctx=32000)`
` agent = Agent(`
` task="Compare deepseek and open ai pricing",`
` llm=llm,`
` )`
` await agent.run()`
`asyncio.run(main())`
The about:blank never gets updated.
Playwright works fine

### Reproduction Steps
Standard install. Just run the code above. Seems it's due to python not being able to control playwright working with local ollama
### Code Sample
```python
from langchain_ollama import ChatOllama
from browser_use import Agent, Browser, BrowserConfig
from pydantic import SecretStr
import asyncio
from dotenv import load_dotenv
load_dotenv()
async def main():
# Initialize the model
llm=ChatOllama(model="qwen2.5", num_ctx=32000)
agent = Agent(
task="Compare deepseek and open ai pricing",
llm=llm,
)
await agent.run()
asyncio.run(main())
```
### Version
0.1.40
### LLM Model
Local Model (Specify model in description)
### Operating System
Ubuntu 24.10
### Relevant Log Output
```shell
``` | open | 2025-03-24T01:23:57Z | 2025-03-24T05:41:14Z | https://github.com/browser-use/browser-use/issues/1118 | [
"bug"
] | 4EverBuilder | 3 |
johnthagen/python-blueprint | pytest | 242 | Enable Ruff format implicit string concatenation | When this is stable, revisit `ISC001` disable lints.
- https://github.com/astral-sh/ruff/issues/9457#issuecomment-2437519130 | closed | 2024-10-28T13:47:50Z | 2025-01-09T19:26:42Z | https://github.com/johnthagen/python-blueprint/issues/242 | [
"enhancement"
] | johnthagen | 1 |
robotframework/robotframework | automation | 5,003 | Set Library Search Order not working in child suites when used in __init__ file | There is a bug in Robotframework that when I set the `Set Library Search Order`, it is not honored anymore in the child suites, when I initially set it in a `__init__.robot`
I use this Search Order with the python remote server:
```
Import Library Remote htttp://xxxxx:yyyy WITH NAME RemoteLib
Set Library Search Order RemoteLib
```
Folder structure
```
my_project
├── __init__.robot => "Set Library Search Order " in the `Suite Setup`
├── test.robot => "Library Search Order not set anymore"
├── ...
│
``` | open | 2024-01-05T09:20:40Z | 2024-02-21T06:05:16Z | https://github.com/robotframework/robotframework/issues/5003 | [] | derived-coder | 1 |
ydataai/ydata-profiling | pandas | 1,523 | Unexpected error of type DispatchError raised while running data exploratory profiler from function spark_get_series_descriptions | ### Current Behaviour
# converts the data types of the columns in the DataFrame to more appropriate types,
# useful for improving the performance of calculations.
# Selects the columns in the DataFrame that are of type object or category,
# which are the types that are typically considered to be categorical
data_to_analyze = dataframe_to_analyze.toPandas()
<html>
<body>
<!--StartFragment-->
ERROR:data_quality_job.scheduler.data_quality_glue_job:Run data exploratory analysis fails for datasource master_wip in data domain stock_wip: Unexpected error of type DispatchError was raised while data exploratory profiler: Function <code object spark_get_series_descriptions at 0x7fb135521370, file "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 67>Traceback (most recent call last): File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 328, in __call__ return func(*args, **kwargs) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/describe_date_spark.py", line 50, in describe_date_1d_spark bin_edges, hist = df.select(col_name).rdd.flatMap(lambda x: x).histogram(bins_arg) File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1652, in histogram raise TypeError("buckets should be a list or tuple or number(int or long)")TypeError: buckets should be a list or tuple
--
or number(int or long)The above exception was the direct cause of the following exception:Traceback (most recent call last): File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 328, in __call__ return func(*args, **kwargs) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 64, in spark_describe_1d return summarizer.summarize(config, series, dtype=vtype) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/summarizer.py", line 42, in summarize _, _, summary = self.handle(str(dtype), config, series, {"type": str(dtype)}) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/handler.py", line 62, in handle return op(*args) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/handler.py", line 21, in func2 return f(*res) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/handler.py", line 21, in func2 return f(*res) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/handler.py", line 21, in func2 return f(*res) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/handler.py", line 17, in func2 res = g(*x) File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 330, in __call__ raise DispatchError(f"Function {func.__code__}") from exmultimethod.DispatchError: Function <code object describe_date_1d_spark at 0x7fb135546ce0, file "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/describe_date_spark.py", line 22>The above exception was the direct cause of the following exception:Traceback (most recent call last): File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 328, in __call__ return func(*args, **kwargs) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 92, in spark_get_series_descriptions for i, (column, description) in enumerate( File "/usr/local/lib/python3.10/multiprocessing/pool.py", line 870, in next raise value File "/usr/local/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 88, in multiprocess_1d return column, describe_1d(config, df.select(column), summarizer, typeset) File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 330, in __call__ raise DispatchError(f"Function {func.__code__}") from exmultimethod.DispatchError: Function <code object spark_describe_1d at 0x7fb1355210b0, file "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 16>The above exception was the direct cause of the following exception:Traceback (most recent call last): File "/tmp/sls_data_quality_library-0.3.0-py3-none-any.whl/data_quality_job/scheduler/data_quality_glue_job.py", line 1074, in run_data_exploratory_analysis self.dq_file_system_metrics_repository_manager.persist_profile_json_report( File "/tmp/sls_data_quality_library-0.3.0-py3-none-any.whl/data_quality_job/services/data_quality_file_system_metrics_repository.py", line 974, in persist_profile_json_report generated_profile.to_file(output_file=f"{local_json_report}") File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/profile_report.py", line 347, in to_file data = self.to_json() File 
"/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/profile_report.py", line 479, in to_json return self.json File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/profile_report.py", line 283, in json self._json = self._render_json() File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/profile_report.py", line 449, in _render_json description = self.description_set File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/profile_report.py", line 253, in description_set self._description_set = describe_df( File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/describe.py", line 74, in describe series_description = get_series_descriptions( File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 330, in __call__ raise DispatchError(f"Function {func.__code__}") from exmultimethod.DispatchError: Function <code object spark_get_series_descriptions at 0x7fb135521370, file "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 67>INFO:py4j.clientserver:Closing down clientserver connectionINFO:py4j.clientserver:Closing down clientserver connectionINFO:py4j.clientserver:Closing down clientserver connectionWARNING:data_quality_job.scheduler.data_quality_glue_job:Processing dataset fails to provide an exploratory data analysis report : Unexpected error of type DispatchError was raised while data exploratory profiler: Function <code object spark_get_series_descriptions at 0x7fb135521370, file "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 67>
<!--EndFragment-->
</body>
</html>
### Expected Behaviour
While converting my spark dataframe to pandas, the report should be generated properly for the dataset
The dataframe should not be considered as spark dataframe
No error should be raised
### Data Description
<html>
<body>
<!--StartFragment-->
INFO:data_quality_job.services.data_quality_operations:Data profiler dataset data types to analyze: storage_location categorystock_in_transit float32unrestricted_use_stock float32stock_at_vendor float32stock_in_transfer float32stock_in_quality_inspection float32valuation_class float32block_stock_returns float32material_part_number objectstock_in_transfer_plant_to_plant float32stock_value float32material_type categoryblocked_stock float32account_description categoryplant categoryall_restricted_stock float32valuated_stock_quantities float32gl_account float32record
--
_timestamp datetime64[ns]non_valuated_stock_quantities float32dtype: object
<!--EndFragment-->
</body>
</html>
### Code that reproduces the bug
```Python
def determine_run_minimal_mode(self, nb_columns, nb_records):
"""
Determine if the function should run in minimal mode.
Args:
nb_columns (int): The number of columns in the dataset.
nb_records (int): The number of records in the dataset.
Returns:
bool: True if the function should run in minimal mode, False otherwise.
"""
return True if (len(nb_columns) >= EDA_PROFILING_MODE_NB_COLUMNS_LIMIT or nb_records >= EDA_PROFILING_MODE_NB_RECORDS_LIMIT) else False
def create_profile_report(self,
dataset_to_analyze: pd.DataFrame,
report_name: str,
dataset_description_url: str) -> ProfileReport:
"""
Creates a profile report for a given dataset.
Args:
dataset_to_analyze (pd.DataFrame): The dataset to analyze and generate a profile report for.
report_name (str): The name of the report.
dataset_description_url (str): The URL of the dataset description.
Returns:
ProfileReport: The generated profile report.
"""
# Perform data quality operations and generate a profile report
# ...
# variables preferred characterization settings
variables_settings = {
"num": {"low_categorical_threshold": 5, "chi_squared_threshold": 0.999, "histogram_largest": 10},
"cat": {"length": True, "characters": False, "words": False,
"cardinality_threshold": 20, "imbalance_threshold": 0.5,
"n_obs": 5, "chi_squared_threshold": 0.999},
"bool": {"n_obs": 3, "imbalance_threshold": 0.5}
}
missing_diagrams_settings = {
"heatmap": False,
"matrix": True,
"bar": False
}
# Plot rendering option, way how to pass arguments to the underlying matplotlib visualization engine
plot_rendering_settings = {
"histogram": {"x_axis_labels": True, "bins": 0, "max_bins": 10},
"dpi": 200,
"image_format": "png",
"missing": {"cmap": "RdBu_r", "force_labels": True},
"pie": {"max_unique": 10, "colors": ["gold", "b", "#FF796C"]},
"correlation": {"cmap": "RdBu_r", "bad": "#000000"}
}
# Correlation matrices through description_set
correlations_settings = {
"auto": {"calculate": True, "warn_high_correlations": True, "threshold": 0.9},
"pearson": {"calculate": False, "warn_high_correlations": False, "threshold": 0.9},
"spearman": {"calculate": False, "warn_high_correlations": False, "threshold": 0.9},
"kendall": {"calculate": False, "warn_high_correlations": False, "threshold": 0.9},
"phi_k": {"calculate": False, "warn_high_correlations": True, "threshold": 0.9},
"cramers": {"calculate": False, "warn_high_correlations": False, "threshold": 0.9},
}
categorical_maximum_correlation_distinct = 20
report_rendering_settings = {
"precision": 10,
}
interactions_settings = {
"continuous": False,
"targets": []
}
# Customizing the report's theme
html_report_styling = {
"style": {
"theme": "flatly",
"full_width": True,
"primary_colors": {"#66cc00", "#ff9933", "#ff0099"}
}
}
current_datetime = datetime.now()
current_date = current_datetime.date()
current_year = current_date.strftime("%Y")
# compute amount of data used for profiling
samples_percent_size = (min(len(dataset_to_analyze.columns.tolist()), 20) * min(dataset_to_analyze.shape[0], 100000)) / (len(dataset_to_analyze.columns.tolist()) * dataset_to_analyze.shape[0])
samples = {
"head": 0,
"tail": 0,
"random": 0
}
dataset_description = {
"description": f"This profiling report was generated using a sample of {samples_percent_size}% of the filtered original dataset.",
"copyright_year": current_year,
"url": dataset_description_url
}
# Identify time series variables if any
# Enable tsmode to True to automatically identify time-series variables
# and provide the column name that provides the chronological order of your time-series
# time_series_type_schema = {}
time_series_mode = False
# time_series_sortby = None
# for column_name in dataset_to_analyze.columns.tolist():
# if any(keyword in column_name.lower() for keyword in ["date", "timestamp"]):
# self.logger.info("candidate column_name as timeseries %s", column_name)
# time_series_type_schema[column_name] = "timeseries"
# if len(time_series_type_schema) > 0:
# time_series_mode = True
# time_series_sortby = "Date Local"
# is_run_minimal_mode = self.determine_run_minimal_mode(dataset_to_analyze.columns.tolist(), dataset_to_analyze.shape[0])
# Convert the Pandas DataFrame to a Spark DataFrame
# Configure pandas-profiling to handle Spark DataFrames
# while preserving the categorical encoding
# Enable Arrow-based columnar data transfers
self.spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
pd.DataFrame.iteritems = pd.DataFrame.items
# psdf = ps.from_pandas(dataset_to_analyze)
# data_to_analyze = psdf.to_spark()
data_to_analyze = self.spark.createDataFrame(dataset_to_analyze)
ydata_profiling_instance_config = Settings()
ydata_profiling_instance_config.infer_dtypes = True
# ydata_profiling_instance_config.Config.set_option("profilers", {"Spark": {"verbose": True}})
return ProfileReport(
# dataset_to_analyze,
data_to_analyze,
title=report_name,
dataset=dataset_description,
sort=None,
progress_bar=False,
vars=variables_settings,
explorative=True,
plot=plot_rendering_settings,
report=report_rendering_settings,
correlations=correlations_settings,
categorical_maximum_correlation_distinct=categorical_maximum_correlation_distinct,
missing_diagrams=missing_diagrams_settings,
samples=samples,
# correlations=None,
interactions=interactions_settings,
html=html_report_styling,
# minimal=is_run_minimal_mode,
minimal=True,
tsmode=time_series_mode,
# tsmode=False,
# sortby=time_series_sortby,
# type_schema=time_series_type_schema
)
def is_categorical_column(self, df, column_name, n_unique_threshold=20, ratio_unique_values=0.05, exclude_patterns=[]):
"""
Determines whether a column in a pandas DataFrame is categorical.
Args:
df (pandas.DataFrame): The DataFrame to check.
column_name (str): The name of the column to check.
n_unique_threshold (int): The threshold for the number of unique values.
ratio_unique_values (float): The threshold for the ratio of unique values to total values.
exclude_patterns (list): A list of patterns to exclude from consideration.
Returns:
bool: True if the column is categorical, False otherwise.
"""
if df[column_name].dtype in [object, str]:
# Check if the column name matches any of the exclusion patterns
if any(pattern in column_name for pattern in exclude_patterns):
return False
# Check if the number of unique values is less than a threshold
if df[column_name].nunique() < n_unique_threshold:
return True
# Check if the ratio of unique values to total values is less than a threshold
if 1. * df[column_name].nunique() / df[column_name].count() < ratio_unique_values:
print(df[column_name], "ratio is", 1. * df[column_name].nunique() / df[column_name].count())
return True
# Check if any of the other conditions are true
return False
def get_categorical_columns(self, df, n_unique_threshold=10, ratio_threshold=0.05, exclude_patterns=[]):
"""
Determines which columns in a pandas DataFrame are categorical.
Args:
df (pandas.DataFrame): The DataFrame to check.
n_unique_threshold (int): The threshold for the number of unique values.
ratio_threshold (float): The threshold for the ratio of unique values to total values.
exclude_patterns (list): A list of patterns to exclude from consideration.
Returns:
list: A list of the names of the categorical columns.
"""
categorical_cols = []
for column_name in df.columns:
if self.is_categorical_column(df, column_name, n_unique_threshold, ratio_threshold, exclude_patterns):
categorical_cols.append(column_name)
return categorical_cols
def perform_exploratory_data_analysis(self, report_name: str,
dataframe_to_analyze: SparkDataFrame,
columns_list: list,
description_url: str, json_file_path: str) -> None:
"""
Performs exploratory data analysis on a given DataFrame.
Args:
dataframe_to_analyze (DataFrame): The DataFrame to perform exploratory data analysis on.
columns_list (list): A list of dictionaries containing column information.
"""
try:
# Cast the columns in the data DataFrame to match the Glue table column types
self.logger.info("Performs exploratory data analysis on a given DataFrame with columns list: %s",
columns_list)
for analyze_column in columns_list:
dataframe_to_analyze = dataframe_to_analyze.withColumn(
analyze_column["Name"],
dataframe_to_analyze[analyze_column["Name"]].cast(analyze_column["Type"]),
)
# Verify the updated column types
self.logger.info("Dataframe column type casted from data catalog: %s",
dataframe_to_analyze.printSchema())
# converts the data types of the columns in the DataFrame to more appropriate types,
# useful for improving the performance of calculations.
# Selects the columns in the DataFrame that are of type object or category,
# which are the types that are typically considered to be categorical
data_to_analyze = dataframe_to_analyze.toPandas()
data_to_analyze = data_to_analyze.infer_objects()
data_to_analyze.convert_dtypes().dtypes
categorical_cols = self.get_categorical_columns(data_to_analyze, n_unique_threshold=10, ratio_threshold=0.05, exclude_patterns=['date', 'timestamp', 'time', 'year', 'month', 'day', 'hour', 'minute', 'second', 'part_number'])
# categorical_cols = data_to_analyze.select_dtypes(include=["object", "category"]).columns.tolist()
self.logger.info("Data profiler dataset detected potential categorical columns %s and its type %s",
categorical_cols, data_to_analyze.dtypes)
for column_name in data_to_analyze.columns.tolist():
if column_name in categorical_cols:
data_to_analyze[column_name] = data_to_analyze[column_name].astype("category")
else:
# search for undetected categorical columns
if any(term in str.lower(column_name) for term in ["plant", "program"]):
self.logger.info("Undetected potential categorical column %s", column_name)
# for column_name in data_to_analyze.columns.tolist():
# # search for non categorical columns
# # if any(term in str.lower(column_name) for term in ["partnumber", "part_number", "_item", "_number", "plant", "program"]):
# if any(term in str.lower(column_name) for term in ["plant", "program"]):
# if column_name in categorical_cols:
# self.logger.info("Data profiler dataset proposed categorical column %s", column_name)
# data_to_analyze[column_name] = data_to_analyze[column_name].astype("category")
# if any(term in str.lower(column_name) for term in ["partnumber", "part_number", "_item", "_number", "_timestamp", "_date"]):
# self.logger.info("Data profiler dataset detected non categorical column %s", column_name)
# data_to_analyze[column_name] = data_to_analyze[column_name].astype("str")
if any(term in str.lower(column_name) for term in ["timestamp"]):
self.logger.info("Data profiler dataset detected datetime column %s", column_name)
try:
if pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d', errors='coerce').notnull().all():
data_to_analyze[column_name] = data_to_analyze[column_name].apply(pd.to_datetime)
# data_to_analyze[column_name] = data_to_analyze[column_name].astype(np.datetime64)
elif pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S', errors='coerce').notnull().all():
data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S')
elif data_to_analyze[column_name].dtypes in ['numpy.int64', 'int64']:
data_to_analyze[column_name] = data_to_analyze[column_name].apply(lambda x: datetime.fromtimestamp(int(x) / 1000))
elif data_to_analyze[column_name].dtypes == 'datetime64[ms]':
data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%dT%H:%M:%SZ')
data_to_analyze[column_name] = data_to_analyze[column_name].values.astype(dtype='datetime64[ns]')
else:
data_to_analyze[column_name] = data_to_analyze[column_name].astype('str')
# if not isinstance(data_to_analyze[column_name].dtype, np.datetime64):
# data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S')
# # if not np.issubdtype(data_to_analyze[column_name].dtype, np.datetime64):
# # data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S', errors="coerce")
# # elif is_datetime64_any_dtype(data_to_analyze[column_name]):
# # data_to_analyze[column_name] = data_to_analyze[column_name].astype(np.datetime64)
# data_to_analyze[column_name] = data_to_analyze[column_name].values.astype(dtype='datetime64[ns]')
# # elif data_to_analyze[column_name].dtype == 'datetime64[ns]':
# # data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%dT%H:%M:%SZ')
# # data_to_analyze[column_name] = data_to_analyze[column_name].values.astype(dtype='datetime64[ns]')
# # else:
# # data_to_analyze[column_name] = data_to_analyze[column_name].astype('datetime64')
# except ValueError:
# try:
# data_to_analyze[column_name] = data_to_analyze[column_name].astype(np.date_time)
# except ValueError:
# try:
# if (data_to_analyze[column_name].dtypes in ["numpy.int64", "int64"]):
# data_to_analyze[column_name] = data_to_analyze[column_name].apply(
# lambda x: datetime.fromtimestamp(int(x) / 1000))
except ValueError:
data_to_analyze[column_name] = data_to_analyze[column_name].astype('str')
elif any(term in str.lower(column_name) for term in ["date"]):
self.logger.info("Data profiler dataset detected date column %s", column_name)
try:
if pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d', errors='coerce').notnull().all():
data_to_analyze[column_name] = data_to_analyze[column_name].dt.date
elif pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S', errors='coerce').notnull().all():
data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S')
elif data_to_analyze[column_name].dtypes in ['numpy.int64', 'int64']:
data_to_analyze[column_name] = data_to_analyze[column_name].apply(lambda x: datetime.fromtimestamp(int(x) / 1000))
elif data_to_analyze[column_name].dtypes == 'datetime64[ms]':
data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%dT%H:%M:%SZ')
data_to_analyze[column_name] = data_to_analyze[column_name].values.astype(dtype='datetime64[ns]')
else:
data_to_analyze[column_name] = data_to_analyze[column_name].astype('str')
# data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name]).dt.date
# except ValueError:
# try:
# data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name],
# format="%Y-%m-%d", errors="coerce")
# except ValueError:
# try:
# if (data_to_analyze[column_name].dtypes in ["numpy.int64", "int64"]):
# data_to_analyze[column_name] = data_to_analyze[column_name].apply(
# lambda x: datetime.fromtimestamp(int(x) / 1000))
except ValueError:
pass
self.logger.info("Data profiler changed dtypes %s", data_to_analyze.dtypes)
# Downcast data types: If the precision of your data doesn't require float64,
# consider downcasting to a lower precision data type like float32 or even int64.
# This can significantly reduce memory usage and improve computational efficiency.
try:
float64_cols = list(data_to_analyze.select_dtypes(include="float64"))
self.logger.info("Data profiler dataset detected float64 column %s", column_name)
data_to_analyze[float64_cols] = data_to_analyze[float64_cols].astype("float32")
# data_to_analyze[
# data_to_analyze.select_dtypes(np.float64).columns
# ] = data_to_analyze.select_dtypes(np.float64).astype(np.float32)
except ValueError:
pass
data_to_analyze.reset_index(drop=True, inplace=True)
self.logger.info("Data profiler dataset data types to analyze: %s", data_to_analyze.dtypes)
# If dealing with large datasets, consider using sampling techniques
# to reduce the amount of data processed is useful for exploratory
# data analysis or initial profiling.
# Sample 10.000 rows
# if data_to_analyze.count() >= EDA_PROFILING_MODE_NB_RECORDS_LIMIT:
# data_to_analyze = data_to_analyze.sample(EDA_PROFILING_MODE_NB_RECORDS_LIMIT)
# Generates a profile report, providing for time-series data,
# an overview of the behaviour of time dependent variables
# regarding behaviours such as time plots, seasonality, trends,
# stationary and data gaps, and identifying gaps in the time series,
# caused either by missing values or by entries missing in the time index
profile = self.create_profile_report(dataset_to_analyze=data_to_analyze,
report_name=report_name,
dataset_description_url=description_url)
return profile
except Exception as exc:
error_message = f"Unexpected error of type {type(exc).__name__} was raised while data exploratory profiler: {str(exc)}"
self.logger.exception(
"Run data exploratory analysis fails to generate report %s: %s",
report_name, error_message,
)
raise RuntimeError(error_message) from exc
```
### pandas-profiling version
v.4.6.3
### Dependencies
```Text
Ipython-8.19.0
MarkupSafe-2.1.3
PyAthena-3.0.10
PyWavelets-1.5.0
SQLAlchemy-1.4.50
altair-4.2.2
annotated-types-0.6.0
anyio-4.2.0
argon2-cffi-23.1.0
argon2-cffi-bindings-21.2.0
arrow-1.3.0
asn1crypto-1.5.1
asttokens-2.4.1
async-lru-2.0.4
asyncio-3.4.3
awswrangler-3.4.2
babel-2.14.0
beautifulsoup4-4.12.2
bleach-6.1.0
boto-session-manager-1.7.1
boto3-1.34.9
boto3-helpers-1.4.0
botocore-1.34.9
cffi-1.16.0
colorama-0.4.6
comm-0.2.0
cryptography-41.0.7
dacite-1.8.1
debugpy-1.8.0
decorator-5.1.1
defusedxml-0.7.1
delta-spark-2.3.0
deltalake-0.14.0
editorconfig-0.12.3
entrypoints-0.4
exceptiongroup-1.2.0
executing-2.0.1
fastjsonschema-2.19.1
flatten_dict-0.4.2
fqdn-1.5.1
fsspec-2023.12.2
func-args-0.1.1
great-expectations-0.18.7
greenlet-3.0.3
htmlmin-0.1.12
imagehash-4.3.1
ipykernel-6.28.0
ipywidgets-8.1.1
isoduration-20.11.0
iterproxy-0.3.1
jedi-0.19.1
jinja2-3.1.2
jsbeautifier-1.14.11
json2html-1.3.0
json5-0.9.14 jsonpatch-1.33
jsonpath-ng-aerospike-1.5.3
jsonpointer-2.4 jsonschema-4.20.0
jsonschema-specifications-2023.12.1
jupyter-client-8.6.0
jupyter-core-5.6.0
jupyter-events-0.9.0
jupyter-lsp-2.2.1
jupyter-server-2.12.1
jupyter-server-terminals-0.5.1
jupyterlab-4.0.9
jupyterlab-pygments-0.3.0
jupyterlab-server-2.25.2
jupyterlab-widgets-3.0.9
llvmlite-0.41.1
lxml-4.9.4
makefun-1.15.2
markdown-it-py-3.0.0
marshmallow-3.20.1
matplotlib-inline-0.1.6
mdurl-0.1.2
mistune-3.0.2
mmhash3-3.0.1
multimethod-1.10
nbclient-0.9.0
nbconvert-7.13.1
nbformat-5.9.2
nest-asyncio-1.5.8
networkx-3.2.1
notebook-7.0.6
notebook-shim-0.2.3
numba-0.58.1
overrides-7.4.0
pandas-2.0.3
pandocfilters-1.5.0
parso-0.8.3
pathlib-mate-1.3.1
pathlib2-2.3.7.post1
patsy-0.5.5
pexpect-4.9.0
phik-0.12.3
platformdirs-4.1.0 ply-3.11
prometheus-client-0.19.0 prompt-toolkit-3.0.43 psutil-5.9.7 ptyprocess-0.7.0
pure-eval-0.2.2
py4j-0.10.9.5 pyarrow-12.0.1
pycparser-2.21
pydantic-2.5.3
pydantic-core-2.14.6
pydeequ-1.2.0 pygments-2.17.2
pyiceberg-0.5.1
pyparsing-3.1.1
pyspark-3.3.4
python-json-logger-2.0.7
pytz-2023.3.post1
pyzmq-25.1.2
redshift_connector-2.0.918
referencing-0.32.0
requests-2.31.0
rfc3339-validator-0.1.4
rfc3986-validator-0.1.1 rich-13.7.0
rpds-py-0.16.2
ruamel.yaml-0.17.17
s3path-0.4.2
s3pathlib-2.0.1
s3transfer-0.10.0
scramp-1.4.4
send2trash-1.8.2
smart-open-6.4.0
sniffio-1.3.0
sortedcontainers-2.4.0
soupsieve-2.5
sqlalchemy-redshift-0.8.14
sqlalchemy_utils-0.41.1
stack-data-0.6.3
strictyaml-1.7.3
tabulate-0.9.0
tangled-up-in-unicode-0.2.0
terminado-0.18.0
tinycss2-1.2.1
tomli-2.0.1
toolz-0.12.0
tornado-6.4
traitlets-5.14.0
typeguard-4.1.5
types-python-dateutil-2.8.19.14
typing-extensions-4.9.0
tzlocal-5.2
uri-template-1.3.0
urllib3-2.0.7
uuid7-0.1.0
visions-0.7.5
wcwidth-0.2.12
webcolors-1.13
webencodings-0.5.1 websocket-client-1.7.0
widgetsnbextension-4.0.9
wordcloud-1.9.3
```
### OS
linux
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2023-12-29T22:52:16Z | 2023-12-29T23:10:28Z | https://github.com/ydataai/ydata-profiling/issues/1523 | [
"needs-triage"
] | tboz38 | 1 |
scikit-optimize/scikit-optimize | scikit-learn | 417 | `test_expected_minimum` failure | One test for the expected minimum function fails somewhere deep in scipy. Any ideas on how to track this down?
```
$ pytest --pdb -x -m 'not slow_test' skopt/tests/test_utils.py (skopt)
============================= test session starts ==============================
platform darwin -- Python 3.5.2, pytest-3.0.7, py-1.4.31, pluggy-0.4.0
rootdir: /Users/thead/git/scikit-optimize, inifile:
collected 3 items
skopt/tests/test_utils.py ..F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@pytest.mark.fast_test
def test_expected_minimum():
res = gp_minimize(bench3,
[(-2.0, 2.0)],
x0=[0.],
noise=0.0,
n_calls=20,
> random_state=1)
skopt/tests/test_utils.py:81:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
skopt/optimizer/gp.py:238: in gp_minimize
callback=callback, n_jobs=n_jobs)
skopt/optimizer/base.py:249: in base_minimize
result = optimizer.tell(next_x, next_y, fit=fit_model)
skopt/optimizer/optimizer.py:407: in tell
est.fit(self.space.transform(self.Xi), self.yi)
skopt/learning/gaussian_process/gpr.py:194: in fit
super(GaussianProcessRegressor, self).fit(X, y)
../../anaconda/envs/skopt/lib/python3.5/site-packages/sklearn/gaussian_process/gpr.py:217: in fit
bounds))
../../anaconda/envs/skopt/lib/python3.5/site-packages/sklearn/gaussian_process/gpr.py:424: in _constrained_optimization
fmin_l_bfgs_b(obj_func, initial_theta, bounds=bounds)
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/optimize/lbfgsb.py:193: in fmin_l_bfgs_b
**opts)
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/optimize/lbfgsb.py:330: in _minimize_lbfgsb
f, g = func_and_grad(x)
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/optimize/lbfgsb.py:278: in func_and_grad
f = fun(x, *args)
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/optimize/optimize.py:289: in function_wrapper
return function(*(wrapper_args + args))
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/optimize/optimize.py:63: in __call__
fg = self.fun(x, *args)
../../anaconda/envs/skopt/lib/python3.5/site-packages/sklearn/gaussian_process/gpr.py:194: in obj_func
theta, eval_gradient=True)
../../anaconda/envs/skopt/lib/python3.5/site-packages/sklearn/gaussian_process/gpr.py:388: in log_marginal_likelihood
L = cholesky(K, lower=True) # Line 2
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/linalg/decomp_cholesky.py:81: in cholesky
check_finite=check_finite)
../../anaconda/envs/skopt/lib/python3.5/site-packages/scipy/linalg/decomp_cholesky.py:20: in _cholesky
a1 = asarray_chkfinite(a)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = array([[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan],
[ nan, na...nan, nan, nan],
[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan]])
dtype = None, order = None
def asarray_chkfinite(a, dtype=None, order=None):
"""Convert the input to an array, checking for NaNs or Infs.
Parameters
----------
a : array_like
Input data, in any form that can be converted to an array. This
includes lists, lists of tuples, tuples, tuples of tuples, tuples
of lists and ndarrays. Success requires no NaNs or Infs.
dtype : data-type, optional
By default, the data-type is inferred from the input data.
order : {'C', 'F'}, optional
Whether to use row-major (C-style) or
column-major (Fortran-style) memory representation.
Defaults to 'C'.
Returns
-------
out : ndarray
Array interpretation of `a`. No copy is performed if the input
is already an ndarray. If `a` is a subclass of ndarray, a base
class ndarray is returned.
Raises
------
ValueError
Raises ValueError if `a` contains NaN (Not a Number) or Inf (Infinity).
See Also
--------
asarray : Create and array.
asanyarray : Similar function which passes through subclasses.
ascontiguousarray : Convert input to a contiguous array.
asfarray : Convert input to a floating point ndarray.
asfortranarray : Convert input to an ndarray with column-major
memory order.
fromiter : Create an array from an iterator.
fromfunction : Construct an array by executing a function on grid
positions.
Examples
--------
Convert a list into an array. If all elements are finite
``asarray_chkfinite`` is identical to ``asarray``.
>>> a = [1, 2]
>>> np.asarray_chkfinite(a, dtype=float)
array([1., 2.])
Raises ValueError if array_like contains Nans or Infs.
>>> a = [1, 2, np.inf]
>>> try:
... np.asarray_chkfinite(a)
... except ValueError:
... print('ValueError')
...
ValueError
"""
a = asarray(a, dtype=dtype, order=order)
if a.dtype.char in typecodes['AllFloat'] and not np.isfinite(a).all():
raise ValueError(
> "array must not contain infs or NaNs")
E ValueError: array must not contain infs or NaNs
../../anaconda/envs/skopt/lib/python3.5/site-packages/numpy/lib/function_base.py:1022: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /Users/thead/anaconda/envs/skopt/lib/python3.5/site-packages/numpy/lib/function_base.py(1022)asarray_chkfinite()
-> "array must not contain infs or NaNs")
``` | closed | 2017-06-29T06:58:10Z | 2018-01-10T22:50:13Z | https://github.com/scikit-optimize/scikit-optimize/issues/417 | [] | betatim | 2 |
lanpa/tensorboardX | numpy | 42 | wrong histograms | Hi, I am having problems plotting histograms. I think there is a very good chance that it is not because of a bug in tensorboard-pytorch, but I'm not sure what I could be doing wrong, and I'm not sure where to ask, so if someone could help I would appreciate it.
I am trying to plot histograms of the gradients like this:
```
loss.backward()
for n, p in filter(lambda np: np[1].grad is not None, spectral_model.named_parameters()):
print(n, p.grad.data.min(), p.grad.data.max())
summary_writer.add_histogram(n, p.grad.data.cpu().numpy(), global_step=step)
```
The mins and maxes show that the values are all between -.15 and .15 (and in fact most values are much closer to zero than that). But the histograms seem to show that all the values are located at one extremely high value, like 3.01e+18:

| closed | 2017-10-15T12:30:24Z | 2017-10-17T06:23:52Z | https://github.com/lanpa/tensorboardX/issues/42 | [] | greaber | 6 |
Anjok07/ultimatevocalremovergui | pytorch | 779 | Value Error | Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
ValueError: "zero-size array to reduction operation maximum which has no identity"
Traceback Error: "
File "UVR.py", line 6059, in process_start
File "separate.py", line 369, in seperate
File "lib_v5\spec_utils.py", line 125, in normalize
File "numpy\core\_methods.py", line 40, in _amax
"
Error Time Stamp [2023-09-06 18:57:06]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Voc FT
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
is_save_align: True
is_mdx_c_seg_def: True
is_invert_spec: False
is_deverb_vocals: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: MP3
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems | closed | 2023-09-06T15:58:12Z | 2023-09-27T23:27:44Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/779 | [] | DealPotato | 1 |
chaos-genius/chaos_genius | data-visualization | 575 | [BUG] DQ Anomaly Metrics should not be displayed when we do count aggregation on a categorical column | ## Describe the bug
If we create a KPI with a categorical metric column and count as the aggregation, the DQ mean and max graphs are displayed empty and only the DQ count graph is displayed.
## Explain the environment
- **Chaos Genius version**: v0.3.0
## Expected behavior
DQ Graphs should not be displayed. We can't take mean/max for categorical values and DQ Count graph will be the same as the overall KPI graph.
## Screenshots

| closed | 2022-01-05T15:21:42Z | 2022-01-21T06:03:01Z | https://github.com/chaos-genius/chaos_genius/issues/575 | [
"🐛 bug",
"🛠️ backend"
] | Amatullah | 0 |
NullArray/AutoSploit | automation | 1,222 | Unhandled Exception (26a2b144c) | Autosploit version: `4.0`
OS information: `Linux-5.2.0-2parrot1-amd64-x86_64-with-Parrot-4.7-stable`
Running context: `autosploit.py`
Error mesage: ``hosts.txt` and `/home/arc/AutoSploit/hosts.txt` are the same file`
Error traceback:
```
Traceback (most recent call):
File "/home/arc/AutoSploit/lib/term/terminal.py", line 644, in terminal_main_display
self.do_load_custom_hosts(choice_data_list[-1])
File "/home/arc/AutoSploit/lib/term/terminal.py", line 456, in do_load_custom_hosts
shutil.copy(file_path, lib.settings.HOST_FILE)
File "/usr/lib/python2.7/shutil.py", line 139, in copy
copyfile(src, dst)
File "/usr/lib/python2.7/shutil.py", line 83, in copyfile
raise Error("`%s` and `%s` are the same file" % (src, dst))
Error: `hosts.txt` and `/home/arc/AutoSploit/hosts.txt` are the same file
```
Metasploit launched: `False`
| closed | 2019-12-14T21:25:40Z | 2019-12-15T01:03:03Z | https://github.com/NullArray/AutoSploit/issues/1222 | [] | AutosploitReporter | 0 |
QuivrHQ/quivr | api | 2,639 | [Bug]: failed to fetch;failed to connect 54323,but 5050 report status ok | ### What happened?
A bug happened!
After I entered my account and password at http://localhost:3000/login, the web page reported "failed to fetch". I checked [5050](http://localhost:5050/) and it said `{"status":"OK"}`, but I failed to connect to http://localhost:54323/; it said "Unable to access this website. Localhost refused our connection request."

### Relevant log output
```bash
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
worker | raise mapped_exc(message) from exc
worker | httpx.ConnectError: [Errno 111] Connection refused
backend-core | INFO: 172.26.0.1:45280 - "GET /user HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45282 - "GET /user/identity HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45282 - "GET /user HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45280 - "GET /user/identity HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45280 - "GET /user HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45282 - "GET /user/identity HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45282 - "GET /user HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.26.0.1:45280 - "GET /user/identity HTTP/1.1" 403 Forbidden
backend-core | INFO: 127.0.0.1:57488 - "GET /healthz HTTP/1.1" 200 OK
backend-core | INFO: 127.0.0.1:41430 - "GET /healthz HTTP/1.1" 200 OK
```
### Twitter / LinkedIn details
_No response_ | closed | 2024-06-07T10:46:09Z | 2024-09-11T12:08:49Z | https://github.com/QuivrHQ/quivr/issues/2639 | [
"bug",
"Stale",
"area: backend"
] | HarrietW221b | 6 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3685 | Wrong homepage in package description | ### What version of GlobaLeaks are you using?
4.13.12
### What browser(s) are you seeing the problem on?
Other
### What operating system(s) are you seeing the problem on?
macOS
### Describe the issue
The shell command 'dpkg -s globaleaks' presents the information of the installed package.
Currently it shows the following on Ubuntu 22.04:
Package: globaleaks
Status: install ok installed
Priority: optional
Section: web
Installed-Size: 87176
Maintainer: Giovanni Pellerano <giovanni.pellerano@globaleaks.org>
Architecture: all
Version: 4.13.12
Depends: python3:any, adduser, apparmor, apparmor-utils, gnupg, iptables, lsb-base, python3-acme, python3-debian, python3-cryptography, python3-h2, python3-nacl, python3-openssl, python3-gnupg, python3-priority, python3-pyotp, python3-sqlalchemy, python3-twisted, python3-txtorcon, tor
Conffiles:
/etc/apparmor.d/usr.bin.globaleaks 42cc8bb81a4ff0706a6e7635b8cd5e56
/etc/default/globaleaks 753092d375c0453441385ff18f364856
/etc/init.d/globaleaks 436f0388680721cfe13dbfd069ce9f41
Description: Free and open-source whistleblowing software
GlobaLeaks is free, open source software enabling anyone to easily set up and
maintain a secure whistleblowing platform
Homepage: **https://www.globleaks.org/**
### Proposed solution
As a minor issue, I would like to recommend correcting the Homepage from www.globleaks.org to www.globaleaks.org. | closed | 2023-10-08T14:59:37Z | 2023-10-09T22:25:14Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3685 | [
"T: Bug",
"C: Packaging"
] | flashlight4 | 2 |
recommenders-team/recommenders | data-science | 1,376 | [FEATURE] Add Microsoft markdown files | ### Description
Add updated code of conduct and security markdown files
### Expected behavior with the suggested feature
CODE_OF_CONDUCT.md matching: https://github.com/microsoft/repo-templates/blob/main/shared/CODE_OF_CONDUCT.md
SECURITY.md matching https://github.com/microsoft/repo-templates/blob/main/shared/SECURITY.md
### Other Comments
| closed | 2021-04-14T18:18:53Z | 2021-04-15T18:37:03Z | https://github.com/recommenders-team/recommenders/issues/1376 | [
"enhancement"
] | gramhagen | 1 |
hankcs/HanLP | nlp | 1,171 | 自定义词 添加到 CustomDictionary 里面就可以被识别,自己加载词典就不被识别 | <!--
注意事项和版本号必填,否则不回复。若希望尽快得到回复,请按模板认真填写,谢谢合作。
-->
## 注意事项
请确认下列注意事项:
* 我已仔细阅读下列文档,都没有找到答案:
- [首页文档](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [常见问题](https://github.com/hankcs/HanLP/wiki/FAQ)
* 我已经通过[Google](https://www.google.com/#newwindow=1&q=HanLP)和[issue区检索功能](https://github.com/hankcs/HanLP/issues)搜索了我的问题,也没有找到答案。
* 我明白开源社区是出于兴趣爱好聚集起来的自由社区,不承担任何责任或义务。我会礼貌发言,向每一个帮助我的人表示感谢。
* [x] 我在此括号内输入x打钩,代表上述事项确认完毕。
## 版本号
<!-- 发行版请注明jar文件名去掉拓展名的部分;GitHub仓库版请注明master还是portable分支 -->
当前最新版本号是:
我使用的版本是:
<!--以上属于必填项,以下可自由发挥-->
## 我的问题
<!-- 请详细描述问题,越详细越可能得到解决 -->
## 复现问题
<!-- 你是如何操作导致产生问题的?比如修改了代码?修改了词典或模型?-->
### 步骤
1. 首先……
2. 然后……
3. 接着……
### 触发代码
```
public void testIssue1234() throws Exception
{
CustomDictionary.add("用户词语");
System.out.println(StandardTokenizer.segment("触发问题的句子"));
}
```
### 期望输出
<!-- 你希望输出什么样的正确结果?-->
```
期望输出
```
### 实际输出
<!-- HanLP实际输出了什么?产生了什么效果?错在哪里?-->
```
实际输出
```
## 其他信息
<!-- 任何可能有用的信息,包括截图、日志、配置文件、相关issue等等。-->
| closed | 2019-05-06T02:41:51Z | 2020-01-01T10:49:50Z | https://github.com/hankcs/HanLP/issues/1171 | [
"ignored"
] | 99sun99 | 15 |
JaidedAI/EasyOCR | pytorch | 1011 | Train my own recognition model | I want to use english_g2.pth to train my own recognition model; is there any tutorial? The deep-text-recognition-benchmark model looks like 200 MB, which is a little big for me. Thanks. | open | 2023-05-09T06:43:29Z | 2024-01-25T18:32:46Z | https://github.com/JaidedAI/EasyOCR/issues/1011 | [] | stealth0414 | 3 |
ageitgey/face_recognition | machine-learning | 643 | Train model with more than 1 image per person | * face_recognition version: 1.2.3
* Python version: 2.7.15
* Operating System: Windows 10
### Description
I would like to train the model with more than one image per person to achieve better recognition results. Is it possible?
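For context, the approach I had in mind is simply keeping several encodings per person and comparing new faces against all of them; a rough sketch (the file names are just placeholders, and I assume each image contains a single face):
```python
import face_recognition

# Several photos of the same person -> several known encodings
known_encodings = []
for path in ["person_1.jpg", "person_2.jpg", "person_3.jpg"]:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)  # one encoding per detected face
    if encodings:
        known_encodings.append(encodings[0])

# Compare an unknown face against all known encodings of that person
unknown = face_recognition.load_image_file("unknown.jpg")
unknown_encodings = face_recognition.face_encodings(unknown)
if unknown_encodings:
    matches = face_recognition.compare_faces(known_encodings, unknown_encodings[0])
    print(any(matches))
```
Is something like this the recommended way, or is there a proper training step I am missing?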
One more question: what does the `[0]` mean here:
```
known_face_encoding_user = face_recognition.face_encodings('image.jpg')[0]
```
If I put [1] here I receive "IndexError: list index out of range" error.
| closed | 2018-10-09T10:59:15Z | 2018-10-09T11:37:34Z | https://github.com/ageitgey/face_recognition/issues/643 | [] | cepxuo | 1 |
drivendataorg/cookiecutter-data-science | data-science | 181 | Link to "Edit in Github" still broken on project homepage | This is a continuation of issue #146. The link seems to still be broken.
# Steps to Repro:
- Go to the project homepage http://drivendata.github.io/cookiecutter-data-science/
- In the top right there is a button "Edit on Github" that links to this page: https://github.com/drivendata/cookiecutter-data-science/edit/master/docs/index.md
- Click on that link
# What I got
The link sends me to a 404 "not found" error page on github.
# What I wanted
What I expected was it would send me to some page on GitHub.
# Possible fix
I imagine that maybe the docs are built from the gh-pages branch and not the master branch; if that's the case we would need to edit [this line specifically](https://github.com/drivendata/cookiecutter-data-science/blob/9e01bf8d09c6dd65f435acc50444971b771ebfe4/index.html#L74) on the gh-pages branch.
| closed | 2019-09-02T18:18:56Z | 2020-01-23T01:52:35Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/181 | [] | BrunoGomesCoelho | 1 |
litestar-org/litestar | asyncio | 3,464 | Bug: SerializationException when running modeling-and-features demo from docs | ### Description
Hi,
First of all thanks for developing Litestar, it proves to be a very useful piece of software here. Unfortunately I ran into an issue.
I ran into an `msgspec_error` when requesting a page backed by SQLAlchemy models that are connected via relationships. It seems that the database is queried correctly and a list of objects is returned, but then an exception is thrown when converting the objects to JSON.
I ran into this issue in my production code, but while isolating an MCVE I noticed that the provided example in the documentation also shows the same unexpected behaviour when tested on two different machines. One crucial change to the code, however, is adding an author to the database.
Since this is quite a show-stopper for me: Thanks in advance for having a look at this!
### URL to code causing the issue
https://docs.litestar.dev/2/tutorials/repository-tutorial/01-modeling-and-features.html
### MCVE
```python
from datetime import date
from typing import TYPE_CHECKING
from uuid import UUID
from sqlalchemy import ForeignKey, select
from sqlalchemy.orm import Mapped, mapped_column, relationship
from litestar import Litestar, get
from litestar.contrib.sqlalchemy.base import UUIDAuditBase, UUIDBase
from litestar.contrib.sqlalchemy.plugins import AsyncSessionConfig, SQLAlchemyAsyncConfig, SQLAlchemyInitPlugin
if TYPE_CHECKING:
from sqlalchemy.ext.asyncio import AsyncEngine, AsyncSession
# the SQLAlchemy base includes a declarative model for you to use in your models.
# The `Base` class includes a `UUID` based primary key (`id`)
class Author(UUIDBase):
name: Mapped[str]
dob: Mapped[date]
books: Mapped[list["Book"]] = relationship(back_populates="author", lazy="selectin")
# The `AuditBase` class includes the same UUID` based primary key (`id`) and 2
# additional columns: `created_at` and `updated_at`. `created_at` is a timestamp of when the
# record created, and `updated_at` is the last time the record was modified.
class Book(UUIDAuditBase):
title: Mapped[str]
author_id: Mapped[UUID] = mapped_column(ForeignKey("author.id"))
author: Mapped[Author] = relationship(lazy="joined", innerjoin=True, viewonly=True)
session_config = AsyncSessionConfig(expire_on_commit=False)
sqlalchemy_config = SQLAlchemyAsyncConfig(
connection_string="sqlite+aiosqlite:///test.sqlite", session_config=session_config
) # Create 'async_session' dependency.
sqlalchemy_plugin = SQLAlchemyInitPlugin(config=sqlalchemy_config)
async def on_startup() -> None:
"""Initializes the database."""
async with sqlalchemy_config.get_engine().begin() as conn:
await conn.run_sync(UUIDBase.metadata.create_all)
#crucially there needs to be an author in the table for the error to appear
await conn.execute(Author.__table__.insert().values(name="F. Scott Fitzgerald"))
@get(path="/authors")
async def get_authors(db_session: "AsyncSession", db_engine: "AsyncEngine") -> list[Author]:
"""Interact with SQLAlchemy engine and session."""
return list(await db_session.scalars(select(Author)))
app = Litestar(
route_handlers=[get_authors],
on_startup=[on_startup],
plugins=[SQLAlchemyInitPlugin(config=sqlalchemy_config)],
debug=True
)
```
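For what it's worth, the workaround I plan to try next on top of this MCVE is registering the serialization plugin as well (a sketch reusing the names from the MCVE above; I am assuming `SQLAlchemySerializationPlugin` is importable under this name in 2.8):
```python
from litestar.contrib.sqlalchemy.plugins import SQLAlchemySerializationPlugin

app = Litestar(
    route_handlers=[get_authors],
    on_startup=[on_startup],
    plugins=[
        SQLAlchemyInitPlugin(config=sqlalchemy_config),
        SQLAlchemySerializationPlugin(),  # intended to let handlers return mapped models directly
    ],
    debug=True,
)
```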
### Steps to reproduce
```bash
1. Go to the https://docs.litestar.dev/2/tutorials/repository-tutorial/01-modeling-and-features.html page
2. Download the code
3. Run the demo with minimal requirements installed and go to http://localhost:8000/authors
4. See the error
```
### Screenshots
_No response_
### Logs
```bash
File "/usr/local/lib/python3.12/site-packages/litestar/serialization/msgspec_hooks.py", line 143, in encode_json
raise SerializationException(str(msgspec_error)) from msgspec_error
litestar.exceptions.base_exceptions.SerializationException: Unsupported type: <class '__main__.Author'>
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/litestar/serialization/msgspec_hooks.py", line 141, in encode_json
return msgspec.json.encode(value, enc_hook=serializer) if serializer else _msgspec_json_encoder.encode(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litestar/serialization/msgspec_hooks.py", line 88, in default_serializer
raise TypeError(f"Unsupported type: {type(value)!r}")
TypeError: Unsupported type: <class '__main__.Author'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/litestar/middleware/exceptions/middleware.py", line 219, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.12/site-packages/litestar/routes/http.py", line 82, in handle
response = await self._get_response_for_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litestar/routes/http.py", line 134, in _get_response_for_request
return await self._call_handler_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litestar/routes/http.py", line 158, in _call_handler_function
response: ASGIApp = await route_handler.to_response(app=scope["app"], data=response_data, request=request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litestar/handlers/http_handlers/base.py", line 557, in to_response
return await response_handler(app=app, data=data, request=request) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litestar/handlers/http_handlers/_utils.py", line 79, in handler
return response.to_asgi_response(app=None, request=request, headers=normalize_headers(headers), cookies=cookies) # pyright: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litestar/response/base.py", line 451, in to_asgi_response
body=self.render(self.content, media_type, get_serializer(type_encoders)),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litestar/response/base.py", line 392, in render
return encode_json(content, enc_hook)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litestar/serialization/msgspec_hooks.py", line 143, in encode_json
raise SerializationException(str(msgspec_error)) from msgspec_error
litestar.exceptions.base_exceptions.SerializationException: Unsupported type: <class '__main__.Author'>
INFO: 127.0.0.1:44906 - "GET /authors HTTP/1.1" 500 Internal Server Error
```
### Litestar Version
2.8.2
### Platform
- [X] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-05-03T13:33:38Z | 2025-03-20T15:54:40Z | https://github.com/litestar-org/litestar/issues/3464 | [
"Bug :bug:",
"Documentation :books:",
"Good First Issue"
] | JorenSix | 3 |
mars-project/mars | scikit-learn | 2,750 | [BUG] NameError: name 'pq' is not defined if pyarrow is not installed | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
```python
mars/services/lifecycle/api/oscar.py:19: in <module>
from ..supervisor.tracker import LifecycleTrackerActor
mars/services/lifecycle/supervisor/__init__.py:15: in <module>
from .service import LifecycleSupervisorService
mars/services/lifecycle/supervisor/service.py:17: in <module>
from .tracker import LifecycleTrackerActor
mars/services/lifecycle/supervisor/tracker.py:21: in <module>
from ...meta.api import MetaAPI
mars/services/meta/__init__.py:15: in <module>
from .api import AbstractMetaAPI, MetaAPI, MockMetaAPI, WebMetaAPI
mars/services/meta/api/__init__.py:16: in <module>
from .oscar import MetaAPI, MockMetaAPI
mars/services/meta/api/oscar.py:21: in <module>
from ....dataframe.core import (
mars/dataframe/__init__.py:33: in <module>
from .datasource.read_parquet import read_parquet
mars/dataframe/datasource/read_parquet.py:98: in <module>
class ParquetEngine:
mars/dataframe/datasource/read_parquet.py:122: in ParquetEngine
use_arrow_dtype=None,
E NameError: name 'pq' is not defined
```
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version 3.7.7
2. The version of Mars you use latest master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| closed | 2022-02-25T03:23:35Z | 2022-02-25T03:24:55Z | https://github.com/mars-project/mars/issues/2750 | [
"reso: duplicate"
] | fyrestone | 1 |
alteryx/featuretools | scikit-learn | 2,453 | fix dfs warnings in get_recommended_primitives | in `_recommend_non_numeric_primitives` we make a call to dfs to generate features for all valid primitives. That list of valid primitives usually includes many numeric primitives that don't get used and cause an UnusedPrimitive warning. | open | 2023-01-18T18:49:35Z | 2023-06-26T19:16:12Z | https://github.com/alteryx/featuretools/issues/2453 | [] | ozzieD | 0 |
lanpa/tensorboardX | numpy | 293 | Cannot add graph for DataParallel model | Hi there, I get a `KeyError: '322'` when I try to `add_graph` for a DataParallel model on multiple GPUs. Here is a mini-example that reproduces the error:
What should I do about this error?
```
import torch
import torchvision.models as models
from tensorboardX import SummaryWriter
device = 'cuda'
net = torch.nn.DataParallel(models.__dict__['resnet50']().to(device))
dump_input = torch.rand((10, 3, 224, 224), device=device)
SummaryWriter('./tmp').add_graph(net, dump_input, verbose=False)
```
| closed | 2018-12-04T02:31:40Z | 2018-12-10T03:51:06Z | https://github.com/lanpa/tensorboardX/issues/293 | [
"seems fixed"
] | bl0 | 6 |
huggingface/transformers | deep-learning | 36,295 | [Bugs] RuntimeError: No CUDA GPUs are available in transformers v4.48.0 or above when running Ray RLHF example | ### System Info
- `transformers` version: 4.48.0
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Yes
- Using GPU in script?: Yes
- GPU type: NVIDIA A800-SXM4-80GB
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hi all!
I failed to run the vLLM RLHF example script. The code is exactly the same as on the vLLM docs page: https://docs.vllm.ai/en/latest/getting_started/examples/rlhf.html
The error messages are:
```
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] Error executing method 'init_device'. This might cause deadlock in distributed execution.
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] Traceback (most recent call last):
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 566, in execute_method
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] return run_method(target, method, args, kwargs)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 2220, in run_method
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] return func(*args, **kwargs)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker.py", line 155, in init_device
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] torch.cuda.set_device(self.device)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 478, in set_device
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] torch._C._cuda_setDevice(device)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] torch._C._cuda_init()
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] RuntimeError: No CUDA GPUs are available
(MyLLM pid=70946) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::MyLLM.__init__() (pid=70946, ip=11.163.37.230, actor_id=202b48118215566c51057a0101000000, repr=<test_ray_vllm_rlhf.MyLLM object at 0x7fb7453669b0>)
(MyLLM pid=70946) File "/data/cfs/workspace/test_ray_vllm_rlhf.py", line 96, in __init__
(MyLLM pid=70946) super().__init__(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 1051, in inner
(MyLLM pid=70946) return fn(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 242, in __init__
(MyLLM pid=70946) self.llm_engine = self.engine_class.from_engine_args(
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 484, in from_engine_args
(MyLLM pid=70946) engine = cls(
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 273, in __init__
(MyLLM pid=70946) self.model_executor = executor_class(vllm_config=vllm_config, )
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 262, in __init__
(MyLLM pid=70946) super().__init__(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 51, in __init__
(MyLLM pid=70946) self._init_executor()
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 90, in _init_executor
(MyLLM pid=70946) self._init_workers_ray(placement_group)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 355, in _init_workers_ray
(MyLLM pid=70946) self._run_workers("init_device")
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 476, in _run_workers
(MyLLM pid=70946) self.driver_worker.execute_method(sent_method, *args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 575, in execute_method
(MyLLM pid=70946) raise e
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 566, in execute_method
(MyLLM pid=70946) return run_method(target, method, args, kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 2220, in run_method
(MyLLM pid=70946) return func(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker.py", line 155, in init_device
(MyLLM pid=70946) torch.cuda.set_device(self.device)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 478, in set_device
(MyLLM pid=70946) torch._C._cuda_setDevice(device)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
(MyLLM pid=70946) torch._C._cuda_init()
(MyLLM pid=70946) RuntimeError: No CUDA GPUs are available
```
I found that with transformers==4.47.1 the script runs normally. However, when I tried transformers==4.48.0, 4.48.1, and 4.49.0 I got the error messages above. I then checked the pip environments with `pip list` and found that only the transformers versions differ.
I have also tried changing the vllm version between 0.7.0 and 0.7.2; the behavior is the same.
Related Ray issues:
* https://github.com/vllm-project/vllm/issues/13597
* https://github.com/vllm-project/vllm/issues/13230
### Expected behavior
The script runs normally. | open | 2025-02-20T07:58:49Z | 2025-03-22T08:03:03Z | https://github.com/huggingface/transformers/issues/36295 | [
"bug"
] | ArthurinRUC | 3 |
huggingface/datasets | computer-vision | 6,881 | AttributeError: module 'PIL.Image' has no attribute 'ExifTags' | When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised:
```Python traceback
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
The error traceback:
```Python traceback
~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self)
1391 # `IterableDataset` automatically fills missing columns with None.
1392 # This is done with `_apply_feature_types_on_example`.
-> 1393 example = _apply_feature_types_on_example(
1394 example, self.features, token_per_repo_id=self._token_per_repo_id
1395 )
~/huggingface/datasets/src/datasets/iterable_dataset.py in _apply_feature_types_on_example(example, features, token_per_repo_id)
1080 encoded_example = features.encode_example(example)
1081 # Decode example for Audio feature, e.g.
-> 1082 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
1083 return decoded_example
1084
~/huggingface/datasets/src/datasets/features/features.py in decode_example(self, example, token_per_repo_id)
1974
-> 1975 return {
1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1977 if self._column_requires_decoding[column_name]
~/huggingface/datasets/src/datasets/features/features.py in <dictcomp>(.0)
1974
1975 return {
-> 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1977 if self._column_requires_decoding[column_name]
1978 else value
~/huggingface/datasets/src/datasets/features/features.py in decode_nested_example(schema, obj, token_per_repo_id)
1339 # we pass the token to read and decode files from private repositories in streaming mode
1340 if obj is not None and schema.decode:
-> 1341 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1342 return obj
1343
~/huggingface/datasets/src/datasets/features/image.py in decode_example(self, value, token_per_repo_id)
187 image = PIL.Image.open(BytesIO(bytes_))
188 image.load() # to avoid "Too many open files" errors
--> 189 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None:
190 image = PIL.ImageOps.exif_transpose(image)
191 if self.mode and self.mode != image.mode:
~/huggingface/datasets/venv/lib/python3.9/site-packages/PIL/Image.py in __getattr__(name)
75 )
76 return categories[name]
---> 77 raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
78
79
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
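The failing lookup seems to be `PIL.Image.ExifTags`, which as far as I can tell only exists in newer Pillow releases. A version-tolerant sketch of the orientation-tag lookup (the `ExifTags.TAGS` fallback for old Pillow is my assumption):
```python
import PIL.Image

try:
    ORIENTATION = PIL.Image.ExifTags.Base.Orientation  # newer Pillow
except AttributeError:
    from PIL import ExifTags
    # Older Pillow: look the tag id up by name in the TAGS mapping
    ORIENTATION = next(k for k, v in ExifTags.TAGS.items() if v == "Orientation")
```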
### Environment info
Since datasets 2.19.0 | closed | 2024-05-08T06:33:57Z | 2024-07-18T06:49:30Z | https://github.com/huggingface/datasets/issues/6881 | [
"bug"
] | albertvillanova | 3 |
InstaPy/InstaPy | automation | 6,232 | unable to like | i get the unable to load media error( for like 4.5 posts ) and after that it gets a media post and
After getting the post when is time to click the like button the app stops
Traceback (most recent call last):
File "C:\Users\ribei\Downloads\Reddit\insta.py", line 32, in
session.like_by_tags(["cats"], amount=10)
File "C:\Users\ribei\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\instapy.py", line 1957, in like_by_tags
inappropriate, user_name, is_video, reason, scope = check_link(
File "C:\Users\ribei\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\like_util.py", line 633, in check_link
media = post_page[0]["shortcode_media"]
KeyError: 0
[Finished in 207.3s] #6230 | closed | 2021-06-14T16:41:42Z | 2023-01-01T11:34:55Z | https://github.com/InstaPy/InstaPy/issues/6232 | [] | diogoribeirodev | 4 |
explosion/spaCy | data-science | 13,620 | A question about document tokenization | Hi, I found a very interesting result to tokenize a document. The example code is:
```
import spacy
nlp = spacy.load("en_core_web_sm")
# doc = nlp("Apple is looking at. startup for $1 billion.")
# for token in doc:
# print(token.text, token.pos_, token.dep_)
# Example text
text = '''Panel C: Gene Associations in LUAD and NATs
In LUAD tumors, ZNF71 is associated with JUN, SAMHD1, RNASEL, IFNGR1, IKKB, and EIF2A.
In non-cancerous adjacent tissues (NATs), the associated genes are OAS1, MP3K7, and IFNAR2.'''
# Process the text
doc = nlp(text)
out_sen = []
# Iterate over the sentences
for sent in doc.sents:
if len(sent) != 0:
print(sent.text)
out_sen.append(sent)
```
The resulting out_sen has length 1, and the whole text is treated as a single sentence. Is this a bug or the default behavior? Thanks.
The spacy version is 3.7.6 | open | 2024-09-07T13:23:34Z | 2024-11-10T07:07:11Z | https://github.com/explosion/spaCy/issues/13620 | [] | HelloWorldLTY | 1 |
wiseodd/generative-models | tensorflow | 40 | KL Loss. | Hi, I just noticed that the KL Loss in the VAE paper would look like this:
0.5 * torch.sum(torch.exp(logVar) + mean ** 2 - 1. - logVar)
And here, the KL Loss is:
torch.mean(0.5 * torch.sum(torch.exp(logVar) + mean ** 2 - 1. - logVar, 1))
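For reference, both snippets implement (per example) what I understand to be the closed-form KL term for a diagonal Gaussian posterior N(mu, sigma^2) against a standard normal prior, with logVar = log sigma^2:
```latex
\mathrm{KL}\left(\mathcal{N}(\mu,\sigma^{2})\,\|\,\mathcal{N}(0,1)\right)
  = \frac{1}{2}\sum_{j}\left(e^{\log\sigma_{j}^{2}} + \mu_{j}^{2} - 1 - \log\sigma_{j}^{2}\right)
```
So as far as I can tell, the only difference between the two snippets is the reduction over the batch: the first sums the per-example KL values over the whole batch, while the second averages them.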
What are your thoughts on this? | closed | 2017-10-29T02:08:32Z | 2017-10-30T13:37:13Z | https://github.com/wiseodd/generative-models/issues/40 | [] | Prasanna1991 | 2 |
datapane/datapane | data-visualization | 142 | UTF-8 – CP1252 encoding issue in exported HTML report | <!--
**NOTE** Please use this template to open issues, bugs, etc., only.
See our [GitHub Discussions Board](https://github.com/datapane/datapane/discussions) to discuss feature requests, general support, ideas, and to chat with the community.
-->
### System Information
<!-- Please fill this out to help us understand the bug/issue -->
- OS: Windows 10
- Python version: 3.8.10
- Python environment: conda
- Using jupyter: true
- Datapane version: 0.11.11
### Bug / Issue
When displaying a pandas dataframe in DataPane as a Table (not DataTable, which does work correctly), euro sign characters (€) display as â¬:

This doesn't happen inside JupyterLab, or when exporting the original dataframe to html using `df.to_html()`. I am calling `report.save()` rather than `upload` as I want to generate local html reports.
In #9 you mention it could be an issue with Windows' default encoding not being UTF-8, are there any steps I should take to fix this?
Thank you! | closed | 2021-08-09T11:11:20Z | 2021-08-18T18:10:20Z | https://github.com/datapane/datapane/issues/142 | [
"triage"
] | inigohidalgo | 6 |
iMerica/dj-rest-auth | rest-api | 328 | Is it possible to implement custom email validation in AccountAdapter instead of overriding RegisterSerializer.validate_email? | Hi,
I'm writing a multitenant app and wanted to use AllAuth.
However, it [does not have an option to replace `EmailAddress`](https://github.com/pennersr/django-allauth/issues/2450)
I also found this issue pennersr/django-allauth/issues/976 that allows implementing custom logic to validate email uniqueness in `AccountAdapter`, merged in pennersr/django-allauth/pull/1407.
The change in that PR updates the `BaseSignupForm.clean_email` method. If I understand correctly, `dj-rest-auth`'s `RegisterSerializer` is modelled after `BaseSignupForm`.
Would it be possible to do the same in `RegisterSerializer` so I do not have to provide my own and can keep the changes only in AccountAdapter?
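For context, this is roughly the per-project workaround I am trying to avoid: a custom register serializer that simply delegates to the adapter (a rough sketch, not tested, and the setting that points dj-rest-auth at the custom serializer is omitted):
```python
from allauth.account.adapter import get_adapter
from dj_rest_auth.registration.serializers import RegisterSerializer


class AdapterEmailRegisterSerializer(RegisterSerializer):
    def validate_email(self, email):
        # Delegate the email cleaning / uniqueness logic to the AccountAdapter,
        # mirroring what allauth's BaseSignupForm.clean_email does.
        return get_adapter().clean_email(email)
```
It would be much nicer if `RegisterSerializer` called the adapter like this out of the box, so all customization could stay in the adapter.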
| open | 2021-11-12T07:30:25Z | 2021-11-12T07:42:55Z | https://github.com/iMerica/dj-rest-auth/issues/328 | [] | 1oglop1 | 0 |
Miserlou/Zappa | django | 1,927 | Package Error: python-dateutil | Would you please support the newest version of python-dateutil?
```
ERROR: zappa 0.48.2 has requirement python-dateutil<2.7.0,>=2.6.1, but you'll have python-dateutil 2.8.0 which is incompatible.
``` | open | 2019-09-13T23:53:24Z | 2021-05-08T16:28:33Z | https://github.com/Miserlou/Zappa/issues/1927 | [] | weasteam | 18 |
benbusby/whoogle-search | flask | 780 | [BUG] Some parts of the UI are light in dark theme | **Describe the bug**
Some parts of the UI are light in dark theme. Some of those parts are not readable because the text and the background are white.
**To Reproduce**
Some searches



**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- 0.7.3
| closed | 2022-06-12T13:48:57Z | 2022-06-13T17:08:31Z | https://github.com/benbusby/whoogle-search/issues/780 | [
"bug"
] | ngosang | 5 |
graphistry/pygraphistry | jupyter | 218 | [ENH] Error propagation in files mode | ```python
df = pd.DataFrame({'s': ['a', 'b', 'c'], 'd': ['b', 'c', 'a']})
graphistry.edges(df, 'WRONG', 'd').plot(as_files=True, render=False)
```
This will not hint at the binding error, while `as_files=False` will. Both should; it is unclear whether PyGraphistry is inspecting validity on the viz create response, or whether validity is not being set.
"enhancement",
"good-first-issue"
] | lmeyerov | 0 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 243 | Running locally almost halts the computer | The first time I ran this locally, my computer slowed to a crawl after Python used up 100% of my RAM and CPU. I waited 20 minutes, and ended the task. Is there some setting to ensure it doesn't consume so many resources?
My PC is a year old, and pretty decent:
**Operating System**
Windows 10 Pro 64-bit Version 21H2 (OS Build 19044.1706)
**CPU**
Intel Core i9 10900K @ 3.70GHz, 3696 Mhz, 10 Core(s), 20 Logical Processor(s)
**RAM**
Corsair Vengeance RGB Pro 64 GB (2 x 32 GB) DDR4-3200 CL16
**Motherboard**
Gigabyte Z590 AORUS MASTER (U3E1)
**Graphics**
LG ULTRAWIDE (3840x1600@60Hz)
Intel UHD Graphics 630 (Gigabyte)
2047MB NVIDIA GeForce RTX 3080
| open | 2022-09-25T21:29:14Z | 2022-11-21T09:03:11Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/243 | [] | zvit | 1 |
jschneier/django-storages | django | 1,443 | S3 credentials in links missing for private buckets after upgrade to Django 5.1 | Hi @jschneier!
I found an issue with Django 5.1. After the upgrade (using django-storages 1.14.4), the GET-parameter credentials for private S3 buckets are no longer added to the links in my templates. This means that I don't have access to the files.
I have no idea what causes it, and since I have little knowledge of how this amazing package works, I can't really contribute any suggestions. 😔 I just verified that it's really Django 5.1.
Here's my setup, it might help in reconstructing the case.
```python
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage
class PrivateMediaStorage(S3Boto3Storage):
location = settings.AWS_PRIVATE_MEDIA_LOCATION
querystring_expire = 3600 # seconds until the generated link expires
default_acl = "bucket-owner-full-control"
file_overwrite = False
custom_domain = False
```
Thanks so much for looking into this!
Ronny
PS: It seens unrelated to https://github.com/jschneier/django-storages/issues/1437 since there, Django 4.2 was used. | closed | 2024-08-15T07:04:05Z | 2025-02-09T01:07:47Z | https://github.com/jschneier/django-storages/issues/1443 | [] | GitRon | 12 |
aminalaee/sqladmin | sqlalchemy | 524 | Show Enum values in detail page | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
On the Create page the Enums are shown properly; the detail page should also show the Enum values.
### Steps to reproduce the bug
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Debugging material
_No response_
### Environment
All
### Additional context
_No response_ | closed | 2023-06-26T12:05:53Z | 2023-06-28T10:02:11Z | https://github.com/aminalaee/sqladmin/issues/524 | [
"good first issue"
] | aminalaee | 0 |
plotly/dash | jupyter | 2,391 | Navigation Button with tooltip error when clicking it | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
```
dash 2.7.1
dash-auth 1.4.1
dash-bootstrap-components 1.3.0
dash-core-components 2.0.0
dash-daq 0.5.0
dash-extensions 0.1.8
dash-html-components 2.0.0
dash-table 5.0.0
python-dash 0.0.1
```
- if frontend related, tell us your Browser, Version and OS
- OS: Mac OS 12.3.1
- Safari, Chrome
**Describe the bug**
MWE:
```
import dash
from dash import dcc
from dash import html
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output
app = dash.Dash(__name__, suppress_callback_exceptions=True)
app.layout = html.Div([
dcc.Location(id='url', refresh=False),
html.Div(id='page-content'),
])
def get_overview_page():
page = html.Div([
dcc.Link(
html.Button('Neu', id='new_button',
style={'margin-left': '10px', 'width': '100px', 'height': '27px',
'fontSize': '16px'}),
href='/new-entry'
),
dbc.Tooltip(
"A.",
target="new_button", placement="top"
)
], style={'width': '30%', 'margin-top': '10px', 'display': 'inline-block', 'text-align': 'left'})
return page
# Update the index
@app.callback(Output('page-content', 'children'),
[Input('url', 'pathname')])
def display_page(pathname):
if pathname == '/new-entry':
return html.Div()
else:
return get_overview_page()
if __name__ == '__main__':
#app.run_server(debug=True, port=8080, host='0.0.0.0')
app.run_server(debug=True, port=8086, host='127.0.0.1')
```
When you press the button and move the mouse you receive:
`An object was provided as children instead of a component, string, or number (or list of those). Check the children property that looks something like: { "1": { "props": { "is_open": false } } }`
When you remove the tooltip, it works, so the error must have something to do with it.
**Expected behavior**
No error.
| open | 2023-01-19T17:42:54Z | 2024-08-13T19:25:10Z | https://github.com/plotly/dash/issues/2391 | [
"bug",
"P3"
] | Birdy3000 | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,050 | 400 Error? | Hi, I'm trying to spin up a simple socket connection w/ React + Flask... I'm unfortunately getting a 400 error... any thoughts around why this is? Happy to answer any questions around configs.
<img width="498" alt="Screen Shot 2019-08-27 at 2 44 14 PM" src="https://user-images.githubusercontent.com/11385142/63776473-620d2d80-c8d9-11e9-8c77-3e3f09791028.png">
<img width="448" alt="Screen Shot 2019-08-27 at 2 44 27 PM" src="https://user-images.githubusercontent.com/11385142/63776475-620d2d80-c8d9-11e9-8e7a-8f45cfd7410b.png">
<img width="784" alt="Screen Shot 2019-08-27 at 2 44 32 PM" src="https://user-images.githubusercontent.com/11385142/63776476-62a5c400-c8d9-11e9-8168-e4bb9b4b3b39.png">
<img width="791" alt="Screen Shot 2019-08-27 at 2 44 38 PM" src="https://user-images.githubusercontent.com/11385142/63776477-62a5c400-c8d9-11e9-8adb-63e0b445aaa8.png">
<img width="596" alt="Screen Shot 2019-08-27 at 2 44 45 PM" src="https://user-images.githubusercontent.com/11385142/63776479-62a5c400-c8d9-11e9-8d97-2ab361c1acd9.png">
| closed | 2019-08-27T13:46:20Z | 2020-01-02T03:49:33Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1050 | [
"question"
] | leehol | 2 |
waditu/tushare | pandas | 1,725 | import tushare后logging打不出来日志 | Python 3.8.16,下面这样是可以打出日志的
```
import logging
# import tushare
logging.basicConfig(level=logging.INFO)
logging.info("a info log")
```
As soon as I uncomment `import tushare`, logging stops printing logs.
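The workaround I am trying at the moment, in case it helps: my guess (not a confirmed diagnosis) is that tushare or one of its dependencies configures the root logger at import time, after which a plain `basicConfig()` becomes a no-op, so I force a reconfiguration (`force=` needs Python >= 3.8):
```python
import logging
import tushare  # noqa: F401  (import first, so its logging setup happens before ours)

# force=True removes any handlers already attached to the root logger
# before applying this configuration.
logging.basicConfig(level=logging.INFO, force=True)
logging.info("a info log")
```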
Could someone help me figure out the proper way to solve this? | open | 2024-01-05T16:54:43Z | 2024-01-05T16:55:16Z | https://github.com/waditu/tushare/issues/1725 | [] | CurryGaifan | 0 |
FactoryBoy/factory_boy | sqlalchemy | 936 | How to access Params from Post-Create Hook? | #### Description
I am using this package with SQLAlchemy models. My goal is to use a factory to create a model instance along with a number of associated model instances, supporting both specific and generic associations, like these two use cases:
```py
# use two new gyms:
UserFactory(gyms_count=2)
# use specified gym(s)
gym = GymFactory()
UserFactory(gyms=[gym])
```
I am able to use a `post_generation` hook to create the related objects, but I'm having issues accessing one of the params when doing so. Or maybe I'm misunderstanding how the params work.
#### To Reproduce
##### Model / Factory code
Models (many to many association: user has many gyms, and vice versa):
```python
class Gym(db.Model):
__tablename__ = "gyms"
id = db.Column(db.Integer, primary_key=True, index=True)
title = db.Column(db.String, nullable=False) #> "My Gym"
memberships = db.relationship("Membership", back_populates="gym")
class User(db.Model):
__tablename__ = "users"
id = db.Column(db.Integer, primary_key=True, index=True)
email = db.Column(db.String, index=True, nullable=False, unique=True)
memberships = db.relationship("Membership", back_populates="user")
class Membership(db.Model, MyModel):
__tablename__ = "memberships"
id = db.Column(db.Integer, primary_key=True, index=True)
gym_id = db.Column(db.Integer, db.ForeignKey("gyms.id"), nullable=False, index=True)
user_id = db.Column(db.Integer, db.ForeignKey("users.id"), nullable=False, index=True)
gym = db.relationship("Gym", back_populates="memberships", uselist=False)
user = db.relationship("User", back_populates="memberships", uselist=False)
```
Factories:
```python
class BaseFactory(SQLAlchemyModelFactory):
class Meta(object):
sqlalchemy_session = db.session
class UserFactory(BaseFactory):
class Meta:
model = User
id = Sequence(lambda n: n+1)
email = Sequence(lambda n: f"u{n+1}@example.com")
class Params:
gyms_count = 0
@post_generation
def gyms(obj, create, extracted, **kwargs):
if not create:
# Simple build, do nothing.
return
if extracted:
for gym in extracted:
Membership.find_or_create(gym_id=gym.id, user_id=obj.id)
gyms_count = kwargs.get("gyms_count") or 0
#gyms_count = obj.gyms_count
# I tried `kwargs.get("gyms_count")` and `obj.gyms_count` but neither was successful.
# how to get the gyms count param here?
for _ in range(0, gyms_count):
gym = GymFactory()
Membership.find_or_create(gym_id=gym.id, user_id=obj.id)
```
##### The issue
How to access the `gyms_count` param from within the post_generation hook?
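For completeness, here is a variant I am considering that drops `Params` and parametrizes the hook through factory_boy's `gyms__<name>` keyword convention instead (a sketch reusing the imports, models, and factories above; not tested):
```python
class UserFactory(BaseFactory):
    class Meta:
        model = User

    id = Sequence(lambda n: n + 1)
    email = Sequence(lambda n: f"u{n + 1}@example.com")

    @post_generation
    def gyms(obj, create, extracted, **kwargs):
        if not create:
            return
        if extracted:
            for gym in extracted:
                Membership.find_or_create(gym_id=gym.id, user_id=obj.id)
        # UserFactory(gyms__count=2) should arrive here as kwargs={"count": 2}
        for _ in range(kwargs.get("count", 0)):
            gym = GymFactory()
            Membership.find_or_create(gym_id=gym.id, user_id=obj.id)
```
But I would still like to know whether the `Params`-based version can be made to work.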
| open | 2022-05-30T02:39:14Z | 2023-06-19T11:27:07Z | https://github.com/FactoryBoy/factory_boy/issues/936 | [] | s2t2 | 3 |
CatchTheTornado/text-extract-api | api | 33 | Challenges with LLMs Not Respecting Provided Fields in JSON Outputs | When utilizing Large Language Models to extract data from documents such as invoices and generate structured outputs like JSON files, a common issue arises: the LLM does not always adhere strictly to the provided fields and sometimes invents new ones. This behavior poses significant challenges for applications that require exact data formats for database integration and other automated processes. | closed | 2024-11-13T01:22:29Z | 2024-11-25T12:18:22Z | https://github.com/CatchTheTornado/text-extract-api/issues/33 | [
"question"
] | kreativitat | 4 |
aleju/imgaug | deep-learning | 130 | Batch generator hangs with multithread | When using alongside the keras `ImageDataGenerator` with `multithreading=True`, the process hangs on `recv()`. Switching the number of workers to 0 (i.e. no multithread) works as expected.
> Versions:
imgaug: 0.2.5 (from master 13May18)
python: 3.5
keras: 2.1.3
I saw the last merged PR #126 tacking in the same direction, and have confirmed that the installed version has it, but problem persists.
| open | 2018-05-01T13:04:38Z | 2018-05-29T10:41:28Z | https://github.com/aleju/imgaug/issues/130 | [] | 23pointsNorth | 3 |
pydata/pandas-datareader | pandas | 733 | Stooq: futures, indices, cash, currency, bond yield tickers don't feed | Hello,
I'm trying to scrape multiple historical quotes from Stooq. Equities and indicies work well, while futures, indices, cash, currency, bond yield don't feed. Shall I type the tickers somehow differently?
Below is the code example with the tickers that don't work for me.
```py
now = datetime.now().date()
stooq_tickers = ['PLN_I', 'DX.F', 'FX.C', 'U4.F', 'USDAUD', '10CNY.B', 'UKOUSD6M']
stooqdf = dr.get_data_stooq(stooq_tickers, start='2016-01-01', end=now)
```
Also is there a way to feed economic data, for example 'PMMNCN.M' or 'IMPRCN.M'?
Thank you in advance for any help! | open | 2019-11-30T06:02:16Z | 2019-12-02T12:50:03Z | https://github.com/pydata/pandas-datareader/issues/733 | [] | An-naili | 2 |
pydantic/pydantic-core | pydantic | 1,147 | ImportError: dynamic module does not define module export function (PyInit__pydantic_core) | Hello, I am trying to import openai in my visual studio code and face with "ImportError: dynamic module does not define module export function (PyInit__pydantic_core)" error, I really dont have any idea of how resolving it, my pydantic version is 2.5.3 and my pydantic_core version is 2.15.0, my python code is 3.11.7, I appreciate any help.
<img width="532" alt="Capture" src="https://github.com/pydantic/pydantic-core/assets/156265022/2d8c7f12-3897-433f-9cb1-b2892db3e0b9">
| closed | 2024-01-11T01:25:38Z | 2024-01-17T16:30:38Z | https://github.com/pydantic/pydantic-core/issues/1147 | [
"unconfirmed"
] | ResearcherSara | 3 |
AntonOsika/gpt-engineer | python | 237 | openai key | How do I manually edit my API key? | closed | 2023-06-20T03:59:11Z | 2023-06-21T12:36:38Z | https://github.com/AntonOsika/gpt-engineer/issues/237 | [] | ether8unny | 5 |
polakowo/vectorbt | data-visualization | 71 | example rand_exit_choice_nb can not run | Hi,
I found an example that cannot run:
https://github.com/polakowo/vectorbt/blob/master/vectorbt/signals/factory.py#L316
```python
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\vectorbt\signals\factory.py", line 61, in __init__
IndicatorFactory.__init__(
TypeError: __init__() got an unexpected keyword argument 'in_output_settings'
```
```python
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\vectorbt\signals\factory.py", line 61, in __init__
IndicatorFactory.__init__(
TypeError: __init__() got an unexpected keyword argument 'param_settings'
```
```python
Traceback (most recent call last):
File "D:/test_vectorbt/demo_stop3.py", line 66, in <module>
my_sig.rand_type_readable
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\vectorbt\indicators\factory.py", line 1181, in attr_readable
return getattr(_self, attr_name).applymap(lambda x: '' if x == -1 else enum._fields[x])
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\frame.py", line 6944, in applymap
return self.apply(infer)
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\frame.py", line 6878, in apply
return op.get_result()
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\apply.py", line 186, in get_result
return self.apply_standard()
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\apply.py", line 313, in apply_standard
results, res_index = self.apply_series_generator()
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\apply.py", line 341, in apply_series_generator
results[i] = self.f(v)
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\pandas\core\frame.py", line 6942, in infer
return lib.map_infer(x.astype(object).values, func)
File "pandas\_libs\lib.pyx", line 2329, in pandas._libs.lib.map_infer
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\vectorbt\indicators\factory.py", line 1181, in <lambda>
return getattr(_self, attr_name).applymap(lambda x: '' if x == -1 else enum._fields[x])
TypeError: tuple indices must be integers or slices, not float
```
| closed | 2020-12-25T02:39:40Z | 2020-12-25T22:01:41Z | https://github.com/polakowo/vectorbt/issues/71 | [] | wukan1986 | 0 |
davidsandberg/facenet | computer-vision | 1087 | Why is the time for encoding a face embedding so long? | I rewrote compare.py to check the time for facenet's face embedding encoding, but to my surprise the time is above 60 ms on my GeForce RTX 2070 card. I also checked the time for ArcFace; it only takes 10 ms. I also found that while my check program was running, the GPU load reported by GPU-Z was only about 25%, so it was clear the GPU's power is not fully utilized. So why is the time for facenet's embedding encoding so long? Why can the GPU's power not be fully utilized?
Below is my code to check the time, rewritten from compare.py:
def main(args):
images = load_and_align_data(args.image_files, args.image_size, args.margin, args.gpu_memory_fraction)
with tf.Graph().as_default():
with tf.Session() as sess:
# Load the model
facenet.load_model(args.model)
# Get input and output tensors
images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")
# Run forward pass to calculate embeddings
feed_dict = { images_placeholder: images, phase_train_placeholder:False }
emb = sess.run(embeddings, feed_dict=feed_dict)
# Check embedding encode time
testTimes = 100
tCount = 0
for t in range(1,testTimes+1):
t0 = time.time()
sess.run(embeddings,feed_dict=feed_dict)
t1 = time.time()
print("Test",t," time=",(t1-t0)*1000.0,"ms")
tCount += t1-t0
avgTime = tCount/testTimes * 1000.0
print("AvgRefTime=",avgTime, "ms")
| closed | 2019-09-20T07:09:10Z | 2019-09-22T11:38:41Z | https://github.com/davidsandberg/facenet/issues/1087 | [] | pango99 | 1 |
HumanSignal/labelImg | deep-learning | 801 | Unhandled exception in script when running py-to-exe conversion on Windows | Traceback (most recent call last):
File "labelImg.py", line 18, in <module>
ModuleNotFoundError: No module named 'PyQt5'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "labelImg.py", line 27, in <module>
ModuleNotFoundError: No module named 'sip'
| closed | 2021-10-11T11:44:56Z | 2021-10-11T20:06:56Z | https://github.com/HumanSignal/labelImg/issues/801 | [] | saiful6575 | 1 |
Kanaries/pygwalker | matplotlib | 658 | Support Narwhals DataFrames including DuckDB relation and Dask. | In https://github.com/panel-extensions/panel-graphic-walker/pull/22 I will add support for more data sources to `panel-graphic-walker`. I am adding general DataFrame support via [Narwhals](https://github.com/narwhals-dev/narwhals) because that is what we are going to do for param, panel, and the rest of the HoloViz ecosystem, I believe.
With the PR above we will end up supporting

It would be very nice to have:
- DuckDB Relation and Dask support in pygwalker.
- General support for any Narwhals DataFrame type.
- The pygwalker database `Connector` being a supported Narwhals DataFrame type. [Context](https://github.com/narwhals-dev/narwhals/issues/1289).
| open | 2024-11-09T14:59:12Z | 2025-02-08T01:31:52Z | https://github.com/Kanaries/pygwalker/issues/658 | [
"enhancement"
] | MarcSkovMadsen | 1 |
coleifer/sqlite-web | flask | 112 | Docker image - arm64 please? | Hi - longtime user of this project on a raspi. Recently jumped to using docker and am reinstalling everything in containers.
Discovered tonight that while the repository works well on raspbian, some of the dependent libraries have platform specificity. Since the image on docker hub is tagged as amd64, it pulls the wrong dependencies for arm64...
Any chance you could publish another tag for arm64 please? | closed | 2023-03-25T01:12:35Z | 2023-04-18T15:32:20Z | https://github.com/coleifer/sqlite-web/issues/112 | [] | barbequesauce | 3 |
fastapi/sqlmodel | pydantic | 127 | Simple instructions for a self referential table | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class Node(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
text: str
parent_id: Optional[int] = Field(foreign_key="node.id")
# parent: Optional["Node"] not sure what to put here
# children: List[Node] not sure what to put here either :)
```
### Description
I am trying to create a simple self-referential model - the SQLModel equivalent of the adjacency list pattern described here: https://docs.sqlalchemy.org/en/14/orm/self_referential.html
I am only a little familiar with SQLAlchemy and was unable to translate their example into one that would work with SQLModel.
In your docs you said: "Based on SQLAlchemy [...] SQLModel is designed to satisfy the most common use cases and to be as simple and convenient as possible for those cases, providing the best developer experience". I was assuming that a self-referential model would be a fairly common use case, but I totally appreciate that I could be wrong on this :)
I see that there is an `sa_relationship` param that you can pass 'complicated stuff' to, but I was not sure whether I should be using that (or how I would do so if I was meant to) - sorry, just a bit too new to this.
Crossing my fingers that it is straightforward to complete the commented lines in my example.
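For reference, the kind of adjacency-list sketch I have seen suggested elsewhere looks like this (my assumption is that `sa_relationship_kwargs` simply forwards keyword arguments such as `remote_side` to SQLAlchemy's `relationship()`; I have not verified it):

```python
from typing import List, Optional
from sqlmodel import Field, Relationship, SQLModel

class Node(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    text: str
    parent_id: Optional[int] = Field(default=None, foreign_key="node.id")

    # remote_side tells SQLAlchemy which side of the self-join is the "one" side.
    parent: Optional["Node"] = Relationship(
        back_populates="children",
        sa_relationship_kwargs={"remote_side": "Node.id"},
    )
    children: List["Node"] = Relationship(back_populates="parent")
```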
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.7
### Additional Context
_No response_ | open | 2021-10-11T22:03:46Z | 2022-11-30T18:58:21Z | https://github.com/fastapi/sqlmodel/issues/127 | [
"question"
] | michaelmcandrew | 12 |
explosion/spaCy | data-science | 13,205 | DocBin.to_bytes fails with a "ValueError: bytes object is too large" Spacy v 3.7.2 | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
I am trying to train an NER model from scratch on a corpus of around 8 million sentences. After adding the data to a DocBin() I am unable to save it and get the error below.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: WIN 10 (64bit)
* Python Version Used: 3.9.0
* spaCy Version Used: 3.7.2
* Environment Information:
I have tried to split the data into several DocBin chunks, then merge them and save a single file, but I hit the same issue:
```
import os
import spacy
from spacy.tokens import DocBin
from tqdm import tqdm
from spacy.util import filter_spans
merged_doc_bin = DocBin()
files = [
"G:\\success-demo\\product_ner\\test\\train3.spacy",# 3000000 tokens here
"G:\\success-demo\\product_ner\\test\\train1.spacy", # 3000000 tokens here
"G:\\success-demo\\product_ner\\test\\train2.spacy", # 2000000 tokens here
]
for filename in files:
    doc_bin = DocBin().from_disk(filename)
    merged_doc_bin.merge(doc_bin)
merged_doc_bin.to_disk("G:\\success-demo\\product_ner\\test\\final\\murge.spacy")
```
*(screenshots of the `ValueError: bytes object is too large` traceback)*
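A workaround sketch I am considering (assumption: the corpus reader can consume a directory containing several smaller `.spacy` files, so a single giant DocBin is not required):

```python
from spacy.tokens import DocBin

def save_in_chunks(docs, out_dir, chunk_size=100_000):
    # Write many smaller DocBin files instead of one huge one, so no single
    # serialized blob grows past the serializer's size limit.
    chunk, part = DocBin(), 0
    for i, doc in enumerate(docs, start=1):
        chunk.add(doc)
        if i % chunk_size == 0:
            chunk.to_disk(f"{out_dir}/train_{part}.spacy")
            chunk, part = DocBin(), part + 1
    if len(chunk) > 0:
        chunk.to_disk(f"{out_dir}/train_{part}.spacy")
```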
| closed | 2023-12-20T06:17:51Z | 2023-12-20T13:12:54Z | https://github.com/explosion/spaCy/issues/13205 | [
"feat / serialize"
] | rajesh-smartwebtech | 0 |
litestar-org/polyfactory | pydantic | 318 | Bug: factories inside a nested pydantic model with custom types do not inherit the provider map | Hello all, I really like the project; it saves me tons of time when writing tests :).
I encountered a problem with nested pydantic models that have custom types.
The following example with a nested pydantic model only works if you override `_get_or_create_factory`, replacing the `get_provider_map` of the created factory with the one from the class. If you do not override this method, you will get a `ParameterException`.
```
from typing import Any

# imports added so the snippet is self-contained (assumed import paths)
from polyfactory.exceptions import ParameterException
from polyfactory.factories.base import BaseFactory
from polyfactory.factories.pydantic_factory import ModelFactory
from pydantic import BaseModel
class MyClass:
def __init__(self, value: int) -> None:
self.value = value
class B(BaseModel):
my_class: MyClass
class Config:
arbitrary_types_allowed = True
class ANested(BaseModel):
b: B
class A(BaseModel):
my_class: MyClass
class Config:
arbitrary_types_allowed = True
class AFactory(ModelFactory):
__model__ = A
@classmethod
def get_provider_map(cls) -> dict[type, Any]:
providers_map = super().get_provider_map()
return {
**providers_map,
MyClass: lambda: MyClass(value=1),
}
class ANestedFactory(ModelFactory):
__model__ = ANested
@classmethod
def get_provider_map(cls) -> dict[type, Any]:
providers_map = super().get_provider_map()
return {
**providers_map,
MyClass: lambda: MyClass(value=1),
}
@classmethod
def _get_or_create_factory(cls, model: type) -> type[BaseFactory[Any]]:
"""Get a factory from registered factories or generate a factory dynamically.
:param model: A model type.
:returns: A Factory sub-class.
"""
if factory := BaseFactory._factory_type_mapping.get(model):
return factory
if cls.__base_factory_overrides__:
for model_ancestor in model.mro():
if factory := cls.__base_factory_overrides__.get(model_ancestor):
return factory.create_factory(model)
for factory in reversed(BaseFactory._base_factories):
if factory.is_supported_type(model):
                # what it was originally (commented out so the change below is reached):
                # return factory.create_factory(model)
# --- CHANGE START --- this makes it work
created_factory = factory.create_factory(model)
created_factory.get_provider_map = cls.get_provider_map
return created_factory
# --- CHANGE END ---
raise ParameterException(f"unsupported model type {model.__name__}") # pragma: no cover
``` | closed | 2023-08-01T20:21:06Z | 2025-03-20T15:53:05Z | https://github.com/litestar-org/polyfactory/issues/318 | [
"bug",
"help wanted",
"good first issue"
] | potatoUnicornDev | 1 |
shibing624/text2vec | nlp | 41 | Which directory are the pre-trained models downloaded to? | ### Describe the Question
I ran the demo and it downloaded some pre-trained models. Could you tell me which directory these models are placed in after they are downloaded?
### Describe your attempts
- [ ] I walked through the tutorials
- [ ] I checked the documentation
- [ ] I checked to make sure that this is not a duplicate question
You may also provide a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve) you tried as a workaround, or StackOverflow solution that you have walked through. (e.g. cosmic radiation).
In addition, figure out your version by running `import text2vec; text2vec.__version__`.
| closed | 2022-05-18T07:37:37Z | 2022-05-18T08:03:10Z | https://github.com/shibing624/text2vec/issues/41 | [
"question"
] | melowYX | 1 |
apachecn/ailearning | python | 440 | Why we record tutorial videos - ApacheCN | http://ailearning.apachecn.org/why-to-record-study-ml-video/
ApacheCN, an open-source organization focused on maintaining excellent projects | closed | 2018-08-24T07:11:10Z | 2021-09-07T17:44:35Z | https://github.com/apachecn/ailearning/issues/440 | [
"Gitalk",
"7de09eb183e1224f3bcc26a0b7225773"
] | jiangzhonglian | 0 |
modelscope/modelscope | nlp | 1,276 | Problem importing AutoencoderKLWan and WanPipeline | `from diffusers import AutoencoderKLWan, WanPipeline` raises:
ImportError: cannot import name 'AutoencoderKLWan' from 'diffusers' (E:\PyCharm2023.3.2\python\pythonProject\.venv\Lib\site-packages\diffusers\__init__.py). Did you mean: 'AutoencoderKL'?
I asked on modelscope.cn and was told I need to install diffusers from source. I installed diffusers 0.33.0 from source, but it still reports that AutoencoderKLWan and WanPipeline cannot be found. What could be the problem? @tastelikefeet @wangxingjun778
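A quick check (plain Python, no assumptions about diffusers internals) to confirm which diffusers installation the interpreter actually imports:

```python
import diffusers

# If this prints an old version, or a path outside the source checkout (for
# example the .venv site-packages shown in the error above), the source
# install is being shadowed by another copy.
print(diffusers.__version__)
print(diffusers.__file__)
```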
| open | 2025-03-20T04:30:56Z | 2025-03-20T04:30:56Z | https://github.com/modelscope/modelscope/issues/1276 | [] | swh2026 | 0 |
ageitgey/face_recognition | machine-learning | 854 | Error during face_encodings - code: 7, reason: A call to cuDNN failed | * face_recognition version: v1.2.2
* Python version: 3.6.7
* Operating System: Ubuntu 18.04 (Jetson Nano)
### Description
Following the JetsonNano instructions and incorporating the ZED Camera feed by replacing cv2.VideoCapture with cam.retrieve_image (https://www.stereolabs.com/docs/opencv-python/#capturing-video), I get a crash during face_encodings.
I've verified that the image is converted to RGB and scaled down to 1/4 of the original size, yet I still get this crash every time. When I try the original example using the ZED camera as a Universal Video Camera (UVC) [https://www.stereolabs.com/docs/opencv-python/#uvc-capture] there are no issues.
### Error:
```
Traceback (most recent call last):
File "facedetect.py", line 251, in <module>
main_loop()
File "facedetect.py", line 163, in main_loop
face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
File "/usr/local/lib/python3.6/dist-packages/face_recognition/api.py", line 210, in face_encodings
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
File "/usr/local/lib/python3.6/dist-packages/face_recognition/api.py", line 210, in <listcomp>
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
RuntimeError: Error while calling cudnnConvolutionForward( context(), &alpha, descriptor(data), data.device(), (const cudnnFilterDescriptor_t)filter_handle, filters.device(), (const cudnnConvolutionDescriptor_t)conv_handle, (cudnnConvolutionFwdAlgo_t)forward_algo, forward_workspace, forward_workspace_size_in_bytes, &beta, descriptor(output), output.device()) in file /tmp/pip-build-do6wa1sv/dlib/dlib/cuda/cudnn_dlibapi.cpp:1007. code: 7, reason: A call to cuDNN failed
```
### What I Did (main_loop)
```python
def main_loop():
#conifgure ZED camera
init = sl.InitParameters()
cam = sl.Camera()
if not cam.is_opened():
print("Opening ZED Camera...")
status = cam.open(init)
if status != sl.ERROR_CODE.SUCCESS:
print(repr(status))
exit()
runtime = sl.RuntimeParameters()
mat = sl.Mat()
print_camera_information(cam)
# ZED
# Get image size
image_size = cam.get_resolution()
width = image_size.width
height = image_size.height
left_image_rgba = np.zeros((height, width, 4), dtype=np.uint8)
# Prepare single image containers
left_image = sl.Mat()
# Track how long since we last saved a copy of our known faces to disk as a backup.
number_of_faces_since_save = 0
while True:
# Grab a single frame of video (ZED)
err = cam.grab(runtime)
if err == sl.ERROR_CODE.SUCCESS:
cam.retrieve_image(left_image, sl.VIEW.VIEW_LEFT)
## TODO: Look at what type the images are here. *******
# Copy the left image to the left side of SBS image
left_image_rgba[0:height, 0:width, :] = left_image.get_data()
# Convert SVO image from RGBA to RGB
left_image_rgb = cv2.cvtColor(left_image_rgba, cv2.COLOR_RGBA2RGB)
# Resize frame of video to 1/4 size for faster face recognition processing
small_frame = cv2.resize(left_image_rgb, (0, 0), fx=0.175, fy=0.175) #(ZED)
# Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
rgb_small_frame = small_frame
# Find all the face locations and face encodings in the current frame of video
face_locations = face_recognition.face_locations(rgb_small_frame)
print("Number of faces detected: ", len(face_locations))
print(face_locations)
print(rgb_small_frame.shape)
face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
# Loop through each detected face and see if it is one we have seen before
# If so, we'll give it a label that we'll draw on top of the video.
face_labels = []
for face_location, face_encoding in zip(face_locations, face_encodings):
# See if this face is in our list of known faces.
metadata = lookup_known_face(face_encoding)
# If we found the face, label the face with some useful information.
if metadata is not None:
time_at_door = datetime.now() - metadata['first_seen_this_interaction']
face_label = f"At door {int(time_at_door.total_seconds())}s"
# If this is a brand new face, add it to our list of known faces
else:
face_label = "New visitor!"
# Grab the image of the the face from the current frame of video
top, right, bottom, left = face_location
face_image = small_frame[top:bottom, left:right]
face_image = cv2.resize(face_image, (150, 150))
# Add the new face to our known face data
register_new_face(face_encoding, face_image)
face_labels.append(face_label)
# Draw a box around each face and label each face
for (top, right, bottom, left), face_label in zip(face_locations, face_labels):
# Scale back up face locations since the frame we detected in was scaled to 1/4 size
top *= 4
right *= 4
bottom *= 4
left *= 4
# Draw a box around the face
cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
# Draw a label with a name below the face
cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
cv2.putText(frame, face_label, (left + 6, bottom - 6), cv2.FONT_HERSHEY_DUPLEX, 0.8, (255, 255, 255), 1)
# Display recent visitor images
number_of_recent_visitors = 0
for metadata in known_face_metadata:
# If we have seen this person in the last minute, draw their image
if datetime.now() - metadata["last_seen"] < timedelta(seconds=10) and metadata["seen_frames"] > 5:
# Draw the known face image
x_position = number_of_recent_visitors * 150
frame[30:180, x_position:x_position + 150] = metadata["face_image"]
number_of_recent_visitors += 1
# Label the image with how many times they have visited
visits = metadata['seen_count']
visit_label = f"{visits} visits"
if visits == 1:
visit_label = "First visit"
cv2.putText(frame, visit_label, (x_position + 10, 170), cv2.FONT_HERSHEY_DUPLEX, 0.6, (255, 255, 255), 1)
if number_of_recent_visitors > 0:
cv2.putText(frame, "Visitors at Door", (5, 18), cv2.FONT_HERSHEY_DUPLEX, 0.8, (255, 255, 255), 1)
# Display the final frame of video with boxes drawn around each detected fames
# cv2.imshow('Video', frame)
cv2.imshow("ZED", rgb_small_frame)
# Hit 'q' on the keyboard to quit!
if cv2.waitKey(1) & 0xFF == ord('q'):
save_known_faces()
break
# We need to save our known faces back to disk every so often in case something crashes.
if len(face_locations) > 0 and number_of_faces_since_save > 100:
save_known_faces()
number_of_faces_since_save = 0
else:
number_of_faces_since_save += 1
# Release handle to the webcam
#video_capture.release()
cv2.destroyAllWindows()
# Close (ZED)
cam.close()
`
```
 | open | 2019-06-12T04:43:40Z | 2020-08-24T12:10:29Z | https://github.com/ageitgey/face_recognition/issues/854 | [] | suprnrdy | 2 |
mljar/mercury | jupyter | 331 | Can't display an ipydatagrid in mercury | Hello, the below code is not rendering the grid as it should:
```python
import numpy as np
import pandas as pd
from ipydatagrid import DataGrid  # imports added for completeness

df = pd.DataFrame(data=np.random.randn(5, 10))
datagrid = DataGrid(df)
datagrid
```
Mercury is not displaying the grid with the below output:
*(screenshot: the Mercury app output where the DataGrid widget renders blank)*
| open | 2023-07-06T07:22:49Z | 2023-07-06T09:12:04Z | https://github.com/mljar/mercury/issues/331 | [] | gmouawad | 2 |
postmanlabs/httpbin | api | 423 | Add "raw" endpoint to return raw, unparsed request data? | If possible, please consider adding a `/raw` or `/echo` endpoint that returns the raw HTTP request that the server received.
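To make the idea concrete, here is a rough sketch of what such an endpoint could look like in a Flask app (purely illustrative: the route name and reconstruction logic are my own assumptions, and WSGI only exposes the already-parsed request rather than the literal bytes on the wire):

```python
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/raw", methods=["GET", "POST", "PUT", "PATCH", "DELETE"])
def raw_echo():
    # Rebuild an approximation of the request line, headers, and body.
    lines = [f"{request.method} {request.full_path} {request.environ.get('SERVER_PROTOCOL', 'HTTP/1.1')}"]
    lines += [f"{name}: {value}" for name, value in request.headers.items()]
    body = request.get_data(as_text=True)
    return Response("\r\n".join(lines) + "\r\n\r\n" + body, mimetype="text/plain")
```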
This feature can be used to help diagnose misbehaving applications (e.g. sending duplicate headers with different values or using incorrect line endings) or debug applications using niche HTTP features (e.g. sub-headers for `multipart/form-data` sections). | closed | 2018-01-26T17:52:18Z | 2018-04-26T17:51:17Z | https://github.com/postmanlabs/httpbin/issues/423 | [] | llamasoft | 2 |
marimo-team/marimo | data-visualization | 4,064 | Local to cell __name__ | ### Documentation is
- [x] Missing
- [ ] Outdated
- [ ] Confusing
- [ ] Not sure?
### Explain in Detail
I've been trying to use the smolagents library in marimo and was investigating why one of its functions, `push_to_hub`, does not work as expected.
There were several reasons it didn't work, and I tried to monkey-patch them here:
https://github.com/kazemihabib/Huggingface-Agents-Course-Marimo-Edition/blob/marimo/patches/smolagents_patches.py, which you can check for further details.
There was one specific undocumented behavior that breaks the library, and it is the focus of this issue.
```
def _test():
pass
print(_test.__name__)
```
marimo adds a prefix to cell-local names and prints:
`_cell_AJWG_test`
The `push_to_hub` function relies on this name:
1) It fetches the source code of the function.
2) It replaces the function name with 'forward' (this step breaks because `__name__` returns the prefixed name; see the sketch below).
3) It appends the resulting `forward` function to some other code.
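A minimal illustration of the pattern described in step 2 (the helper below is illustrative only, not smolagents' actual implementation):

```python
import inspect

def rename_to_forward(fn) -> str:
    source = inspect.getsource(fn)
    # Under marimo, fn.__name__ is "_cell_<id>_test" while the fetched source
    # text still says "def _test(", so this replacement silently does nothing.
    return source.replace(f"def {fn.__name__}(", "def forward(")
```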
### Your Suggestion for Changes
IMHO this behavior of prefixing cell-local names with `_cell_{cell_id}` could be documented.
"documentation"
] | kazemihabib | 7 |
keras-team/keras | tensorflow | 20,184 | fix: DenseNet Documentation | This is the code for DenseNet121 in Keras:
```
keras.applications.DenseNet121(
include_top=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation="softmax",
name="densenet121",
)
```
And the Keras documentation is not very specific about the `classes` argument:
Earlier Documentation : classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified.
Updated Documentation : classes: optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. **Defaults to 1000** | closed | 2024-08-29T11:07:12Z | 2024-08-30T10:57:50Z | https://github.com/keras-team/keras/issues/20184 | [
"type:support"
] | dwgily | 1 |
PokemonGoF/PokemonGo-Bot | automation | 5,817 | Crash on new install launch | ```
Press any button or wait 20 seconds to continue.
2016-11-15 12:11:05,838 [ cli] [INFO] PokemonGO Bot v1.0
2016-11-15 12:11:05,846 [ cli] [INFO] commit: 30dbcc0d
2016-11-15 12:11:05,849 [ cli] [INFO] Configuration initialized
2016-11-15 12:11:05,850 [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. For more information:
2016-11-15 12:11:05,850 [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics
2016-11-15 12:11:05,858 [requests.packages.urllib3.connectionpool] [INFO] Starting new HTTP connection (1): www.google-analytics.com
(23653) wsgi starting up on http://127.0.0.1:4000
[2016-11-15 12:11:05] [SleepSchedule] [INFO] Next sleep at 12:26:41, for a duration of 05:54:05
[2016-11-15 12:11:05] [PokemonGoBot] [INFO] Setting start location.
[2016-11-15 12:11:05] [PokemonGoBot] [INFO] [x] Coordinates found in passed in location, not geocoding.
[2016-11-15 12:11:05] [PokemonGoBot] [INFO] Location found: 37.809295714,-122.410976772 (37.809295714, -122.410976772, 8.0)
[2016-11-15 12:11:05] [PokemonGoBot] [INFO] Now at (37.809295714, -122.410976772, 8.0)
[2016-11-15 12:11:05] [PokemonGoBot] [INFO] Login procedure started.
[2016-11-15 12:11:07] [PokemonGoBot] [INFO] Login successful.
[2016-11-15 12:11:07] [PokemonGoBot] [INFO]
[2016-11-15 12:11:07] [PokemonGoBot] [INFO] [x] Error while opening cached forts: [Errno 2] No such file or directory: u'/Users/moquette/Bot/pokemongo_bot/../data/recent-forts-Evolver1K.json'
[2016-11-15 12:11:08] [PokemonGoBot] [INFO] Level: 3 (Next Level: 2860 XP) (Total: 3140 XP)
[2016-11-15 12:11:08] [PokemonGoBot] [INFO] Pokemon Captured: 5 | Pokestops Visited: 0
[2016-11-15 12:11:08] [PokemonGoBot] [INFO]
[2016-11-15 12:11:08] [PokemonGoBot] [INFO] --- Evolver1K ---
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Pokemon Bag: 5/250
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Items: 74/350
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Stardust: 1100 | Pokecoins: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] PokeBalls: 70 | GreatBalls: 0 | UltraBalls: 0 | MasterBalls: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] RazzBerries: 0 | BlukBerries: 0 | NanabBerries: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] LuckyEgg: 0 | Incubator: 0 | TroyDisk: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Potion: 0 | SuperPotion: 0 | HyperPotion: 0 | MaxPotion: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Incense: 2 | IncenseSpicy: 0 | IncenseCool: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Revive: 0 | MaxRevive: 0
[2016-11-15 12:11:10] [PokemonGoBot] [INFO]
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Pokemon:
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] #4 Charmander: (CP 12, IV 0.67)
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] #72 Tentacool: (CP 36, IV 0.67) | (CP 11, IV 0.69)
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] #118 Goldeen: (CP 11, IV 0.36)
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] #147 Dratini: (CP 37, IV 0.47)
[2016-11-15 12:11:10] [PokemonGoBot] [INFO]
[2016-11-15 12:11:10] [RandomAlivePause] [INFO] Next random alive pause at 13:21:15, for a duration of 0:01:28
[2016-11-15 12:11:10] [RandomPause] [INFO] Next random pause at 12:41:42, for a duration of 0:00:53
[2016-11-15 12:11:10] [RecycleItems] [INFO] Next forced item recycle at 12:15:55
[2016-11-15 12:11:10] [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. For more information:
[2016-11-15 12:11:10] [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics
[2016-11-15 12:11:10] [PokemonGoBot] [INFO] Starting bot...
[2016-11-15 12:11:10] [CollectLevelUpReward] [INFO] Received level up reward:
[2016-11-15 12:11:11] [ cli] [INFO]
[2016-11-15 12:11:11] [ cli] [INFO] Ran for 0:00:06
[2016-11-15 12:11:11] [ cli] [INFO] Total XP Earned: 0 Average: 0.00/h
[2016-11-15 12:11:11] [ cli] [INFO] Travelled 0.00km
[2016-11-15 12:11:11] [ cli] [INFO] Visited 0 stops
[2016-11-15 12:11:11] [ cli] [INFO] Encountered 0 pokemon, 0 caught, 0 released, 0 evolved, 0 never seen before ()
[2016-11-15 12:11:11] [ cli] [INFO] Threw 0 pokeballs
[2016-11-15 12:11:11] [ cli] [INFO] Earned 0 Stardust
[2016-11-15 12:11:11] [ cli] [INFO] Hatched eggs 0
[2016-11-15 12:11:11] [ cli] [INFO]
[2016-11-15 12:11:11] [ cli] [INFO] Highest CP Pokemon:
[2016-11-15 12:11:11] [ cli] [INFO] Most Perfect Pokemon:
Traceback (most recent call last):
File "pokecli.py", line 846, in <module>
main()
File "pokecli.py", line 205, in main
bot.tick()
File "/Users/moquette/Bot/pokemongo_bot/__init__.py", line 770, in tick
if worker.work() == WorkerResult.RUNNING:
File "/Users/moquette/Bot/pokemongo_bot/cell_workers/buddy_pokemon.py", line 135, in work
if self._km_walked() - self.buddy['last_km_awarded'] >= self.buddy_distance_needed:
KeyError: 'last_km_awarded'
[2016-11-15 12:11:11] [sentry.errors] [ERROR] Sentry responded with an error: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128) (url: https://app.getsentry.com/api/90254/store/)
Traceback (most recent call last):
File "/Users/moquette/Bot/lib/python2.7/site-packages/raven/transport/threaded.py", line 174, in send_sync
super(ThreadedHTTPTransport, self).send(data, headers)
File "/Users/moquette/Bot/lib/python2.7/site-packages/raven/transport/http.py", line 47, in send
ca_certs=self.ca_certs,
File "/Users/moquette/Bot/lib/python2.7/site-packages/raven/utils/http.py", line 66, in urlopen
return opener.open(url, data, timeout)
File "/Users/moquette/Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 494, in open
response = self._open(req, data)
File "/Users/moquette/Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 512, in _open
'_open', req)
File "/Users/moquette/Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 466, in _call_chain
result = func(*args)
File "/Users/moquette/Bot/lib/python2.7/site-packages/raven/utils/http.py", line 46, in https_open
return self.do_open(ValidHTTPSConnection, req)
File "/Users/moquette/Bot/lib/python2.7/site-packages/future/backports/urllib/request.py", line 1284, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1057, in request
self._send_request(method, url, body, headers)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1097, in _send_request
self.endheaders(body)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1053, in endheaders
self._send_output(message_body)
File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 895, in _send_output
msg += message_body
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128)
[2016-11-15 12:11:11] [sentry.errors.uncaught] [ERROR] [u"KeyError: 'last_km_awarded'", u' File "pokecli.py", line 846, in <module>', u' File "pokecli.py", line 205, in main', u' File "pokemongo_bot/__init__.py", line 770, in tick', u' File "pokemongo_bot/cell_workers/buddy_pokemon.py", line 135, in work']
Tue Nov 15 12:11:11 PST 2016 Pokebot Stopped.
| closed | 2016-11-15T20:13:21Z | 2020-03-17T19:14:27Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5817 | [] | moquette | 3 |
Josh-XT/AGiXT | automation | 763 | after importing an agent from .json: KeyError 'chat_history' | ### Description
*(screenshot: `KeyError: 'chat_history'` raised in the chat view)*
### Steps to Reproduce the Bug
1. Import an agent from a `.json` file that was exported earlier.
2. Go to chat.
3. The error appears.
### Expected Behavior
no error
### Operating System
- [X] Linux
- [ ] Microsoft Windows
- [ ] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [X] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [X] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [ ] Using docker compose
- [X] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | closed | 2023-06-20T02:29:30Z | 2023-06-20T03:38:37Z | https://github.com/Josh-XT/AGiXT/issues/763 | [
"type | report | bug",
"needs triage"
] | birdup000 | 1 |
KaiyangZhou/deep-person-reid | computer-vision | 148 | how to make loss visualization | How can I visualize the training loss as a plot? | closed | 2019-04-12T05:56:06Z | 2019-05-09T22:57:25Z | https://github.com/KaiyangZhou/deep-person-reid/issues/148 | [] | 18842505953 | 3 |