Dataset columns:
- repo_name: string (length 9–75)
- topic: string (30 classes)
- issue_number: int64 (1–203k)
- title: string (length 1–976)
- body: string (length 0–254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38–105)
- labels: sequence (length 0–9)
- user_login: string (length 1–39)
- comments_count: int64 (0–452)
tflearn/tflearn
data-science
463
LSTM with more than one layer fails with "Cannot feed value ..."
I'm having an issue where I'm able to train a single layer LSTM without problem but adding a second layer results in a ValueError: Single layer example: ``` net = tflearn.input_data([None,1000,4]) net = tflearn.lstm(net,128,dynamic=True,return_seq=False) net = tflearn.fully_connected(net,1,activation='linear') adam_opt = tflearn.Adam(learning_rate=0.001,epsilon=0.0001) net = tflearn.regression(net,optimizer=adam_opt,loss='mean_square') model =tflearn.DNN(net,tensorboard_verbose=3) model.fit(ss[:-int(N/10)],pirs[:,:-int(N/10)].T,validation_set=(ss[-int(N/10):],pirs[:,-int(N/10):].T),show_metric=True,batch_size=256) ``` If I try adding another LSTM layer like this: ``` net = tflearn.input_data([None,1000,4]) net = tflearn.lstm(net,128,return_seq=True,dynamic=True) net = tflearn.lstm(net,128,dynamic=True,return_seq=False) net = tflearn.fully_connected(net,1,activation='linear') adam_opt = tflearn.Adam(learning_rate=0.001,epsilon=0.0001) net = tflearn.regression(net,optimizer=adam_opt,loss='mean_square') model =tflearn.DNN(net,tensorboard_verbose=3) model.fit(ss[:-int(N/10)],pirs[:,:-int(N/10)].T,validation_set=(ss[-int(N/10):],pirs[:,-int(N/10):].T),show_metric=True,batch_size=256) ``` I get the following error when trying to train: `ValueError: Cannot feed value of shape (256, 1) for Tensor u'TargetsData/Y:0', which has shape '(1000, 1)'` I'm used to TFLearn getting the dimensions right automatically, so this is a strange error. What am I doing wrong?
open
2016-11-14T18:05:06Z
2016-12-20T12:13:24Z
https://github.com/tflearn/tflearn/issues/463
[]
AmitDeshwar
5
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,063
loading tensorflow datasets into google colab
I am a student currently working on a project to turn low-quality images into high-quality images using CycleGAN. I found a dataset on TensorFlow called div2k, which contains high- and low-resolution images. I downloaded it to my computer from the dataset's main website and moved it into my Google Drive to link the dataset into Colab. However, I can't use it as a dataroot from the file on Drive; it says it is not a valid directory. I tried moving the file from Drive into the CycleGAN folder, but it does not work. What can I do to solve this issue? ![Capture](https://user-images.githubusercontent.com/66714229/84229951-69768e00-ab1d-11ea-8927-2afe4745246a.PNG)
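A hedged sketch of one way this is often handled in Colab (not from the issue thread): `--dataroot` must point to an extracted directory, not a zip file on Drive, and the repo expects `trainA/trainB/testA/testB` sub-folders inside it. The paths below (`div2k.zip`, `/content/datasets/div2k`) are illustrative placeholders.

```python
# Minimal Colab sketch: mount Drive, extract the archive to local storage,
# then point train.py's --dataroot at the extracted folder.
import zipfile
from pathlib import Path

from google.colab import drive  # available only inside Colab

drive.mount("/content/drive")

archive = Path("/content/drive/MyDrive/div2k.zip")  # hypothetical location on Drive
dataroot = Path("/content/datasets/div2k")          # local extraction target
dataroot.mkdir(parents=True, exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    zf.extractall(dataroot)

# The CycleGAN scripts expect sub-folders such as trainA/ and trainB/ here;
# verify the layout before running train.py with --dataroot /content/datasets/div2k
print(sorted(p.name for p in dataroot.iterdir()))
```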
closed
2020-06-10T05:22:27Z
2020-06-17T19:29:09Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1063
[]
randlelim
10
pydantic/FastUI
fastapi
187
Display `pretty_datetime`
Show `ago` for the last 48 hours, otherwise show a prettier datetime.
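A minimal sketch of the helper this request seems to describe, in plain Python; the 48-hour cutoff comes from the issue, everything else (function signature, formatting) is an illustrative assumption.

```python
from datetime import datetime, timedelta, timezone


def pretty_datetime(dt: datetime, now: datetime | None = None) -> str:
    """Relative 'ago' string within the last 48 hours, otherwise a readable date."""
    now = now or datetime.now(timezone.utc)
    delta = now - dt
    if timedelta(0) <= delta < timedelta(hours=48):
        hours = int(delta.total_seconds() // 3600)
        if hours < 1:
            return f"{int(delta.total_seconds() // 60)} minutes ago"
        return f"{hours} hours ago"
    return dt.strftime("%b %d, %Y %H:%M")


# Example: a timestamp from three hours ago renders as "3 hours ago".
print(pretty_datetime(datetime.now(timezone.utc) - timedelta(hours=3)))
```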
open
2024-02-13T12:27:36Z
2024-02-13T12:27:36Z
https://github.com/pydantic/FastUI/issues/187
[]
samuelcolvin
0
mitmproxy/mitmproxy
python
6,707
`mitmproxy --scripts broken.py` with a broken script does not properly handle the error and leaves terminal in a funny state
#### Problem Description I've noticed that when loading a broken script (e.g. a syntax error), mitmproxy neither exits cleanly nor keeps running and shows the error in the event log, as it does when live-reloading a script. #### Steps to reproduce the behavior: broken.py ``` x ``` 1. `mitmproxy --scripts broken.py` In the screenshot you can see that it initially rendered the UI and then exited. However: 1. There is no error message 2. The terminal is in an inconsistent state. Whatever I type does not appear and ctrl+c does not "repair" it. ![image](https://github.com/mitmproxy/mitmproxy/assets/679144/dabfcf97-6641-49e0-ba23-cb5f7b61568c) This happens with 10.2.2. With 10.0.0 I get a clean exit with > Error logged during startup: error in script broken.py #### System Information ``` Mitmproxy: 10.2.2 binary Python: 3.12.1 OpenSSL: OpenSSL 3.1.4 24 Oct 2023 Platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.38 ```
closed
2024-03-04T07:14:24Z
2024-03-07T20:41:27Z
https://github.com/mitmproxy/mitmproxy/issues/6707
[ "kind/bug", "area/core" ]
Prinzhorn
1
fastapi/sqlmodel
fastapi
476
Inheriting UserMixin class from flask-login library on SQLModel model crashes the app
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the SQLModel documentation, with the integrated search. - [X] I already searched in Google "How to X in SQLModel" and didn't find any information. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy). ### Commit to Help - [X] I commit to help with one of those options 👆 ### Example Code ```python from flask import Flask from sqlmodel import SQLModel, Field, create_engine from flask_login import UserMixin class Account(UserMixin, SQLModel, table=True): id: int | None = Field(default=None, primary_key=True) name: str scotia_id: str = Field(unique=True) email: str = Field(unique=True) password: str engine = create_engine('sqlite:///site.db') SQLModel.metadata.create_all(engine) app = Flask(__name__) if __name__ == '__main__': app.run(debug=True) ``` ### Description (project) ➜ project python test.py Traceback (most recent call last): File "/home/johnny/project/test.py", line 5, in <module> class Account(UserMixin, SQLModel, table=True): File "/home/johnny/.local/share/virtualenvs/project-ABytqz7n/lib64/python3.10/site-packages/sqlmodel/main.py", line 322, in __init__ config = getattr(base, "__config__") AttributeError: type object 'UserMixin' has no attribute '__config__' ### Operating System Linux ### Operating System Details Fedora 36 Gnome Desktop ### SQLModel Version 0.0.8 ### Python Version Python 3.10.7 ### Additional Context _No response_
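A hedged workaround sketch (not from the issue thread): since SQLModel's metaclass inspects every base class for Pydantic internals, one way to sidestep the clash is to skip `UserMixin` entirely and implement flask-login's small required interface directly on the model. The methods below mirror what `UserMixin` provides; the field list is trimmed from the example above.

```python
from sqlmodel import Field, SQLModel


class Account(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    name: str
    email: str = Field(unique=True)
    password: str

    # flask-login's user interface, implemented directly instead of inheriting
    # UserMixin, so SQLModel only ever sees Pydantic-aware base classes.
    @property
    def is_authenticated(self) -> bool:
        return True

    @property
    def is_active(self) -> bool:
        return True

    @property
    def is_anonymous(self) -> bool:
        return False

    def get_id(self) -> str:
        return str(self.id)
```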
closed
2022-10-22T00:10:59Z
2022-11-08T14:57:16Z
https://github.com/fastapi/sqlmodel/issues/476
[ "question" ]
km-monzurul-islam
4
vitalik/django-ninja
rest-api
1,149
[BUG] Bearer authentication example from documentation doesn't work
Copying this from the documentation: ``` from ninja.security import HttpBearer class AuthBearer(HttpBearer): def authenticate(self, request, token): if token == "supersecret": return token ``` Results in: ``` AuthBase.__init__() takes 1 positional argument but 2 were given Traceback (most recent call last): File "C:\Users\marcl\.virtualenvs\cw1-ecommerce-all-gNjD29nm\Lib\site-packages\ninja\operation.py", line 156, in _run_authentication result = callback(request) ^^^^^^^^^^^^^^^^^ TypeError: AuthBase.__init__() takes 1 positional argument but 2 were given ``` Python 3.12.1 Django Ninja 1.1.0
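A hedged observation, not confirmed in the thread: this particular traceback (the auth callback invoking `__init__` with the request) is what appears if the class itself is registered as the auth callable instead of an instance. The documented pattern instantiates it when wiring up the route:

```python
from ninja import NinjaAPI
from ninja.security import HttpBearer


class AuthBearer(HttpBearer):
    def authenticate(self, request, token):
        if token == "supersecret":
            return token


api = NinjaAPI()


# Passing an instance (auth=AuthBearer()) lets django-ninja call it with the request.
# Passing the class (auth=AuthBearer) makes the framework call AuthBearer(request),
# which raises "AuthBase.__init__() takes 1 positional argument but 2 were given".
@api.get("/bearer", auth=AuthBearer())
def bearer(request):
    return {"token": request.auth}
```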
closed
2024-05-02T22:57:54Z
2024-05-02T23:03:19Z
https://github.com/vitalik/django-ninja/issues/1149
[]
Marclev78
1
ARM-DOE/pyart
data-visualization
1,602
BUG: Cannot install arm-pyart 1.18.5 on macOS (arm64)
* Py-ART version: 1.18.5 * Python version: 3.12.4 * Operating System: macOS ### Description Trying to install the package in a fresh venv on this arch fails as follows: ```shell $ python -m pip install arm-pyart ... ERROR: arm-pyart has an invalid wheel, arm-pyart has an invalid wheel, could not read 'arm_pyart-1.18.5.dist-info/WHEEL' file: BadZipFile("Bad CRC-32 for file 'arm_pyart-1.18.5.dist-info/WHEEL'") ``` Version 1.18.4 still installs correctly.
closed
2024-06-25T09:41:04Z
2024-06-25T21:03:36Z
https://github.com/ARM-DOE/pyart/issues/1602
[]
neutrinoceros
12
streamlit/streamlit
data-science
10,385
Pinned columns (column config) do not work when hide_index=True
### Checklist - [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues. - [x] I added a very descriptive title to this issue. - [x] I have provided sufficient information below to help reproduce this issue. ### Summary As the title describes - this is a simple one ### Reproducible Code Example [![Open in Streamlit Cloud](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://issues.streamlitapp.com/?issue=gh-10385) ```Python import streamlit as st import pandas as pd test_df = pd.DataFrame({ 'Date': pd.date_range(start='2024-01-01', periods=10), 'Value1': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109], 'Value2': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109], 'Value3': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109], 'Value4': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109], 'Value5': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109], 'Value6': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109], 'Value7': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109], 'Value8': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109] }) col_config = { 'Date': st.column_config.DateColumn(width="medium",pinned=True), 'Value1': st.column_config.NumberColumn('Value1 Pinned', width="medium", format='%.2f',pinned=True), 'Value2': st.column_config.NumberColumn(width="medium", format='%.2f',pinned=False), 'Value3': st.column_config.NumberColumn(width="medium", format='%.2f',pinned=False), 'Value4': st.column_config.NumberColumn(width="medium", format='%.2f',pinned=False), 'Value5': st.column_config.NumberColumn(width="medium", format='%.2f',pinned=False), 'Value6': st.column_config.NumberColumn(width="medium", format='%.2f',pinned=False), 'Value7': st.column_config.NumberColumn(width="medium", format='%.2f',pinned=False), 'Value8': st.column_config.NumberColumn(width="medium", format='%.2f',pinned=False) } st.write('hide index = True') st.dataframe(test_df, column_config=col_config, hide_index=True) st.write('hide index = False') st.dataframe(test_df, column_config=col_config, hide_index=False) ``` ### Steps To Reproduce Run code. Using data_editor instead of dataframe gets the same result btw. ### Expected Behavior If *any* columns are pinned, and hide_index=False, then pin the index column as well. Or, make the index column configurable (example below) so the user can decide col_config = { 'idx': st.column_config.Column('',width="medium", pinned=True), ... ### Current Behavior Doesn't pin with hide_index=False ### Is this a regression? - [ ] Yes, this used to work in a previous version. ### Debug info - Streamlit version: 1.41.1 - Python version: latest 3 something - Operating System: mac - Browser: chrome ### Additional Information _No response_
open
2025-02-12T23:34:26Z
2025-02-21T21:15:15Z
https://github.com/streamlit/streamlit/issues/10385
[ "type:enhancement", "type:docs", "feature:st.column_config" ]
nickgreengithub
2
dask/dask
scikit-learn
11,765
What scalar type is expected for DataFrame.divisions?
**Describe the issue**: On dask main, `DataFrame.divisions` is a `tuple[np.ndarray]` where each element is a scalar ndarray: **Minimal Complete Verifiable Example**: ```python In [1]: import dask.dataframe as dd, pandas as pd In [2]: index = [1, 5, 10, 11, 12, 100, 200, 300] ...: df = pd.DataFrame({"a": range(8), "index": index}).set_index("index") ...: ddf = dd.from_pandas(df, npartitions=3) ...: ddf.divisions ...: Out[2]: (np.int64(1), np.int64(11), np.int64(200), np.int64(300)) ``` In this case, that's coming from `sorted_division_locations`, which eventually does a bunch of `Index.__getitem__` calls, which returns scalar ndarrays: ``` (Pdb) pp seq Index([1, 5, 10, 11, 12, 100, 200, 300], dtype='int64', name='index') (Pdb) pp seq[0] np.int64(1) ``` **Anything else we need to know?**: Would we prefer that divisions be a tuple of plain Python scalars? IIRC, that's what it was previously, and would I think fit better with how it's used (comparisons). xref https://github.com/rapidsai/dask-upstream-testing/issues/9, where `pd.isna` check downstream of `.divisions` is causing issues for dask-cudf. But I suspect that solving this when we build divisions might be preferable. I'm able to solve this pretty easily in `sorted_division_locations`, but I think there are other spots where divisions is created (like indexing) and wanted to check whether anyone had a preference before going any further. **Environment**: - Dask version: `2025.2.0+11.gf67825a60` - Python version: - Operating System: - Install method (conda, pip, source):
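Not part of the issue, but as a small illustration of the distinction being discussed: NumPy scalars convert back to plain Python scalars with `.item()`, which is what a "tuple of plain Python scalars" for divisions would look like.

```python
import numpy as np

# Divisions as reported on dask main: NumPy scalar objects.
divisions = (np.int64(1), np.int64(11), np.int64(200), np.int64(300))

# Converted to plain Python ints, i.e. what the issue suggests divisions used to be.
plain = tuple(d.item() if isinstance(d, np.generic) else d for d in divisions)

print(plain)           # (1, 11, 200, 300)
print(type(plain[0]))  # <class 'int'>
```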
closed
2025-02-19T19:53:36Z
2025-02-21T10:16:53Z
https://github.com/dask/dask/issues/11765
[ "dataframe" ]
TomAugspurger
1
NVIDIA/pix2pixHD
computer-vision
337
Issues with Running stylegan2_pytorch in gpu settings on colab notebook
It keeps raising AttributeError whenever torch_stylegan2.load_network_pkl() is called. The code that I have run: `import stylegan2_pytorch as torch_stylegan2 !mkdir networks !gdown https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/stylegan2-ffhq-config-f.pkl -O networks/stylegan2-ffhq-config-f.pkl network_pkl = "networks/stylegan2-ffhq-config-f.pkl" model_weights = torch_stylegan2.load_network_pkl(network_pkl) model = StyleGAN2(state_dict=model_weights) model.eval()` The error that reads, `mkdir: cannot create directory ‘networks’: File exists Downloading... From: https://nvlabs-fi cdn.nvidia.com/stylegan2/networks/stylegan2-ffhq-config-f.pkl To: /content/networks/stylegan2-ffhq-config-f.pkl 100% 382M/382M [00:02<00:00, 136MB/s] --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [<ipython-input-13-7df4095cee9d>](https://localhost:8080/#) in <cell line: 9>() 7 # Load the pre-trained model weights 8 network_pkl = "networks/stylegan2-ffhq-config-f.pkl" ----> 9 model_weights = torch_stylegan2.load_network_pkl(network_pkl) 10 11 model = StyleGAN2(state_dict=model_weights) AttributeError: module 'stylegan2_pytorch' has no attribute 'load_network_pkl' I have already installed the package by running ' !pip install --upgrade stylegan2_pytorch' on the notebook with the output as described in truncated format: `Requirement already satisfied: websockets in /usr/local/lib/python3.10/dist-packages (from aim->stylegan2_pytorch) (12.0) Requirement already satisfied: boto3 in /usr/local/lib/python3.10/dist-packages (from aim->stylegan2_pytorch) (1.34.82) Requirement already satisfied: base58==2.0.1 in /usr/local/lib/python3.10/dist-packages (from aimrecords==0.0.7->aim->stylegan2_pytorch) (2.0.1) Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from fire->stylegan2_pytorch) (1.16.0) Requirement already satisfied: termcolor in /usr/local/lib/python3.10/dist-packages (from fire->stylegan2_pytorch) (2.4.0) Requirement already satisfied: decorator>=3.4.2 in /usr/local/lib/python3.10/dist-packages (from retry->stylegan2_pytorch) (4.4.2) Requirement already satisfied: py<2.0.0,>=1.4.26 in /usr/local/lib/python3.10/dist-packages (from retry->stylegan2_pytorch) (1.11.0) Requirement already satisfied: Mako in /usr/local/lib/python3.10/dist-packages (from alembic<2,>=1.5.0->aim->stylegan2_pytorch) (1.3.3) Requirement already satisfied: cffi>=1.12 in /usr/local/lib/python3.10/dist-packages (from cryptography>=3.0->aim->stylegan2_pytorch) (1.16.0) Requirement already satisfied: pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4 in /usr/local/lib/python3.10/dist-packages (from fastapi<1,>=0.69.0->aim->stylegan2_pytorch) (2.6.4) Requirement already satisfied: starlette<0.38.0,>=0.37.2 in /usr/local/lib/python3.10/dist-packages (from fastapi<1,>=0.69.0->aim->stylegan2_pytorch) (0.37.2) Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch->stylegan2_pytorch) (2.1.5) Requirement already satisfied: greenlet!=0.4.17 in /usr/local/lib/python3.10/dist-packages (from SQLAlchemy>=1.4.1->aim->stylegan2_pytorch) (3.0.3) Requirement already satisfied: h11>=0.8 in /usr/local/lib/python3.10/dist-packages (from uvicorn<1,>=0.12.0->aim->stylegan2_pytorch) (0.14.0) Requirement already satisfied: botocore<1.35.0,>=1.34.82 in /usr/local/lib/python3.10/dist-packages (from boto3->aim->stylegan2_pytorch) (1.34.82) Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in 
/usr/local/lib/python3.10/dist-packages (from boto3->aim->stylegan2_pytorch) (1.0.1) Requirement already satisfied: s3transfer<0.11.0,>=0.10.0 in /usr/local/lib/python3.10/dist-packages (from boto3->aim->stylegan2_pytorch) (0.10.1) Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->aim->stylegan2_pytorch) (3.3.2) Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->aim->stylegan2_pytorch) (3.6) Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->aim->stylegan2_pytorch) (2.0.7) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->aim->stylegan2_pytorch) (2024.2.2) Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch->stylegan2_pytorch) (1.3.0) Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.12->cryptography>=3.0->aim->stylegan2_pytorch) (2.22) Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi<1,>=0.69.0->aim->stylegan2_pytorch) (0.6.0) Requirement already satisfied: pydantic-core==2.16.3 in /usr/local/lib/python3.10/dist-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi<1,>=0.69.0->aim->stylegan2_pytorch) (2.16.3) Requirement already satisfied: anyio<5,>=3.4.0 in /usr/local/lib/python3.10/dist-packages (from starlette<0.38.0,>=0.37.2->fastapi<1,>=0.69.0->aim->stylegan2_pytorch) (3.7.1) Requirement already satisfied: sniffio>=1.1 in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.4.0->starlette<0.38.0,>=0.37.2->fastapi<1,>=0.69.0->aim->stylegan2_pytorch) (1.3.1) Requirement already satisfied: exceptiongroup in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.4.0->starlette<0.38.0,>=0.37.2->fastapi<1,>=0.69.0->aim->stylegan2_pytorch) (1.2.0) ` However, when I run these two lines to check whether the stylegan2_pytorch package has been successfully installed, `import site print(site.getsitepackages()) ` it only returns these site-packages paths: `['/usr/local/lib/python3.10/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.10/dist-packages']`
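A hedged note on the error itself: `load_network_pkl` is not part of the `stylegan2_pytorch` PyPI package (lucidrains' reimplementation); a helper with that name lives in the `legacy` module of NVIDIA's stylegan2-ada-pytorch repository, which is also what understands the `stylegan2-ffhq-config-f.pkl` checkpoint. A sketch of that usage, assuming the NVlabs repo has been cloned and added to `sys.path`:

```python
# Assumes: git clone https://github.com/NVlabs/stylegan2-ada-pytorch
# and that the clone is on sys.path; `legacy` comes from that repo,
# not from the pip package `stylegan2_pytorch`.
import torch

import legacy  # from the NVlabs repo

network_pkl = "networks/stylegan2-ffhq-config-f.pkl"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

with open(network_pkl, "rb") as f:
    # legacy.load_network_pkl converts the TensorFlow-era pickle into PyTorch modules.
    G = legacy.load_network_pkl(f)["G_ema"].to(device)

G.eval()
```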
open
2024-04-11T09:18:13Z
2024-04-11T09:24:47Z
https://github.com/NVIDIA/pix2pixHD/issues/337
[]
mavisexp5
0
aimhubio/aim
tensorflow
3,178
Installation Issue [beginner]
## ❓Question When I try to install it, it shows the following error. I'm using Windows OS. I have already installed pytorch-lightning 2.x. ```sh (venv) C:\Users\muthu\GitHub\TorchTutorials 😎>pip install aim Collecting aim Using cached aim-3.22.0.tar.gz (1.6 MB) Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [6 lines of output] Collecting setuptools Using cached setuptools-70.1.1-py3-none-any.whl.metadata (6.0 kB) Collecting cython==3.0.10 Using cached Cython-3.0.10-cp312-cp312-win_amd64.whl.metadata (3.2 kB) ERROR: Could not find a version that satisfies the requirement aimrocks==0.5.* (from versions: 0.2.0) ERROR: No matching distribution found for aimrocks==0.5.* [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. ```
open
2024-07-01T12:11:05Z
2025-01-01T21:47:10Z
https://github.com/aimhubio/aim/issues/3178
[ "type / question" ]
Muthukamalan
4
polakowo/vectorbt
data-visualization
17
TypeError: No matching definition for argument type(s) array(float64, 2d, C), array(int32, 1d, C), array(bool, 1d, C)
I am trying to reproduce example from readme. I get error in this line ``` # Generate signals fast_ma, slow_ma = vbt.MA.from_combinations(price, windows, 2) ``` The error message is: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-15-9abe364754ce> in <module> 1 # Generate signals ----> 2 fast_ma, slow_ma = vbt.MA.from_combinations(price, windows, 2) C:\ProgramData\Anaconda3\lib\site-packages\vectorbt\indicators\indicators.py in from_combinations(cls, ts, windows, r, ewm, names, **kwargs) 202 names = ['ma' + str(i+1) for i in range(r)] 203 windows, ewm = reshape_fns.broadcast(windows, ewm, writeable=True) --> 204 cache_dict = cls.from_params(ts, windows, ewm=ewm, return_cache=True, **kwargs) 205 param_lists = zip(*itertools.combinations(zip(windows, ewm), r)) 206 mas = [] C:\ProgramData\Anaconda3\lib\site-packages\vectorbt\indicators\indicators.py in from_params(cls, ts, window, ewm, **kwargs) 98 ``` 99 """ --> 100 return super().from_params(ts, window, ewm, **kwargs) 101 102 @classmethod C:\ProgramData\Anaconda3\lib\site-packages\vectorbt\indicators\factory.py in from_params(cls, name, return_raw, *args, **kwargs) 614 results = from_params_pipeline( 615 ts_list, param_list, level_names, len(output_names), --> 616 custom_func, *new_args, pass_lists=pass_lists, return_raw=return_raw, **kwargs) 617 if return_raw or kwargs.get('return_cache', False): 618 return results C:\ProgramData\Anaconda3\lib\site-packages\vectorbt\indicators\factory.py in from_params_pipeline(ts_list, param_list, level_names, num_outputs, custom_func, pass_lists, param_product, broadcast_kwargs, return_raw, *args, **kwargs) 405 # Perform main calculation 406 if pass_lists: --> 407 output_list = custom_func(ts_list, param_list, *args, **kwargs) 408 else: 409 output_list = custom_func(*ts_list, *param_list, *args, **kwargs) C:\ProgramData\Anaconda3\lib\site-packages\vectorbt\indicators\factory.py in custom_func(ts_list, param_list, return_cache, cache, *args) 778 # Caching 779 if cache is None and caching_func is not None: --> 780 cache = caching_func(*typed_ts_list, *param_list, *args) 781 if return_cache: 782 return cache ~\AppData\Roaming\Python\Python37\site-packages\numba\dispatcher.py in _explain_matching_error(self, *args, **kws) 572 msg = ("No matching definition for argument type(s) %s" 573 % ', '.join(map(str, args))) --> 574 raise TypeError(msg) 575 576 def _search_new_conversions(self, *args, **kws): TypeError: No matching definition for argument type(s) array(float64, 2d, C), array(int32, 1d, C), array(bool, 1d, C) ``` I haven't change anything in the code. Looks like numba error?
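A hedged guess at the cause, common with Numba on Windows where the default NumPy integer is int32: the compiled function's signature expects int64 window values, so the `array(int32, 1d, C)` in the message points at the `windows` array. Casting it explicitly is worth trying; the price series below is a stand-in for the README's data, and the cast itself is an assumption rather than a confirmed fix.

```python
import numpy as np
import pandas as pd
import vectorbt as vbt

# Stand-in price series (the README example loads real price data here).
price = pd.Series(np.random.uniform(90, 110, size=200))

# Force int64 so the windows array matches the compiled Numba signature;
# on Windows, plain Python ints often become int32 arrays after broadcasting.
windows = np.arange(2, 101, dtype=np.int64)

fast_ma, slow_ma = vbt.MA.from_combinations(price, windows, 2)
```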
closed
2020-05-04T13:17:21Z
2020-05-11T14:09:48Z
https://github.com/polakowo/vectorbt/issues/17
[]
MislavSag
23
Johnserf-Seed/TikTokDownload
api
580
Exception: the API returned abnormal content: status_code=2
[ 💻 ]: Windows platform [ 🗻 ]: Fetching the latest version number! [ 🚩 ]: Current version 14200 is already the latest [ Config ]: Configuration validated successfully! [ Config ]: Finished reading local configuration! [ Notice ]: Exception, the API returned abnormal content: status_code=2 [2023-10-19 00:21:02,655] - Log.py] - ERROR: [ Notice ]: Exception, the API returned abnormal content: status_code=2, Traceback (most recent call last): File "Util\Profile.py", line 453, in get_Profile user_profile_info = await self.get_user_profile_info(self.headers, self.sec_user_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "Util\Profile.py", line 315, in get_user_profile_info raise RuntimeError(f"接口内容返回异常: status_code={info_status_code}") RuntimeError: 接口内容返回异常: status_code=2
closed
2023-10-19T04:21:50Z
2023-10-19T10:27:14Z
https://github.com/Johnserf-Seed/TikTokDownload/issues/580
[ "无效(invalid)" ]
chengliubrother
0
autogluon/autogluon
computer-vision
4,157
ValueError: Preset 'chronos_tiny' was not found. Valid presets: ['best_quality', 'high_quality', 'medium_quality', 'fast_training']
![image](https://github.com/autogluon/autogluon/assets/67415290/5e140ffc-56fe-49a7-bd1b-00bd2492efe2)
closed
2024-05-02T07:27:41Z
2024-05-02T07:40:33Z
https://github.com/autogluon/autogluon/issues/4157
[]
iganggang
1
widgetti/solara
jupyter
817
why?
Solara server is starting at http://aarch64-conda-linux-gnu:8765 ERROR: [Errno -2] Name or service not known
closed
2024-10-15T08:15:38Z
2024-10-15T09:02:47Z
https://github.com/widgetti/solara/issues/817
[]
luckfu
1
django-oscar/django-oscar
django
4,366
the message "No products found." at browse.html by version 3.2.5
Hi, thank you for the nice library. Found a bug? Please fill out the sections below. ### Issue Summary With version 3.2.5, I created a product and the product detail page was created, but no products appear on the catalogue and category pages. I installed version 3.2.4 and the product is displayed. ### Steps to Reproduce It's essential that you provide enough information for someone else to replicate the problem you're seeing. Simply describing something that's broken on your current project is not enough! 1. pip install django-oscar[sorl-thumbnail]==3.2.5 2. settings as in https://django-oscar.readthedocs.io/en/latest/internals/getting_started.html#install-oscar-and-its-dependencies 3. runserver and create a product on the dashboard 4. access http://127.0.0.1:8000/catalogue/ Any other relevant information. For example, why do you consider this a bug and what did you expect to happen instead? pip install django-oscar[sorl-thumbnail]==3.2.4, and the problem was solved. I have reviewed the code for version 3.2.5 and found that the code below affects this problem. virtualenv_name\Lib\site-packages\oscar\apps\search\facets.py line 25 sqs = sqs.filter_and(is_public="true", structure__in=["standalone", "parent"]) ### Technical details Python 3.9.6 Package Version ------------------------ ----------- asgiref 3.8.1 babel 2.16.0 Django 4.2.16 django-extensions 3.2.3 django-extra-views 0.14.0 django-haystack 3.3.0 django-oscar 3.2.5 django-phonenumber-field 6.4.0 django-tables2 2.3.4 django-treebeard 4.7.1 django-widget-tweaks 1.5.0 factory-boy 3.2.1 Faker 30.0.0 packaging 24.1 phonenumbers 8.13.46 pillow 10.4.0 pip 21.1.3 purl 1.6 pycountry 24.6.1 python-dateutil 2.9.0.post0 setuptools 56.0.0 six 1.16.0 sorl-thumbnail 12.9.0 sqlparse 0.5.1 typing-extensions 4.12.2 tzdata 2024.2 Best regards
open
2024-10-03T15:56:02Z
2025-02-03T19:04:30Z
https://github.com/django-oscar/django-oscar/issues/4366
[]
kobe2sha
10
tortoise/tortoise-orm
asyncio
1,405
How to write this in tortoise
select count(*) from (select count(*) as count_1 from group GROUP BY user_id) tmp where count_1 > 300 How do I write this in Tortoise ORM?
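A hedged sketch of one way this query might be expressed, assuming a model named `Group` with a `user_id` field (names guessed from the SQL) and an initialised Tortoise app. Filtering on the annotation is meant to play the role of the HAVING clause; whether `.count()` on the grouped queryset matches the outer `count(*)` is not verified here, so the outer count is taken in Python.

```python
from tortoise import fields
from tortoise.functions import Count
from tortoise.models import Model


class Group(Model):
    """Hypothetical model standing in for the SQL's `group` table."""

    id = fields.IntField(pk=True)
    user_id = fields.IntField()


async def users_with_more_than_300_rows() -> int:
    # Inner query: one row per user_id with its row count (count_1),
    # kept only where count_1 > 300.
    rows = await (
        Group.annotate(count_1=Count("id"))
        .group_by("user_id")
        .filter(count_1__gt=300)
        .values("user_id", "count_1")
    )
    # Outer count(*): how many users exceed 300 rows.
    return len(rows)
```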
closed
2023-06-15T06:26:03Z
2024-12-26T11:56:07Z
https://github.com/tortoise/tortoise-orm/issues/1405
[ "question" ]
hu0514
1
adbar/trafilatura
web-scraping
229
Keep orderedness information of lists
I was wondering if it is possible to know whether a `<list>` in the extracted output came from an `<ul>` or an `<ol>`. Even with `include_formatting` (which I don't need), this information is omitted. I saw that `<head>`, for instance, retains the level in the `rend` attribute. A similar attribute would be handy in `<list>` as well. On a related note, description lists (`<dl>`s) are handled even worse, as both `<dt>`s and `<dd>`s are extracted as separate `<item>`s. Something like this: - _dt1_: _dd1_ - _dt2_: _dd2_ would probably serve the purpose better.
closed
2022-07-29T14:03:56Z
2022-12-21T12:11:42Z
https://github.com/adbar/trafilatura/issues/229
[ "feedback" ]
DavidNemeskey
4
replicate/cog
tensorflow
1,330
Which Cuda versions are not supported by Cog?
Does Cog support all CUDA versions or not? I want to know if older CUDA versions are compatible and functional.
closed
2023-10-05T10:32:28Z
2023-10-05T19:28:20Z
https://github.com/replicate/cog/issues/1330
[]
geoxpert0001
2
saulpw/visidata
pandas
2,203
[vdsql] Polishing vdsql, list of issues
- [ ] #282: Select starting table in postgres from command-line - [ ] #579: [Postgres] Allow inserting / deleting rows - [ ] #522: [postgres] parms in options - [ ] #586: SQL query data - [ ] #727: [postgres] Transaction error when viewing table - [ ] #729: Integrate generic SQL loader
open
2023-12-31T07:37:54Z
2023-12-31T07:38:18Z
https://github.com/saulpw/visidata/issues/2203
[ "vdsql" ]
saulpw
0
qubvel-org/segmentation_models.pytorch
computer-vision
361
ModuleNotFoundError: No module named 'segmentation_models_pytorch.losses'
Hello! Thanks so much for your work! When I try to use the losses here, there is an error: ModuleNotFoundError: No module named 'segmentation_models_pytorch.losses'. Could you help me solve this problem? Thank you very much!
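A hedged note: the `losses` submodule only ships with newer releases of the package, so this error usually means an older version is installed. A sketch assuming a recent segmentation-models-pytorch release (installed with `pip install -U segmentation-models-pytorch`):

```python
import torch
from segmentation_models_pytorch.losses import DiceLoss

# Binary-mode Dice loss on a tiny dummy batch, just to confirm the import works.
loss_fn = DiceLoss(mode="binary")
logits = torch.randn(2, 1, 64, 64)
target = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(loss_fn(logits, target))
```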
closed
2021-03-08T01:24:27Z
2023-05-20T00:12:55Z
https://github.com/qubvel-org/segmentation_models.pytorch/issues/361
[ "Stale" ]
XinlingQiu
7
cvat-ai/cvat
computer-vision
8,396
Could not create the task Open the Browser Console to get details
### Actions before raising this issue - [X] I searched the existing issues and did not find anything similar. - [X] I read/searched [the docs](https://docs.cvat.ai/docs/) ### Steps to Reproduce I'm simply trying to use the tool, but as soon as I add the image to the task I want to work on, the error "Could not create the task Open the Browser Console to get details" appears when I press "Submit&". ### Expected Behavior _No response_ ### Possible Solution _No response_ ### Context ```Markdown I am currently using just the online version; in the console I get these errors: - Failed to load resource: the server responded with a status of 404 (Not Found) - Failed to load resource: the server responded with a status of 403 (Forbidden) - Unchecked runtime.lastError: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received ``` ### Environment
closed
2024-09-03T14:07:00Z
2024-09-04T14:08:26Z
https://github.com/cvat-ai/cvat/issues/8396
[ "bug" ]
jacopich
0
microsoft/hummingbird
scikit-learn
480
Add support for SHAP package
SHAP is a general model attribution package: https://fburl.com/5lq7ho5y I have a use case to first use SHAP to explain a sklearn/xgboost classifier and then convert the model to pytorch. SHAP is not supported yet for model conversion. Is there a quick way to work around it?
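A hedged sketch of one workaround consistent with the question: run SHAP against the original sklearn/xgboost estimator first, then convert that same fitted estimator with Hummingbird. This is simply the two libraries used side by side on synthetic data, not a pattern endorsed by the Hummingbird docs.

```python
import numpy as np
import shap
from hummingbird.ml import convert
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 10)
y = (X[:, 0] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=50).fit(X, y)

# 1) Explain the original sklearn model with SHAP before any conversion.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X[:5])

# 2) Separately convert the same fitted model to PyTorch with Hummingbird.
torch_model = convert(clf, "pytorch")
print(torch_model.predict(X[:5]))
```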
closed
2021-03-31T03:37:24Z
2021-04-02T19:00:38Z
https://github.com/microsoft/hummingbird/issues/480
[]
seuzha
2
xinntao/Real-ESRGAN
pytorch
161
Epoch configuration
Hi. I'm trying to control the number of epochs when finetuning, but I see no option to control it in finetune_realesrgan_x4plus.yml. Thanks!
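A hedged note: BasicSR-style configs, which the Real-ESRGAN finetune YAML follows, are usually iteration-based (a total-iterations field) rather than epoch-based, so the common approach is to translate a desired epoch count into iterations. A small sketch of that arithmetic; the dataset size, batch size, and field name are illustrative assumptions.

```python
# Convert a target epoch count into the iteration count used by an
# iteration-based training config. All numbers are placeholders.
num_images = 8000        # size of the finetuning dataset
batch_size_per_gpu = 12  # batch size setting in the YAML
num_gpus = 1
target_epochs = 20

iters_per_epoch = num_images / (batch_size_per_gpu * num_gpus)
total_iters = round(target_epochs * iters_per_epoch)

print(f"Set the total-iterations option to roughly {total_iters} for ~{target_epochs} epochs")
```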
open
2021-11-23T14:46:03Z
2022-08-04T09:37:09Z
https://github.com/xinntao/Real-ESRGAN/issues/161
[]
SirSykon
1
axnsan12/drf-yasg
django
354
Request Body Param is being sent as a `String` instead of an `Array` of `Integers`
My auto generated docs appear correct, but when sending a POST, Django is receiving the parameter as a string. What it's expecting is ``` { 'ids': [<id1>, <id2>, ...] } ``` Django is instead receiving the following when making requests from yasg: ``` { 'ids': '[<id1>, <id2>, ...]' } ``` Trying to generate the docs manually, I have come up with ``` post_item = Schema( title='Post new items', type=openapi.TYPE_OBJECT, properties=dict( ids=dict( type=openapi.TYPE_ARRAY, items=Items( type=openapi.TYPE_INTEGER, format=openapi.FORMAT_INT32, ), description='IDs (example: [4389, 4762])', ) ), required=['ids'], ) post_params = [ Parameter( 'pk', openapi.IN_PATH, required=True, type=openapi.TYPE_INTEGER, ) ] @action(methods=['post'], detail=False) @swagger_auto_schema( manual_parameters=post_params, request_body=post_item) def bulk(self, request, *args, **kwargs): .... ``` This has the same behavior as the auto-generated docs. Any suggestions?
open
2019-04-23T19:57:08Z
2025-03-07T12:16:49Z
https://github.com/axnsan12/drf-yasg/issues/354
[ "triage" ]
jkleve
2
jazzband/django-oauth-toolkit
django
990
Generated Access token using JWT
<!-- What is your question? --> - Hi everyone, I'm trying to generate a token with the function `signed_token_generator` - These are my settings: ``` OAUTH2_PROVIDER = { "SCOPES_BACKEND_CLASS": "OauthToolket_RestFramework.scopes.ScopesBackend", "ACCESS_TOKEN_GENERATOR": "oauthlib.oauth2.rfc6749.tokens.signed_token_generator", "OIDC_RSA_PRIVATE_KEY": ""PRIAVATE_KEY"""" } ``` - I try to create an app with this: ![image](https://user-images.githubusercontent.com/54214642/122915882-75c73500-d386-11eb-8b74-ddbe4cfcf24b.png) - This is my error, can someone help with this problem? ![image](https://user-images.githubusercontent.com/54214642/122915944-88416e80-d386-11eb-9752-8eb3facc4cd6.png)
closed
2021-06-22T11:21:20Z
2022-01-19T03:57:09Z
https://github.com/jazzband/django-oauth-toolkit/issues/990
[ "question" ]
KhoaDauTay
2
skypilot-org/skypilot
data-science
4,533
[Serve] Support event-based autoscaling
<!-- Describe the bug report / feature request here --> Right now, we only have QPS-based autoscaling, which works pretty well for a lot of situations. But honestly, it’d be awesome if we could generalize it to support event-based scaling too. By making the autoscaler more flexible, we could let it play nicely with other systems or triggers. Picture this: scaling your services based on things like the number of files in object storage—or any custom event. This would be super handy, especially for those tricky scaling scenarios like going from 1 to 0 or 0 to 1. ## Scenario 1 - Scale down to 0 at night You could use a `cron` trigger to implement the logic about scaling up during daytime and scale down to 0 at night. ```yaml service: replica_policy: max_replicas: 10 target_qps_per_replica: 3 upscale_delay_seconds: 300 downscale_delay_seconds: 1200 autoscaling_rules: - type: cron # Use a cron-based policy to schedule scaling metadata: timezone: America/New_York # Set the timezone for the cron schedule # Start time for scaling up: 09:00 AM every weekday (Monday to Friday) start: 0 9 * * 1-5 # The desired number of replicas during the day (work hours) min_replicas: "2" # Keeps at least 2 replicas running during the day - type: cron # Another cron policy for scaling down at night metadata: timezone: America/New_York # Same timezone as above # Scale down to 0 at 17:00 (05:00 PM) every weekday (Monday to Friday) start: 0 17 * * 1-5 # The desired number of replicas at night (scale down to 0) min_replicas: "0" # Scales down to 0 replicas after work hours ``` ## Scenario 2 - Autoscale when new nodes added The cluster could be dynamically updated. Some times new nodes will be added and the services could be autoscaled based on this metric: ```yaml service: replica_policy: max_replicas: 10 target_qps_per_replica: 3 upscale_delay_seconds: 300 downscale_delay_seconds: 1200 autoscaling_rules: - type: node_based # Trigger scaling based on node changes metadata: # Scale up when new nodes are added to the cluster scale_up_on: node_added # Trigger scaling logic when nodes are added scale_down_on: node_removed # Optional: Define the minimum number of replicas per node replicas_per_node: 1 # Scale 1 replica per new node added # Optional: Set a maximum replica limit to avoid over-scaling ``` <!-- If relevant, fill in versioning info to help us troubleshoot --> _Version & Commit info:_ * `sky -v`: PLEASE_FILL_IN * `sky -c`: PLEASE_FILL_IN
open
2025-01-05T01:47:32Z
2025-01-05T01:47:32Z
https://github.com/skypilot-org/skypilot/issues/4533
[]
gaocegege
0
Anjok07/ultimatevocalremovergui
pytorch
1,244
FIXED - Linux Mint install problems
Hi, after following your instructions, the installation step crashes (after downloading all dependencies) in the `pip3 install -r requirements.txt`: PLEASE help me to fix this error: I tried a few times to install it, but I get the same problem.... https://imgur.com/IhQiZgi `Collecting sklearn Downloading sklearn-0.0.post12.tar.gz (2.6 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [15 lines of output] The 'sklearn' PyPI package is deprecated, use 'scikit-learn' rather than 'sklearn' for pip commands. Here is how to fix this error in the main use cases: - use 'pip install scikit-learn' rather than 'pip install sklearn' - replace 'sklearn' by 'scikit-learn' in your pip requirements files (requirements.txt, setup.py, setup.cfg, Pipfile, etc ...) - if the 'sklearn' package is used by one of your dependencies, it would be great if you take some time to track which package uses 'sklearn' instead of 'scikit-learn' and report it to their issue tracker - as a last resort, set the environment variable SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True to avoid this error More information is available at https://github.com/scikit-learn/sklearn-pypi-package [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. `
closed
2024-03-17T15:58:41Z
2024-03-20T16:54:19Z
https://github.com/Anjok07/ultimatevocalremovergui/issues/1244
[]
CodesoundR
4
kubeflow/katib
scikit-learn
1,946
Should MutatingWebhook failurePolicy be `Ignore`?
/kind feature **Describe the solution you'd like** Not sure if this is a bug or a feature request, but the MutatingWebhook `failurePolicy`s are set to `Ignore`, which means it is easy to not notice if your webhook calls are failing. This was noticed when debugging #1795. Is this the best configuration? Are there cases where having the Experiment and Pod mutations fail can still lead to viable setups? If there are not, maybe `failurePolicy: Fail` would be better so configuration errors are easier to find. --- <!-- Don't delete this message to encourage users to support your issue! --> Love this feature? Give it a 👍 We prioritize the features with the most 👍
closed
2022-09-02T20:54:50Z
2023-08-04T23:27:55Z
https://github.com/kubeflow/katib/issues/1946
[ "kind/feature" ]
ca-scribner
3
GibbsConsulting/django-plotly-dash
plotly
450
small window
Hello guys, I have a problem after deploying the app to Django (on PythonAnywhere). The Dash app appears in a very small window. One option to solve this is to use the 'ratio' argument, but is there any option like ratio=auto? ![smallwindow](https://user-images.githubusercontent.com/107844961/229786130-361e37d6-bffb-4481-a327-9d0505fab1b4.PNG) Thank you very much
open
2023-04-04T12:09:27Z
2023-04-11T09:36:45Z
https://github.com/GibbsConsulting/django-plotly-dash/issues/450
[]
mhostn3
4
babysor/MockingBird
deep-learning
20
Step-by-step beginner tutorials (continuously updated community/unofficial tutorials)
(Author is using this thread to collect and edit them, ongoing) Community video tutorial: 奶糖 https://www.bilibili.com/video/BV1dq4y137pH
open
2021-08-19T09:28:58Z
2024-04-07T08:21:23Z
https://github.com/babysor/MockingBird/issues/20
[]
zhuzaileiting
145
modoboa/modoboa
django
2,208
Domain DKIM doesn't sign mail
Hello everyone, I don't think I'm the only one to have this issue. Even when I do all the configuration for the domain and the DKIM box is green, my mail doesn't pass DKIM on Gmail or other webmail. Can someone help me fix that, please? I read some articles about OpenDKIM on the Modoboa website and GitHub, but they don't work for me, or I'm failing somewhere. Thanks in advance for your answer.
closed
2021-03-29T17:14:12Z
2021-06-30T11:48:20Z
https://github.com/modoboa/modoboa/issues/2208
[]
Orminor77
29
lukas-blecher/LaTeX-OCR
pytorch
393
[M1 MAC] RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
Hi, I got some error while trying to run the latexocr, am I doing something wrong? <img width="1432" alt="image" src="https://github.com/user-attachments/assets/9556549c-bfdf-43ae-974a-50a2c26e21b7">
open
2024-08-21T13:26:22Z
2024-08-21T13:46:24Z
https://github.com/lukas-blecher/LaTeX-OCR/issues/393
[]
pesslovany
2
ydataai/ydata-profiling
jupyter
1,523
Unexpected error of type DispatchError raised while running data exploratory profiler from function spark_get_series_descriptions
### Current Behaviour # converts the data types of the columns in the DataFrame to more appropriate types, # useful for improving the performance of calculations. # Selects the columns in the DataFrame that are of type object or category, # which are the types that are typically considered to be categorical data_to_analyze = dataframe_to_analyze.toPandas() <html> <body> <!--StartFragment--> ERROR:data_quality_job.scheduler.data_quality_glue_job:Run data exploratory analysis fails for datasource master_wip in data domain stock_wip: Unexpected error of type DispatchError was raised while data exploratory profiler: Function <code object spark_get_series_descriptions at 0x7fb135521370, file "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 67>Traceback (most recent call last): File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 328, in __call__ return func(*args, **kwargs) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/describe_date_spark.py", line 50, in describe_date_1d_spark bin_edges, hist = df.select(col_name).rdd.flatMap(lambda x: x).histogram(bins_arg) File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1652, in histogram raise TypeError("buckets should be a list or tuple or number(int or long)")TypeError: buckets should be a list or tuple -- or number(int or long)The above exception was the direct cause of the following exception:Traceback (most recent call last): File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 328, in __call__ return func(*args, **kwargs) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 64, in spark_describe_1d return summarizer.summarize(config, series, dtype=vtype) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/summarizer.py", line 42, in summarize _, _, summary = self.handle(str(dtype), config, series, {"type": str(dtype)}) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/handler.py", line 62, in handle return op(*args) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/handler.py", line 21, in func2 return f(*res) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/handler.py", line 21, in func2 return f(*res) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/handler.py", line 21, in func2 return f(*res) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/handler.py", line 17, in func2 res = g(*x) File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 330, in __call__ raise DispatchError(f"Function {func.__code__}") from exmultimethod.DispatchError: Function <code object describe_date_1d_spark at 0x7fb135546ce0, file "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/describe_date_spark.py", line 22>The above exception was the direct cause of the following exception:Traceback (most recent call last): File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 328, in __call__ return func(*args, **kwargs) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 92, in spark_get_series_descriptions for i, (column, description) in enumerate( File "/usr/local/lib/python3.10/multiprocessing/pool.py", line 870, in next raise value File 
"/usr/local/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 88, in multiprocess_1d return column, describe_1d(config, df.select(column), summarizer, typeset) File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 330, in __call__ raise DispatchError(f"Function {func.__code__}") from exmultimethod.DispatchError: Function <code object spark_describe_1d at 0x7fb1355210b0, file "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 16>The above exception was the direct cause of the following exception:Traceback (most recent call last): File "/tmp/sls_data_quality_library-0.3.0-py3-none-any.whl/data_quality_job/scheduler/data_quality_glue_job.py", line 1074, in run_data_exploratory_analysis self.dq_file_system_metrics_repository_manager.persist_profile_json_report( File "/tmp/sls_data_quality_library-0.3.0-py3-none-any.whl/data_quality_job/services/data_quality_file_system_metrics_repository.py", line 974, in persist_profile_json_report generated_profile.to_file(output_file=f"{local_json_report}") File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/profile_report.py", line 347, in to_file data = self.to_json() File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/profile_report.py", line 479, in to_json return self.json File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/profile_report.py", line 283, in json self._json = self._render_json() File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/profile_report.py", line 449, in _render_json description = self.description_set File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/profile_report.py", line 253, in description_set self._description_set = describe_df( File "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/describe.py", line 74, in describe series_description = get_series_descriptions( File "/home/spark/.local/lib/python3.10/site-packages/multimethod/__init__.py", line 330, in __call__ raise DispatchError(f"Function {func.__code__}") from exmultimethod.DispatchError: Function <code object spark_get_series_descriptions at 0x7fb135521370, file "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 67>INFO:py4j.clientserver:Closing down clientserver connectionINFO:py4j.clientserver:Closing down clientserver connectionINFO:py4j.clientserver:Closing down clientserver connectionWARNING:data_quality_job.scheduler.data_quality_glue_job:Processing dataset fails to provide an exploratory data analysis report : Unexpected error of type DispatchError was raised while data exploratory profiler: Function <code object spark_get_series_descriptions at 0x7fb135521370, file "/home/spark/.local/lib/python3.10/site-packages/ydata_profiling/model/spark/summary_spark.py", line 67> <!--EndFragment--> </body> </html> ### Expected Behaviour While converting my spark dataframe to pandas, the report should be generated properly for the dataset The dataframe should not be considered as spark dataframe No error should be raised ### Data Description <html> <body> <!--StartFragment--> INFO:data_quality_job.services.data_quality_operations:Data profiler dataset data types to analyze: storage_location categorystock_in_transit float32unrestricted_use_stock float32stock_at_vendor 
float32stock_in_transfer float32stock_in_quality_inspection float32valuation_class float32block_stock_returns float32material_part_number objectstock_in_transfer_plant_to_plant float32stock_value float32material_type categoryblocked_stock float32account_description categoryplant categoryall_restricted_stock float32valuated_stock_quantities float32gl_account float32record -- _timestamp datetime64[ns]non_valuated_stock_quantities float32dtype: object <!--EndFragment--> </body> </html> ### Code that reproduces the bug ```Python def determine_run_minimal_mode(self, nb_columns, nb_records): """ Determine if the function should run in minimal mode. Args: nb_columns (int): The number of columns in the dataset. nb_records (int): The number of records in the dataset. Returns: bool: True if the function should run in minimal mode, False otherwise. """ return True if (len(nb_columns) >= EDA_PROFILING_MODE_NB_COLUMNS_LIMIT or nb_records >= EDA_PROFILING_MODE_NB_RECORDS_LIMIT) else False def create_profile_report(self, dataset_to_analyze: pd.DataFrame, report_name: str, dataset_description_url: str) -> ProfileReport: """ Creates a profile report for a given dataset. Args: dataset_to_analyze (pd.DataFrame): The dataset to analyze and generate a profile report for. report_name (str): The name of the report. dataset_description_url (str): The URL of the dataset description. Returns: ProfileReport: The generated profile report. """ # Perform data quality operations and generate a profile report # ... # variables preferred characterization settings variables_settings = { "num": {"low_categorical_threshold": 5, "chi_squared_threshold": 0.999, "histogram_largest": 10}, "cat": {"length": True, "characters": False, "words": False, "cardinality_threshold": 20, "imbalance_threshold": 0.5, "n_obs": 5, "chi_squared_threshold": 0.999}, "bool": {"n_obs": 3, "imbalance_threshold": 0.5} } missing_diagrams_settings = { "heatmap": False, "matrix": True, "bar": False } # Plot rendering option, way how to pass arguments to the underlying matplotlib visualization engine plot_rendering_settings = { "histogram": {"x_axis_labels": True, "bins": 0, "max_bins": 10}, "dpi": 200, "image_format": "png", "missing": {"cmap": "RdBu_r", "force_labels": True}, "pie": {"max_unique": 10, "colors": ["gold", "b", "#FF796C"]}, "correlation": {"cmap": "RdBu_r", "bad": "#000000"} } # Correlation matrices through description_set correlations_settings = { "auto": {"calculate": True, "warn_high_correlations": True, "threshold": 0.9}, "pearson": {"calculate": False, "warn_high_correlations": False, "threshold": 0.9}, "spearman": {"calculate": False, "warn_high_correlations": False, "threshold": 0.9}, "kendall": {"calculate": False, "warn_high_correlations": False, "threshold": 0.9}, "phi_k": {"calculate": False, "warn_high_correlations": True, "threshold": 0.9}, "cramers": {"calculate": False, "warn_high_correlations": False, "threshold": 0.9}, } categorical_maximum_correlation_distinct = 20 report_rendering_settings = { "precision": 10, } interactions_settings = { "continuous": False, "targets": [] } # Customizing the report's theme html_report_styling = { "style": { "theme": "flatly", "full_width": True, "primary_colors": {"#66cc00", "#ff9933", "#ff0099"} } } current_datetime = datetime.now() current_date = current_datetime.date() current_year = current_date.strftime("%Y") # compute amount of data used for profiling samples_percent_size = (min(len(dataset_to_analyze.columns.tolist()), 20) * min(dataset_to_analyze.shape[0], 100000)) / 
(len(dataset_to_analyze.columns.tolist()) * dataset_to_analyze.shape[0]) samples = { "head": 0, "tail": 0, "random": 0 } dataset_description = { "description": f"This profiling report was generated using a sample of {samples_percent_size}% of the filtered original dataset.", "copyright_year": current_year, "url": dataset_description_url } # Identify time series variables if any # Enable tsmode to True to automatically identify time-series variables # and provide the column name that provides the chronological order of your time-series # time_series_type_schema = {} time_series_mode = False # time_series_sortby = None # for column_name in dataset_to_analyze.columns.tolist(): # if any(keyword in column_name.lower() for keyword in ["date", "timestamp"]): # self.logger.info("candidate column_name as timeseries %s", column_name) # time_series_type_schema[column_name] = "timeseries" # if len(time_series_type_schema) > 0: # time_series_mode = True # time_series_sortby = "Date Local" # is_run_minimal_mode = self.determine_run_minimal_mode(dataset_to_analyze.columns.tolist(), dataset_to_analyze.shape[0]) # Convert the Pandas DataFrame to a Spark DataFrame # Configure pandas-profiling to handle Spark DataFrames # while preserving the categorical encoding # Enable Arrow-based columnar data transfers self.spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true") pd.DataFrame.iteritems = pd.DataFrame.items # psdf = ps.from_pandas(dataset_to_analyze) # data_to_analyze = psdf.to_spark() data_to_analyze = self.spark.createDataFrame(dataset_to_analyze) ydata_profiling_instance_config = Settings() ydata_profiling_instance_config.infer_dtypes = True # ydata_profiling_instance_config.Config.set_option("profilers", {"Spark": {"verbose": True}}) return ProfileReport( # dataset_to_analyze, data_to_analyze, title=report_name, dataset=dataset_description, sort=None, progress_bar=False, vars=variables_settings, explorative=True, plot=plot_rendering_settings, report=report_rendering_settings, correlations=correlations_settings, categorical_maximum_correlation_distinct=categorical_maximum_correlation_distinct, missing_diagrams=missing_diagrams_settings, samples=samples, # correlations=None, interactions=interactions_settings, html=html_report_styling, # minimal=is_run_minimal_mode, minimal=True, tsmode=time_series_mode, # tsmode=False, # sortby=time_series_sortby, # type_schema=time_series_type_schema ) def is_categorical_column(self, df, column_name, n_unique_threshold=20, ratio_unique_values=0.05, exclude_patterns=[]): """ Determines whether a column in a pandas DataFrame is categorical. Args: df (pandas.DataFrame): The DataFrame to check. column_name (str): The name of the column to check. n_unique_threshold (int): The threshold for the number of unique values. ratio_unique_values (float): The threshold for the ratio of unique values to total values. exclude_patterns (list): A list of patterns to exclude from consideration. Returns: bool: True if the column is categorical, False otherwise. """ if df[column_name].dtype in [object, str]: # Check if the column name matches any of the exclusion patterns if any(pattern in column_name for pattern in exclude_patterns): return False # Check if the number of unique values is less than a threshold if df[column_name].nunique() < n_unique_threshold: return True # Check if the ratio of unique values to total values is less than a threshold if 1. * df[column_name].nunique() / df[column_name].count() < ratio_unique_values: print(df[column_name], "ratio is", 1. 
* df[column_name].nunique() / df[column_name].count()) return True # Check if any of the other conditions are true return False def get_categorical_columns(self, df, n_unique_threshold=10, ratio_threshold=0.05, exclude_patterns=[]): """ Determines which columns in a pandas DataFrame are categorical. Args: df (pandas.DataFrame): The DataFrame to check. n_unique_threshold (int): The threshold for the number of unique values. ratio_threshold (float): The threshold for the ratio of unique values to total values. exclude_patterns (list): A list of patterns to exclude from consideration. Returns: list: A list of the names of the categorical columns. """ categorical_cols = [] for column_name in df.columns: if self.is_categorical_column(df, column_name, n_unique_threshold, ratio_threshold, exclude_patterns): categorical_cols.append(column_name) return categorical_cols def perform_exploratory_data_analysis(self, report_name: str, dataframe_to_analyze: SparkDataFrame, columns_list: list, description_url: str, json_file_path: str) -> None: """ Performs exploratory data analysis on a given DataFrame. Args: dataframe_to_analyze (DataFrame): The DataFrame to perform exploratory data analysis on. columns_list (list): A list of dictionaries containing column information. """ try: # Cast the columns in the data DataFrame to match the Glue table column types self.logger.info("Performs exploratory data analysis on a given DataFrame with columns list: %s", columns_list) for analyze_column in columns_list: dataframe_to_analyze = dataframe_to_analyze.withColumn( analyze_column["Name"], dataframe_to_analyze[analyze_column["Name"]].cast(analyze_column["Type"]), ) # Verify the updated column types self.logger.info("Dataframe column type casted from data catalog: %s", dataframe_to_analyze.printSchema()) # converts the data types of the columns in the DataFrame to more appropriate types, # useful for improving the performance of calculations. 
# Selects the columns in the DataFrame that are of type object or category, # which are the types that are typically considered to be categorical data_to_analyze = dataframe_to_analyze.toPandas() data_to_analyze = data_to_analyze.infer_objects() data_to_analyze.convert_dtypes().dtypes categorical_cols = self.get_categorical_columns(data_to_analyze, n_unique_threshold=10, ratio_threshold=0.05, exclude_patterns=['date', 'timestamp', 'time', 'year', 'month', 'day', 'hour', 'minute', 'second', 'part_number']) # categorical_cols = data_to_analyze.select_dtypes(include=["object", "category"]).columns.tolist() self.logger.info("Data profiler dataset detected potential categorical columns %s and its type %s", categorical_cols, data_to_analyze.dtypes) for column_name in data_to_analyze.columns.tolist(): if column_name in categorical_cols: data_to_analyze[column_name] = data_to_analyze[column_name].astype("category") else: # search for undetected categorical columns if any(term in str.lower(column_name) for term in ["plant", "program"]): self.logger.info("Undetected potential categorical column %s", column_name) # for column_name in data_to_analyze.columns.tolist(): # # search for non categorical columns # # if any(term in str.lower(column_name) for term in ["partnumber", "part_number", "_item", "_number", "plant", "program"]): # if any(term in str.lower(column_name) for term in ["plant", "program"]): # if column_name in categorical_cols: # self.logger.info("Data profiler dataset proposed categorical column %s", column_name) # data_to_analyze[column_name] = data_to_analyze[column_name].astype("category") # if any(term in str.lower(column_name) for term in ["partnumber", "part_number", "_item", "_number", "_timestamp", "_date"]): # self.logger.info("Data profiler dataset detected non categorical column %s", column_name) # data_to_analyze[column_name] = data_to_analyze[column_name].astype("str") if any(term in str.lower(column_name) for term in ["timestamp"]): self.logger.info("Data profiler dataset detected datetime column %s", column_name) try: if pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d', errors='coerce').notnull().all(): data_to_analyze[column_name] = data_to_analyze[column_name].apply(pd.to_datetime) # data_to_analyze[column_name] = data_to_analyze[column_name].astype(np.datetime64) elif pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S', errors='coerce').notnull().all(): data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S') elif data_to_analyze[column_name].dtypes in ['numpy.int64', 'int64']: data_to_analyze[column_name] = data_to_analyze[column_name].apply(lambda x: datetime.fromtimestamp(int(x) / 1000)) elif data_to_analyze[column_name].dtypes == 'datetime64[ms]': data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%dT%H:%M:%SZ') data_to_analyze[column_name] = data_to_analyze[column_name].values.astype(dtype='datetime64[ns]') else: data_to_analyze[column_name] = data_to_analyze[column_name].astype('str') # if not isinstance(data_to_analyze[column_name].dtype, np.datetime64): # data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S') # # if not np.issubdtype(data_to_analyze[column_name].dtype, np.datetime64): # # data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S', errors="coerce") # # elif is_datetime64_any_dtype(data_to_analyze[column_name]): # # 
data_to_analyze[column_name] = data_to_analyze[column_name].astype(np.datetime64) # data_to_analyze[column_name] = data_to_analyze[column_name].values.astype(dtype='datetime64[ns]') # # elif data_to_analyze[column_name].dtype == 'datetime64[ns]': # # data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%dT%H:%M:%SZ') # # data_to_analyze[column_name] = data_to_analyze[column_name].values.astype(dtype='datetime64[ns]') # # else: # # data_to_analyze[column_name] = data_to_analyze[column_name].astype('datetime64') # except ValueError: # try: # data_to_analyze[column_name] = data_to_analyze[column_name].astype(np.date_time) # except ValueError: # try: # if (data_to_analyze[column_name].dtypes in ["numpy.int64", "int64"]): # data_to_analyze[column_name] = data_to_analyze[column_name].apply( # lambda x: datetime.fromtimestamp(int(x) / 1000)) except ValueError: data_to_analyze[column_name] = data_to_analyze[column_name].astype('str') elif any(term in str.lower(column_name) for term in ["date"]): self.logger.info("Data profiler dataset detected date column %s", column_name) try: if pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d', errors='coerce').notnull().all(): data_to_analyze[column_name] = data_to_analyze[column_name].dt.date elif pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S', errors='coerce').notnull().all(): data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%d %H:%M:%S') elif data_to_analyze[column_name].dtypes in ['numpy.int64', 'int64']: data_to_analyze[column_name] = data_to_analyze[column_name].apply(lambda x: datetime.fromtimestamp(int(x) / 1000)) elif data_to_analyze[column_name].dtypes == 'datetime64[ms]': data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], format='%Y-%m-%dT%H:%M:%SZ') data_to_analyze[column_name] = data_to_analyze[column_name].values.astype(dtype='datetime64[ns]') else: data_to_analyze[column_name] = data_to_analyze[column_name].astype('str') # data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name]).dt.date # except ValueError: # try: # data_to_analyze[column_name] = pd.to_datetime(data_to_analyze[column_name], # format="%Y-%m-%d", errors="coerce") # except ValueError: # try: # if (data_to_analyze[column_name].dtypes in ["numpy.int64", "int64"]): # data_to_analyze[column_name] = data_to_analyze[column_name].apply( # lambda x: datetime.fromtimestamp(int(x) / 1000)) except ValueError: pass self.logger.info("Data profiler changed dtypes %s", data_to_analyze.dtypes) # Downcast data types: If the precision of your data doesn't require float64, # consider downcasting to a lower precision data type like float32 or even int64. # This can significantly reduce memory usage and improve computational efficiency. try: float64_cols = list(data_to_analyze.select_dtypes(include="float64")) self.logger.info("Data profiler dataset detected float64 column %s", column_name) data_to_analyze[float64_cols] = data_to_analyze[float64_cols].astype("float32") # data_to_analyze[ # data_to_analyze.select_dtypes(np.float64).columns # ] = data_to_analyze.select_dtypes(np.float64).astype(np.float32) except ValueError: pass data_to_analyze.reset_index(drop=True, inplace=True) self.logger.info("Data profiler dataset data types to analyze: %s", data_to_analyze.dtypes) # If dealing with large datasets, consider using sampling techniques # to reduce the amount of data processed is useful for exploratory # data analysis or initial profiling. 
# Sample 10.000 rows # if data_to_analyze.count() >= EDA_PROFILING_MODE_NB_RECORDS_LIMIT: # data_to_analyze = data_to_analyze.sample(EDA_PROFILING_MODE_NB_RECORDS_LIMIT) # Generates a profile report, providing for time-series data, # an overview of the behaviour of time dependent variables # regarding behaviours such as time plots, seasonality, trends, # stationary and data gaps, and identifying gaps in the time series, # caused either by missing values or by entries missing in the time index profile = self.create_profile_report(dataset_to_analyze=data_to_analyze, report_name=report_name, dataset_description_url=description_url) return profile except Exception as exc: error_message = f"Unexpected error of type {type(exc).__name__} was raised while data exploratory profiler: {str(exc)}" self.logger.exception( "Run data exploratory analysis fails to generate report %s: %s", report_name, error_message, ) raise RuntimeError(error_message) from exc ``` ### pandas-profiling version v.4.6.3 ### Dependencies ```Text Ipython-8.19.0 MarkupSafe-2.1.3 PyAthena-3.0.10 PyWavelets-1.5.0 SQLAlchemy-1.4.50 altair-4.2.2 annotated-types-0.6.0 anyio-4.2.0 argon2-cffi-23.1.0 argon2-cffi-bindings-21.2.0 arrow-1.3.0 asn1crypto-1.5.1 asttokens-2.4.1 async-lru-2.0.4 asyncio-3.4.3 awswrangler-3.4.2 babel-2.14.0 beautifulsoup4-4.12.2 bleach-6.1.0 boto-session-manager-1.7.1 boto3-1.34.9 boto3-helpers-1.4.0 botocore-1.34.9 cffi-1.16.0 colorama-0.4.6 comm-0.2.0 cryptography-41.0.7 dacite-1.8.1 debugpy-1.8.0 decorator-5.1.1 defusedxml-0.7.1 delta-spark-2.3.0 deltalake-0.14.0 editorconfig-0.12.3 entrypoints-0.4 exceptiongroup-1.2.0 executing-2.0.1 fastjsonschema-2.19.1 flatten_dict-0.4.2 fqdn-1.5.1 fsspec-2023.12.2 func-args-0.1.1 great-expectations-0.18.7 greenlet-3.0.3 htmlmin-0.1.12 imagehash-4.3.1 ipykernel-6.28.0 ipywidgets-8.1.1 isoduration-20.11.0 iterproxy-0.3.1 jedi-0.19.1 jinja2-3.1.2 jsbeautifier-1.14.11 json2html-1.3.0 json5-0.9.14 jsonpatch-1.33 jsonpath-ng-aerospike-1.5.3 jsonpointer-2.4 jsonschema-4.20.0 jsonschema-specifications-2023.12.1 jupyter-client-8.6.0 jupyter-core-5.6.0 jupyter-events-0.9.0 jupyter-lsp-2.2.1 jupyter-server-2.12.1 jupyter-server-terminals-0.5.1 jupyterlab-4.0.9 jupyterlab-pygments-0.3.0 jupyterlab-server-2.25.2 jupyterlab-widgets-3.0.9 llvmlite-0.41.1 lxml-4.9.4 makefun-1.15.2 markdown-it-py-3.0.0 marshmallow-3.20.1 matplotlib-inline-0.1.6 mdurl-0.1.2 mistune-3.0.2 mmhash3-3.0.1 multimethod-1.10 nbclient-0.9.0 nbconvert-7.13.1 nbformat-5.9.2 nest-asyncio-1.5.8 networkx-3.2.1 notebook-7.0.6 notebook-shim-0.2.3 numba-0.58.1 overrides-7.4.0 pandas-2.0.3 pandocfilters-1.5.0 parso-0.8.3 pathlib-mate-1.3.1 pathlib2-2.3.7.post1 patsy-0.5.5 pexpect-4.9.0 phik-0.12.3 platformdirs-4.1.0 ply-3.11 prometheus-client-0.19.0 prompt-toolkit-3.0.43 psutil-5.9.7 ptyprocess-0.7.0 pure-eval-0.2.2 py4j-0.10.9.5 pyarrow-12.0.1 pycparser-2.21 pydantic-2.5.3 pydantic-core-2.14.6 pydeequ-1.2.0 pygments-2.17.2 pyiceberg-0.5.1 pyparsing-3.1.1 pyspark-3.3.4 python-json-logger-2.0.7 pytz-2023.3.post1 pyzmq-25.1.2 redshift_connector-2.0.918 referencing-0.32.0 requests-2.31.0 rfc3339-validator-0.1.4 rfc3986-validator-0.1.1 rich-13.7.0 rpds-py-0.16.2 ruamel.yaml-0.17.17 s3path-0.4.2 s3pathlib-2.0.1 s3transfer-0.10.0 scramp-1.4.4 send2trash-1.8.2 smart-open-6.4.0 sniffio-1.3.0 sortedcontainers-2.4.0 soupsieve-2.5 sqlalchemy-redshift-0.8.14 sqlalchemy_utils-0.41.1 stack-data-0.6.3 strictyaml-1.7.3 tabulate-0.9.0 tangled-up-in-unicode-0.2.0 terminado-0.18.0 tinycss2-1.2.1 tomli-2.0.1 toolz-0.12.0 tornado-6.4 
traitlets-5.14.0 typeguard-4.1.5 types-python-dateutil-2.8.19.14 typing-extensions-4.9.0 tzlocal-5.2 uri-template-1.3.0 urllib3-2.0.7 uuid7-0.1.0 visions-0.7.5 wcwidth-0.2.12 webcolors-1.13 webencodings-0.5.1 websocket-client-1.7.0 widgetsnbextension-4.0.9 wordcloud-1.9.3 ``` ### OS linux ### Checklist - [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues) - [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report. - [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
open
2023-12-29T22:52:16Z
2023-12-29T23:10:28Z
https://github.com/ydataai/ydata-profiling/issues/1523
[ "needs-triage" ]
tboz38
1
matterport/Mask_RCNN
tensorflow
2,674
Try using custom_callbacks. Counting accuracy takes a lot of time, so don't use it every epoch.
Try using custom_callbacks. Counting accuracy takes a lot of time, so don't use it every epoch. mean_average_precision_callback = modellib.MeanAveragePrecisionCallback(model,\ model_inference, dataset_val, calculate_map_at_every_X_epoch=5, verbose=1) model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=100, layers='heads', custom_callbacks=[mean_average_precision_callback]) _Originally posted by @VtlNmnk in https://github.com/matterport/Mask_RCNN/issues/1839#issuecomment-549430332_
open
2021-08-20T17:17:17Z
2021-08-20T17:32:23Z
https://github.com/matterport/Mask_RCNN/issues/2674
[]
t00lbelt
1
taverntesting/tavern
pytest
538
Key(s) not found in format: <var_name> = '???' Error After upgrading to 1.0.
I have recently upgraded Tavern to 1.0 and changed all tests to be compatible with 1.0, e.g. ```body: -> json:``` in response. Almost all tests are failing with a format error. One scenario: 1. data.yaml ```yaml --- # Each file should have a name and description name: User Variables description: Variables used for all tests for User # Variables should just be a mapping of key: value pairs variables: host: "{tavern.env_vars.TEST_API_HOST}" invalid_otp: 10000 app_version: 50 app_version_msg: "Please update to new version." ``` 2. test_user.tavern.yaml ```yaml includes: - !include data.yaml test_name: "User Initialization: environment setup (/api/v1/testing/integration/)" marks: - integration stages: - name: Initial data setup for accounts request: url: "{host}/api/v1/testing/integration/" method: POST json: setup: "accounts" headers: Content-Type: application/json response: status_code: 200 json: success: true mobile: !anyint otp: !anystr save: json: mobile: mobile valid_otp: otp --- test_name: "Test User Login and Verify" marks: - integration stages: - name: Test for APP version check request: url: "{host}/api/v1/accounts/login/" method: POST json: mobile: "{mobile}" response: status_code: 400 json: detail: "{app_version_msg}" ``` In the above file the first test passes, but the second test fails with the error below. ```': 'tests/accounts/test_user.tavern.yaml::Test User Login and Verify (call)'}}}] Key(s) not found in format: app_version_msg app_version_msg = '???' Source test stage (line 35): - name: Test for APP version check request: url: "{host}/api/v1/accounts/login/" method: POST json: mobile: "{mobile}" response: status_code: 400 json: detail: "{app_version_msg}" Unable to get formatted stage Errors: E tavern.util.exceptions.MissingFormatError: host ``` All my tests follow the same conventions: I save common variables in `data.yaml`, and env_vars are loaded as well. I could not find any breaking changes regarding format in 1.0. Please address this if anyone knows about this error.
closed
2020-04-16T07:02:32Z
2020-04-17T10:22:03Z
https://github.com/taverntesting/tavern/issues/538
[]
imkaka
4
recommenders-team/recommenders
data-science
1,246
How to generate context embedding for DKN?[ASK]
### Description <!--- Describe your general ask in detail --> I notice that the context embedding is not used in [dkn_deep_dive.ipynb](https://github.com/microsoft/recommenders/blob/master/examples/02_model_content_based_filtering/dkn_deep_dive.ipynb). But as the DKN paper describes, the context embedding can improve the quality of the model. I wonder how we can generate a context embedding for DKN using the MIND dataset? ### Other Comments
open
2020-11-16T00:40:27Z
2020-12-17T09:14:56Z
https://github.com/recommenders-team/recommenders/issues/1246
[ "help wanted" ]
ConnollyLeon
5
microsoft/nni
tensorflow
5,666
Requests exception: too many redirects
**Describe the issue**: running the following example experiment: `nnictl create --config .\nni\examples\trials\mnist-pytorch\config_windows.yml` getting this error: [2023-08-18 10:22:56] Creating experiment, Experiment ID: nu5kfm0o [2023-08-18 10:22:56] Starting web server... [2023-08-18 10:22:59] WARNING: Timeout, retry... [2023-08-18 10:23:01] WARNING: Timeout, retry... [2023-08-18 10:23:02] ERROR: Create experiment failed Traceback (most recent call last): File "C:\Users\alzeinha\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\alzeinha\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "C:\Users\alzeinha\BNext_PyTorch\torch_venv\Scripts\nnictl.exe\__main__.py", line 7, in <module> File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\nni\tools\nnictl\nnictl.py", line 503, in parse_args args.func(args) File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\nni\tools\nnictl\launcher.py", line 91, in create_experiment exp.start(port, debug, RunMode.Detach) File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\nni\experiment\experiment.py", line 135, in start self._start_impl(port, debug, run_mode, None, []) File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\nni\experiment\experiment.py", line 103, in _start_impl self._proc = launcher.start_experiment(self._action, self.id, config, port, debug, run_mode, File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\nni\experiment\launcher.py", line 148, in start_experiment raise e File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\nni\experiment\launcher.py", line 126, in start_experiment _check_rest_server(port, url_prefix=url_prefix) File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\nni\experiment\launcher.py", line 196, in _check_rest_server rest.get(port, '/check-status', url_prefix) File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\nni\experiment\rest.py", line 43, in get return request('get', port, api, prefix=prefix) File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\nni\experiment\rest.py", line 31, in request resp = requests.request(method, url, timeout=timeout) File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\requests\api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\requests\sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\requests\sessions.py", line 725, in send history = [resp for resp in gen] File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\requests\sessions.py", line 725, in <listcomp> history = [resp for resp in gen] File "c:\users\alzeinha\bnext_pytorch\torch_venv\lib\site-packages\requests\sessions.py", line 191, in resolve_redirects raise TooManyRedirects( requests.exceptions.TooManyRedirects: Exceeded 30 redirects. 
**Environment**: - NNI version: 1.5 - Training service (local|remote|pai|aml|etc): local - Client OS: windows 10 - Python version: 3.8 **Log message**: - nnimanager.log: `[2023-08-18 10:22:56] INFO (main) Start NNI manager [2023-08-18 10:22:57] INFO (NNIDataStore) Datastore initialization done [2023-08-18 10:22:57] INFO (RestServer) Starting REST server at port 8080, URL prefix: "/" [2023-08-18 10:22:57] WARNING (NNITensorboardManager) Tensorboard may not installed, if you want to use tensorboard, please check if tensorboard installed. [2023-08-18 10:22:57] INFO (RestServer) REST server started. `
open
2023-08-18T08:38:27Z
2023-08-18T08:38:27Z
https://github.com/microsoft/nni/issues/5666
[]
Hadiaz1
0
kensho-technologies/graphql-compiler
graphql
489
Add Float field to test schema
Neither [Neo4j](https://neo4j.com/docs/cypher-manual/current/syntax/values/) nor [RedisGraph](https://oss.redislabs.com/redisgraph/cypher_support/#types) support Decimal types, but Neo4j does support floats and RedisGraph 64-bit doubles, so it would be nice to add a field like that to the schema and integration test data so that we can test these types for integration tests.
open
2019-08-16T14:11:28Z
2019-10-08T15:50:03Z
https://github.com/kensho-technologies/graphql-compiler/issues/489
[ "maintainer quality-of-life" ]
LWprogramming
0
HumanSignal/labelImg
deep-learning
349
Installing modified software not working properly
- OS: Windows 10 - PyQt version: 5 I modified your software and adapted it to my needs, and it works well when I run it from the source code. But when I try to package it with PyInstaller, it doesn't read the images when I open a directory. Can you provide me with the correct way to build and install it?
open
2018-08-13T11:01:26Z
2018-08-13T11:01:26Z
https://github.com/HumanSignal/labelImg/issues/349
[]
issahammoud
0
PokeAPI/pokeapi
graphql
379
Pokemon types are inverted
All the Pokémon secondary types appear as the first element in the type list. The order is correct when I run a local Django instance.
closed
2018-10-03T01:28:29Z
2020-08-12T20:24:13Z
https://github.com/PokeAPI/pokeapi/issues/379
[]
tien
6
lux-org/lux
pandas
113
No module named 'luxwidget' when installing and activating the api
The console gives me a "ModuleNotFoundError: No module named 'luxwidget'" when I try to run "jupyter nbextension install --sys-prefix --symlink --overwrite --py luxwidget" in cmd.
closed
2020-10-16T09:53:50Z
2020-10-16T22:27:33Z
https://github.com/lux-org/lux/issues/113
[]
Smartog
6
sloria/TextBlob
nlp
403
Pluralization gives dubious results on some corner cases
Hi, thanks for providing and maintaining this library. I noticed that some corner cases are not handled properly. Notably: ` from textblob import Word print(Word("lynx").pluralize()) # prints "lynges" print(Word("jeans").pluralize()) # prints "jeanss" ` I'm not a native speaker, but I can't find any mention of "lynges" in any dictionary, so I assume it's wrong. Merriam-Webster suggests "lynxes". The double-s problem is self-explanatory.
open
2021-10-30T04:54:13Z
2021-10-30T04:54:13Z
https://github.com/sloria/TextBlob/issues/403
[]
alcinos
0
klen/mixer
sqlalchemy
51
Peewee backend doesn't commit by default
I found this surprising, and it seems like a bug, but the Peewee backend doesn't save by default. It seems like it should default to committing, like the Django backend does. Here's a simple test case: ``` python from mixer.backend.peewee import mixer from models import Foo print(Foo.select().count()) # 0 mixer.blend(Foo) print(Foo.select().count()) # Still 0, which was unexpected. ``` Doing this works just fine, but it's not documented anywhere that this is required: ``` python with mixer.ctx(commit=True): foo = mixer.blend(Foo) ```
closed
2015-11-20T04:23:22Z
2015-11-23T12:30:11Z
https://github.com/klen/mixer/issues/51
[]
ghost
1
jumpserver/jumpserver
django
14,414
[Question] How to fix a jquery old version vulnerability?
### Product Version 4.3.1 ### Product Edition - [X] Community Edition - [ ] Enterprise Edition - [ ] Enterprise Trial Edition ### Installation Method - [X] Online Installation (One-click command installation) - [ ] Offline Package Installation - [ ] All-in-One - [ ] 1Panel - [ ] Kubernetes - [ ] Source Code ### Environment Information We have a PAM Jumpserver in cluster mode. ### 🤔 Question Description we have a report about a vulnerability in jquery old version 1.4.4. After scanning we have a recommendation to update jquery to the latest version. We have updated to the latest versions of PAM Jumpserver, but the jquery version inside the container remains unchanged. [jQuery_vulnerability.xlsx](https://github.com/user-attachments/files/17652431/jQuery_vulnerability.xlsx) ![jquery old version vulnerability](https://github.com/user-attachments/assets/becebd4a-6824-4018-b964-257d4d31964b) ### Expected Behavior How can we fix the vulnerability or is there any way to update jquery to the latest version? ### Additional Information _No response_
closed
2024-11-06T20:09:26Z
2025-03-20T13:13:13Z
https://github.com/jumpserver/jumpserver/issues/14414
[ "✅ Done", "🤔 Question", "📦 z~release:PAM", "📦 z~release:v4.8.0" ]
obgranat
8
luispedro/mahotas
numpy
58
Bug in SURF sum_rect()
Shouldn't the lines 42 and 43 in file https://github.com/luispedro/mahotas/blob/master/mahotas/features/_surf.cpp be: ``` cpp y1 = std::min<int>(y1, integral.dim(0)-1); x1 = std::min<int>(x1, integral.dim(1)-1); ``` ?
closed
2015-04-04T14:03:33Z
2015-04-06T19:05:15Z
https://github.com/luispedro/mahotas/issues/58
[]
Mimino666
1
Josh-XT/AGiXT
automation
532
Streamlit - Add console output for Tasks
### Problem Description Currently, to see what an agent is doing you have to view the logs on the device you are hosting from. This is an issue because in a multi-user setup users won't be able to remote into the host machine. ### Proposed Solution Provide a console output so that any user can see the console output of ONLY their own agent; each user should be able to see only their own outputs, not those of other users. ### Alternatives Considered This idea is inspired by Cognosys, where the logs are shown as the agent runs through its tasks. ### Additional Context _No response_ ### Acknowledgements - [X] My issue title is concise, descriptive, and in title casing. - [X] I have searched the existing issues to make sure this feature has not been requested yet. - [X] I have provided enough information for the maintainers to understand and evaluate this request.
closed
2023-05-31T04:00:22Z
2023-06-09T02:33:29Z
https://github.com/Josh-XT/AGiXT/issues/532
[ "type | request | enhancement", "help wanted" ]
birdup000
5
dmlc/gluon-nlp
numpy
1,142
packaging is a required dependency
``` 2020-02-07 04:15:34,466 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - import gluonnlp as nlp 2020-02-07 04:15:34,466 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/usr/local/lib/python3.6/site-packages/gluonnlp/__init__.py", line 49, in <module> 2020-02-07 04:15:34,466 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - utils.version.check_version('1.6.0', warning_only=True, library=mxnet) 2020-02-07 04:15:34,466 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/usr/local/lib/python3.6/site-packages/gluonnlp/utils/version.py", line 43, in check_version 2020-02-07 04:15:34,466 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - from packaging.version import parse 2020-02-07 04:15:34,467 [INFO ] W-9000-model-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ModuleNotFoundError: No module named 'packaging' ``` CC @eric-haibin-lin
closed
2020-02-07T17:13:47Z
2020-02-22T17:42:50Z
https://github.com/dmlc/gluon-nlp/issues/1142
[ "bug", "release focus" ]
leezu
0
open-mmlab/mmdetection
pytorch
11,882
Different image resize scales used in training & test pipeline in YOLOX_tiny config file
In the YOLOX-tiny config file: _https://github.com/open-mmlab/mmdetection/blob/main/configs/yolox/yolox_tiny_8xb8-300e_coco.py_ img_scale = (640, 640) is used in the train_pipeline while (416, 416) is used in the test_pipeline. Shouldn't both be the same? ![image](https://github.com/user-attachments/assets/db1d44cd-338d-439e-a4fb-6ddc756ddb19)
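The smaller test resolution for the tiny model may be intentional, but if you want evaluation to run at the training scale, an override along these lines might work. This is only a sketch: the exact pipeline entries depend on your mmdetection version, so copy them from the base config you inherit from and change just the `Resize` scale.

```python
# hypothetical child config; pipeline keys are assumptions, compare with your base config
_base_ = './yolox_tiny_8xb8-300e_coco.py'

img_scale = (640, 640)  # assumed: match the training scale

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=img_scale, keep_ratio=True),
    dict(type='Pad', pad_val=dict(img=(114, 114, 114))),
    dict(type='PackDetInputs',
         meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor')),
]

val_dataloader = dict(dataset=dict(pipeline=test_pipeline))
test_dataloader = val_dataloader
```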
open
2024-07-26T10:14:49Z
2024-07-26T10:15:05Z
https://github.com/open-mmlab/mmdetection/issues/11882
[]
girinchutia-bh
0
alteryx/featuretools
scikit-learn
2,038
Fix flaky Dask test behavior
There is some flaky behavior with the Dask tests that seems to be related to the tests that use the `cluster_scheduler` fixture. Sometimes these tests do not get run, or do not get reported as being run by codecov, while other times they do. One test that seems especially problematic is `test_dask_kwargs` in `test_calculate_feature_matrix.py`. [Example codecov report](https://app.codecov.io/gh/alteryx/featuretools/compare/2035/changes) We should investigate the cause of this flaky behavior and fix it, as it is causing the codecov project coverage CI check to fail frequently in PRs.
closed
2022-04-26T19:08:18Z
2022-05-16T19:15:52Z
https://github.com/alteryx/featuretools/issues/2038
[]
thehomebrewnerd
0
codertimo/BERT-pytorch
nlp
26
Bidirectional Encoder = Transformer (self-attention), Is it true?
https://github.com/codertimo/BERT-pytorch/blob/alpha0.0.1a4/bert_pytorch/model/transformer.py#L9 Thank you!
closed
2018-10-23T03:40:23Z
2018-10-23T04:16:11Z
https://github.com/codertimo/BERT-pytorch/issues/26
[ "question" ]
guotong1988
2
graphistry/pygraphistry
pandas
65
Codec error when running Marvel Tutorial with Python 2 Kernel
To reproduce, execute marvel tutorial notebook with a python 2 kernel. The command: `plotter2.plot(unique_coappearences, heroes)` Produces the error: ``` --------------------------------------------------------------------------- UnicodeEncodeError Traceback (most recent call last) <ipython-input-10-7baa7f94961d> in <module>() ----> 1 plotter2.plot(unique_coappearences, heroes) /usr/local/lib/python2.7/site-packages/graphistry/plotter.pyc in plot(self, graph, nodes, name) 310 if (api_version == 1): 311 dataset = self._plot_dispatch(g, n, name, 'json') --> 312 info = PyGraphistry._etl1(dataset) 313 elif (api_version == 2): 314 dataset = self._plot_dispatch(g, n, name, 'vgraph') /usr/local/lib/python2.7/site-packages/graphistry/pygraphistry.pyc in _etl1(dataset) 303 'key': PyGraphistry.api_key()} 304 --> 305 out_file = PyGraphistry._get_data_file(dataset, 'json') 306 response = requests.post(PyGraphistry._etl_url(), out_file.getvalue(), 307 headers=headers, params=params, /usr/local/lib/python2.7/site-packages/graphistry/pygraphistry.pyc in _get_data_file(dataset, mode) 274 with gzip.GzipFile(fileobj=out_file, mode='w', compresslevel=9) as f: 275 if sys.version_info < (3,0) and isinstance(json_dataset, str): --> 276 f.write(json_dataset) 277 else: 278 f.write(json_dataset.encode('utf8')) /usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/gzip.pyc in write(self, data) 239 240 if len(data) > 0: --> 241 self.fileobj.write(self.compress.compress(data)) 242 self.size += len(data) 243 self.crc = zlib.crc32(data, self.crc) & 0xffffffffL UnicodeEncodeError: 'ascii' codec can't encode character u'\xc1' in position 10735680: ordinal not in range(128) ``` Pygraphistry version is 0.9.28
closed
2016-05-18T21:13:11Z
2016-05-19T06:02:47Z
https://github.com/graphistry/pygraphistry/issues/65
[ "bug", "p4" ]
padentomasello
0
arogozhnikov/einops
numpy
216
[Feature suggestion] Naming einops keras layers
Hello, it would be really nice if one could name the einops layers just like any other Keras layer. Right now, the following code triggers an error. ```python from einops.layers.tensorflow import Rearrange tf.keras.Sequential([ tf.keras.Input((224, 224, 3), name="inputs"), Rearrange("b h w c -> b c h w", name="rearrange_layer_1"), ]) ``` The error goes away if we do not name the einops layer. ```python from einops.layers.tensorflow import Rearrange tf.keras.Sequential([ tf.keras.Input((224, 224, 3), name="inputs"), Rearrange("b h w c -> b c h w"), ]) ``` Naming layers is very useful in Keras, especially when using Functional models, to extract intermediate representations or to add new nodes to the graph. This process of extracting nodes is done by accessing the model's layer with `model.get_layer(name)`.
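One workaround that avoids passing `name` to the einops layer itself is to wrap it in a named `tf.keras.Sequential` block, which can then be retrieved with `model.get_layer(name)`. A minimal sketch, assuming standard Keras behaviour:

```python
import tensorflow as tf
from einops.layers.tensorflow import Rearrange

# wrap the unnamed einops layer in a named sub-model
rearrange_block = tf.keras.Sequential(
    [Rearrange("b h w c -> b c h w")],
    name="rearrange_layer_1",
)

model = tf.keras.Sequential([
    tf.keras.Input((224, 224, 3), name="inputs"),
    rearrange_block,
])

# the named wrapper is addressable like any other layer
print(model.get_layer("rearrange_layer_1"))
```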
closed
2022-10-08T05:08:54Z
2024-01-24T23:30:30Z
https://github.com/arogozhnikov/einops/issues/216
[ "feature suggestion" ]
Sangohe
1
0xTheProDev/fastapi-clean-example
graphql
6
question on service-to-service
Is it an issue that we expose models directly? For example, in BookService we expose the model in the get method, and we also use the models of the Author repository. This implies that we are allowed to use those interfaces - CRUD, read all properties, call other relations, etc. Would exposing schema objects between domains be a better solution?
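As an illustration of the schema-based alternative (the names and fields below are made up, not taken from this repository, and the sketch assumes Pydantic v2): the ORM model stays inside its own domain and only a schema object crosses the service boundary.

```python
from pydantic import BaseModel, ConfigDict

class BookOut(BaseModel):
    # illustrative schema; field names are assumptions
    model_config = ConfigDict(from_attributes=True)
    id: int
    title: str
    author_id: int

class BookService:
    def __init__(self, book_repository):
        self._books = book_repository

    def get(self, book_id: int) -> BookOut:
        book = self._books.get(book_id)      # ORM model stays inside the domain
        return BookOut.model_validate(book)  # only the schema is exposed to callers
```

The trade-off is an explicit mapping step at every domain boundary, in exchange for repositories and ORM models staying private to their own package.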
closed
2023-02-13T10:09:41Z
2023-04-25T07:10:49Z
https://github.com/0xTheProDev/fastapi-clean-example/issues/6
[]
Murtagy
1
kymatio/kymatio
numpy
333
Test for raised errors in `Scattering2D.forward`
One `TypeError` and three `RuntimeError`s may be raised by `forward`, but are not currently tested. For example, see [here](https://codecov.io/gh/kymatio/kymatio/src/82f21ed11e9152cd234e63af96d48300210429a9/kymatio/scattering2d/scattering2d.py#L122).
closed
2019-02-16T21:54:27Z
2019-03-02T19:30:30Z
https://github.com/kymatio/kymatio/issues/333
[ "2D", "tests" ]
janden
0
bigscience-workshop/petals
nlp
551
Reachability Issue for private swarm
I am creating a private swarm using a backbone peer hosted on an AWS EC2 instance and trying to connect my machine's GPU, but I am getting reachability issues. I have followed the commands listed below. ```python3 -m petals.cli.run_dht --host_maddrs /ip4/0.0.0.0/tcp/31337 --identity_path bootstrap1.id``` I am getting reachability as False on the backbone peer: ```reachability.rpc_check(remote_peer=...YELDdS, check_peer=...YELDdS) -> False``` I have followed the commands from the Petals documentation for creating a backbone peer and running the Petals server: ```python3 -m petals.cli.run_server meta-llama/Llama-2-13b-hf --token mytoken --initial_peers /ip4/EC2_public_ip/tcp/31337/p2p/Qm_mybackbone_peer```
closed
2024-01-12T06:34:03Z
2024-11-26T21:09:09Z
https://github.com/bigscience-workshop/petals/issues/551
[]
VarunJoshi10
1
deepinsight/insightface
pytorch
2,455
Arcface_paddle: Why are the contents of FresResNet50.pdiparams output by export.py different?
When comparing two outputs of export.py with md5sum, only the contents of FresResNet50.pdiparams differ between runs. sh scripts/export_static.sh && md5sum /FresResNet50/exported_model/* 4166d80f82ba77cc7254432871c21470 /FresResNet50.pdiparams 96637fd6ffa626790b2607445ef21cce /FresResNet50.pdmodel sh scripts/export_static.sh && md5sum /FresResNet50/exported_model/* 5e1e9f239754d76f4db4ea85f5e4067b /FresResNet50.pdiparams 96637fd6ffa626790b2607445ef21cce /FresResNet50.pdmodel sh scripts/export_static.sh && md5sum /FresResNet50/exported_model/* 5e1e9f239754d76f4db4ea85f5e4067b /FresResNet50.pdiparams 96637fd6ffa626790b2607445ef21cce /FresResNet50.pdmodel Also, why do people and labels not match even in video recognition?
open
2023-10-18T02:33:47Z
2023-10-18T02:33:47Z
https://github.com/deepinsight/insightface/issues/2455
[]
kenmiyauchi
0
plotly/dash-table
dash
279
Use new refs pattern instead of legacy string pattern
React 16 introduced a new pattern for refs, described [here](https://reactjs.org/docs/refs-and-the-dom.html) - we are using the string pattern, described in that doc as "legacy". It's probably not a concern until it is deprecated in React 18 (or later?), but switching could give us some performance gains. Making this issue just to note that we are using what is described as a "legacy API" instead of the latest preferred pattern.
open
2018-12-04T17:56:27Z
2018-12-04T17:56:27Z
https://github.com/plotly/dash-table/issues/279
[ "dash-type-maintenance" ]
valentijnnieman
0
dpgaspar/Flask-AppBuilder
flask
2,263
'_FakeStack' object has no attribute '__ident_func__'
### Environment win 11 Python 3.8.10 Flask-Appbuilder version: 4.5.0 pip freeze output: aliyun-python-sdk-core==2.13.36 aliyun-python-sdk-dysmsapi==2.1.2 aliyun-python-sdk-kms==2.16.2 apispec==6.6.1 async-timeout==4.0.2 attrs==23.2.0 Babel==2.15.0 bcrypt==4.0.1 beautifulsoup4==4.12.2 blinker==1.6.2 cachelib==0.9.0 certifi==2021.10.8 cffi==1.16.0 charset-normalizer==2.0.12 click==8.1.7 colorama==0.4.6 colored==1.4.3 comtypes==1.1.14 contourpy==1.0.7 coverage==6.5.0 crcmod==1.7 crypto==1.4.1 cryptography==42.0.7 cycler==0.11.0 Deprecated==1.2.14 dnspython==2.6.1 email_validator==2.2.0 et-xmlfile==1.1.0 exceptiongroup==1.0.4 Flask==2.3.3 Flask-Admin==1.6.1 Flask-AppBuilder==4.5.0 Flask-Babel==2.0.0 Flask-Caching==2.3.0 Flask-HTTPAuth==4.8.0 Flask-JSON==0.4.0 Flask-JWT-Extended==4.6.0 Flask-Limiter==3.7.0 Flask-Login==0.6.3 Flask-SQLAlchemy==2.4.0 Flask-WTF==1.2.1 fonttools==4.39.3 future==0.18.3 Gooey==1.0.8.1 greenlet==3.0.3 htmldocx==0.0.6 idna==3.3 importlib-metadata==6.8.0 importlib-resources==5.12.0 iniconfig==1.1.1 itsdangerous==2.1.2 Jinja2==3.1.2 jmespath==0.10.0 jpg2pdf==0.1.0 jsonschema==4.22.0 jsonschema-specifications==2023.12.1 kiwisolver==1.4.4 limits==3.13.0 lxml==4.9.2 markdown-it-py==3.0.0 MarkupSafe==2.1.3 marshmallow==3.21.3 marshmallow-sqlalchemy==0.28.2 matplotlib==3.7.1 mdurl==0.1.2 mysqlclient==2.2.4 Naked==0.1.32 natsort==8.4.0 netmiko==4.2.0 ntc_templates==4.0.1 numpy==1.24.4 opencv-contrib-python==4.7.0.72 opencv-python==4.7.0.72 openpyxl==3.0.9 ordered-set==4.1.0 orjson==3.10.5 oss2==2.18.5 packaging==22.0 pandas==2.0.3 paramiko==3.3.1 Pillow==9.1.1 pkgutil_resolve_name==1.3.10 pluggy==1.0.0 prison==0.2.1 psutil==5.9.1 py==1.11.0 pycparser==2.21 pycryptodome==3.20.0 Pygments==2.18.0 pygtrie==2.4.2 PyJWT==2.8.0 PyMySQL==1.1.0 PyNaCl==1.5.0 pyparsing==3.0.9 pyproj==3.5.0 pyserial==3.5 PySimpleGUI==4.60.4 pysolr==3.9.0 pytest==7.2.1 pytest-cov==4.0.0 pytest-datadir==1.4.1 pytest-html==3.2.0 pytest-metadata==2.0.4 python-dateutil==2.8.2 python-docx==0.8.11 pytz==2021.3 pywin32==306 PyYAML==6.0.1 redis==4.6.0 redlock-py==1.0.8 referencing==0.35.1 requests==2.27.1 rich==13.7.1 rpds-py==0.18.1 scp==0.14.5 shellescape==3.8.1 six==1.16.0 soupsieve==2.4.1 SQLAlchemy==1.4.52 SQLAlchemy-Utils==0.41.2 tesserocr @ file:///D:/dl/tesserocr-2.5.2-cp38-cp38-win32.whl textfsm==1.1.3 tinydb==4.7.0 tk==0.1.0 tomli==2.0.1 typing_extensions==4.12.2 tzdata==2024.1 urllib3==1.26.9 Werkzeug==3.0.3 wrapt==1.16.0 WTForms==3.1.2 xhlib @ file:///D:/G/xhlib zhconv==1.4.3 zipp==3.15.0 ### Describe the expected results run demo success. 
### Describe the actual results ```powershell (data) PS D:\G\Flask-AppBuilder\examples\quickhowto> flask run Traceback (most recent call last): File "D:\py38\lib\runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "D:\py38\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "D:\pv\data\Scripts\flask.exe\__main__.py", line 7, in <module> File "D:\pv\data\lib\site-packages\flask\cli.py", line 1064, in main cli.main() File "D:\pv\data\lib\site-packages\click\core.py", line 1078, in main rv = self.invoke(ctx) File "D:\pv\data\lib\site-packages\click\core.py", line 1688, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "D:\pv\data\lib\site-packages\click\core.py", line 1434, in invoke return ctx.invoke(self.callback, **ctx.params) File "D:\pv\data\lib\site-packages\click\core.py", line 783, in invoke return __callback(*args, **kwargs) File "D:\pv\data\lib\site-packages\click\decorators.py", line 92, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "D:\pv\data\lib\site-packages\click\core.py", line 783, in invoke return __callback(*args, **kwargs) File "D:\pv\data\lib\site-packages\flask\cli.py", line 912, in run_command raise e from None File "D:\pv\data\lib\site-packages\flask\cli.py", line 898, in run_command app = info.load_app() File "D:\pv\data\lib\site-packages\flask\cli.py", line 313, in load_app app = locate_app(import_name, None, raise_if_not_found=False) File "D:\pv\data\lib\site-packages\flask\cli.py", line 219, in locate_app __import__(module_name) File "D:\G\Flask-AppBuilder\examples\quickhowto\app\__init__.py", line 11, in <module> db = SQLA(app) File "D:\pv\data\lib\site-packages\flask_sqlalchemy\__init__.py", line 715, in __init__ self.session = self.create_scoped_session(session_options) File "D:\pv\data\lib\site-packages\flask_sqlalchemy\__init__.py", line 748, in create_scoped_session scopefunc = options.pop('scopefunc', _app_ctx_stack.__ident_func__) AttributeError: '_FakeStack' object has no attribute '__ident_func__' ```
closed
2024-07-19T14:31:01Z
2024-07-20T00:11:00Z
https://github.com/dpgaspar/Flask-AppBuilder/issues/2263
[]
LeiYangGH
2
ets-labs/python-dependency-injector
flask
177
Question on testing with python-dependency-injector
Hello and thank you for such a great library. I have a question regarding how to test an application that uses the python-dependency-injector library. Let's take a simple use case: ``` class EmailSender: def send(self, email): pass class SmtpEmailSender: # implementation with use of smtp library class EchoingEmailSender: # logging / printing to stdout implementation def notify_users(email_sender, email): email_sender.send(email) ``` In production I want to use SmtpEmailSender, but in tests only EchoingEmailSender. I have configured a container which provides me with a production-ready EmailSender and I use it like: ``` Services.notify_users(email) ``` So notify_users gets the production-ready dependency injected. The question is: how do I switch implementations in tests? Surely I can override this specific dependency and it will work okay, but what if I have 10 containers with different providers used by the application; should I override them in every test I write? I think it can become an error-prone approach. Thanks.
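For context, the usual answer with this library is provider overriding. A minimal sketch, reusing the class names from the snippet above (the container layout itself is illustrative):

```python
from dependency_injector import containers, providers

# stubs standing in for the classes from the snippet above
class SmtpEmailSender:
    def send(self, email):
        ...  # real SMTP logic in production

class EchoingEmailSender:
    def send(self, email):
        print(email)  # test double that just echoes

def notify_users(email_sender, email):
    email_sender.send(email)

class Services(containers.DeclarativeContainer):
    email_sender = providers.Factory(SmtpEmailSender)

container = Services()

# inside a test: override just this provider for the duration of the block
with container.email_sender.override(providers.Factory(EchoingEmailSender)):
    notify_users(container.email_sender(), "user@example.com")
# after the block the original SmtpEmailSender provider is restored
```

For the ten-containers concern, a whole declarative container can also be overridden with another container (for example a dedicated test container), so a single test fixture can swap every provider at once and `reset_override()` restores the originals.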
closed
2018-01-14T07:08:37Z
2018-01-17T13:47:12Z
https://github.com/ets-labs/python-dependency-injector/issues/177
[ "question" ]
asyncee
9
errbotio/errbot
automation
1,439
!repos command fails with error '''Computer says nooo. See logs for details: b'repo_index' '''
In order to let us help you better, please fill out the following fields as best you can: ### I am... * [ *] Reporting a bug * [ ] Suggesting a new feature * [ ] Requesting help with running my bot * [ ] Requesting help writing plugins * [ ] Here about something else ### I am running... * Errbot version: 6.1.1 * OS version: Ubuntu 20.04 LTS * Python version: 3.8 * Using a virtual environment: no ### Issue description Please describe your bug/feature/problem here. The !repos command is not at all working. ``` GS 10:55 PM !repos install https://github.com/Gnoffel/errbot-kudos bot2APP 10:55 PM Installing https://github.com/Gnoffel/errbot-kudos... 10:55 Computer says nooo. See logs for details: b'repo_index' ``` Now another output ``` GS 11:00 PM !repos bot2APP 11:00 PM ┏━━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━┓ ┃ Status ┃ Name ┃ Description ┃ ┡━━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━┩ │ │ │ │ └────────┴──────┴─────────────┘ ``` ### Steps to reproduce NA ### Additional info NA
closed
2020-08-21T05:37:19Z
2020-08-31T06:46:42Z
https://github.com/errbotio/errbot/issues/1439
[ "type: support/question", "fix-available" ]
netwninja
1
davidteather/TikTok-Api
api
544
[BUG] - gevent.exceptions.InvalidSwitchError: Invalid switch into Event.wait(): ()
Sorry, I'm from Russia, I use a translator. Your library is working properly, but often after a while such an error is thrown out Traceback (most recent call last): File "main.py", line 1283, in <module> _token_bot.polling(none_stop=True, interval=0) File "/usr/local/lib/python3.8/dist-packages/telebot/__init__.py", line 487, in polling self.__non_threaded_polling(none_stop, interval, timeout, long_polling_timeout) File "/usr/local/lib/python3.8/dist-packages/telebot/__init__.py", line 591, in __non_threaded_polling raise e File "/usr/local/lib/python3.8/dist-packages/telebot/__init__.py", line 562, in __non_threaded_polling self.__retrieve_updates(timeout, long_polling_timeout) File "/usr/local/lib/python3.8/dist-packages/telebot/__init__.py", line 322, in __retrieve_updates updates = self.get_updates(offset=(self.last_update_id + 1), timeout=timeout, long_polling_timeout = long_polling_timeout) File "/usr/local/lib/python3.8/dist-packages/telebot/__init__.py", line 292, in get_updates json_updates = apihelper.get_updates(self.token, offset, limit, timeout, allowed_updates, long_polling_timeout) File "/usr/local/lib/python3.8/dist-packages/telebot/apihelper.py", line 281, in get_updates return _make_request(token, method_url, params=payload) File "/usr/local/lib/python3.8/dist-packages/telebot/apihelper.py", line 126, in _make_request result = _get_req_session().request( File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3/dist-packages/requests/sessions.py", line 646, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3/dist-packages/requests/adapters.py", line 439, in send resp = conn.urlopen( File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 665, in urlopen httplib_response = self._make_request( File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 421, in _make_request six.raise_from(e, None) File "<string>", line 3, in raise_from File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 416, in _make_request httplib_response = conn.getresponse() File "/usr/lib/python3.8/http/client.py", line 1347, in getresponse response.begin() File "/usr/lib/python3.8/http/client.py", line 307, in begin version, status, reason = self._read_status() File "/usr/lib/python3.8/http/client.py", line 268, in _read_status line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") File "/usr/lib/python3.8/socket.py", line 669, in readinto return self._sock.recv_into(b) File "/usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py", line 325, in recv_into if not util.wait_for_read(self.socket, self.socket.gettimeout()): File "/usr/lib/python3/dist-packages/urllib3/util/wait.py", line 146, in wait_for_read return wait_for_socket(sock, read=True, timeout=timeout) File "/usr/lib/python3/dist-packages/urllib3/util/wait.py", line 107, in poll_wait_for_socket return bool(_retry_on_intr(do_poll, timeout)) File "/usr/lib/python3/dist-packages/urllib3/util/wait.py", line 43, in _retry_on_intr return fn(timeout) File "/usr/lib/python3/dist-packages/urllib3/util/wait.py", line 105, in do_poll return poll_obj.poll(t) File "/usr/local/lib/python3.8/dist-packages/gevent/select.py", line 314, in poll result.event.wait(timeout=timeout) File "src/gevent/event.py", line 163, in gevent._gevent_cevent.Event.wait File "src/gevent/_abstract_linkable.py", line 521, in gevent._gevent_c_abstract_linkable.AbstractLinkable._wait File "src/gevent/_abstract_linkable.py", line 487, in 
gevent._gevent_c_abstract_linkable.AbstractLinkable._wait_core File "src/gevent/_abstract_linkable.py", line 490, in gevent._gevent_c_abstract_linkable.AbstractLinkable._wait_core File "src/gevent/_abstract_linkable.py", line 442, in gevent._gevent_c_abstract_linkable.AbstractLinkable._AbstractLinkable__wait_to_be_notified File "src/gevent/_abstract_linkable.py", line 451, in gevent._gevent_c_abstract_linkable.AbstractLinkable._switch_to_hub File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch File "src/gevent/_greenlet_primitives.py", line 65, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch File "src/gevent/_gevent_c_greenlet_primitives.pxd", line 35, in gevent._gevent_c_greenlet_primitives._greenlet_switch File "/usr/local/lib/python3.8/dist-packages/playwright/sync_api/_context_manager.py", line 48, in greenlet_main loop.run_until_complete(self._connection.run_as_sync()) File "/usr/lib/python3.8/asyncio/base_events.py", line 603, in run_until_complete self.run_forever() File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever self._run_once() File "/usr/lib/python3.8/asyncio/base_events.py", line 1823, in _run_once event_list = self._selector.select(timeout) File "/usr/local/lib/python3.8/dist-packages/gevent/selectors.py", line 201, in select self._ready.wait(timeout) File "src/gevent/event.py", line 163, in gevent._gevent_cevent.Event.wait File "src/gevent/_abstract_linkable.py", line 521, in gevent._gevent_c_abstract_linkable.AbstractLinkable._wait File "src/gevent/_abstract_linkable.py", line 487, in gevent._gevent_c_abstract_linkable.AbstractLinkable._wait_core File "src/gevent/_abstract_linkable.py", line 490, in gevent._gevent_c_abstract_linkable.AbstractLinkable._wait_core File "src/gevent/_abstract_linkable.py", line 442, in gevent._gevent_c_abstract_linkable.AbstractLinkable._AbstractLinkable__wait_to_be_notified File "src/gevent/_abstract_linkable.py", line 455, in gevent._gevent_c_abstract_linkable.AbstractLinkable._switch_to_hub gevent.exceptions.InvalidSwitchError: Invalid switch into Event.wait(): () If I knew that there could be a problem at all, then I would take the code in try except, here is a possible problem area `def loadPreset(message): while True: mycursor = mydb.cursor() mycursor.execute(f"SELECT * FROM users WHERE id='{message.from_user.id}'") myresult_user = mycursor.fetchone() mycursor.close() if myresult_user[7] == 0: _token_bot.send_message(message.chat.id, "⚠Настрой пресеты!.") break if myresult_user[8] == 0: break try: trending = api.trending(50) except ValueError: return _token_bot.send_message(message.chat.id, "⚠Отправлены неверные данные. Попробуйте еще раз.") i = 0 for tiktok in trending: if i >= 3: break stop_donwload = 0 mycursor = mydb.cursor() mycursor.execute(f"SELECT * FROM tokens WHERE user_id='{message.from_user.id}'") myresult = mycursor.fetchall() mycursor.close() for down_down in myresult: mycursor1 = mydb.cursor() mycursor1.execute(f"SELECT * FROM clips WHERE token='{down_down[2]}'") myresult = mycursor1.fetchall() mycursor1.close() for row in myresult: if row[2] == tiktok["id"]: stop_donwload = 1 break if i >= 3: break if stop_donwload == 0: url = "https://www.tiktok.com/@" + tiktok["author"]["uniqueId"] + "/video/" + tiktok["id"] x = threading.Thread(target=_load_snaptik, args=(url, message, tiktok["desc"], 1, tiktok["id"])) x.start() i = i + 1 time.sleep(myresult_user[7])`
closed
2021-03-30T12:18:10Z
2022-02-14T03:09:11Z
https://github.com/davidteather/TikTok-Api/issues/544
[ "bug" ]
K1NDER-ai
1
koxudaxi/datamodel-code-generator
pydantic
1,901
datamodel-codegen produces class with missing body parts
**Describe the bug** Within the OSCAL schema is: ``` "EmailAddressDatatype" : { "description" : "An email address string formatted according to RFC 6531.", "allOf" : [ { "$ref" : "#/definitions/StringDatatype" }, { "type" : "string", "format" : "email", "pattern" : "^.+@.+$" } ] }, ``` An **empty body** is produced by datamodel-codegen: ``` class EmailAddressDatatype(OscalBaseModel): """ An email address string formatted according to RFC 6531. """ ``` **To Reproduce** 1. download OSCAL schemas from here [https://github.com/usnistgov/OSCAL/releases/tag/v1.1.2](https://github.com/usnistgov/OSCAL/releases/tag/v1.1.2) 2. run datamodel-codegen (shown below) Example schema (snippet): ```json "EmailAddressDatatype" : { "description" : "An email address string formatted according to RFC 6531.", "allOf" : [ { "$ref" : "#/definitions/StringDatatype" }, { "type" : "string", "format" : "email", "pattern" : "^.+@.+$" } ] }, ``` Follow link above for full schema. Used commandline: ``` datamodel-codegen --disable-timestamp --disable-appending-item-suffix --use-schema-description --input-file-type jsonschema --input release-1.1.2-schemas/oscal_assessment-plan_schema.json --base-class trestle.core.base_model.OscalBaseModel --output trestle/oscal/tmp/assessment_plan.py ``` **Expected behavior** The generated code body should not be empty, but rather contain that which is found in the json schema. **Version:** - OS: Red Hat Enterprise Linux release 8.9 (Ootpa) - Python version: 3.9.9 - datamodel-code-generator version: 0.25.5 **Additional context** NA
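For comparison, something along these lines is what one might expect the generator to emit for this `allOf`. This is a hand-written sketch under Pydantic v1-style constrained types, not actual tool output; the base-class import simply mirrors the `--base-class` passed on the command line above.

```python
from pydantic import constr
from trestle.core.base_model import OscalBaseModel  # base class passed on the command line

class EmailAddressDatatype(OscalBaseModel):
    """
    An email address string formatted according to RFC 6531.
    """

    # expectation: the string type and pattern from the allOf should survive generation
    __root__: constr(regex=r"^.+@.+$")
```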
open
2024-04-03T18:00:11Z
2024-04-03T18:00:11Z
https://github.com/koxudaxi/datamodel-code-generator/issues/1901
[]
degenaro
0
deepspeedai/DeepSpeed
machine-learning
6,914
Train batch size errors
[rank0]: AssertionError: Check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size 8 != 2 * 1 * 1 Printing dist.world_size earlier returns 4, but it still doesn't work.
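For reference, DeepSpeed asserts that `train_batch_size == train_micro_batch_size_per_gpu * gradient_accumulation_steps * world_size`. The `2 * 1 * 1` in the message suggests DeepSpeed only sees a world size of 1 even though `dist.world_size` printed 4, so either the launcher is not starting/propagating all ranks or the config has to be made consistent. A consistent config, assuming 4 GPUs, might look like:

```python
# assuming the 4 ranks reported by torch.distributed are what DeepSpeed should see
ds_config = {
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 1,
    # must equal micro_batch_per_gpu * gradient_accumulation_steps * world_size = 2 * 1 * 4
    "train_batch_size": 8,
}
# leaving "train_batch_size" out lets DeepSpeed derive it from the other two values
```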
closed
2024-12-25T09:56:11Z
2025-01-24T15:53:18Z
https://github.com/deepspeedai/DeepSpeed/issues/6914
[]
lckkkk02
4
google-research/bert
tensorflow
759
Exporting probabilities over the learned vocabulary
Currently, the `extract_features.py` file supports extracting representations before the last output layer but I want to extract the final probabilities over the vocabulary. I modified the code with the following addition but it doesn't seem to work (the probabilities I'm getting are all very small). ``` model_output = model.get_sequence_output() batch_size = tf.shape(model_output)[0] length = tf.shape(model_output)[1] hidden_size = tf.shape(model_output)[2] model_output = tf.reshape(inputs, [-1, hidden_size]) logits = tf.matmul(model_output, model.get_embedding_table(), transpose_b=True) probs = tf.nn.softmax(logits, axis=-1) probs = tf.reshape(probs, [batch_size, length, bert_config.vocab_size]) ```
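Two things stand out in the snippet, for what it's worth: `tf.reshape(inputs, ...)` references `inputs` instead of `model_output`, and the pre-training masked-LM head in `run_pretraining.py` applies an extra dense transform, layer norm and output bias before the softmax, so a bare matmul against the embedding table will not reproduce the pre-trained output distribution. A corrected sketch of the bare-matmul variant (still omitting those extra MLM-head variables; `model` and `bert_config` are assumed to come from the same context as the original snippet):

```python
model_output = model.get_sequence_output()              # [batch, seq_len, hidden]
batch_size = tf.shape(model_output)[0]
length = tf.shape(model_output)[1]

# reshape the encoder output (not `inputs`), using the static hidden size from the config
flat_output = tf.reshape(model_output, [-1, bert_config.hidden_size])
logits = tf.matmul(flat_output, model.get_embedding_table(), transpose_b=True)
probs = tf.nn.softmax(logits, axis=-1)
probs = tf.reshape(probs, [batch_size, length, bert_config.vocab_size])
```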
closed
2019-07-12T18:44:11Z
2019-07-16T15:32:33Z
https://github.com/google-research/bert/issues/759
[]
lioutasb
0
cleanlab/cleanlab
data-science
910
Add underperforming_group issue type among the Datalab defaults
Test issue manager with different datasets (Image, tabular etc.) to make sure that the underperforming group in the dataset is extracted successfully. List any failure cases that might need to be addressed before adding this issue type to the defaults.
closed
2023-12-07T04:27:04Z
2024-02-12T16:27:16Z
https://github.com/cleanlab/cleanlab/issues/910
[ "enhancement", "next release" ]
tataganesh
2
flaskbb/flaskbb
flask
661
Update tests to use modern versions of python
Change the build tests so they no longer use obsolete versions of Python and instead use currently supported ones.
open
2024-04-08T17:22:49Z
2024-04-08T17:22:49Z
https://github.com/flaskbb/flaskbb/issues/661
[]
gmweinberg
0
nltk/nltk
nlp
3,261
Unable to get local issuer certificate CentOS_7
Hi, I'm trying to download the punkt package using the nltk.download('punkt') command and I get the following error: [nltk_data] Error loading punkt: <urlopen error [SSL: [nltk_data] CERTIFICATE_VERIFY_FAILED] certificate verify failed: [nltk_data] unable to get local issuer certificate (_ssl.c:1002)> I'm using Python 3.11 (pyenv) on CentOS 7. Thank you for your help.
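Not part of the original report, but the usual workarounds for this on machines with a missing or outdated CA bundle are to refresh the system `ca-certificates` package and/or `pip install --upgrade certifi`, or, as a last resort, to disable certificate verification for the download. A commonly cited snippet, use at your own risk:

```python
import ssl
import nltk

# fall back to an unverified HTTPS context only if the attribute exists
try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    pass
else:
    ssl._create_default_https_context = _create_unverified_https_context

nltk.download("punkt")
```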
open
2024-06-04T09:14:07Z
2025-01-30T17:46:28Z
https://github.com/nltk/nltk/issues/3261
[]
francoiscap
2
pytorch/pytorch
deep-learning
149,475
IndexError in linear_binary when X and Y are the same with max-autotune enabled
### 🐛 Describe the bug When the x and y are the same in the inputs with max-autotune enabled, an index error occurs. Simple reproducer: ``` class Model(torch.nn.Module): def __init__(self): super().__init__() self.linear = torch.nn.Linear(1024, 1024) def forward(self, input): out = self.linear(input) out = out + input return out if __name__ == "__main__": input = torch.randn(1024, 1024) m = Model().eval() dtype = torch.bfloat16 input = input.to(dtype) with torch.autocast(enabled=True, device_type="cpu", dtype=dtype): c_m = torch.compile(m, mode="max-autotune") inductor_res = c_m(input) ``` ``` Traceback (most recent call last): File "pytorchs/test/test_linear.py", line 72, in <module> inductor_res = c_m(input) File "pytorchs/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "pytorchs/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl return forward_call(*args, **kwargs) File "pytorchs/pytorch/torch/_dynamo/eval_frame.py", line 663, in _fn raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1 File "pytorchs/pytorch/torch/_dynamo/output_graph.py", line 1544, in _call_user_compiler raise BackendCompilerFailed( File "pytorchs/pytorch/torch/_dynamo/output_graph.py", line 1519, in _call_user_compiler compiled_fn = compiler_fn(gm, self.example_inputs()) File "pytorchs/pytorch/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__ compiled_gm = compiler_fn(gm, example_inputs) File "pytorchs/pytorch/torch/__init__.py", line 2349, in __call__ return compile_fx(model_, inputs_, config_patches=self.config) File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 1745, in compile_fx return compile_fx( File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 2103, in compile_fx return aot_autograd( File "pytorchs/pytorch/torch/_dynamo/backends/common.py", line 101, in __call__ cg = aot_module_simplified(gm, example_inputs, **self.kwargs) File "pytorchs/pytorch/torch/_functorch/aot_autograd.py", line 1160, in aot_module_simplified compiled_fn = AOTAutogradCache.load( File "pytorchs/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 775, in load compiled_fn = dispatch_and_compile() File "pytorchs/pytorch/torch/_functorch/aot_autograd.py", line 1145, in dispatch_and_compile compiled_fn, _ = create_aot_dispatcher_function( File "pytorchs/pytorch/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function return _create_aot_dispatcher_function( File "pytorchs/pytorch/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function compiled_fn, fw_metadata = compiler_fn( File "pytorchs/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 219, in aot_dispatch_base compiled_fw = compiler(fw_module, updated_flat_args) File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 1643, in fw_compiler_freezing optimized_function = inner_compile( File "miniforge3/envs/ecao/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 628, in compile_fx_inner return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")( File "pytorchs/pytorch/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper inner_compiled_fn = compiler_fn(gm, example_inputs) File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 735, in _compile_fx_inner mb_compiled_graph = fx_codegen_and_compile( File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 1309, in 
fx_codegen_and_compile return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs) File "pytorchs/pytorch/torch/_inductor/compile_fx.py", line 1128, in codegen_and_compile graph.run(*example_inputs) File "pytorchs/pytorch/torch/_inductor/graph.py", line 879, in run return super().run(*args) File "pytorchs/pytorch/torch/fx/interpreter.py", line 171, in run self.env[node] = self.run_node(node) File "pytorchs/pytorch/torch/_inductor/graph.py", line 1529, in run_node result = super().run_node(n) File "pytorchs/pytorch/torch/fx/interpreter.py", line 240, in run_node return getattr(self, n.op)(n.target, args, kwargs) File "pytorchs/pytorch/torch/_inductor/graph.py", line 1125, in call_function return target(*args, **kwargs) File "pytorchs/pytorch/torch/_inductor/fx_passes/mkldnn_fusion.py", line 620, in fn return L[fusion_op](*computation_args) File "pytorchs/pytorch/torch/_inductor/lowering.py", line 466, in wrapped out = decomp_fn(*args, **kwargs) File "pytorchs/pytorch/torch/_inductor/mkldnn_lowerings.py", line 349, in linear_binary result = autotune_select_algorithm( File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 2345, in autotune_select_algorithm return _ALGORITHM_SELECTOR_CACHE(*args, **kwargs) File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 1985, in __call__ timings = do_autotuning(precompile_fn) File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 1913, in do_autotuning timings = self.lookup( File "pytorchs/pytorch/torch/_inductor/codecache.py", line 321, in lookup timings = benchmark(choices) File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 1893, in autotune return make_benchmark_fn()(choices) File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 2119, in benchmark_in_current_process raise e from None File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 2083, in benchmark_in_current_process timing = benchmark_choice_in_current_process(choice, inputs) File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 2063, in benchmark_choice_in_current_process result = choice.benchmark(*inpts, out=output) File "pytorchs/pytorch/torch/_inductor/select_algorithm.py", line 1535, in benchmark new_args, new_out = self._preprocessor(args, out) File "pytorchs/pytorch/torch/_inductor/codegen/cpp_gemm_template.py", line 937, in preprocessor *maybe_to_dense(*reorder_and_filter(inputs, layout)) File "pytorchs/pytorch/torch/_inductor/codegen/cpp_gemm_template.py", line 846, in reorder_and_filter inputs[inp_idx], torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: IndexError: tuple index out of range ``` ### Versions latest Pytorch.
open
2025-03-19T02:23:52Z
2025-03-19T02:25:20Z
https://github.com/pytorch/pytorch/issues/149475
[ "oncall: cpu inductor" ]
CaoE
0
521xueweihan/HelloGitHub
python
2,278
[Project Recommendation] Vue Vben Admin, a free and open-source admin dashboard template based on Vite + Vue 3 + Ant Design Vue
## Recommended Project <!-- This is the entry point for recommending projects to the HelloGitHub monthly issue. Self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. --> <!-- Click "Preview" above to view the submitted content right away --> <!-- Only open-source projects hosted on GitHub are accepted; please fill in the GitHub project address --> - Project address: https://github.com/vbenjs/vue-vben-admin <!-- Please choose one of (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning) --> - Category: JS <!-- Please describe what it does in about 20 characters, like an article title, so it is clear at a glance --> - Project title: Vue Vben Admin, a free and open-source admin dashboard template based on Vite + Vue 3 + Ant Design Vue - Project description: Vue Vben Admin is a free and open-source admin dashboard template based on Vite + Vue 3 + Ant Design Vue. It ships with generalized components and reusable hooks built on the Composition API, so it has great learning value, and it works out of the box for quickly building admin projects. <!-- What makes it stand out? What are its distinguishing features compared with similar projects? --> - Highlights: 1. The project provides the full set of components needed for admin dashboard development; 2. It is built on the latest front-end stack: Vue 3 + Vite; 3. The front-end interaction is very smooth, with many pleasing animations; 4. It is beginner-friendly: newcomers can start from the simplified version of the project and iterate step by step; 5. It supports desktop deployment. - Example code: a hook that listens for window size changes ```js import { tryOnMounted, tryOnUnmounted } from '@vueuse/core'; import { useDebounceFn } from '@vueuse/core'; interface WindowSizeOptions { once?: boolean; immediate?: boolean; listenerOptions?: AddEventListenerOptions | boolean; } export function useWindowSizeFn<T>(fn: Fn<T>, wait = 150, options?: WindowSizeOptions) { let handler = () => { fn(); }; const handleSize = useDebounceFn(handler, wait); handler = handleSize; const start = () => { if (options && options.immediate) { handler(); } window.addEventListener('resize', handler); }; const stop = () => { window.removeEventListener('resize', handler); }; tryOnMounted(() => { start(); }); tryOnUnmounted(() => { stop(); }); return [start, stop]; } ``` - Screenshot: ![Example](https://camo.githubusercontent.com/54505ac7198981205151e84a87a2620a9bdd2f243e0081120f49f7b4bc23eebf/68747470733a2f2f616e6e6377622e6769746875622e696f2f616e6e6377622f696d616765732f70726576696577322e706e67) - Follow-up update plan:
closed
2022-07-10T03:45:30Z
2022-07-31T07:25:12Z
https://github.com/521xueweihan/HelloGitHub/issues/2278
[ "JavaScript 项目" ]
ethanguo770
2
cvat-ai/cvat
tensorflow
9,171
CVAT Installation using Singularity instead of Docker?
### Actions before raising this issue - [x] I searched the existing issues and did not find anything similar. - [x] I read/searched [the docs](https://docs.cvat.ai/docs/) ### Is your feature request related to a problem? Please describe. I am trying to run a CVAT service on my school cluster, and I was wondering if you could add the functionality to install CVAT using singularity instead of docker-compose. I think it could be very useful for many users. ### Describe the solution you'd like _No response_ ### Describe alternatives you've considered _No response_ ### Additional context _No response_
closed
2025-03-04T16:20:23Z
2025-03-06T16:29:47Z
https://github.com/cvat-ai/cvat/issues/9171
[ "enhancement" ]
rachitsaluja
1
holoviz/panel
matplotlib
7,158
Add Context Menu Feature to Tabulator
#### Is your feature request related to a problem? Please describe. No #### Describe the solution you'd like Tabulator has a left/right click context menu feature where callbacks can be registered. Having a context menu option for rows and cells would let the developer provide multiple options to perform additional operations/computations/visualizations/etc. This feature would be an amazing addition for providing a wide range of options for interactive data analysis. Context menu details: https://tabulator.info/docs/6.2/menu #### Describe alternatives you've considered NA #### Additional context https://tabulator.info/docs/6.2/menu
open
2024-08-17T10:12:18Z
2025-02-20T15:04:49Z
https://github.com/holoviz/panel/issues/7158
[ "type: enhancement" ]
Cyb3r-Monk
0
google-research/bert
tensorflow
1,240
How to deal with this problem
ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: <tf.Tensor 'args_0:0' shape=() dtype=float32>
open
2021-06-28T12:39:32Z
2021-06-28T12:39:32Z
https://github.com/google-research/bert/issues/1240
[]
justyyau
0
deepset-ai/haystack
machine-learning
8,132
clean up FilterRetriever docstrings
closed
2024-08-01T07:10:38Z
2024-08-01T11:16:45Z
https://github.com/deepset-ai/haystack/issues/8132
[]
agnieszka-m
0
dgtlmoon/changedetection.io
web-scraping
2,307
UI - Text Filtering not displaying filters when watched site is added to a group with filters.
**Describe the bug** Given a watched website with text filters, if it is added to a group which has filters too, the text filters in the website edit page do not show anymore, but are applied. The system default is playwright. **Version** v0.45.17 **To Reproduce** Steps to reproduce the behavior: 1. Create a group with CSS filter and remove path filter. 2. Add a website link in the "Add a new change detection watch" field and assign the group you just created to it. 3. Click on "Edit > Watch" 4. In the "Filters & Triggers" tab, in the "Text Filters" section, add a text filter to "Trigger/wait for text" and to "Extract text" 5. Save 6. Wait for the website to be scraped 7. Edit the watched website entry. 8. In the "Filters & Triggers" tab, the text filters in "Trigger/wait for text" and to "Extract text" are not visible, but are being applied to the final result. **Desktop (please complete the following information):** - OS: Mac Sonoma 14.4.1 - Browser Firefox - Version 128
closed
2024-04-14T11:17:34Z
2024-04-16T16:48:52Z
https://github.com/dgtlmoon/changedetection.io/issues/2307
[ "user-interface", "triage" ]
Dragonatorul
2
onnx/onnx
tensorflow
6,601
[RFC] ONNX next wheel build platform: for example manylinux-2.28?
# Ask a Question ### Question Sooner or later, we will have to decide when we want to switch to a new manylinux platform and how we want to organize the transition. PyTorch deals with it in the following way: https://github.com/pytorch/pytorch/issues/123649. I would be very interested to hear about the community's current requirements and requests.
open
2024-12-29T17:30:01Z
2025-03-12T05:50:39Z
https://github.com/onnx/onnx/issues/6601
[ "question", "rfc" ]
andife
2
pydantic/pydantic-settings
pydantic
203
How to Override Deeply Nested Settings using Environment Variables?
Hello, Is it possible to override a deeply nested setting without having to redefine the entirety of the model? Below is a modified example based off of [Parsing environment variable values](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#parsing-environment-variable-values): ```python import os from pydantic import BaseModel from pydantic_settings import BaseSettings, SettingsConfigDict class DeepSubModel(BaseModel): v4: str class SubModel(BaseModel): v1: str v2: bytes v3: int deep: DeepSubModel class Settings(BaseSettings): model_config = SettingsConfigDict(env_nested_delimiter='__') v0: str sub_model: SubModel @classmethod def settings_customise_sources( cls, settings_cls, init_settings, env_settings, dotenv_settings, file_secret_settings): return env_settings, init_settings, file_secret_settings # Ideal scenario would be a simple point modification os.environ['SUB_MODEL__DEEP__V4'] = 'override-v4' try: print(Settings(v0='0', sub_model=SubModel(v1='init-v1', v2=b'init-v2', v3=3, deep=DeepSubModel(v4='init-v4'))).model_dump()) except ValidationError as e: print(e) """ pydantic_core._pydantic_core.ValidationError: 3 validation errors for Settings sub_model.v1 Field required [type=missing, input_value={'deep': {'v4': 'override-v4'}}, input_type=dict] For further information visit https://errors.pydantic.dev/2.5/v/missing sub_model.v2 Field required [type=missing, input_value={'deep': {'v4': 'override-v4'}}, input_type=dict] For further information visit https://errors.pydantic.dev/2.5/v/missing sub_model.v3 Field required [type=missing, input_value={'deep': {'v4': 'override-v4'}}, input_type=dict] For further information visit https://errors.pydantic.dev/2.5/v/missing """ # Current scenario seems to require entire definition of nested modes etc. os.environ['SUB_MODEL'] = '{"v1": "reinit-v1", "v2": "reinit-v2"}' os.environ['SUB_MODEL__V3'] = '33' print(Settings(v0='0', sub_model=SubModel(v1='init-v1', v2=b'init-v2', v3=3, deep=DeepSubModel(v4='init-v4'))).model_dump()) """ {'v0': '0', 'sub_model': {'v1': 'reinit-v1', 'v2': b'reinit-v2', 'v3': 33, 'deep': {'v4': 'override-v4'}}} """ ``` The difference here is `Settings` is defined through instantiation instead of environment variables. Ideally, the below concept would still apply to `Settings` with respect to nested precedence, allowing for point modifications of nested variables: > Nested environment variables take precedence over the top-level environment variable JSON...
closed
2023-12-28T18:10:16Z
2024-02-21T02:45:06Z
https://github.com/pydantic/pydantic-settings/issues/203
[]
kschwab
10
jupyter-incubator/sparkmagic
jupyter
446
Why %%sql outputs timestamp in UTC instead of local format?
Same queries have different output. Looks like autoviz converts timestamp to UTC unnecessarily (We're in UTC+8). It's so annoying. How could we make it show in our local timezone? Sparkmagic version is 0.11.4 ![image](https://user-images.githubusercontent.com/2177337/37889849-e4b6a0e4-3100-11e8-8416-c3a727e7ca0b.png)
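A possible local workaround (a hedged sketch, not sparkmagic's own behaviour): pull the query result into a local pandas DataFrame with `%%sql -o df` and shift the timestamp column yourself. The column name `event_time` and the `Asia/Shanghai` zone below are placeholders.

```python
import pandas as pd

# df was produced by sparkmagic (e.g. `%%sql -o df`); event_time is assumed
# to hold naive timestamps that are actually UTC.
df['event_time_local'] = (
    pd.to_datetime(df['event_time'], utc=True)   # mark the values as UTC
      .dt.tz_convert('Asia/Shanghai')            # shift to the local timezone
)
print(df[['event_time', 'event_time_local']].head())
```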
open
2018-03-26T06:36:19Z
2018-03-26T06:36:19Z
https://github.com/jupyter-incubator/sparkmagic/issues/446
[]
raiden2012
0
nonebot/nonebot2
fastapi
3,241
Plugin: Group Chat Summary (群聊总结)
### PyPI project name nonebot_plugin_summary_group ### Plugin import package name nonebot_plugin_summary_group ### Tags [{"label":"群聊总结","color":"#b8e994"},{"label":"AI","color":"#1289a7"},{"label":"分析","color":"#0652dd"}] ### Plugin configuration ```dotenv GEMINI_KEY=xyz ``` ### Plugin test - [ ] To re-run the plugin test, tick the checkbox on the left
closed
2025-01-05T07:20:22Z
2025-01-07T08:07:30Z
https://github.com/nonebot/nonebot2/issues/3241
[ "Plugin", "Publish" ]
StillMisty
3
jupyter-book/jupyter-book
jupyter
1,443
singlehtml builder not available
### Link to the documentation you'd like to improve https://jupyterbook.org/basics/build.html#types-of-build-outputs ### What to improve The information provided about the list of builders seems inaccurate: singlehtml builder doesn't seem to be available. ```python > jupyter-book --version Jupyter Book : 0.11.2 External ToC : 0.2.3 MyST-Parser : 0.13.7 MyST-NB : 0.12.3 Sphinx Book Theme : 0.1.1 Jupyter-Cache : 0.4.3 NbClient : 0.5.4 > jupyter-book build --builder singlehtml . Usage: jupyter-book build [OPTIONS] PATH_SOURCE Try 'jupyter-book build -h' for help. Error: Invalid value for '--builder': invalid choice: singlehtml. (choose from html, dirhtml, pdfhtml, latex, pdflatex, linkcheck, custom) ```
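One possible workaround (a hedged sketch; it assumes the documented `jupyter-book config sphinx` path for generating a Sphinx configuration) is to let Jupyter Book emit `conf.py` and then call `sphinx-build` directly with the builder that the `jupyter-book build` CLI does not expose:

```bash
# generate conf.py from _config.yml / _toc.yml in the book folder
jupyter-book config sphinx .

# run Sphinx yourself with the singlehtml builder
sphinx-build . ./_build/singlehtml -b singlehtml
```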
closed
2021-08-27T14:56:41Z
2021-08-27T16:03:11Z
https://github.com/jupyter-book/jupyter-book/issues/1443
[ "documentation" ]
akhmerov
3
open-mmlab/mmdetection
pytorch
11,324
test_pipeline in RTMDet config file
Hello How are you? I am using the latest released version (3.2.0) of mmdetection. I found that there is an issue in test_pipeline of RTMDet config file. ![image](https://github.com/open-mmlab/mmdetection/assets/47862419/18249c00-66f7-4515-bb45-4e78bf2e2596) I think that "LoadAnnotations" should be placed right after "LoadImageFromFile".
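For illustration, a hedged sketch of the suggested ordering (not the actual RTMDet config; the transform arguments are representative MMDetection 3.x values), with `LoadAnnotations` placed directly after `LoadImageFromFile` so later transforms also update the ground-truth boxes:

```python
test_pipeline = [
    dict(type='LoadImageFromFile', backend_args=None),
    dict(type='LoadAnnotations', with_bbox=True),   # moved up, right after image loading
    dict(type='Resize', scale=(640, 640), keep_ratio=True),
    dict(type='Pad', size=(640, 640), pad_val=dict(img=(114, 114, 114))),
    dict(type='PackDetInputs',
         meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor')),
]
```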
open
2023-12-30T01:49:59Z
2024-01-08T03:11:30Z
https://github.com/open-mmlab/mmdetection/issues/11324
[]
rose-jinyang
1
matplotlib/mplfinance
matplotlib
308
How can I embed my mplfinance graph (mpf_animation_demo1.py) in Tkinter?
Ask anything you want about mplfinance usage, project philosophy and/or priorities, or anything else related to mplfinance. Display the following animation graph on the tkinter,environment:python3.6 +windows10 desktop Below is mpf_animation_demo1.py: ```python import pandas as pd import mplfinance as mpf import matplotlib.animation as animation idf = pd.read_csv('data/SPY_20110701_20120630_Bollinger.csv',index_col=0,parse_dates=True) idf.shape idf.head(3) idf.tail(3) df = idf.loc['2011-07-01':'2011-12-30',:] fig = mpf.figure(style='charles',figsize=(7,8)) ax1 = fig.add_subplot(2,1,1) ax2 = fig.add_subplot(3,1,3) def animate(ival): if (20+ival) > len(df): print('no more data to plot') ani.event_source.interval *= 3 if ani.event_source.interval > 12000: exit() return data = df.iloc[0:(20+ival)] ax1.clear() ax2.clear() mpf.plot(data,ax=ax1,volume=ax2,type='candle') ani = animation.FuncAnimation(fig, animate, interval=250) mpf.show() ```
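A minimal sketch of one way to embed this in Tkinter, assuming `mpf.figure()` returns an ordinary Matplotlib `Figure` (it does in recent mplfinance versions) and reusing the `fig`, `df` and `animate` objects defined in the demo above:

```python
import tkinter as tk
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import matplotlib.animation as animation

root = tk.Tk()
root.title('mplfinance animation')

# attach the mplfinance Figure to a Tk canvas instead of calling mpf.show()
canvas = FigureCanvasTkAgg(fig, master=root)
canvas.get_tk_widget().pack(side=tk.TOP, fill=tk.BOTH, expand=True)

ani = animation.FuncAnimation(fig, animate, interval=250)

canvas.draw()
root.mainloop()
```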
open
2021-01-02T09:54:18Z
2021-01-10T00:09:14Z
https://github.com/matplotlib/mplfinance/issues/308
[ "question" ]
hmhjapan
2
bendichter/brokenaxes
matplotlib
38
height_ratios ignored
Specified height_ratio=[1,4] below (or any other ratio) but it is being ignored. Current bottom axis looks squished otherwise. ``` #define lower y-axis range and upper y-axis range bax = brokenaxes( ylims=((0, 2), (3,10)), height_ratios=[1,4]) # Plot bax.plot('Year', 'Barbiturates', data=df2, color='blue') bax.plot('Year', 'Chloral Hydrate', data=df2, color='red') # Labels, legend, and title bax.set_ylabel('Rate per 1,000 encounters', 30) bax.set_xlabel('Year', 25) bax.legend(bbox_to_anchor=(0, -0.2, 1, 0), loc=2, ncol=2, mode="expand", borderaxespad=0) bax.set_title("Trend Plot") # Hack to get grid lines to appear on plots in first row as well, set the x-axis tick marks the same bax.axs[0].xaxis.set_major_locator(bax.axs[1].xaxis.get_major_locator()) # Make invisible the tick label and tick tick lines that made plot look weird plt.setp(bax.axs[0].xaxis.get_majorticklabels(), visible=False) plt.setp(bax.axs[0].xaxis.get_majorticklines(), visible=False) #turn on grid bax.axs[0].grid(True) bax.axs[1].grid(True) bax.axs[0].set_yticks([3, 4, 5, 6, 7, 8, 9, 10]) bax.axs[1].set_yticks([0.25, 0.50, 0.75, 1.0, 1.25, 1.50, 1.75, 2.0, 2.25]) plt.savefig('image2.jpg', bbox_inches='tight', dpi=300) # export to jpg at 300 dpi plt.show() ``` ![image2](https://user-images.githubusercontent.com/10066227/63563472-3a137800-c52f-11e9-8add-0df6d43e4d93.jpg)
closed
2019-08-23T02:50:34Z
2019-09-12T17:23:13Z
https://github.com/bendichter/brokenaxes/issues/38
[]
janetfb9109
4
zappa/Zappa
flask
1,053
Will my database get lots of connections if many requests happen?
<!--- Provide a general summary of the issue in the Title above --> ## Context I love Zappa very much, as my team can run our Flask application on AWS Lambda without having to worry about the application going offline someday. We also use SQLAlchemy to connect to the database in our application. As I understand it, Lambda starts the application every time a request comes in. **Does that mean each application start creates a new DB connection?** Only a limited number of connections can be served by an online database. If not, how does Zappa handle the DB connection in this situation? Thanks, support team; looking forward to your suggestions on this.
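Not an official Zappa answer, but a common mitigation sketch: create the SQLAlchemy engine at module level so a warm Lambda container reuses its connection across requests, and keep the pool tiny since each container serves one request at a time (the `DATABASE_URL` variable is a placeholder):

```python
import os
from sqlalchemy import create_engine, text

# created once per Lambda container (cold start), then reused on warm invocations
engine = create_engine(
    os.environ['DATABASE_URL'],
    pool_size=1,          # one container handles one request at a time
    max_overflow=0,
    pool_pre_ping=True,   # re-validate connections the DB may have closed while idle
    pool_recycle=300,     # recycle connections older than 5 minutes
)

def some_view():
    with engine.connect() as conn:
        return conn.execute(text('SELECT 1')).scalar()
```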
closed
2021-10-10T16:16:38Z
2022-07-16T04:19:52Z
https://github.com/zappa/Zappa/issues/1053
[]
clown-0726
2
mckinsey/vizro
pydantic
361
Decimal numbers are sometimes being cut-off in sliders/range-sliders
### Description Decimal numbers are sometimes being cut-off in sliders/range-sliders. ![Screenshot 2024-03-12 at 10 00 53](https://github.com/mckinsey/vizro/assets/90609403/c2ddec5f-b46a-480f-a0a3-44c07b9f0d7d) This seems to **only** happen when the `step` argument is not defined. When the `step` argument is defined the max-width of the input field seems to scale correctly: <img width="306" alt="Screenshot 2024-03-12 at 14 52 27" src="https://github.com/mckinsey/vizro/assets/108531476/0b22f7de-49b4-470b-a44b-ba95c525d414"> However, it doesn't always happen, so it doesn't seem to be a pure CSS issue as e.g. these work: ![Screenshot 2024-03-12 at 10 02 11](https://github.com/mckinsey/vizro/assets/90609403/0339b8a5-0360-4402-9424-ecc1984fa749) ### Expected behavior Input field expanding to its content and not cutting of digits after the separator. ### Which package? vizro ### Package version 0.1.12 ### Python version 3.9 ### OS - ### How to Reproduce Run any example with sliders ### Output _No response_ ### Code of Conduct - [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
closed
2024-03-12T09:03:05Z
2024-07-09T15:48:02Z
https://github.com/mckinsey/vizro/issues/361
[ "Bug Report :bug:" ]
huong-li-nguyen
2
adbar/trafilatura
web-scraping
203
ModuleNotFoundError: No module named '_lzma' in 1.2.1
closed
2022-05-02T12:25:22Z
2022-09-09T23:23:33Z
https://github.com/adbar/trafilatura/issues/203
[ "feedback" ]
marban
3
plotly/jupyter-dash
jupyter
38
jupyter-dash R library
Hello, Is there a way to create an R library so that I could run r-Dash in jupyter lab? Thanks! Elze
open
2020-09-30T18:26:43Z
2020-09-30T18:26:43Z
https://github.com/plotly/jupyter-dash/issues/38
[]
elzerac
0
paperless-ngx/paperless-ngx
machine-learning
9,302
[BUG] Unable to remove user group from permissions
### Description When removing a user group from the permission panel, it is ignored on reload. Even when using bulk edit. ### Steps to reproduce 1. Add two or more user groups: ![Image](https://github.com/user-attachments/assets/dd44bd08-ee24-40bf-a2ed-5cc726fe0670) 2. Remove one of these groups and hit save: ![Image](https://github.com/user-attachments/assets/007c4f2b-a7dd-4b7d-b2fe-9eb14c3d9d41) 3. Reload the page or switch to another document and back: the removed user group returned ### Webserver logs ```bash nothing interesting here ``` ### Browser logs ```bash nothing interesting here ``` ### Paperless-ngx version 2.14.7 ### Host OS Linux-6.8.0-54-generic-x86_64-with-glibc2.39 ### Installation method Bare metal ### System status ```json { "pngx_version": "2.14.7", "server_os": "Linux-6.8.0-54-generic-x86_64-with-glibc2.39", "install_type": "bare-metal", "storage": { "total": 61106941952, "available": 42460422144 }, "database": { "type": "postgresql", "url": "paperless", "status": "OK", "error": null, "migration_status": { "latest_migration": "paperless_mail.0001_initial_squashed_0009_mailrule_assign_tags", "unapplied_migrations": [] } }, "tasks": { "redis_url": "redis://localhost:6379", "redis_status": "OK", "redis_error": null, "celery_status": "OK", "index_status": "OK", "index_last_modified": "2025-03-05T13:33:41.056123Z", "index_error": null, "classifier_status": "OK", "classifier_last_trained": null, "classifier_error": null } } ``` ### Browser Chrome ### Configuration changes nothing major was changed, I don't think there is something interesting here ### Please confirm the following - [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation. - [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools. - [x] I have already searched for relevant existing issues and discussions before opening this report. - [x] I have updated the title field above with a concise description.
closed
2025-03-05T13:36:09Z
2025-03-05T15:57:08Z
https://github.com/paperless-ngx/paperless-ngx/issues/9302
[ "not a bug" ]
TimGoll
5
ymcui/Chinese-LLaMA-Alpaca
nlp
319
多进程跑出现loss为0,eval loss为nan
Thank you for using the issue template. Please provide the relevant information following the steps below; issues with relatively complete information will be handled first, thanks for your cooperation. *Tip: put an x inside [ ] to tick a box. Delete these two lines when asking. Keep only the options that apply and delete the rest.* ### Detailed description of the problem When fine-tuning the chinese_lora_alpaca_plus_13b model with multiple processes, the loss becomes 0 and the eval loss is nan; padding_side is right. ### Screenshot or log ![image](https://github.com/ymcui/Chinese-LLaMA-Alpaca/assets/19610534/04480a3b-1e09-4a0f-9a2b-43c0be4f7fac) The run command is as follows: WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=1,2 torchrun --nproc_per_node=2 finetune.py --base_model '/data/public-model/plus-13b-lora/merge_chinese_lora_alpaca_plus_13b' --data_path './data/merge-46w.json' --output_dir "./plus-13b-output/alpaca-plus-13b-test-001" --batch_size 32 --micro_batch_size 16 --num_epochs 2 --learning_rate 3e-4 --cutoff_len 512 --val_set_size 5000 --lora_r 64 --lora_alpha 128 --lora_dropout 0.1 --lora_target_modules '[q_proj,k_proj,v_proj,o_proj,gate_proj,down_proj,up_proj]' --train_on_inputs --group_by_length Could anyone take a moment to help figure out what the problem might be?
closed
2023-05-12T04:09:50Z
2023-08-27T02:45:22Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/319
[ "stale" ]
heshuguo
7
CorentinJ/Real-Time-Voice-Cloning
pytorch
705
Assertion failed!
`python K:\cloning\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\demo_toolbox.py` Assertion failed! Program: c:\python38\python.exe File: src/hostapi/wdmks/pa_win_wdmks.c, Line 1081 Expression: FALSE
closed
2021-03-16T17:50:20Z
2021-04-09T16:50:05Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/705
[]
FedericoFedeFede
3
deeppavlov/DeepPavlov
nlp
829
Difference between ELMo embedding for sentence/separate tokens
Hello, I am trying to use ELMo embedder for further text classification task. I cannot find information on that, so forced to ask a question here. What is the conceptual difference between this `elmo([['вопрос', 'жизни', 'и' ,'смерти']])` and this `elmo([['вопрос жизни и смерти']])` The embedding size is the same, but the values are different. Thanks in advance.
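A small sketch of the difference, reusing the `elmo` callable from the question (the exact output shapes depend on how the embedder is configured, so treat this as illustrative): the first call hands ELMo four tokens and it builds context-aware vectors for them, while the second call treats the whole phrase as a single (out-of-vocabulary) token that only the character-level encoder sees, which is why the values differ even though the dimensionality is the same.

```python
sentence = 'вопрос жизни и смерти'

# four tokens -> contextual embeddings for each word (then pooled, if configured)
emb_tokens = elmo([sentence.split()])

# one "token" that happens to contain spaces -> a single word-level embedding
emb_single = elmo([[sentence]])

# same dimensionality, different meaning and different values
print(len(emb_tokens), len(emb_single))
```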
closed
2019-05-05T10:23:41Z
2019-05-14T18:57:31Z
https://github.com/deeppavlov/DeepPavlov/issues/829
[]
joistick11
2
Lightning-AI/pytorch-lightning
deep-learning
19,829
How to incorporate vLLM in Lightning for LLM inference?
### Description & Motivation [vLLM](https://github.com/vllm-project/vllm) is one of the most popular and effective tool for quick, large-scale LLM inference. Are there any existing examples of incorporating vLLM in Lightning? I have not found any so far. ### Pitch Adding inference via vLLM under the Lightning framework. ### Alternatives _No response_ ### Additional context _No response_ cc @borda
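I'm not aware of an official example either; below is a hedged sketch of one way to drive vLLM from a `LightningModule`'s `predict_step`, where Lightning only orchestrates data loading and vLLM owns the model/GPU state (the model name and sampling settings are placeholders):

```python
import lightning as L
from vllm import LLM, SamplingParams

class VLLMPredictor(L.LightningModule):
    def __init__(self, model_name: str = "facebook/opt-125m"):
        super().__init__()
        self.llm = LLM(model=model_name)
        self.sampling = SamplingParams(temperature=0.8, max_tokens=128)

    def predict_step(self, batch, batch_idx):
        # batch is expected to be a list of prompt strings
        outputs = self.llm.generate(batch, self.sampling)
        return [o.outputs[0].text for o in outputs]

# usage sketch:
# trainer = L.Trainer(accelerator="gpu", devices=1)
# texts = trainer.predict(VLLMPredictor(), dataloaders=prompt_dataloader)
```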
open
2024-04-30T20:08:03Z
2024-09-04T06:42:13Z
https://github.com/Lightning-AI/pytorch-lightning/issues/19829
[ "feature", "needs triage" ]
YuWang916
3
Neoteroi/BlackSheep
asyncio
76
How to bind a path to an API endpoint?
Hey, I'm trying to bind a dynamic path to an API endpoint in BlackSheep just like we do in Flask and Quart, but it's not working. Example: in Flask and Quart it can be done like this: url: xyz.com/get/file/dynamic/path/to/file ``` @api.route('/get/file/<path:filepath>', methods=['GET']) def get_file(filepath): ..... ``` How can I do this in BlackSheep?
closed
2021-01-19T06:10:36Z
2021-01-24T19:17:35Z
https://github.com/Neoteroi/BlackSheep/issues/76
[ "enhancement", "fixed in branch" ]
abhinavatai
5
zihangdai/xlnet
nlp
111
Error when pretraining XLNet with train_gpu.py
I have successfully ran `spm_train \ --input=$INPUT \ --model_prefix=sp10m.cased.v3 \ --vocab_size=32000 \ --character_coverage=0.99995 \ --model_type=unigram \ --control_symbols=<cls>,<sep>,<pad>,<mask>,<eod> \ --user_defined_symbols=<eop>,.,(,),",-,–,£,€ \ --shuffle_input_sentence \ --input_sentence_size=10000000` and `python data_utils.py \ --bsz_per_host=32 \ --num_core_per_host=16 \ --seq_len=512 \ --reuse_len=256 \ --input_glob=*.txt \ --save_dir=${SAVE_DIR} \ --num_passes=20 \ --bi_data=True \ --sp_path=spiece.model \ --mask_alpha=6 \ --mask_beta=1 \ --num_predict=85`. now when i want to run train_gpu.py : `sudo python3 train_gpu.py --record_info_dir=/home/ubuntu/xlnet/training/tfrecords --train_batch_size=2048 --seq_len=512 --reuse_len=256 --mem_len=384 --perm_size=256 --n_layer=24 --d_model=1024 --d_embed=1024 --n_head=16 --d_head=64 --d_inner=4096 --untie_r=True --mask_alpha=6 --mask_beta=1 --num_predict=85 --model_dir=/home/ubuntu/axalimodeli` i have the error: `/usr/local/lib/python3.5/dist-packages/tensorflow-plugins /home/ubuntu/.local/lib/python3.5/site-packages/tensorflow-plugins /usr/lib/python3/dist-packages/tensorflow-plugins /usr/lib/python3.5/dist-packages/tensorflow-plugins I0702 11:32:34.041983 139935611332352 train_gpu.py:319] n_token 32000 I0702 11:32:34.042275 139935611332352 data_utils.py:795] Use the following tfrecord dirs: ['/home/ubuntu/xlnet/training/tfrecords'] I0702 11:32:34.042413 139935611332352 data_utils.py:799] [0] Record glob: /home/ubuntu/xlnet/training/tfrecords/record_info-train-*.bsz-2048.seqlen-512.reuse-256.bi.alpha-6.beta-1.fnp-85.json I0702 11:32:34.042960 139935611332352 data_utils.py:803] [0] Num of record info path: 0 I0702 11:32:34.043075 139935611332352 data_utils.py:836] [Dir 0] Number of chosen batches: 0 I0702 11:32:34.043182 139935611332352 data_utils.py:838] [Dir 0] Number of chosen files: 0 I0702 11:32:34.043281 139935611332352 data_utils.py:839] [] I0702 11:32:34.043379 139935611332352 data_utils.py:846] Total number of batches: 0 I0702 11:32:34.043897 139935611332352 data_utils.py:848] Total number of files: 0 I0702 11:32:34.044010 139935611332352 data_utils.py:849] [] I0702 11:32:34.044113 139935611332352 train_gpu.py:204] num of batches 0 I0702 11:32:34.044229 139935611332352 data_utils.py:555] Host 0 handles 0 files Traceback (most recent call last): File "train_gpu.py", line 328, in <module> tf.compat.v1.app.run() File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 40, in run _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef) File "/usr/local/lib/python3.5/dist-packages/absl/app.py", line 300, in run _run_main(main, args) File "/usr/local/lib/python3.5/dist-packages/absl/app.py", line 251, in _run_main sys.exit(main(argv)) File "train_gpu.py", line 324, in main train("/gpu:0") File "train_gpu.py", line 212, in train train_set = train_input_fn(params) File "/home/ubuntu/xlnet/data_utils.py", line 868, in input_fn num_predict=num_predict) File "/home/ubuntu/xlnet/data_utils.py", line 757, in get_dataset bsz_per_core=bsz_per_core) File "/home/ubuntu/xlnet/data_utils.py", line 566, in parse_files_to_dataset dataset = tf.data.TFRecordDataset(dataset) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/data/ops/readers.py", line 335, in __init__ filenames, compression_type, buffer_size, num_parallel_reads) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/data/ops/readers.py", line 295, in __init__ filenames = 
_create_or_validate_filenames_dataset(filenames) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/data/ops/readers.py", line 50, in _create_or_validate_filenames_dataset "`filenames` must be a `tf.data.Dataset` of `tf.string` elements.") TypeError: `filenames` must be a `tf.data.Dataset` of `tf.string` elements.` Can you help me solve this case? data_utils.py writes the tfrecord files to this directory: '/home/ubuntu/xlnet/training/tfrecords'.
closed
2019-07-03T11:53:24Z
2019-07-15T19:28:52Z
https://github.com/zihangdai/xlnet/issues/111
[]
Bagdu
9
TheKevJames/coveralls-python
pytest
156
Support Mercurial
I created at PR #155. I've added a new environment variable called USE_HG, which if set will use mercurial. I mainly left the existing API mostly alone, except two things: 1. I renamed `.git_info` to `.dvcs_info`, which returns the same data structure. 2. I moved run_command to a new file called utilities. 2.b `run_command` now supports shell=True `subprocess.Popen`. I also didn't include remotes for hg, called paths in mercurial, but the concept in mercurial is slightly different, the docs for coveralls don't seem to support it, and I'm not sure what value it adds in any event. One last bit, I couldn't seem to make non-ascii chars work in the `.hg/hgrc`, so I removed that from the tests. The USE_HG flag should be documented, but there isn't guidance on how documentation should be added, so I left it alone for now. I'll add it prior to merge - I just would appreciate some guidance. One last note: `GIT_OR_HG` is all caps because initially I had it as a global var, but this was problematic with the test runner setting the variable on import (git first) and then subsequently failing. I'll change it to lower case whenever someone get's back to me on the documentation bit.
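For readers, a rough sketch of the shape of the change being described (not the PR's actual code; the hg/git commands are illustrative):

```python
import os
import subprocess

def run_command(*args, shell=False):
    # thin wrapper around subprocess.Popen; shell=True is passed through when needed
    cmd = ' '.join(args) if shell else args
    proc = subprocess.Popen(cmd, shell=shell,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(err.decode())
    return out.decode().strip()

def dvcs_info():
    # same data structure as the old git_info(), sourced from hg when USE_HG is set
    if os.environ.get('USE_HG'):
        head = {'id': run_command('hg', 'log', '-l', '1', '--template', '{node}'),
                'message': run_command('hg', 'log', '-l', '1', '--template', '{desc}')}
        branch = run_command('hg', 'branch')
    else:
        head = {'id': run_command('git', 'rev-parse', 'HEAD'),
                'message': run_command('git', 'log', '-1', '--pretty=%s')}
        branch = run_command('git', 'rev-parse', '--abbrev-ref', 'HEAD')
    return {'head': head, 'branch': branch}
```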
closed
2017-06-13T14:19:23Z
2018-08-25T10:50:23Z
https://github.com/TheKevJames/coveralls-python/issues/156
[]
eire1130
5
streamlit/streamlit
machine-learning
10,751
Support admonitions / alerts / callouts in markdown
### Checklist - [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests. - [x] I added a descriptive title and summary to this issue. ### Summary Implement support for alert blocks (aka admonitions) within the Streamlit markdown flavour. ### Why? This is supported by many other markdown flavours as well. ### How? Unfortunately, there isn't one common syntax for this... we probably have to choose between one of these: [Github markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax): > [!NOTE] > This is a note alert block ``` > [!NOTE] > This is a note alert block ``` [PyMdown](https://facelessuser.github.io/pymdown-extensions/extensions/details/): ``` ??? note This is a note alert block ``` [Material for Mkdocs](https://squidfunk.github.io/mkdocs-material/reference/admonitions/) & [Python Markdown](https://python-markdown.github.io/extensions/admonition/): ``` !!! note This is a note alert block ``` [Docusaurus](https://docusaurus.io/docs/markdown-features/admonitions): ``` :::note This is a note alert block ::: ``` ### Additional Context - We could just reuse the [alert frontend components](https://docs.streamlit.io/develop/api-reference/status) that already exist in Streamlit - Remark/reype plugins: - [remark-github-beta-blockquote-admonitions](https://github.com/myl7/remark-github-beta-blockquote-admonitions) - [remark-github-admonitions-to-directives](https://github.com/incentro-ecx/remark-github-admonitions-to-directives) - [rehype-github-alerts](https://github.com/chrisweb/rehype-github-alerts)
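Until this lands, a user-land workaround is possible (a hedged sketch, not the proposed built-in support): split GitHub-style `> [!NOTE]` blocks out of the markdown and route them to the existing status elements.

```python
import re
import streamlit as st

_ALERT_MAP = {
    "NOTE": st.info,
    "TIP": st.success,
    "IMPORTANT": st.info,
    "WARNING": st.warning,
    "CAUTION": st.error,
}

def markdown_with_alerts(text: str) -> None:
    """Render markdown, mapping GitHub-style alert blockquotes to Streamlit alerts."""
    pattern = re.compile(r"^> \[!(\w+)\]\n((?:^>.*\n?)*)", re.MULTILINE)
    pos = 0
    for m in pattern.finditer(text):
        st.markdown(text[pos:m.start()])                       # plain markdown before the alert
        body = "\n".join(line.lstrip(">").strip() for line in m.group(2).splitlines())
        _ALERT_MAP.get(m.group(1).upper(), st.info)(body)       # render the alert itself
        pos = m.end()
    st.markdown(text[pos:])                                     # remaining markdown
```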
open
2025-03-12T17:00:47Z
2025-03-12T17:03:22Z
https://github.com/streamlit/streamlit/issues/10751
[ "type:enhancement", "feature:markdown" ]
lukasmasuch
1
jina-ai/serve
machine-learning
6,186
Jina AI install error on Ubuntu 22.04 and Python 3.12 system
**Describe the bug** The error pops up when i try to install the jina-ai python package on my Ubuntu machine. Building GRPC wheel failure. [jina install error.txt](https://github.com/user-attachments/files/16538089/jina.install.error.txt) **Describe how you solve it** Not Solved yet --- <!-- Optional, but really help us locate the problem faster --> **Environment** Ubuntu 22.04 and Python 3.12 **Screenshots** The Entire error log is in the above txt file
closed
2024-08-08T05:38:43Z
2024-12-31T07:33:55Z
https://github.com/jina-ai/serve/issues/6186
[]
teetangh
15
microsoft/nlp-recipes
nlp
279
[ASK] Modify ReadMe for question_answering folder
### Description Update the notebook table with the correct naming of the notebooks and names. ### Other Comments **Principles of NLP Documentation** Each landing page at the folder level should have a ReadMe which explains - ○ Summary of what this folder offers. ○ Why and how it benefits users ○ As applicable - Documentation of using it, brief description etc **Scenarios folder:** ○ Root Scenario folder should have a summary on what value these example notebook provides. ○ Include a table with scenario name, description, algorithm, Dataset ○ Other instructions, Pre-req of running these notebooks ○ Each scenario folder should have a summary text explaining about the scenario, what utils its using. Any benchmark numbers if applicable. Explain any concept relevant to the scenario ○ Under each scenario folder there should be one Quick Start example notebook, name starting with "QuickStart: ..." and atleast one AML notebook **Example Notebooks Guiding Principles:** ○ We are providing recipes for solving NLP scenarios on Azure AI ○ We make it easier by providing Util packages ○ We provide example notebooks on how to use the utils for solving common NLP scenarios ○ Based on these principles above, all notebook examples should be using utils wherever applicable. Ex: If your example is doing classification using BERT, use the BERTSequenceClassifier instead of directly calling BertForSequenceClassification. Same with tokenization.
closed
2019-08-13T21:37:06Z
2019-08-19T14:16:23Z
https://github.com/microsoft/nlp-recipes/issues/279
[ "documentation", "release-blocker" ]
dipanjan77
0
apify/crawlee-python
web-scraping
717
Relax pydantic <2.10.0 constraint once issues solved
With pydantic 2.10.0 we currently get errors like: ``` pydantic.errors.PydanticUserError: `Configuration` is not fully defined; you should define `Any`, then call `Configuration.model_rebuild() ``` It was working in pydantic version 2.9.2. Relax the <2.10.0 constraint in pyproject.toml once the issues are solved in Pydantic. Reproduce, for example, by running the following unit test: https://github.com/apify/crawlee-python/blob/master/tests/unit/test_configuration.py#L7
closed
2024-11-21T10:33:35Z
2024-12-05T15:35:32Z
https://github.com/apify/crawlee-python/issues/717
[ "t-tooling", "adhoc" ]
Pijukatel
2
jacobgil/pytorch-grad-cam
computer-vision
184
Suggestion to add Zoom-CAM
Full paper: https://ieeexplore.ieee.org/abstract/document/9412980 GitHub Repo: https://github.com/X-Shi/Zoom-CAM Hi, I would like to suggest the addition of Zoom-CAM into this library. From the paper, the visualizations provided look promising.
closed
2021-12-22T03:27:20Z
2022-04-01T07:48:52Z
https://github.com/jacobgil/pytorch-grad-cam/issues/184
[]
plthon
1