Dataset schema:
- repo_name: string (length 9-75)
- topic: string (30 classes)
- issue_number: int64 (1-203k)
- title: string (length 1-976)
- body: string (length 0-254k)
- state: string (2 values)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38-105)
- labels: sequence (length 0-9)
- user_login: string (length 1-39)
- comments_count: int64 (0-452)
rgerum/pylustrator
matplotlib
59
ModuleNotFoundError: No module named 'matplotlib.axes._subplots'
```
(biotite) ddalab@DP7820-WS:Analysis$ python -c 'import pylustrator'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ddalab/anaconda3/envs/biotite/lib/python3.11/site-packages/pylustrator/__init__.py", line 22, in <module>
    from .QtGuiDrag import initialize as start
  File "/home/ddalab/anaconda3/envs/biotite/lib/python3.11/site-packages/pylustrator/QtGuiDrag.py", line 33, in <module>
    from .ax_rasterisation import rasterizeAxes, restoreAxes
  File "/home/ddalab/anaconda3/envs/biotite/lib/python3.11/site-packages/pylustrator/ax_rasterisation.py", line 24, in <module>
    from matplotlib.axes._subplots import Axes
ModuleNotFoundError: No module named 'matplotlib.axes._subplots'
```
closed
2023-02-25T21:24:30Z
2023-02-27T19:00:58Z
https://github.com/rgerum/pylustrator/issues/59
[]
hackerzone85
2
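The import failure above comes from matplotlib dropping the private `matplotlib.axes._subplots` module (its classes were merged into `matplotlib.axes` around matplotlib 3.7). A minimal compatibility shim, assuming the importing code only needs the `Axes` class:

```python
try:
    # Older matplotlib (< 3.7) exposed subplot axes via this private module.
    from matplotlib.axes._subplots import Axes
except ModuleNotFoundError:
    # In matplotlib >= 3.7 the subplot classes were merged into matplotlib.axes.
    from matplotlib.axes import Axes
```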
autokey/autokey
automation
16
UnicodeEncodeError: autokey-shell, autokey-gtk not starting
Couldn't get the GTK version to start after some trouble with the AUR PKGBUILD (I ended up using the instructions from the wiki; I'm not sure if you maintain both), so I tried autokey-shell, and it produced this stack trace: http://pastebin.com/YpQ0SNJf
closed
2016-01-23T09:00:15Z
2016-11-11T06:33:21Z
https://github.com/autokey/autokey/issues/16
[]
covercash2
1
huggingface/datasets
computer-vision
7,061
Custom Dataset | Error still raised while handling errors in _generate_examples
### Describe the bug

I followed this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in a custom dataset. I am writing a dataset script which reads jsonl files, and I need to handle errors and continue reading files without raising an exception and exiting the run.

```python
def _generate_examples(self, filepaths):
    errors = []
    id_ = 0
    for filepath in filepaths:
        try:
            with open(filepath, 'r') as f:
                for line in f:
                    json_obj = json.loads(line)
                    yield id_, json_obj
                    id_ += 1
        except Exception as exc:
            logger.error(f"error occur at filepath: {filepath}")
            errors.append(exc)
```

The `logger.error` message is printed, but an exception is still raised and the run exits.

```
Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841
ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl
Traceback (most recent call last):
  File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples
    json_obj = json.loads(line)
  File "myenv/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "myenv/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3)
Generating train split: 0 examples [00:06, ? examples/s]
RemoteTraceback:
"""
Traceback (most recent call last):
  File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single
    num_examples, num_bytes = writer.finalize()
  File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize
    raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in _write_generator_to_queue
    for i, result in enumerate(func(**kwargs)):
  File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
"""

The above exception was the direct cause of the following exception:

  File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1377, in <listcomp>
    [async_result.get() for async_result in async_results]
  File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get
    raise self._value
DatasetGenerationError: An error occurred while generating the dataset
```

### Steps to reproduce the bug

Same as above.

### Expected behavior

It should handle the error and continue reading the remaining files.

### Environment info

python 3.9
open
2024-07-22T21:18:12Z
2024-09-09T14:48:07Z
https://github.com/huggingface/datasets/issues/7061
[]
hahmad2008
0
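The follow-up `SchemaInferenceError` fires because the `try` in the script wraps the whole file loop: the first corrupt line aborts the rest of that file, and if no file yields anything the split finishes with zero examples ("Generating train split: 0 examples" in the log). A hedged sketch of a per-line guard instead, with stdlib `logging` standing in for the script's `logger`:

```python
import json
import logging

logger = logging.getLogger(__name__)

def _generate_examples(self, filepaths):
    id_ = 0
    for filepath in filepaths:
        with open(filepath, "r", encoding="utf-8") as f:
            for line in f:
                try:
                    json_obj = json.loads(line)
                except json.JSONDecodeError as exc:
                    # Skip only the corrupt line; the rest of the file still yields.
                    logger.error("skipping corrupt line in %s: %s", filepath, exc)
                    continue
                yield id_, json_obj
                id_ += 1
```

If every file were corrupt the split would still yield nothing, and `datasets` would still raise `SchemaInferenceError` unless `features` is passed to the builder.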
serengil/deepface
machine-learning
1,086
AttributeError: 'KerasHistory' object has no attribute 'layer'
I am running the following code, but it keeps telling me `AttributeError: 'KerasHistory' object has no attribute 'layer'`. Is there anything I can use to fix this issue?

This is the code:

```python
TF_ENABLE_ONEDNN_OPTS=0
import cv2
import pandas as pd
import keras
from deepface import DeepFace as df

cap = cv2.VideoCapture(0)
cascade = cv2.CascadeClassifier(r"D:\FaceDetect\haarcascade_frontalface_default.xml")
while True:
    face = []
    ret, frame = cap.read()
    if cv2.waitKey(1) and (0XFF == ord("q")):
        break
    for (x, y, w, h) in cascade.detectMultiScale(frame, 1.15, 3):
        face.append(frame[x:x+w, y:y+h])
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 255, 255), 1)
    for img in face:
        df.find(img, r"D:\FaceDetect\database", "ArcFace")
    cv2.imshow("frame", frame)
cap.release()
cv2.destroyAllWindows()
```

My deepface is at its latest version, 0.0.86.

This is the error message:

```
Traceback (most recent call last):
  File "d:\FaceDetect\FaceDetect.py", line 16, in <module>
    df.find(img,r"D:\FaceDetect\database","ArcFace")
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\DeepFace.py", line 301, in find
    return recognition.find(
           ^^^^^^^^^^^^^^^^^
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\modules\recognition.py", line 96, in find
    model: FacialRecognition = modeling.build_model(model_name)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\modules\modeling.py", line 46, in build_model
    model_obj[model_name] = model()
                            ^^^^^^^
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\basemodels\ArcFace.py", line 54, in __init__
    self.model = load_model()
                 ^^^^^^^^^^^^
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\basemodels\ArcFace.py", line 80, in load_model
    base_model = ResNet34()
                 ^^^^^^^^^^
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\basemodels\ArcFace.py", line 130, in ResNet34
    model = training.Model(img_input, x, name="ResNet34")
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\tensorflow\python\trackable\base.py", line 204, in _method_wrapper
    result = method(self, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\tensorflow\python\keras\engine\functional.py", line 116, in __init__
    self._init_graph_network(inputs, outputs)
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\tensorflow\python\trackable\base.py", line 204, in _method_wrapper
    result = method(self, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\tensorflow\python\keras\engine\functional.py", line 152, in _init_graph_network
    self._validate_graph_inputs_and_outputs()
  File "C:\Users\ASUS\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\tensorflow\python\keras\engine\functional.py", line 694, in _validate_graph_inputs_and_outputs
    layer = x._keras_history.layer
            ^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'KerasHistory' object has no attribute 'layer'
PS C:\Users\ASUS> ^C
PS C:\Users\ASUS> & C:/Users/ASUS/AppData/Local/Microsoft/WindowsApps/python3.12.exe d:/FaceDetect/FaceDetect.py
2024-03-10 16:51:33.062190: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-03-10 16:51:33.441281: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-03-10 16:51:37.024612: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "d:\FaceDetect\FaceDetect.py", line 17, in <module>
    df.find(img,r"D:\FaceDetect\database","ArcFace")
  [... identical frames to the first traceback ...]
AttributeError: 'KerasHistory' object has no attribute 'layer'
```
closed
2024-03-10T09:54:33Z
2024-03-10T13:48:09Z
https://github.com/serengil/deepface/issues/1086
[ "dependencies" ]
FabioCanavarro
10
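Two things stand out in the report above. The `tensorflow\python\keras` frames in the traceback are TensorFlow's legacy internal copy of Keras; deepface's ArcFace builder reaching into those internals failing this way is typically a TensorFlow/Keras version mismatch, so pinning `tensorflow` to a release deepface supports is the usual direction (hedged, the "dependencies" label on the issue points the same way). Separately, the capture loop's quit check is a bug on its own: `cv2.waitKey(1) and (0XFF == ord("q"))` compares 0xFF (255) with ord("q") (113), which is always False, so the loop never breaks on "q". A corrected loop sketch, kept self-contained with no deepface call:

```python
import cv2

cap = cv2.VideoCapture(0)  # hypothetical camera index
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("frame", frame)
    # Mask the key code first, then compare: the usual OpenCV idiom.
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```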
healthchecks/healthchecks
django
592
Telegram Channel Support
Currently the Telegram bot can only send messages to users or groups, but I want to post messages to a channel.
closed
2021-12-23T02:23:50Z
2021-12-29T18:40:49Z
https://github.com/healthchecks/healthchecks/issues/592
[]
AuroraDysis
2
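For reference, Telegram's Bot API can already post to a channel when the bot is an administrator of it, by passing the channel's `@username` (or its numeric chat id) as `chat_id` to `sendMessage`. A minimal sketch with placeholder token and channel name:

```python
import requests

BOT_TOKEN = "123456:ABC-DEF"  # placeholder bot token
CHANNEL = "@my_channel"       # placeholder channel username; bot must be an admin

resp = requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
    json={"chat_id": CHANNEL, "text": "check is down"},
    timeout=10,
)
resp.raise_for_status()
```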
ckan/ckan
api
8,126
Editor User Permissions Issue: Dataset Created but 404 Response (API)
## CKAN version
2.10.4

## Describe the bug
I am encountering a 404 error when attempting to create a new dataset using an editor user account. The dataset is created successfully, but the response code indicates the resource cannot be found. This issue does not occur when using a sysadmin account (API token).

### Steps to reproduce
1. In Postman, create a POST request to: {{host}}/api/3/action/package_create
2. Set the "Authorization" header to: {{token}}
3. Include the following JSON body in the request:

```json
{
  "tag_string": ["test"],
  "name": "doi-test",
  "author": "Flavio Francisco (Royal Netherlands Institute for Sea Research)",
  "subject": "Earth and Environmental Sciences",
  "author_email": "user@nioz.nl",
  "owner_org": "nioz",
  "private": true,
  "doi": "10.33591/nioz/7b.b.9g",
  "license_id": "CC0-1.0",
  "maintainer": "Flavio",
  "maintainer_email": "user@nioz.nl",
  "notes": "This is a test",
  "title": "DOI TEST",
  "url": "https://doi.org/10.33591/nioz/7b.b.9g",
  "extras": [],
  "creators": "[{\"firstname\":\"Flavio\",\"lastname\":\"Francisco\",\"orcid\":null,\"affiliation\":\"Royal Netherlands Institute for Sea Research\",\"iscorrespondingauthor\":true,\"contactemailaddress\":\"user@nioz.nl\",\"formatted\":\"Flavio Francisco - Royal Netherlands Institute for Sea Research - ORCID: - Corresponding Author - user@nioz.nl\"}]",
  "dataset_persistent_id": "DOI:10.33591/nioz/7b.b.9g",
  "distribution_date": "2024-03-19",
  "deposit_date": "2024-03-19",
  "depositor": "Flavio Francisco (user@nioz.nl)",
  "others_id": "DOI:10.33591/nioz/7b.b.9g; DAS:84989278-89a6-4e50-8b9d-e3a76f4b265c",
  "contributor": "NIOZ Royal Netherlands Institute for Sea Research in cooperation with Utrecht University",
  "distributor": "Research Data Management(NIOZ Royal Netherlands Institute for Sea Research)"
}
```

### Expected behavior
The request should successfully create a new dataset and return a response code of 200 (OK) or 201 (Created).

### Additional details
N/A
open
2024-03-20T10:54:04Z
2024-06-02T22:00:37Z
https://github.com/ckan/ckan/issues/8126
[]
flaviofrancisco
3
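The same request expressed as a short script, handy for checking whether the 404 arrives with a JSON error body or an empty response; host, token, and the trimmed payload are placeholders standing in for the full Postman request above:

```python
import requests

CKAN = "https://ckan.example.org"  # placeholder host
TOKEN = "..."                      # placeholder API token of the editor user

resp = requests.post(
    f"{CKAN}/api/3/action/package_create",
    headers={"Authorization": TOKEN},
    json={"name": "doi-test", "owner_org": "nioz", "private": True, "title": "DOI TEST"},
    timeout=30,
)
print(resp.status_code)  # the issue reports 404 here for editor tokens
print(resp.text)         # CKAN normally returns a JSON body even on errors
```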
slackapi/bolt-python
fastapi
1,229
How is the "app_installed_team_id" in a slack request payload set?
I have a question about how app_installed_team_id is set in the slack request payload. According to the source code mentioned here, [https://github.com/slackapi/bolt-python/blob/v1.20.1/slack_bolt/request/internals.py#L90](https://github.com/slackapi/bolt-python/blob/v1.20.1/slack_bolt/request/internals.py#L90), the team_id is extracted based on what is present in the payload.

I have a situation where I have a user that is a member of two workspaces in an organization. An app is installed in both workspaces by the same user, i.e. separate installation records are created with different team_ids. When I receive an event like _app_home_open_ the payload will extract the team_id from the first authorization record, which in my case ends up being the correct team_id of the workspace that the app home tab is being loaded in.

When I start an interaction, like clicking a button in the home tab, it results in an action payload and the team_id is extracted from the ["view"]["app_installed_team_id"] which has the value of the other workspace, being the workspace that the app was first installed in. Why is it sending that value instead of the team_id of the workspace I am in? This results in the other team_id being set in the context and used by the client. I can override this, but am curious why this is the case and is it the expected behaviour?

_TL;DR: when an app is installed in multiple workspaces in an org by the same user the app_installed_team_id seems to always be the team_id of the first workspace the app was installed in. Depending on the action being handled, the team_id that is being set in the context may not always be the team_id of the workspace you are using, resulting in the webclient making requests against the wrong workspace unless you manually override it._

### Reproducible in:

#### The `slack_bolt` version
slack_bolt==1.20.1
slack_sdk==3.27.1

#### Python runtime version
Python 3.12.3

#### Steps to reproduce:
1. Create two workspaces in an org and install an app in both workspaces by the same user.
2. Open app home tab in the second workspace and review the app_installed_team_id in the request body.

### Expected result:
Expected the app_installed_team_id to be the team_id of the workspace the action was initiated in.

### Actual result:
It sets the team_id to the first workspace that the app was installed into.

#### Logs:
```
{
  "type": "block_actions",
  "user": {
    <<redacted>>
    "team_id": "T07LF8EHX7V"
  },
  "api_app_id": "A02FZN4GRGA",
  <<redacted>>
  "team": {
    "id": "T07LF8EHX7V",
    <<redacted>>
    "enterprise_id": "E07L8N9AQES",
    <<redacted>>
  },
  <<redacted>>
  "is_enterprise_install": false,
  "view": {
    <<redacted>>
    "app_installed_team_id": "T07M45HCWF2",
  }
}
```

I would expect the app_installed_team_id to equal T07LF8EHX7V, not T07M45HCWF2.
closed
2024-12-18T22:35:33Z
2025-01-10T01:52:38Z
https://github.com/slackapi/bolt-python/issues/1229
[ "question" ]
RubberDuckyToyFactory
3
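A sketch of the kind of manual override the reporter mentions, written as a global bolt middleware; this assumes `body["team"]["id"]` (the workspace where the interaction happened) is the id you want in the context, and that overwriting `context["team_id"]` is acceptable for your installation store:

```python
import os
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

@app.middleware
def prefer_interaction_team_id(body, context, next):
    # For interaction payloads, trust the workspace where the interaction
    # happened over view["app_installed_team_id"].
    team = body.get("team") or {}
    if team.get("id"):
        context["team_id"] = team["id"]
    next()
```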
Yorko/mlcourse.ai
matplotlib
789
AI Agent Chinese technical exchange group
![Image](https://github.com/user-attachments/assets/b4ac7fe1-68cc-4da2-b320-7e5b92bb96f9)
open
2025-03-10T14:19:29Z
2025-03-14T06:57:00Z
https://github.com/Yorko/mlcourse.ai/issues/789
[]
aiqubits
0
deezer/spleeter
tensorflow
128
[Discussion] anyway to change bitrate of the output files?
Whenever I use songs from source FLAC/WAV files, no matter what, the stems always come out at a really low bitrate. I've tried messing with a few config files, and as far as I know there's no command to change it. Is there a specific way/file to modify to do this?
closed
2019-11-23T05:42:19Z
2019-11-25T14:15:50Z
https://github.com/deezer/spleeter/issues/128
[ "question", "RTMP" ]
Waffled-II
1
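A hedged sketch using spleeter's Python API, where the `codec` and `bitrate` arguments to `separate_to_file` are the knobs in question (the output bitrate defaults to 128k); the input/output paths are placeholders:

```python
from spleeter.separator import Separator

# Separate into 2 stems and write mp3 output at a higher bitrate.
separator = Separator("spleeter:2stems")
separator.separate_to_file(
    "song.flac",      # placeholder input file
    "output/",        # placeholder output directory
    codec="mp3",
    bitrate="320k",
)
```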
ultralytics/ultralytics
deep-learning
19,793
The recall is very low. How can it be improved?
### Search before asking

- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.

### Question

I am using the YOLOv11l model for training on a detection dataset. The current issue is that the mAP is around 0.6, but the recall is only about 0.3. Are there any parameters I can modify to improve the detection recall rate?

### Additional

Here is my training code:

```python
from ultralytics import YOLO
import os
import warnings

warnings.filterwarnings("ignore", category=UserWarning, module="matplotlib")
print(os.getcwd())

if __name__ == '__main__':
    # model = YOLO(r'yolov11l.yaml')  # build a new model from YAML
    # model.load(r'yolo11l.pt')
    model = YOLO("yolo11l.pt")  # load a pretrained model (recommended for training)
    model.train(data=r'data.yaml',
                imgsz=960,
                epochs=800,
                batch=6,
                workers=8,
                optimizer='AdamW',
                patience=0,
                device='1',
                conf=0.1,
                multi_scale=True,
                lr0=0.001
                )
```

The other parameters:

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Default training settings and hyperparameters for medium-augmentation COCO training

task: detect # (str) YOLO task, i.e. detect, segment, classify, pose, obb
mode: train # (str) YOLO mode, i.e. train, val, predict, export, track, benchmark

# Train settings -------------------------------------------------------------------------------------------------------
model: # (str, optional) path to model file, i.e. yolov8n.pt, yolov8n.yaml
data: '../../data.yaml' # (str, optional) path to data file, i.e. coco8.yaml
epochs: 300 # (int) number of epochs to train for
time: # (float, optional) number of hours to train for, overrides epochs if supplied
patience: 100 # (int) epochs to wait for no observable improvement for early stopping of training
batch: 4 # (int) number of images per batch (-1 for AutoBatch)
imgsz: 640 # (int | list) input images size as int for train and val modes, or list[h,w] for predict and export modes
save: True # (bool) save train checkpoints and predict results
save_period: 40 # (int) Save checkpoint every x epochs (disabled if < 1)
cache: False # (bool) True/ram, disk or False. Use cache for data loading
device: 2 # (int | str | list, optional) device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
workers: 1 # (int) number of worker threads for data loading (per RANK if DDP)
project: # (str, optional) project name
name: # (str, optional) experiment name, results saved to 'project/name' directory
exist_ok: False # (bool) whether to overwrite existing experiment
pretrained: True # (bool | str) whether to use a pretrained model (bool) or a model to load weights from (str)
optimizer: auto # (str) optimizer to use, choices=[SGD, Adam, Adamax, AdamW, NAdam, RAdam, RMSProp, auto]
verbose: True # (bool) whether to print verbose output
seed: 0 # (int) random seed for reproducibility
deterministic: True # (bool) whether to enable deterministic mode
single_cls: False # (bool) train multi-class data as single-class
rect: False # (bool) rectangular training if mode='train' or rectangular validation if mode='val'
cos_lr: False # (bool) use cosine learning rate scheduler
close_mosaic: 0 # (int) disable mosaic augmentation for final epochs (0 to disable)
resume: False # (bool) resume training from last checkpoint
amp: True # (bool) Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check
fraction: 1.0 # (float) dataset fraction to train on (default is 1.0, all images in train set)
profile: False # (bool) profile ONNX and TensorRT speeds during training for loggers
freeze: None # (int | list, optional) freeze first n layers, or freeze list of layer indices during training
multi_scale: False # (bool) Whether to use multiscale during training

# Segmentation
overlap_mask: True # (bool) merge object masks into a single image mask during training (segment train only)
mask_ratio: 4 # (int) mask downsample ratio (segment train only)

# Classification
dropout: 0.0 # (float) use dropout regularization (classify train only)

# Val/Test settings ----------------------------------------------------------------------------------------------------
val: True # (bool) validate/test during training
split: val # (str) dataset split to use for validation, i.e. 'val', 'test' or 'train'
save_json: False # (bool) save results to JSON file
save_hybrid: False # (bool) save hybrid version of labels (labels + additional predictions)
conf: 0.2 # (float, optional) object confidence threshold for detection (default 0.25 predict, 0.001 val)
iou: 0.6 # (float) intersection over union (IoU) threshold for NMS
max_det: 300 # (int) maximum number of detections per image
half: False # (bool) use half precision (FP16)
dnn: False # (bool) use OpenCV DNN for ONNX inference
plots: True # (bool) save plots and images during train/val

# Predict settings -----------------------------------------------------------------------------------------------------
source: # (str, optional) source directory for images or videos
vid_stride: 1 # (int) video frame-rate stride
stream_buffer: False # (bool) buffer all streaming frames (True) or return the most recent frame (False)
visualize: False # (bool) visualize model features
augment: False # (bool) apply image augmentation to prediction sources
agnostic_nms: False # (bool) class-agnostic NMS
classes: # (int | list[int], optional) filter results by class, i.e. classes=0, or classes=[0,2,3]
retina_masks: False # (bool) use high-resolution segmentation masks
embed: # (list[int], optional) return feature vectors/embeddings from given layers

# Visualize settings ---------------------------------------------------------------------------------------------------
show: False # (bool) show predicted images and videos if environment allows
save_frames: False # (bool) save predicted individual video frames
save_txt: False # (bool) save results as .txt file
save_conf: False # (bool) save results with confidence scores
save_crop: False # (bool) save cropped images with results
show_labels: True # (bool) show prediction labels, i.e. 'person'
show_conf: True # (bool) show prediction confidence, i.e. '0.99'
show_boxes: True # (bool) show prediction boxes
line_width: # (int, optional) line width of the bounding boxes. Scaled to image size if None.

# Export settings ------------------------------------------------------------------------------------------------------
format: torchscript # (str) format to export to, choices at https://docs.ultralytics.com/modes/export/#export-formats
keras: False # (bool) use Keras
optimize: False # (bool) TorchScript: optimize for mobile
int8: False # (bool) CoreML/TF INT8 quantization
dynamic: False # (bool) ONNX/TF/TensorRT: dynamic axes
simplify: True # (bool) ONNX: simplify model using `onnxslim`
opset: # (int, optional) ONNX: opset version
workspace: None # (float, optional) TensorRT: workspace size (GiB), `None` will let TensorRT auto-allocate memory
nms: False # (bool) CoreML: add NMS

# Hyperparameters ------------------------------------------------------------------------------------------------------
lr0: 0.01 # (float) initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lrf: 0.01 # (float) final learning rate (lr0 * lrf)
momentum: 0.937 # (float) SGD momentum/Adam beta1
weight_decay: 0.0005 # (float) optimizer weight decay 5e-4
warmup_epochs: 3.0 # (float) warmup epochs (fractions ok)
warmup_momentum: 0.8 # (float) warmup initial momentum
warmup_bias_lr: 0.1 # (float) warmup initial bias lr
box: 7.5 # (float) box loss gain
cls: 0.5 # (float) cls loss gain (scale with pixels)
dfl: 1.5 # (float) dfl loss gain
pose: 12.0 # (float) pose loss gain
kobj: 1.0 # (float) keypoint obj loss gain
nbs: 64 # (int) nominal batch size
hsv_h: 0.015 # (float) image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # (float) image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # (float) image HSV-Value augmentation (fraction)
degrees: 5.0 # (float) image rotation (+/- deg)
translate: 0.1 # (float) image translation (+/- fraction)
scale: 0.2 # (float) image scale (+/- gain)
shear: 0.0 # (float) image shear (+/- deg)
perspective: 0.0 # (float) image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # (float) image flip up-down (probability)
fliplr: 0.5 # (float) image flip left-right (probability)
bgr: 0.0 # (float) image channel BGR (probability)
mosaic: 0.0 # (float) image mosaic (probability)
mixup: 0.0 # (float) image mixup (probability)
copy_paste: 0.0 # (float) segment copy-paste (probability)
copy_paste_mode: "flip" # (str) the method to do copy_paste augmentation (flip, mixup)
auto_augment: randaugment # (str) auto augmentation policy for classification (randaugment, autoaugment, augmix)
erasing: 0.4 # (float) probability of random erasing during classification training (0-0.9), 0 means no erasing, must be less than 1.0.
crop_fraction: 1.0 # (float) image crop fraction for classification (0.1-1), 1.0 means no crop, must be greater than 0.

# Custom config.yaml ---------------------------------------------------------------------------------------------------
cfg: # (str, optional) for overriding defaults.yaml

# Tracker settings -----------------------------------------------------------------------------------------------------
tracker: botsort.yaml # (str) tracker type, choices=[botsort.yaml, bytetrack.yaml]
```
open
2025-03-20T06:55:07Z
2025-03-20T20:55:54Z
https://github.com/ultralytics/ultralytics/issues/19793
[ "question", "detect" ]
LiangYong1216
4
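One hedged observation on the question above: the training call passes `conf=0.1`, and recall measured at a 0.1 confidence threshold will read lower than recall at the validation default of 0.001, so part of the low recall may be a measurement artifact rather than the model. A sketch that re-measures at the default threshold, assuming a hypothetical `runs/detect/train/weights/best.pt` path:

```python
from ultralytics import YOLO

# Re-validate with the default low confidence threshold so low-confidence
# true positives are counted before NMS/thresholding hides them.
model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical weights path
metrics = model.val(data="data.yaml", imgsz=960, conf=0.001, iou=0.6)
print(metrics.box.mr)     # mean recall
print(metrics.box.map50)  # mAP@0.5
```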
vitalik/django-ninja
rest-api
511
[BUG]
I had trouble using Ninja forms: the form is not working with the PUT method, only POST.

```python
@router.post("/profile/edit", response={200: dict, 400: dict})
def edit_profile(request: Request, profile: EditProfileSchema = Form(...)):
    print(profile.dict())
```

Result: `{'username': 'test', 'fullname': 'test', 'delete_image': True, 'bio': '', 'size': ''}`

This code works only if I leave the POST method; if PUT is set, this is the result: `{'username': None, 'fullname': None, 'delete_image': None, 'bio': None, 'size': None}`

Versions:
- Python version: 3.10.3
- Django version: 4.0.6
- Django-Ninja version: 0.19.1
closed
2022-07-22T00:31:45Z
2022-09-08T15:02:08Z
https://github.com/vitalik/django-ninja/issues/511
[]
RiccardoCherchi
2
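The likely mechanism behind the report above: Django only populates form data (`request.POST`) for POST requests, so under PUT the form fields resolve to `None`. A hedged workaround sketch that parses the urlencoded body by hand instead of relying on `Form(...)`:

```python
from django.http import HttpRequest, QueryDict
from ninja import Router

router = Router()

@router.put("/profile/edit", response={200: dict, 400: dict})
def edit_profile(request: HttpRequest):
    # Parse application/x-www-form-urlencoded from the raw body ourselves,
    # since Django does not do this for PUT.
    data = QueryDict(request.body)
    return 200, data.dict()
```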
zappa/Zappa
flask
1,351
Scheduled function truncated at 63 characters and fails to invoke
If I have a scheduled function with a name longer than 63 characters, then the name will be truncated in the CloudWatch event name/ARN:

```
{
    "production": {
        ...
        "events": [{
            "function": "my_module.my_submodule.my_really_long_and_descriptive_function_name",
            "expressions": ["rate(1 day)"]
        }],
        ...
    }
}
```

Event rule: `arn:aws:events:eu-west-2:000000000000:rule/-my_module.my_submodule.my_really_long_and_descriptive_function_`

This results in the following exception when the event is handled by the lambda:

```
AttributeError: module 'my_module.my_submodule' has no attribute 'my_really_long_and_descriptive_function_'
```

## Context
It looks like the `whole_function` value is parsed out of the event ARN here: https://github.com/zappa/Zappa/blob/39f75e76d28c1a08d4de6501e6f794fe988cbc98/zappa/handler.py#L410

Since the ARNs are limited in length, the long module path gets truncated to 63 characters (possibly because of the leading `-` making 64 total). It looks like the full module and function path remains non-truncated in the description of the event rule.

## Expected Behavior
It should invoke the non-truncated function, or should refuse to deploy with handler functions that are too long.

## Actual Behavior
It throws an exception and the scheduled task never executes.

## Possible Fix
Either:
1. Have the handler read the non-truncated `whole_function` value from the event description. This might require an extra AWS API call that existing deployments may or may not have permission to perform.
2. During deployment, a mapping of truncated names to full names could be created and embedded in the deployed app bundle, then referenced when handling events.
3. Raise an error (early) during deployment if a handler function name is too long and would result in truncation. It would be better to explicitly fail during deployment than to have guaranteed failures later on that might go unnoticed.

## Steps to Reproduce
1. Create a scheduled function whose fully qualified handler is longer than 63 characters.
2. Deploy.
3. Observe the error logs for the `AttributeError` above.

## Your Environment
* Zappa version used: 0.56.1
* Operating System and Python version: Amazon Linux (lambda), Python 3.9
closed
2024-09-18T11:18:37Z
2024-09-30T07:31:03Z
https://github.com/zappa/Zappa/issues/1351
[ "duplicate" ]
eviltwin
2
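A minimal sketch of the reporter's third proposed fix (fail fast at deploy time); the 63-character limit is taken from the truncated rule name in the report, and the validation function name is hypothetical:

```python
MAX_EVENT_FUNCTION_LEN = 63  # limit observed in the issue's truncated rule name

def validate_scheduled_functions(events):
    """Raise early instead of deploying rules whose names would be truncated."""
    for event in events:
        fn = event["function"]
        if len(fn) > MAX_EVENT_FUNCTION_LEN:
            raise ValueError(
                f"scheduled function {fn!r} is {len(fn)} chars long; "
                f"the CloudWatch rule name would truncate it to {MAX_EVENT_FUNCTION_LEN}"
            )

validate_scheduled_functions([
    {"function": "my_module.my_submodule.my_really_long_and_descriptive_function_name",
     "expressions": ["rate(1 day)"]},
])  # raises ValueError
```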
xonsh/xonsh
data-science
5,753
Inconsistent quoting behavior on Windows vs. macOS
## Current Behavior

I can run the command below on both macOS and Windows:

```xsh
conda install "python>=3.11"
```

However, the following variant only works on Windows:

```xsh
conda install @(['"python>=3.11"'])
```

On macOS, I get the traceback pasted below.

Traceback (if applicable):

<details>

```xsh
Trace run_subproc({'cmds': (['conda', 'install', '"python>=3.11"'],), 'captured': 'hiddenobject'})
Trace run_subproc({'cmds': (['/usr/local/Caskroom/miniconda/base/bin/conda', 'install', '"python>=3.11"'],), 'captured': 'hiddenobject'})
InvalidMatchSpec: Invalid spec '"python>=3.11"': Invalid version '3.11"': invalid character(s)
Exception in thread {'cls': 'ProcProxyThread', 'name': 'Thread-186', 'func': FuncAlias({'name': 'conda', 'func': '_conda_main', 'return_what': 'result'}), 'alias': 'conda', 'pid': None}
subprocess.CalledProcessError: Command '['/usr/local/Caskroom/miniconda/base/bin/conda', 'install', '"python>=3.11"']' returned non-zero exit status 1.
subprocess.CalledProcessError: Command '['conda', 'install', '"python>=3.11"']' returned non-zero exit status 1.
```

</details>

## Expected Behavior

## xonfig

<details>

```xsh
+-----------------------------+------------------------------------+
| xonsh                       | 0.19.0                             |
| Python                      | 3.11.11                            |
| PLY                         | 3.11                               |
| have readline               | True                               |
| prompt toolkit              | 3.0.39                             |
| shell type                  | prompt_toolkit                     |
| history backend             | json                               |
| pygments                    | 2.16.1                             |
| on posix                    | True                               |
| on linux                    | False                              |
| on darwin                   | True                               |
| on windows                  | False                              |
| on cygwin                   | False                              |
| on msys2                    | False                              |
| is superuser                | False                              |
| default encoding            | utf-8                              |
| xonsh encoding              | utf-8                              |
| encoding errors             | surrogateescape                    |
| xontrib                     | []                                 |
| RC file 1                   | /path/to/home/.config/xonsh/rc.xsh |
| UPDATE_OS_ENVIRON           | False                              |
| XONSH_CAPTURE_ALWAYS        | False                              |
| XONSH_SUBPROC_OUTPUT_FORMAT | stream_lines                       |
| THREAD_SUBPROCS             | True                               |
| XONSH_CACHE_SCRIPTS         | True                               |
+-----------------------------+------------------------------------+
```

</details>
open
2024-12-16T17:14:03Z
2025-01-18T19:38:35Z
https://github.com/xonsh/xonsh/issues/5753
[ "windows", "command-substitution", "argparse", "to-close-in-the-future", "edge-case" ]
auneri
8
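What the macOS failure amounts to, sketched with plain `subprocess`: xonsh splices `@([...])` elements directly into argv with no shell re-quoting pass, so embedded double quotes reach conda literally (running this requires a conda install; the spec strings are taken from the issue):

```python
import subprocess

# Failing case: conda receives the quotes as part of the argument and
# rejects the spec, matching the InvalidMatchSpec in the traceback above.
subprocess.run(["conda", "install", '"python>=3.11"'])

# Working case: drop the inner quotes, since argv lists need no shell quoting.
subprocess.run(["conda", "install", "python>=3.11"])
```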
horovod/horovod
tensorflow
3,504
hvd.DistributedOptimizer gradient accumulation doesn't clean up infinite gradient correctly
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) Keras
2. Framework version: 2.4
3. Horovod version: 2.3
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version:
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:

**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?

**Bug report:**
We were training in TensorFlow [FP16 mixed precision](https://www.tensorflow.org/guide/mixed_precision) with keras `model.fit()` and with gradient accumulation/aggregation (`backward_pass_per_step` in `hvd.DistributedOptimizer`), and noticed that the [GradientAggregationHelperEager](https://github.com/horovod/horovod/blob/master/horovod/tensorflow/gradient_aggregation_eager.py#L8) doesn't work correctly with FP16 when the loss goes infinite.

Details: It is kind of expected that at the very first 2-15 steps of training, the gradient out of the TF [LossScaleOptimizer](https://github.com/keras-team/keras/blob/v2.8.0/keras/mixed_precision/loss_scale_optimizer.py#L258-L844) is infinite (because the default initial loss scale factor is as large as `2**15`). The dynamic LSO can handle this gracefully: it just skips applying the gradient of that step and divides the scale factor by half. However, the horovod GradientAggregationHelper will add the infinite gradient up locally anyway, and the infinite gradient will never be correctly cleaned up in [this way](https://github.com/horovod/horovod/blob/133ef0725253db83cfb82a4ed4003df76d189829/horovod/tensorflow/gradient_aggregation_eager.py#L119-L123):

```python
def _clear_vars(self):
    self.counter.assign(0)
    for idx in self.locally_aggregated_grads.keys():
        self.locally_aggregated_grads[idx].assign_add(
            -1 * self.locally_aggregated_grads[idx])
```

since adding an infinite value to its own negative yields NaN, not zero.
closed
2022-04-06T05:12:54Z
2022-04-15T17:39:00Z
https://github.com/horovod/horovod/issues/3504
[ "bug" ]
yundai424
0
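A sketch of the cleanup the report implies: reset the aggregation variables with `assign()` rather than `assign_add()` of the negative, since `inf + (-inf)` is NaN. This is an illustration of the idea as a standalone helper, not Horovod's actual patch:

```python
import tensorflow as tf

def clear_aggregated_grads(counter, locally_aggregated_grads):
    """Zero the step counter and all locally aggregated gradient variables.

    Using assign(zeros_like(...)) also clears inf/NaN values, which the
    assign_add(-x) formulation in the report cannot do.
    """
    counter.assign(0)
    for grad_var in locally_aggregated_grads.values():
        grad_var.assign(tf.zeros_like(grad_var))
```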
wkentaro/labelme
computer-vision
527
Instance segmentation not working
SegmentationObjectPNG and SegmentationClassPNG contain the same type of images and do not show different colors for different instances.

<img src=https://user-images.githubusercontent.com/55757328/71172201-9db37500-2285-11ea-9758-e8decca2be09.png width=30% > <img src=https://user-images.githubusercontent.com/55757328/71172208-a441ec80-2285-11ea-92f0-5c145f4059dc.png width=30%>

Even though while labelling I labelled as classname-1, classname-2, the labels.txt file has only classname once, as you have shown in the instance segmentation example. What could I be doing wrong? I really want different instances in different colors.
closed
2019-12-19T12:03:40Z
2020-03-15T00:00:21Z
https://github.com/wkentaro/labelme/issues/527
[]
aditya-krish
2
horovod/horovod
tensorflow
3,535
Distributed validation with Keras and Tensorflow
**Is your feature request related to a problem? Please describe.**
Referring to the Keras+Tensorflow example at https://github.com/horovod/horovod/blob/master/examples/tensorflow2/tensorflow2_keras_mnist.py, I'm considering distinct training and validation datasets by expanding line 42 as follows:

```python
train_dataset = train_dataset.repeat().shuffle(10000).batch(128)
val_dataset = val_dataset.batch(128)
```

And the adapted `model.fit` call as follows:

```python
mnist_model.fit(train_dataset,
                steps_per_epoch=500 // hvd.size(),
                callbacks=callbacks,
                validation_data=val_dataset,
                validation_steps=val_steps,
                epochs=24,
                verbose=verbose)
```

When running this code on a cluster, the model training phase uses all computation nodes as expected, but the validation phase operates only on the rank 0 node. When the validation set is pretty large, validation may take even more time than the actual training.

**Describe the solution you'd like**
Ideally, validation would use all computation nodes under the hood.

**Describe alternatives you've considered**
I could not find any working solution which did not require abandoning Keras. As the optimizer object is the cornerstone for data distribution with Keras+Tensorflow, I have little idea of cues to investigate. [This page mentions a distributed validation step](http://www.idris.fr/eng/jean-zay/gpu/jean-zay-gpu-hvd-tf-multi-eng.html#distributed_validation), but I could not figure out a way to incorporate it in a working custom validation step made following [the official Tensorflow guidelines](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit#providing_your_own_evaluation_step).
open
2022-05-04T12:18:14Z
2022-05-04T12:18:14Z
https://github.com/horovod/horovod/issues/3535
[ "enhancement" ]
pbruneau
0
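A hedged sketch of the sharded-validation idea from the page the reporter links: give each worker a disjoint shard of the validation data, evaluate locally, then average the metric across workers. `val_dataset` and `mnist_model` are assumed from the linked example script:

```python
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()

# Each rank evaluates only its own shard of the validation data
# (shard before batch so the split is by example, not by batch).
val_shard = val_dataset.shard(num_shards=hvd.size(), index=hvd.rank()).batch(128)
loss, acc = mnist_model.evaluate(val_shard, verbose=0)

# Average the per-rank metric across all workers (allreduce averages by default).
mean_acc = hvd.allreduce(tf.constant(acc))
```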
CTFd/CTFd
flask
2,667
Undeclared variable `link` in media library
**Environment**:
- CTFd Version/Commit: bbf1ffc7aa7ce6c505f8cb6c9afd2da0ddd1c609
- Operating System: Linux (docker)
- Web Browser and Version: Firefox 132 & Chrome 123

**What happened?**
In the page editor, I am unable to insert a media link. In the console, it throws an error which indicates that the `link` variable is not defined.

**What did you expect to happen?**
To work ...

**How to reproduce your issue**
- Start a fresh instance of CTFd
- Edit any page, then open the media library
- Upload an image, then try to insert it
- Nothing happens; an error is written to the console

**Any associated stack traces or error logs**
![image](https://github.com/user-attachments/assets/ef21fc07-4568-44c5-bdf2-8b15d666430a)
closed
2024-11-25T19:05:44Z
2024-11-25T20:52:39Z
https://github.com/CTFd/CTFd/issues/2667
[]
adam-lebon
1
holoviz/panel
plotly
7,689
AttributeError when attempting to add PyComponent to column in FastListTemplate main/modal
### ALL software version info

<details>
<summary>Software Version Info</summary>

```plaintext
Python == 3.11.11 (but it also happens on lesser versions, such as 3.10 -- not sure for higher versions)
panel==1.6.0 (but it also happens on lesser versions)
bokeh==3.6.3
param==2.2.0
```

</details>

### Description of expected behavior and the observed behavior

In my full project, I am attempting to add a custom `PyComponent` to a column which I have in the modal of a `FastListTemplate`. When I append my component and then open the modal, I get the error shown below, and the modal doesn't open (as expected, since an error occurred). If I open the modal first and then append, I get the error, and my component shows correctly with some style errors (not sure if that is reproduced in the example below).

I was able to semi-work around this issue by directly inheriting from `pn.reactive.Reactive`, which implements `_process_param_change`. Then, I started getting other errors which may be seen by adding in `Reactive` to the `FeatureInput` inheritance below. (I already imported it at the top of the example.)

The example I provide below is the minimal example I could get to reproduce the issue. The expected behavior is to be able to append a `PyComponent` to these columns and have it work with no errors.

### Complete, minimal, self-contained example code that reproduces the issue

This is the `FeatureInput` example from the [`PyComponent` example page](https://panel.holoviz.org/how_to/custom_components/python/create_custom_widget.html), except I changed it to display in a `FastListTemplate` main or modal. Remember to test adding `Reactive` to the inheritance of `FeatureInput` as well.

<details>

```python
import panel as pn
import param

from panel.widgets.base import WidgetBase
from panel.custom import PyComponent
from panel.reactive import Reactive


class FeatureInput(WidgetBase, PyComponent):
    """
    The `FeatureInput` enables a user to select from a list of features
    and set their values.
    """

    value = param.Dict(
        doc="The names of the features selected and their set values", allow_None=False
    )
    features = param.Dict(
        doc="The names of the available features and their default values"
    )
    selected_features = param.ListSelector(
        doc="The list of selected features"
    )
    _selected_widgets = param.ClassSelector(
        class_=pn.Column, doc="The widgets used to edit the selected features"
    )

    def __init__(self, **params):
        params["value"] = params.get("value", {})
        params["features"] = params.get("features", {})
        params["selected_features"] = params.get("selected_features", [])
        params["_selected_widgets"] = self.param._selected_widgets.class_()
        super().__init__(**params)
        self._selected_features_widget = pn.widgets.MultiChoice.from_param(
            self.param.selected_features, sizing_mode="stretch_width"
        )

    def __panel__(self):
        return pn.Column(self._selected_features_widget, self._selected_widgets)

    @param.depends("features", watch=True, on_init=True)
    def _reset_selected_features(self):
        selected_features = []
        for feature in self.selected_features.copy():
            if feature in self.features.copy():
                selected_features.append(feature)
        self.param.selected_features.objects = list(self.features)
        self.selected_features = selected_features

    @param.depends("selected_features", watch=True, on_init=True)
    def _handle_selected_features_change(self):
        org_value = self.value
        self._update_selected_widgets(org_value)
        self._update_value()

    def _update_value(self, *args):  # pylint: disable=unused-argument
        new_value = {}
        for widget in self._selected_widgets:
            new_value[widget.name] = widget.value
        self.value = new_value

    def _update_selected_widgets(self, org_value):
        new_widgets = {}
        for feature in self.selected_features:
            value = org_value.get(feature, self.features[feature])
            widget = self._new_widget(feature, value)
            new_widgets[feature] = widget
        self._selected_widgets[:] = list(new_widgets.values())

    def _new_widget(self, feature, value):
        widget = pn.widgets.FloatInput(
            name=feature, value=value, sizing_mode="stretch_width"
        )
        pn.bind(self._update_value, widget, watch=True)
        return widget


features = {
    "Blade Length (m)": 73.5,
    "Cut-in Wind Speed (m/s)": 3.5,
    "Cut-out Wind Speed (m/s)": 25,
    "Grid Connection Capacity (MW)": 5,
    "Hub Height (m)": 100,
    "Rated Wind Speed (m/s)": 12,
    "Rotor Diameter (m)": 150,
    "Turbine Efficiency (%)": 45,
    "Water Depth (m)": 30,
    "Wind Speed (m/s)": 10,
}
_selected_features = ["Wind Speed (m/s)", "Rotor Diameter (m)"]

_widget = FeatureInput(
    features=features,
    selected_features=_selected_features,
    width=500,
)

##### My code starts here #####

main_column = pn.Column()
modal_column = pn.Column()

flt = pn.template.FastListTemplate(
    main=[main_column],
    modal=[modal_column]
)

def add_to_main(_):
    main_column.append(pn.FlexBox(
        pn.Column(
            "## Widget",
            _widget,
        ),
        pn.Column(
            "## Value",
            pn.pane.JSON(_widget.param.value, width=500, height=200),
        ),
    ))

def add_to_modal(_):
    modal_column.append(
        pn.FlexBox(
            pn.Column(
                "## Widget",
                _widget,
            ),
            pn.Column(
                "## Value",
                pn.pane.JSON(_widget.param.value, width=500, height=200),
            ),
        )
    )

def add_then_open(_):
    modal_column.append(
        pn.FlexBox(
            pn.Column(
                "## Widget",
                _widget,
            ),
            pn.Column(
                "## Value",
                pn.pane.JSON(_widget.param.value, width=500, height=200),
            ),
        )
    )
    flt.open_modal()

def open_then_add(_):
    flt.open_modal()
    modal_column.append(
        pn.FlexBox(
            pn.Column(
                "## Widget",
                _widget,
            ),
            pn.Column(
                "## Value",
                pn.pane.JSON(_widget.param.value, width=500, height=200),
            ),
        )
    )

main_column.append(pn.widgets.Button(
    name='add to main', on_click=add_to_main))
main_column.append(pn.widgets.Button(
    name='add to modal', on_click=add_to_modal))
main_column.append(pn.widgets.Button(
    name="add to modal, then open modal", on_click=add_then_open))
main_column.append(pn.widgets.Button(
    name="open modal, then add to modal", on_click=open_then_add))

flt.servable()
```

</details>

### Stack traceback and/or browser JavaScript console output

Errors without inheriting from Reactive:

<details>

```plaintext
AttributeError: 'FeatureInput' object has no attribute '_process_param_change'
2025-02-06 19:21:37,267 WebSocket connection closed: code=1001, reason=None
2025-02-06 19:21:37,826 WebSocket connection opened
2025-02-06 19:21:37,826 ServerConnection created
2025-02-06 19:21:39,019 error handling message
 message: Message 'PATCH-DOC' content: {'events': [{'kind': 'MessageSent', 'msg_type': 'bokeh_event', 'msg_data': {'type': 'event', 'name': 'button_click', 'values': {'type': 'map', 'entries': [['model', {'id': 'p1289'}]]}}}]}
 error: AttributeError("'FeatureInput' object has no attribute '_process_param_change'")
Traceback (most recent call last):
  File "/code/venv/lib/python3.11/site-packages/bokeh/server/protocol_handler.py", line 94, in handle
    work = await handler(message, connection)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/bokeh/server/session.py", line 94, in _needs_document_lock_wrapper
    result = func(self, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/bokeh/server/session.py", line 286, in _handle_patch
    message.apply_to_document(self.document, self)
  File "/code/venv/lib/python3.11/site-packages/bokeh/protocol/messages/patch_doc.py", line 104, in apply_to_document
    invoke_with_curdoc(doc, lambda: doc.apply_json_patch(self.payload, setter=setter))
  File "/code/venv/lib/python3.11/site-packages/bokeh/document/callbacks.py", line 453, in invoke_with_curdoc
    return f()
           ^^^
  File "/code/venv/lib/python3.11/site-packages/bokeh/protocol/messages/patch_doc.py", line 104, in <lambda>
    invoke_with_curdoc(doc, lambda: doc.apply_json_patch(self.payload, setter=setter))
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/bokeh/document/document.py", line 391, in apply_json_patch
    DocumentPatchedEvent.handle_event(self, event, setter)
  File "/code/venv/lib/python3.11/site-packages/bokeh/document/events.py", line 244, in handle_event
    event_cls._handle_event(doc, event)
  File "/code/venv/lib/python3.11/site-packages/bokeh/document/events.py", line 279, in _handle_event
    cb(event.msg_data)
  File "/code/venv/lib/python3.11/site-packages/bokeh/document/callbacks.py", line 400, in trigger_event
    model._trigger_event(event)
  File "/code/venv/lib/python3.11/site-packages/bokeh/util/callback_manager.py", line 111, in _trigger_event
    self.document.callbacks.notify_event(cast(Model, self), event, invoke)
  File "/code/venv/lib/python3.11/site-packages/bokeh/document/callbacks.py", line 262, in notify_event
    invoke_with_curdoc(doc, callback_invoker)
  File "/code/venv/lib/python3.11/site-packages/bokeh/document/callbacks.py", line 453, in invoke_with_curdoc
    return f()
           ^^^
  File "/code/venv/lib/python3.11/site-packages/bokeh/util/callback_manager.py", line 107, in invoke
    cast(EventCallbackWithEvent, callback)(event)
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 580, in _server_event
    self._comm_event(doc, event)
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 567, in _comm_event
    state._handle_exception(e)
  File "/code/venv/lib/python3.11/site-packages/panel/io/state.py", line 484, in _handle_exception
    raise exception
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 565, in _comm_event
    self._process_bokeh_event(doc, event)
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 502, in _process_bokeh_event
    self._process_event(event)
  File "/code/venv/lib/python3.11/site-packages/panel/widgets/button.py", line 241, in _process_event
    self.clicks += 1
    ^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 515, in _f
    instance_param.__set__(obj, val)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 517, in _f
    return f(self, obj, val)
           ^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/param/parameters.py", line 541, in __set__
    super().__set__(obj,val)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 517, in _f
    return f(self, obj, val)
           ^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 1564, in __set__
    obj.param._call_watcher(watcher, event)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2604, in _call_watcher
    self_._execute_watcher(watcher, (event,))
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2586, in _execute_watcher
    watcher.fn(*args, **kwargs)
  File "/workspaces/dexter-ui/test.py", line 125, in add_to_modal
    modal_column.append(
  File "/code/venv/lib/python3.11/site-packages/panel/layout/base.py", line 474, in append
    self.objects = new_objects
    ^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 515, in _f
    instance_param.__set__(obj, val)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 517, in _f
    return f(self, obj, val)
           ^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/panel/viewable.py", line 1184, in __set__
    super().__set__(obj, self._transform_value(val))
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 517, in _f
    return f(self, obj, val)
           ^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 1564, in __set__
    obj.param._call_watcher(watcher, event)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2604, in _call_watcher
    self_._execute_watcher(watcher, (event,))
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2586, in _execute_watcher
    watcher.fn(*args, **kwargs)
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 451, in _param_change
    applied &= self._apply_update(named_events, properties, model, ref)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 339, in _apply_update
    self._update_model(events, msg, root, model, doc, comm)
  File "/code/venv/lib/python3.11/site-packages/panel/layout/base.py", line 135, in _update_model
    state._views[ref][0]._preprocess(root, self, old_children)
  File "/code/venv/lib/python3.11/site-packages/panel/viewable.py", line 619, in _preprocess
    hook(self, root, changed, old_models)
  File "/code/venv/lib/python3.11/site-packages/panel/theme/base.py", line 150, in _apply_hooks
    self._reapply(changed, root, old_models, isolated=False, cache=cache, document=root.document)
  File "/code/venv/lib/python3.11/site-packages/panel/theme/base.py", line 138, in _reapply
    self._apply_modifiers(o, ref, self.theme, isolated, cache, document)
  File "/code/venv/lib/python3.11/site-packages/panel/theme/base.py", line 253, in _apply_modifiers
    cls._apply_params(viewable, mref, modifiers, document)
  File "/code/venv/lib/python3.11/site-packages/panel/theme/base.py", line 273, in _apply_params
    props = viewable._process_param_change(params)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'FeatureInput' object has no attribute '_process_param_change'
```

</details>

Errors when inheriting from Reactive and trying to add something in the UI:

<details>

```plaintext
2025-02-06 19:52:36,522 WebSocket connection closed: code=1001, reason=None
2025-02-06 19:52:37,391 WebSocket connection opened
2025-02-06 19:52:37,392 ServerConnection created
2025-02-06 19:52:42,574 ERROR: panel.reactive - Callback failed for object named 'Selected features' changing property {'value': ['Rotor Diameter (m)', 'Wind Speed (m/s)', 'Blade Length (m)']}
Traceback (most recent call last):
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 470, in _process_events
    self.param.update(**self_params)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2406, in update
    restore = dict(self_._update(arg, **kwargs))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2439, in _update
    self_._batch_call_watchers()
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2624, in _batch_call_watchers
    self_._execute_watcher(watcher, events)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2586, in _execute_watcher
    watcher.fn(*args, **kwargs)
  File "/code/venv/lib/python3.11/site-packages/panel/param.py", line 526, in link_widget
    self.object.param.update(**{p_name: change.new})
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2406, in update
    restore = dict(self_._update(arg, **kwargs))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2439, in _update
    self_._batch_call_watchers()
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2624, in _batch_call_watchers
    self_._execute_watcher(watcher, events)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2586, in _execute_watcher
    watcher.fn(*args, **kwargs)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 767, in _sync_caller
    return function()
           ^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/param/depends.py", line 85, in _depends
    return func(*args, **kw)
           ^^^^^^^^^^^^^^^^^
  File "/workspaces/dexter-ui/test.py", line 61, in _handle_selected_features_change
    self._update_value()
  File "/workspaces/dexter-ui/test.py", line 69, in _update_value
    self.value = new_value
    ^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 515, in _f
    instance_param.__set__(obj, val)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 517, in _f
    return f(self, obj, val)
           ^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 1564, in __set__
    obj.param._call_watcher(watcher, event)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2604, in _call_watcher
    self_._execute_watcher(watcher, (event,))
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2586, in _execute_watcher
    watcher.fn(*args, **kwargs)
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 451, in _param_change
    applied &= self._apply_update(named_events, properties, model, ref)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 339, in _apply_update
    self._update_model(events, msg, root, model, doc, comm)
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 711, in _update_model
    super()._update_model(events, msg, root, model, doc, comm)
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 371, in _update_model
    model_val = getattr(model, attr)
                ^^^^^^^^^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/bokeh/core/has_props.py", line 369, in __getattr__
    self._raise_attribute_error_with_matches(name, properties)
  File "/code/venv/lib/python3.11/site-packages/bokeh/core/has_props.py", line 377, in _raise_attribute_error_with_matches
    raise AttributeError(f"unexpected attribute {name!r} to {self.__class__.__name__}, {text} attributes are {nice_join(matches)}")
AttributeError: unexpected attribute 'value' to Column, possible attributes are align, aspect_ratio, auto_scroll_limit, children, context_menu, css_classes, css_variables, disabled, elements, flow_mode, height, height_policy, js_event_callbacks, js_property_callbacks, margin, max_height, max_width, min_height, min_width, name, resizable, scroll_button_threshold, scroll_index, scroll_position, sizing_mode, spacing, styles, stylesheets, subscribed_events, syncable, tags, view_latest, visible, width or width_policy
2025-02-06 19:52:42,578 Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOMainLoop object at 0x774f97e62390>>, <Task finished name='Task-1212' coro=<ServerSession.with_document_locked() done, defined at /code/venv/lib/python3.11/site-packages/bokeh/server/session.py:77> exception=AttributeError("unexpected attribute 'value' to Column, possible attributes are align, aspect_ratio, auto_scroll_limit, children, context_menu, css_classes, css_variables, disabled, elements, flow_mode, height, height_policy, js_event_callbacks, js_property_callbacks, margin, max_height, max_width, min_height, min_width, name, resizable, scroll_button_threshold, scroll_index, scroll_position, sizing_mode, spacing, styles, stylesheets, subscribed_events, syncable, tags, view_latest, visible, width or width_policy")>)
Traceback (most recent call last):
  File "/code/venv/lib/python3.11/site-packages/tornado/ioloop.py", line 750, in _run_callback
    ret = callback()
          ^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/tornado/ioloop.py", line 774, in _discard_future_result
    future.result()
  File "/code/venv/lib/python3.11/site-packages/bokeh/server/session.py", line 98, in _needs_document_lock_wrapper
    result = await result
             ^^^^^^^^^^^^
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 517, in _change_coroutine
    state._handle_exception(e)
  File "/code/venv/lib/python3.11/site-packages/panel/io/state.py", line 484, in _handle_exception
    raise exception
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 515, in _change_coroutine
    self._change_event(doc)
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 533, in _change_event
    self._process_events(events)
  File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 470, in _process_events
    self.param.update(**self_params)
  File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2406, in update
    restore = dict(self_._update(arg, **kwargs))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```

</details>
File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2439, in _update self_._batch_call_watchers() File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2624, in _batch_call_watchers self_._execute_watcher(watcher, events) File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2586, in _execute_watcher watcher.fn(*args, **kwargs) File "/code/venv/lib/python3.11/site-packages/panel/param.py", line 526, in link_widget self.object.param.update(**{p_name: change.new}) File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2406, in update restore = dict(self_._update(arg, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2439, in _update self_._batch_call_watchers() File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2624, in _batch_call_watchers self_._execute_watcher(watcher, events) File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2586, in _execute_watcher watcher.fn(*args, **kwargs) File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 767, in _sync_caller return function() ^^^^^^^^^^ File "/code/venv/lib/python3.11/site-packages/param/depends.py", line 85, in _depends return func(*args, **kw) ^^^^^^^^^^^^^^^^^ File "/workspaces/dexter-ui/test.py", line 61, in _handle_selected_features_change self._update_value() File "/workspaces/dexter-ui/test.py", line 69, in _update_value self.value = new_value ^^^^^^^^^^ File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 515, in _f instance_param.__set__(obj, val) File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 517, in _f return f(self, obj, val) ^^^^^^^^^^^^^^^^^ File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 1564, in __set__ obj.param._call_watcher(watcher, event) File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2604, in _call_watcher self_._execute_watcher(watcher, (event,)) File "/code/venv/lib/python3.11/site-packages/param/parameterized.py", line 2586, in _execute_watcher watcher.fn(*args, **kwargs) File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 451, in _param_change applied &= self._apply_update(named_events, properties, model, ref) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 339, in _apply_update self._update_model(events, msg, root, model, doc, comm) File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 711, in _update_model super()._update_model(events, msg, root, model, doc, comm) File "/code/venv/lib/python3.11/site-packages/panel/reactive.py", line 371, in _update_model model_val = getattr(model, attr) ^^^^^^^^^^^^^^^^^^^^ File "/code/venv/lib/python3.11/site-packages/bokeh/core/has_props.py", line 369, in __getattr__ self._raise_attribute_error_with_matches(name, properties) File "/code/venv/lib/python3.11/site-packages/bokeh/core/has_props.py", line 377, in _raise_attribute_error_with_matches raise AttributeError(f"unexpected attribute {name!r} to {self.__class__.__name__}, {text} attributes are {nice_join(matches)}") AttributeError: unexpected attribute 'value' to Column, possible attributes are align, aspect_ratio, auto_scroll_limit, children, context_menu, css_classes, css_variables, disabled, elements, flow_mode, height, height_policy, js_event_callbacks, js_property_callbacks, 
margin, max_height, max_width, min_height, min_width, name, resizable, scroll_button_threshold, scroll_index, scroll_position, sizing_mode, spacing, styles, stylesheets, subscribed_events, syncable, tags, view_latest, visible, width or width_policy 2025-02-06 19:52:42,620 Dropping a patch because it contains a previously known reference (id='p1637'). Most of the time this is harmless and usually a result of updating a model on one side of a communications channel while it was being removed on the other end. 2025-02-06 19:52:42,621 Dropping a patch because it contains a previously known reference (id='p1638'). Most of the time this is harmless and usually a result of updating a model on one side of a communications channel while it was being removed on the other end. ``` </details>
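A minimal sketch of a pattern that may avoid both tracebacks above, assuming the custom component is the `FeatureInput` named in the traceback: subclassing `pn.viewable.Viewer` keeps `value` a plain param rather than a property Panel tries to sync onto the underlying Bokeh `Column` model. All class, parameter, and option names here are hypothetical reconstructions, not the reporter's actual code.

```python
import param
import panel as pn

class FeatureInput(pn.viewable.Viewer):
    # Plain params: Viewer never syncs these onto the Bokeh model,
    # unlike a Reactive subclass, so no "unexpected attribute" error.
    value = param.List(default=[])
    selected_features = param.ListSelector(default=[], objects=["a", "b", "c"])

    def __init__(self, **params):
        super().__init__(**params)
        self._layout = pn.Column(
            pn.widgets.MultiSelect.from_param(self.param.selected_features)
        )

    @param.depends("selected_features", watch=True)
    def _update_value(self):
        self.value = list(self.selected_features)

    def __panel__(self):
        return self._layout
```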
open
2025-02-06T20:04:24Z
2025-03-11T14:33:24Z
https://github.com/holoviz/panel/issues/7689
[]
14lclark
0
microsoft/qlib
machine-learning
1,619
In dump_bin.py dump_all, why is each stock reindexed to the calendar index, producing data-less records on suspension days?
In the dump_all logic of dump_bin.py, each stock's index is re-aligned to the calendar index, as shown below. This causes the bin data to contain records with no price and no volume on a stock's suspension days. For example, sz000001 (Ping An) was suspended on 2014-07-15, yet the bin data still contains an empty record for that date, with only the date filled in and every other field empty. My question is: is this handling correct? Will these price-less records take part in training? My personal feeling is that suspension days should not have records in the bin data at all.

    def data_merge_calendar(self, df: pd.DataFrame, calendars_list: List[pd.Timestamp]) -> pd.DataFrame:
        # calendars
        calendars_df = pd.DataFrame(data=calendars_list, columns=[self.date_field_name])
        calendars_df[self.date_field_name] = calendars_df[self.date_field_name].astype(np.datetime64)
        cal_df = calendars_df[
            (calendars_df[self.date_field_name] >= df[self.date_field_name].min())
            & (calendars_df[self.date_field_name] <= df[self.date_field_name].max())
        ]
        # align index
        cal_df.set_index(self.date_field_name, inplace=True)
        df.set_index(self.date_field_name, inplace=True)
        r_df = df.reindex(cal_df.index)
        return r_df
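A minimal sketch of the behavior the report argues for, i.e. dropping calendar rows that carry no data after the reindex. This is only an illustration of the reporter's expectation and may not be compatible with qlib's fixed-frequency bin layout:

```python
import pandas as pd

def data_merge_calendar_drop_suspended(df: pd.DataFrame, cal_df: pd.DataFrame) -> pd.DataFrame:
    # Align to the calendar as before, then drop rows where every field
    # is missing, i.e. suspension days that contributed no trade data.
    r_df = df.reindex(cal_df.index)
    return r_df.dropna(how="all")
```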
closed
2023-08-04T10:03:05Z
2023-11-12T00:06:41Z
https://github.com/microsoft/qlib/issues/1619
[ "question", "stale" ]
quant2008
2
graphistry/pygraphistry
jupyter
210
[DOCS] requirements
An explicit hw/sw requirements doc may help. It can cover only the Python client side, and defer viz client + GPU server discussion to https://github.com/graphistry/graphistry-cli/blob/master/hardware-software.md . Would help w/ issues like https://github.com/graphistry/pygraphistry/issues/203 README.md is quite big, so probably a separate page w/ a README.md link to it
open
2021-02-08T17:18:50Z
2021-02-08T17:21:27Z
https://github.com/graphistry/pygraphistry/issues/210
[ "docs" ]
lmeyerov
0
browser-use/browser-use
python
169
Streamlit error while using Browesr-use
How do I develop a Streamlit app using Browser-use? I am using simple code as below:

    import os
    import sys
    import asyncio
    from langchain_openai import ChatOpenAI
    from browser_use import Agent
    import streamlit as st

    os.environ['SSL_CERT_FILE'] = 'C:\\Users\\RSPRASAD\\AppData\\Local\\.certifi\\cacert.pem'
    os.environ['REQUESTS_CA_BUNDLE'] = 'C:\\Users\\RSPRASAD\\AppData\\Local\\.certifi\\cacert.pem'
    os.environ["OPENAI_API_KEY"] = 'my_api_key'

    llm = ChatOpenAI(base_url = 'https://models.inference.ai.azure.com', model='gpt-4o')
    agent = Agent(
        task='Go to google and then go to hindu.com and give me summary of 1 editorial. ',
        llm=llm,
    )

    async def main():
        await agent.run(max_steps=50)
        # agent.create_history_gif()

    asyncio.run(main())

But I encounter the error below:

    future: <Task finished name='Task-16155' coro=<Connection.run() done, defined at C:\Users\RSPRASAD\AppData\Local\anaconda3\envs\new_env\Lib\site-packages\playwright\_impl\_connection.py:272> exception=NotImplementedError()>

It seems there is an asynchronous-behavior issue that Streamlit doesn't like, so Playwright can't function properly.
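The `NotImplementedError` inside Playwright's connection task usually points at an asyncio event loop that cannot spawn subprocesses. A commonly suggested workaround on Windows (a sketch, not a confirmed fix for this specific report) is to force the proactor loop policy before Streamlit or Playwright create any loop:

```python
import sys
import asyncio

# Playwright launches the browser via an asyncio subprocess, which only
# the proactor event loop supports on Windows; set the policy early,
# before any event loop exists.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
```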
closed
2025-01-06T19:02:08Z
2025-03-12T10:01:51Z
https://github.com/browser-use/browser-use/issues/169
[]
ravi6389
1
darrenburns/posting
rest-api
188
customize font family
1. Is there any way to use my own font family in Posting by changing the .scss file? Is this restricted by Textual for now? 2. ~~also not sure about the render code of $surface-darken-1 and $surface-lighten-1~~ Can you shed some light on these questions? After checking the Textual API I found the answer to question 2 myself, but I still need help with custom font support.
closed
2025-02-15T09:07:07Z
2025-02-16T09:15:17Z
https://github.com/darrenburns/posting/issues/188
[]
zeyutt
1
2noise/ChatTTS
python
265
็”Ÿๆˆ่ฏญ้Ÿณ่ดจ้‡ไธ้”™๏ผŒไฝ†้€Ÿๅบฆๅคชๆ…ข ๆฒกๆณ•็”จ
OS: MX x86_64 Host: Z390 AORUS ELITE Kernel: 6.8.12-1-liquorix-amd64 CPU: Intel i7-9700K (8) @ 3.601GHz GPU: NVIDIA GeForce RTX 3090 Memory: 10768MiB / 64230MiB CUDA 11.8. Generating 12 seconds of speech takes more than 120 seconds. Is there something I have configured incorrectly? ![Screenshot_2024-06-05_19-46-38](https://github.com/2noise/ChatTTS/assets/5031611/0a304273-3d2e-4d17-867a-a55e8e324a98)
closed
2024-06-05T12:04:13Z
2024-09-09T04:01:21Z
https://github.com/2noise/ChatTTS/issues/265
[ "stale" ]
Yeeler
12
httpie/cli
rest-api
674
Follow doesn't work (on showing intermediary responses)
Per [ Showing intermediary redirect responses](https://httpie.org/doc#showing-intermediary-redirect-responses) documentation I should be able to follow redirects. However with the latest httpie on Ubuntu (Ubuntu bash on Windows 10), this doesn't work. On running: `http --follow --all httpbin.org/redirect/3` I get: ``` HTTPie 0.9.2 HTTPie data: /home/xxxxx/.httpie Requests 2.9.1 Pygments 2.1 Python 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] linux2 usage: http [--json] [--form] [--pretty {all,colors,format,none}] [--style STYLE] [--print WHAT] [--verbose] [--headers] [--body] [--stream] [--output FILE] [--download] [--continue] [--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH] [--auth USER[:PASS]] [--auth-type {basic,digest}] [--proxy PROTOCOL:PROXY_URL] [--follow] [--verify VERIFY] [--cert CERT] [--cert-key CERT_KEY] [--timeout SECONDS] [--check-status] [--ignore-stdin] [--help] [--version] [--traceback] [--debug] [METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]] http: error: unrecognized arguments: --all ```
closed
2018-04-27T05:44:35Z
2018-07-11T12:21:49Z
https://github.com/httpie/cli/issues/674
[]
viper25
1
BeastByteAI/scikit-llm
scikit-learn
30
Sentiment
Regarding Sentiment.py in https://github.com/iryna-kondr/scikit-llm/tree/main/skllm/datasets: is it needed and essential? @iryna-kondr @Nadav-Barak
closed
2023-06-05T23:05:39Z
2023-06-05T23:18:42Z
https://github.com/BeastByteAI/scikit-llm/issues/30
[]
Adesoji1
0
sammchardy/python-binance
api
618
Cryptos lack of Ranking number
Dear all, I have been looking for a way to get the ranking number of any given symbol, such as "BTC" = 1, "ETH" = 2. However, either the API does not have such a function or it is not documented. Thanks.
open
2020-11-22T11:52:47Z
2020-11-23T10:06:43Z
https://github.com/sammchardy/python-binance/issues/618
[]
mmaxus35
3
frol/flask-restplus-server-example
rest-api
42
Trouble distinguishing flask-marshmallow problems from embedded hack
Hi - Thanks for a GREAT example! I am trying to use it as the starting point for a small service I am writing. I cannot seem to get the custom validators or the pre- and post-request processing to work. However, I am not well versed in marshmallow, so I do not know whether these are my problems or problems with the integration hack. Would you consider adding a field to teams with a custom validator (I think this is the marshmallow way)? The one I am trying to add is a custom date field; in the simplest case it accepts an existing date or the string 'now' (see the sketch below). The other things I am looking at integrating (once I have a bit more understanding about 'schemas') are: GUIDs instead of, or in addition to, id fields, and JWT authentication (so I can separate out the OAuth part).
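A minimal sketch of the kind of custom field described above, in plain marshmallow. The class name and the 'now' convention come from the report; everything else is an assumption about how it might be wired up:

```python
from datetime import datetime, timezone
from marshmallow import fields

class NowOrDateTime(fields.DateTime):
    """DateTime field that also accepts the literal string 'now'."""

    def _deserialize(self, value, attr, data, **kwargs):
        # Special-case the sentinel string; defer everything else to
        # the standard DateTime parsing (and its validation errors).
        if value == "now":
            return datetime.now(timezone.utc)
        return super()._deserialize(value, attr, data, **kwargs)
```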
closed
2016-12-21T14:11:35Z
2017-01-11T08:27:07Z
https://github.com/frol/flask-restplus-server-example/issues/42
[]
joshStillerman
4
stanfordnlp/stanza
nlp
1,294
Tokenizer doesn't respect combined_electra-large's max_length
**Describe the bug** When parsing a long text using the latest "combined_electra-large" model, I get the error: ``` Token indices sequence length is longer than the specified maximum sequence length for this model (630 > 512). Running this sequence through the model will result in indexing errors Exception in thread parse_chunks: Traceback (most recent call last): File "/home1/malouf/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 1038, in _bootstrap_inner self.run() File "/home1/malouf/batch/treebank/threadpipe.py", line 113, in run for tag, result in zip(tags, self.function(items)): File "/home1/malouf/batch/treebank/parse.py", line 125, in parse_chunks for doc_id, doc in zip( File "/home1/malouf/.pyenv/versions/treebank/lib/python3.11/site-packages/stanza/pipeline/core.py", line 456, in stream batch = self.bulk_process(batch, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home1/malouf/.pyenv/versions/treebank/lib/python3.11/site-packages/stanza/pipeline/core.py", line 433, in bulk_process return self.process(docs, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home1/malouf/.pyenv/versions/treebank/lib/python3.11/site-packages/stanza/pipeline/core.py", line 422, in process doc = process(doc) ^^^^^^^^^^^^ File "/home1/malouf/.pyenv/versions/treebank/lib/python3.11/site-packages/stanza/pipeline/processor.py", line 258, in bulk_process self.process(combined_doc) # annotations are attached to sentence objects ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home1/malouf/.pyenv/versions/treebank/lib/python3.11/site-packages/stanza/pipeline/pos_processor.py", line 84, in process batch.doc.set([doc.UPOS, doc.XPOS, doc.FEATS], [y for x in preds for y in x]) File "/home1/malouf/.pyenv/versions/treebank/lib/python3.11/site-packages/stanza/models/common/doc.py", line 254, in set assert (to_token and self.num_tokens == len(contents)) or self.num_words == len(contents), \ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: Contents must have the same length as the original file. ``` **Environment (please complete the following information):** - OS: MacOS 14.0 - Python version: Python 3.11.3 - Stanza version: 1.6.1 (and transformers 4.34.0)
open
2023-10-09T03:26:15Z
2024-05-24T15:23:12Z
https://github.com/stanfordnlp/stanza/issues/1294
[ "bug" ]
rmalouf
9
allenai/allennlp
nlp
5,262
allennlp.common.checks.ConfigurationError:
allennlp.common.checks.ConfigurationError: from_params was passed a `params` object that was not a `Params`. This probably indicates malformed parameters in a configuration file, where something that should have been a dictionary was actually a list, or something else. This happened when constructing an object of type <class 'allennlp.nn.initializers.InitializerApplicator'>.
closed
2021-06-15T09:17:00Z
2021-06-29T16:14:26Z
https://github.com/allenai/allennlp/issues/5262
[ "question", "stale" ]
vicky-abr
2
microsoft/nni
tensorflow
5,064
nni cannot use tensorboard
When I use NNI to conduct hyperparameter search, I cannot use TensorBoard. When I select the experiment and click the TensorBoard button, a Python window quickly pops up (fig. 1), and then the TensorBoard page shows an error (fig. 2). For the path in the trial source code I use log_dir = os.path.join(os.environ["NNI_OUTPUT_DIR"], 'tensorboard'). Indeed, I can see a TensorBoard log file in each experiment directory, but I cannot open it via the NNI platform. However, when I open a terminal, run TensorBoard from there, and point it at the file in the trial output dir, TensorBoard works well. ![Screenshot 2022-08-12 173918](https://user-images.githubusercontent.com/76143149/184328582-880460da-6cc0-4c09-a016-fec9f2a1f385.jpg) ![Screenshot 2022-08-12 174009](https://user-images.githubusercontent.com/76143149/184328607-5c011589-4cd0-447e-889b-d76b26e89099.jpg) **Environment**: - NNI version: v2.8 - Training service (local|remote|pai|aml|etc): local - Client OS: windows - Server OS (for remote mode only): - Python version: 3.9.12 - PyTorch/TensorFlow version: pytorch - Is conda/virtualenv/venv used?: conda - Is running in Docker?: no **Configuration**: - Experiment config: hyperparameter tuning
closed
2022-08-12T09:48:59Z
2023-05-12T02:34:38Z
https://github.com/microsoft/nni/issues/5064
[ "bug", "user raised", "support", "tensorboard" ]
cehw
2
albumentations-team/albumentations
machine-learning
1,568
PadIfNeeded doesn't serialize position parameter
## ๐Ÿ› Bug The `PadIfNeeded` transform doesn't serialize the position parameter. ## To Reproduce Steps to reproduce the behavior: ```python import albumentations as A transform = A.PadIfNeeded(min_height=512, min_width=512, p=1, border_mode=0, value=[124, 116, 104], position="top_left") transform.to_dict() ``` Output: ``` {'__version__': '1.4.1', 'transform': {'__class_fullname__': 'PadIfNeeded', 'always_apply': False, 'p': 1, 'min_height': 512, 'min_width': 512, 'pad_height_divisor': None, 'pad_width_divisor': None, 'border_mode': 0, 'value': [124, 116, 104], 'mask_value': None}} ``` ## Expected behavior The position parameter should be included in the `dict`. ## Environment - Albumentations version: 1.4.1 - Python version: 3.8 - OS: Linux - How you installed albumentations (`conda`, `pip`, source): `pip`
closed
2024-03-06T22:11:21Z
2024-03-09T02:28:49Z
https://github.com/albumentations-team/albumentations/issues/1568
[ "bug", "good first issue" ]
margilt
2
supabase/supabase-py
fastapi
464
Handling the Password Reset for a user by Supabase itself.
**Is your feature request related to a problem? Please describe.** The function `supabase.auth.reset_password_email(email)` only sends a reset mail to the particular email address mentioned, but not actually handles the password reset of that account, unlike firebase. This function `reset_password_email()` should not only just send a password reset mail but also handle password reset with a unique form link specific to that user, so that the particular user can change the password. I hope that clarifies the problem. **Describe the solution you'd like** The function `reset_password_email(email)` should generate a unique supabase link which would contain a form to reset the password and this link should be sent to the particular mail as a password reset mail. This way supabase users wouldn't have to worry about password reset themselves. They just have to call this function and rest, we would handle. ***I would be more than happy to take up on this issue and contribute to the supabase community!*** **Describe alternatives you've considered** ***Firebase Authentication*** is a clear distinctive alternative for this. They have this function `send_password_reset_email()` that does the same thing described above. **Additional context** Do checkout this Loom:- [Firebase Password Reset Flow](https://www.loom.com/share/211759ff187e4a8f9dd5d14dd434c7e5)
closed
2023-06-14T11:37:08Z
2023-06-14T14:47:43Z
https://github.com/supabase/supabase-py/issues/464
[]
MBSA-INFINITY
2
hpcaitech/ColossalAI
deep-learning
5,391
[BUG]: Wrong import in ColossalAuto's meta_registry/binary_elementwise_ops.py
### ๐Ÿ› Describe the bug # Problem description The file `colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py` contains the following line: ```python from ..constants import BCAST_FUNC_OP ``` However, the file `colossalai/auto_parallel/meta_profiler/constants.py` which this import refers to does not contain any `BCAST_FUNC_OP`. This leads to an `ImportError` when running ColossalAuto in release 0.3.3 and newer. This constant can be found in the file `colossalai/auto_parallel/tensor_shard/constants.py`. The last commit to `colossalai/auto_parallel/meta_profiler/constants.py` (commit ID `079bf3cb`) removes the import of tensor_shard's `constants.py` from meta_profiler's `constants.py` (seemingly due to an automated refactoring). # Solution Since no other file in the `meta_registry` module uses constants from the `tensor_shard/constants.py` and to avoid automated removal of "unused" imports in the future, the import statement in question in above-mentioned `binary_elementwise_ops.py` could be changed to: ```python from colossalai.auto_parallel.tensor_shard.constants import BCAST_FUNC_OP ``` ### Environment - Python 3.8 - Torch 1.12.0 - no CUDA
closed
2024-02-20T06:29:51Z
2024-02-20T11:24:45Z
https://github.com/hpcaitech/ColossalAI/issues/5391
[ "bug" ]
stephankoe
0
mars-project/mars
numpy
2,560
Reduction over different columns of a single DataFrame can be merged
When calculating series aggregations like `execute(df.a.sum(),df.b.mean())`, aggregations over different columns can be merged as `df.agg({'a': 'sum', 'b': 'mean'})`. An optimizer can be added to reduce num of subtasks.
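A sketch of the equivalence the proposed optimizer would exploit, shown with plain pandas (Mars mirrors the pandas API, so the same rewrite would apply to the tiled subtask graph):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

# Two separate reductions: conceptually two traversals / two subtask chains.
separate = (df["a"].sum(), df["b"].mean())

# The merged form a single fused reduction could compute in one pass.
merged = df.agg({"a": "sum", "b": "mean"})
assert merged["a"] == separate[0] and merged["b"] == separate[1]
```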
open
2021-10-28T08:55:47Z
2021-10-28T08:56:50Z
https://github.com/mars-project/mars/issues/2560
[ "type: enhancement", "mod: dataframe" ]
wjsi
0
pennersr/django-allauth
django
3,375
SAML and OIDC organization sso
I'm happy to see the recent commits of SAML into main, and the implementation appears great to me. I intend to put the main branch into my staging environment to test, give feedback, and contribute if that proves useful. I do have a couple questions, if you have the opportunity to answer: 1. How can I best assist you in completing this feature? Are there specific places that you'd like me to particularly pay attention to? 2. What's your feeling about extending the approach you're taking for saml to OpenID Connect? I'm wanting to provide OIDC as an alternative to SAML for SSO for the organizations we have as clients.
closed
2023-08-09T22:02:57Z
2023-08-10T07:52:23Z
https://github.com/pennersr/django-allauth/issues/3375
[]
ryanhiebert
1
scikit-learn/scikit-learn
machine-learning
30,425
Make sklearn.neighbors algorithms treat all samples as neighbors when `n_neighbors is None`/`radius is None`
### Describe the workflow you want to enable The proposed feature is that algorithms in `sklearn.neighbors`, when created with parameter `n_neighbors is None` or `radius is None`, treat all samples used for fitting (or all samples to which distances are `'precomputed'`) as neighbors of every sample for which prediction is requested. This makes sense when algorithm parameter `weights` is not `'uniform'` but `distance` or callable, distributing voting power among fitted samples unevenly. It expands which customized algorithms (that use distance-dependent voting) are available with scikit-learn API. ### Describe your proposed solution The solution: 1. allow the algorithm parameters `n_neighbors`/`radius` to have the value `None`; 2. allow the public method `KNeighborsMixin.kneighbors` to return ragged arrays instead of 2D arrays (for the case of working on graphs instead of dense matrices); 3. make routines that process indices/distances of neighbors of samples work with ragged arrays; 4. add the special case for the parameter being `None` in routines that find indices of neighbors of a sample. Examples of relevant code for k-neighbors algorithms: 1. `sklearn.neighbors._base._kneighbors_from_graph` Add special case to return a ragged array of indices of all non-zero elements in every row (an array per row, taken directly from `graph.indptr`). 1. `sklearn.neighbors._base.KNeighborsMixin._kneighbors_reduce_func` Add special case to produce `neigh_ind` from `numpy.arange(...)` instead of `numpy.argpartition(...)[...]`. 3. `sklearn.neighbors._base.KNeighborsMixin.kneighbors` In the end, where the false extra neighbor is removed for every sample, add case for a ragged array. 4. `sklearn.neighbors._base.KNeighborsMixin.kneighbors_graph` Add special case to forward results of `.kneighbors(...)` to output. 5. `sklearn.metrics._pairwise_distances_reduction` I don't comprehend Cython yet and have no ide what is going on there. Anyway, it's probable that the best case would be to deem the Cython implementation unsuitable for the case discussed (as it is already deemed for other conditions). ### Describe alternatives you've considered, if relevant The alternative is to build the application in such way that a k-neighbors estimator is instantiated only when the size of dataset is known, setting `n_neighbors` to this size. This possibly causes unwarranted overhead cost for operations that do seek n neighbors, and, as I understand, this is not the conventional way to employ estimators. ### Additional context When I started investigating the necessary changes, I didn't realize there will be so much code to rewrite because of graphs. I have not found a way to do what I originally needed to do (with dense matrices of precomputed distances) without modifying library's code deeply. I have also not found this feature proposed earlier. I see why this idea is not so very good, and this issue can tell others.
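A sketch of the workaround mentioned under "alternatives", deferring estimator construction until the training-set size is known:

```python
from sklearn.neighbors import KNeighborsClassifier

def all_neighbors_knn(X, y):
    # Every fitted sample becomes a neighbor of every query point;
    # weights="distance" then distributes voting power unevenly by distance.
    clf = KNeighborsClassifier(n_neighbors=len(X), weights="distance")
    return clf.fit(X, y)
```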
closed
2024-12-07T13:29:05Z
2024-12-19T14:02:54Z
https://github.com/scikit-learn/scikit-learn/issues/30425
[ "New Feature" ]
asrelo
3
miguelgrinberg/python-socketio
asyncio
351
wss://url:port is not an accepted origin.
Recently I've started to see the following error when connecting using the `socket.io-client-swift`. Connecting using `socket.io-client-java` works fine. ``` wss://myip:port is not an accepted origin. ``` Using: ``` Name: Flask-SocketIO Version: 4.2.1 --- Name: python-engineio Version: 3.9.3 ```
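For context, Flask-SocketIO 4.x validates the `Origin` header by default; a minimal sketch of relaxing it (the wildcard is for debugging only, and the real origin string must match the exact scheme://host:port the Swift client sends):

```python
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# Accept any Origin while debugging; replace "*" with the concrete
# origin reported by the client once the mismatch is understood.
socketio = SocketIO(app, cors_allowed_origins="*")
```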
closed
2019-09-10T21:23:46Z
2019-11-17T19:11:33Z
https://github.com/miguelgrinberg/python-socketio/issues/351
[ "question" ]
ffleandro
1
OpenVisualCloud/CDN-Transcode-Sample
dash
33
ffplay and VLC can't play x265 clips with HTTP which is generated in xcode server
**Describe the bug** Using 1 xcode server + 1 cdn server to do H264/MPEG2 transcoding to x265, the transcoding is ok and the output file index.m3u8 or index.mpd is also generated with normal size, but while using ffplay or VLC to play the x265 clips with HLS or DASH, there will be no output in screen and error log is printed by ffplay. **Command line and Log information from FFPLAY:** command: ffplay http://host_ip:port_id/hls/4kCamera/index.m3u8 Log: ffplay version 4.1 Copyright (c) 2003-2018 the FFmpeg developers built with gcc 7 (Ubuntu 7.3.0-27ubuntu1~18.04) configuration: libavutil 56. 22.100 / 56. 22.100 libavcodec 58. 35.100 / 58. 35.100 libavformat 58. 20.100 / 58. 20.100 libavdevice 58. 5.100 / 58. 5.100 libavfilter 7. 40.101 / 7. 40.101 libswscale 5. 3.100 / 5. 3.100 libswresample 3. 3.100 / 3. 3.100 [hls,applehttp @ 0x7f803c000b80] Opening 'http://host_ip:port_id/hls/4kCamera/0.ts' for reading [NULL @ 0x7f803c025dc0] PPS id out of range: 0 [hevc @ 0x7f803c027a00] PPS id out of range: 0 [hevc @ 0x7f803c027a00] Error parsing NAL unit #1. [NULL @ 0x7f803c025dc0] PPS id out of range: 0 [hevc @ 0x7f803c027a00] PPS id out of range: 0 [hevc @ 0x7f803c027a00] Error parsing NAL unit #1. [NULL @ 0x7f803c025dc0] PPS id out of range: 0 [hevc @ 0x7f803c027a00] PPS id out of range: 0 [hevc @ 0x7f803c027a00] Error parsing NAL unit #1. [NULL @ 0x7f803c025dc0] PPS id out of range: 0 [hevc @ 0x7f803c027a00] PPS id out of range: 0 [hevc @ 0x7f803c027a00] Error parsing NAL unit #1. [NULL @ 0x7f803c025dc0] PPS id out of range: 0 [hevc @ 0x7f803c027a00] PPS id out of range: 0 **NOTE:** If streaming output x265 stream with RTMP during transcoding, then use ffplay to playback with RTMP, the output is ok but has about 15s delay; If play with VLC also no output in screen. **To Reproduce** Steps to reproduce the behavior: 1. Run transcode command in xcode server: ffmpeg -re -stream_loop -1 -i /var/www/Mpeg2_1080.m2v -c:v libx265 -s 4096x2160 -f flv rtmp://ngnix_id:port_id/hls/4kCamera 2. Run ffplay (with HEVC enabled patch) command in another Linux machine: ffplay http://host_ip:port_id/hls/4kCamera/index.m3u8 or play with VLC on your windows PC. **Expected behavior** Video should be played smoothly with FFPLAY or VLC, no error and no delay.
closed
2019-04-28T02:28:49Z
2019-05-30T08:37:36Z
https://github.com/OpenVisualCloud/CDN-Transcode-Sample/issues/33
[]
liweki
1
BeanieODM/beanie
pydantic
1,110
[BUG] ODM model can't get datetime timezone
**Describe the bug** When saving and loading datetime information with the ODM, timezone information disappears. **To Reproduce** - model
```python
class TestDocs(Document, extra="allow", populate_by_name=True):
    id: Indexed(str) = Field(..., alias="_id")  # type: ignore
    created: Optional[datetime] = None
    modified: Optional[datetime] = None
```
**Expected behavior** Values should be saved to MongoDB and loaded back according to the timezone information of the datetime in the model. **Additional context** When loaded, the value comes back without timezone information (see the sketch below).
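A sketch of a driver-level workaround, assuming Beanie is initialized on a Motor client: MongoDB itself stores datetimes as UTC milliseconds without an offset, and asking the driver for timezone-aware values restores `tzinfo` (as UTC) on load:

```python
from motor.motor_asyncio import AsyncIOMotorClient

# tz_aware is forwarded to the underlying PyMongo client, so datetimes
# come back timezone-aware (UTC) instead of naive.
client = AsyncIOMotorClient("mongodb://localhost:27017", tz_aware=True)
```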
closed
2025-01-21T05:25:29Z
2025-02-26T21:38:03Z
https://github.com/BeanieODM/beanie/issues/1110
[]
DotJM
3
microsoft/nni
tensorflow
4,938
When I run quantization_speedup.py in /examples/tutorials, I get errors like this:
IndexError Traceback (most recent call last) /home/chenwz/code/pycharm/test2.ipynb Cell 6' in <cell line: 4>() [2](vscode-notebook-cell://ssh-remote%2Bchenwz/home/chenwz/code/pycharm/test2.ipynb#ch0000005vscode-remote?line=1) input_shape = (32, 1, 28, 28) [3](vscode-notebook-cell://ssh-remote%2Bchenwz/home/chenwz/code/pycharm/test2.ipynb#ch0000005vscode-remote?line=2) engine = ModelSpeedupTensorRT(model, input_shape, config=calibration_config, batchsize=32) ----> [4](vscode-notebook-cell://ssh-remote%2Bchenwz/home/chenwz/code/pycharm/test2.ipynb#ch0000005vscode-remote?line=3) engine.compress() [5](vscode-notebook-cell://ssh-remote%2Bchenwz/home/chenwz/code/pycharm/test2.ipynb#ch0000005vscode-remote?line=4) test_trt(engine) File ~/.conda/envs/nni-wenze/lib/python3.8/site-packages/nni/compression/pytorch/quantization_speedup/integrated_tensorrt.py:291, in ModelSpeedupTensorRT.compress(self) 288 assert self.input_shape is not None 290 # Convert pytorch model to onnx model and save onnx model in onnx_path --> 291 _, self.onnx_config = fonnx.torch_to_onnx(self.model, self.config, input_shape=self.input_shape, 292 model_path=self.onnx_path, input_names=self.input_names, output_names=self.output_names) 294 if self.calib_data_loader is not None: 295 assert self.calibrate_type is not None File ~/.conda/envs/nni-wenze/lib/python3.8/site-packages/nni/compression/pytorch/quantization_speedup/frontend_to_onnx.py:144, in torch_to_onnx(model, config, input_shape, model_path, input_names, output_names) 142 # Load onnx model 143 model_onnx = onnx.load(model_path) --> 144 model_onnx, onnx_config = unwrapper(model_onnx, index2name, config) 145 onnx.save(model_onnx, model_path) 147 onnx.checker.check_model(model_onnx) File ~/.conda/envs/nni-wenze/lib/python3.8/site-packages/nni/compression/pytorch/quantization_speedup/frontend_to_onnx.py:82, in unwrapper(model_onnx, index2name, config) 80 mul_nd = model_onnx.graph.node[idx-1] ... ---> 82 index = int(onnx.numpy_helper.to_array(const_nd.attribute[0].t)) 83 if index != -1: 84 name = index2name[index] IndexError: list index (0) out of range **Environment**: - NNI version:2.7 - Training service (local|remote|pai|aml|etc): - Client OS: - Server OS (for remote mode only): - Python version:3.8 - onnx version:1.11.0 - PyTorch/TensorFlow version:1.11.0 - Is conda/virtualenv/venv used?: - Is running in Docker?: **Configuration**: - Experiment config (remember to remove secrets!): - Search space: **Log message**: - nnimanager.log: - dispatcher.log: - nnictl stdout and stderr: <!-- Where can you find the log files: LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout --> **How to reproduce it?**:
open
2022-06-15T03:37:15Z
2022-07-15T06:58:24Z
https://github.com/microsoft/nni/issues/4938
[ "bug", "quantize" ]
Shining-Tears
4
waditu/tushare
pandas
1,006
Suggestion: add a cross-sectional data retrieval function
็›ฎๅ‰ๅช็œ‹ๅˆฐdaily_basic()ๆœ‰้ƒจๅˆ†ๅŠŸ่ƒฝ๏ผŒไฝ†ๆ˜ฏไธฅ้‡ไธๅ…จใ€‚ๅปบ่ฎฎๅขžๅŠ ไธ“้—จ็š„ๆˆช้ขๆ•ฐๆฎ่Žทๅ–ๅŠŸ่ƒฝ๏ผŒ่พ“ๅ…ฅๅ‚ๆ•ฐๅฏไปฅๅŒ…ๆ‹ฌ๏ผˆ่‚ก็ฅจไปฃ็ list๏ผŒไบคๆ˜“ๆ—ฅๆœŸ๏ผŒๆˆช้ขๆ•ฐๆฎๅฆ‚k็บฟๆ•ฐๆฎใ€่ดขๅŠกๆŒ‡ๆ ‡็ญ‰๏ผ‰ใ€‚็ฑปไผผwind็š„wss()ใ€‚ ๅปบ่ฎฎid๏ผšpbzysb@163.com
open
2019-04-12T06:22:56Z
2019-04-13T11:53:12Z
https://github.com/waditu/tushare/issues/1006
[]
acehwong
1
microsoft/unilm
nlp
711
UniLM v2 checkpoint
**Describe** I would like to try out the UniLM v2 model but am unable to find it. Is the checkpoint available (here or from HuggingFace)? Thank you!
closed
2022-05-09T18:40:51Z
2022-05-10T13:40:29Z
https://github.com/microsoft/unilm/issues/711
[]
natuan
2
pyeve/eve
flask
697
patch_internal with USRA complains about missing items in a list with data_relation set
This is on 0.5.3. I'm trying to use patch_internal to maintain a list of "invites" for a given "event". Suppose the "event" belongs to User 1 and contains a list (named "inviteIds") referencing "invites" 1, 2, and 3 belonging to Users 1, 2 and 3, respectively, with USRA. Since User 1 has no direct access to the invites owned by Users 2 and 3, I wanted a way to let User 1 delete their invites by simply removing the IDs from the "inviteIds" list, which triggers a hook that does the deleting. Items in "inviteIds" are ObjectIds that have a data_relation to the collection "eventInvites". patch_internal complains with "value '55e64ced4018db0bd0d13fe2' must exist in resource 'eventInvites', field '_id'." (along with every other invite that isn't owned by the event owner). I have a feeling that authentication/USRA isn't fully disabled for patch_internal when performing data validation against its schema.
closed
2015-09-02T01:41:57Z
2018-05-18T18:19:27Z
https://github.com/pyeve/eve/issues/697
[ "stale" ]
kenmaca
1
deezer/spleeter
tensorflow
404
[Discussion] Will higher cpu cores and ram speed up spleeter?
I currently have a quad-core CPU with 8 GB of RAM and Spleeter gets stuck on it. Will using, for example, an 8-core CPU with 16 GB of RAM make it faster, or will it make no difference?
closed
2020-05-29T04:20:18Z
2020-06-06T14:26:12Z
https://github.com/deezer/spleeter/issues/404
[ "question" ]
Scylla2020
7
vitalik/django-ninja
django
549
Does django-ninja have a caching mechanism, and does Django's cache configuration apply to Ninja?
Does django-ninja have a caching mechanism, and does Django's cache configuration apply to Ninja? (A sketch using Django's low-level cache API follows.)
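django-ninja operations run inside an ordinary Django request/response cycle, so whatever backend is configured in `settings.CACHES` is usable through Django's low-level cache API. A minimal sketch; the endpoint path and `compute()` are hypothetical:

```python
from django.core.cache import cache
from ninja import NinjaAPI

api = NinjaAPI()

@api.get("/expensive")
def expensive(request):
    # Standard Django cache-aside pattern inside a Ninja operation.
    result = cache.get("expensive-result")
    if result is None:
        result = compute()  # hypothetical expensive call
        cache.set("expensive-result", result, timeout=300)
    return result
```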
open
2022-09-01T09:34:43Z
2024-07-03T06:21:18Z
https://github.com/vitalik/django-ninja/issues/549
[]
ddzyx
5
google-research/bert
tensorflow
815
Will BERT learn from ERNIE 2.0?
ERNIE 2.0 is a new state of the art language model that has a major innovation: Continual learning. I hope researchers take the great ideas from others instead of working in isolation. So will BERT 2.0 take the good ideas from ERNIE 2.0 and implement continual leaning ? https://www.infoq.com/news/2019/08/Baidu-OpenSources-ERNIE/ The paper: https://www.semanticscholar.org/paper/ERNIE-2-.-0-%3A-A-CONTINUAL-PRE-TRAINING-FRAMEWORK-Sun/90251aa6225fcd5687542eab5819db18afb6a20f
open
2019-08-21T01:11:40Z
2019-08-21T01:11:40Z
https://github.com/google-research/bert/issues/815
[]
LifeIsStrange
0
paperless-ngx/paperless-ngx
django
7,809
[BUG] Cannot access bottom items on left pane
### Description I cannot access the bottom items on the left pane (dashboard, documents, saved views, manage...). Scrolling the mouse wheel only scrolls the center part with the documents. ### Steps to reproduce 1. Open Paperless on any view. 2. Move the mouse over the left pane. 3. Try to move the mouse wheel to access the part of the left pane that is not visible. 4. Only the center part scrolls. ### Webserver logs ```bash - ``` ### Browser logs _No response_ ### Paperless-ngx version Actually, I cannot even access this part to see the version ### Host OS Docker on Windows 11 ### Installation method Docker - official image ### System status _No response_ ### Browser tried Brave and Edge ### Configuration changes _No response_ ### Please confirm the following - [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation. - [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools. - [X] I have already searched for relevant existing issues and discussions before opening this report. - [X] I have updated the title field above with a concise description.
closed
2024-09-30T15:30:45Z
2024-10-31T03:10:36Z
https://github.com/paperless-ngx/paperless-ngx/issues/7809
[ "duplicate" ]
akumiszcza
2
postmanlabs/httpbin
api
411
/status/100 will crash
Going to http://httpbin.org/status/100 will show a Heroku error page.
closed
2017-12-11T16:38:27Z
2018-04-26T17:51:16Z
https://github.com/postmanlabs/httpbin/issues/411
[]
ROMTypo
1
fastapi-users/fastapi-users
asyncio
102
Dependabot can't resolve your Python dependency files
Dependabot can't resolve your Python dependency files. As a result, Dependabot couldn't update your dependencies. The error Dependabot encountered was: ``` ERROR: ERROR: Could not find a version that matches pymdown-extensions<6.3,>=6.2,>=6.3 Tried: 1.0.0, 1.0.0, 1.0.1, 1.0.1, 1.1, 1.1, 1.2, 1.2, 1.3, 1.3, 1.4, 1.4, 1.5, 1.5, 1.6, 1.6, 1.6.1, 1.6.1, 1.7, 1.7, 1.8, 1.8, 2.0, 2.0, 3.0, 3.0, 3.1, 3.1, 3.2, 3.2, 3.2.1, 3.2.1, 3.3, 3.3, 3.4, 3.4, 3.5, 3.5, 4.0, 4.0, 4.1, 4.1, 4.2, 4.2, 4.3, 4.3, 4.4, 4.4, 4.5, 4.5, 4.5.1, 4.5.1, 4.6, 4.6, 4.7, 4.7, 4.8, 4.8, 4.9, 4.9, 4.9.1, 4.9.1, 4.9.2, 4.9.2, 4.10, 4.10, 4.10.1, 4.10.1, 4.10.2, 4.10.2, 4.11, 4.11, 4.12, 4.12, 5.0, 5.0, 6.0, 6.0, 6.1, 6.1, 6.2, 6.2, 6.2.1, 6.2.1, 6.3, 6.3 There are incompatible versions in the resolved dependencies. [pipenv.exceptions.ResolutionFailure]: req_dir=requirements_dir [pipenv.exceptions.ResolutionFailure]: File "/usr/local/.pyenv/versions/3.7.6/lib/python3.7/site-packages/pipenv/utils.py", line 726, in resolve_deps [pipenv.exceptions.ResolutionFailure]: req_dir=req_dir, [pipenv.exceptions.ResolutionFailure]: File "/usr/local/.pyenv/versions/3.7.6/lib/python3.7/site-packages/pipenv/utils.py", line 480, in actually_resolve_deps [pipenv.exceptions.ResolutionFailure]: resolved_tree = resolver.resolve() [pipenv.exceptions.ResolutionFailure]: File "/usr/local/.pyenv/versions/3.7.6/lib/python3.7/site-packages/pipenv/utils.py", line 395, in resolve [pipenv.exceptions.ResolutionFailure]: raise ResolutionFailure(message=str(e)) [pipenv.exceptions.ResolutionFailure]: pipenv.exceptions.ResolutionFailure: ERROR: ERROR: Could not find a version that matches pymdown-extensions<6.3,>=6.2,>=6.3 [pipenv.exceptions.ResolutionFailure]: Tried: 1.0.0, 1.0.0, 1.0.1, 1.0.1, 1.1, 1.1, 1.2, 1.2, 1.3, 1.3, 1.4, 1.4, 1.5, 1.5, 1.6, 1.6, 1.6.1, 1.6.1, 1.7, 1.7, 1.8, 1.8, 2.0, 2.0, 3.0, 3.0, 3.1, 3.1, 3.2, 3.2, 3.2.1, 3.2.1, 3.3, 3.3, 3.4, 3.4, 3.5, 3.5, 4.0, 4.0, 4.1, 4.1, 4.2, 4.2, 4.3, 4.3, 4.4, 4.4, 4.5, 4.5, 4.5.1, 4.5.1, 4.6, 4.6, 4.7, 4.7, 4.8, 4.8, 4.9, 4.9, 4.9.1, 4.9.1, 4.9.2, 4.9.2, 4.10, 4.10, 4.10.1, 4.10.1, 4.10.2, 4.10.2, 4.11, 4.11, 4.12, 4.12, 5.0, 5.0, 6.0, 6.0, 6.1, 6.1, 6.2, 6.2, 6.2.1, 6.2.1, 6.3, 6.3 [pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies. First try clearing your dependency cache with $ pipenv lock --clear, then try the original command again. Alternatively, you can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation. Hint: try $ pipenv lock --pre if it is a pre-release dependency. ERROR: ERROR: Could not find a version that matches pymdown-extensions<6.3,>=6.2,>=6.3 Tried: 1.0.0, 1.0.0, 1.0.1, 1.0.1, 1.1, 1.1, 1.2, 1.2, 1.3, 1.3, 1.4, 1.4, 1.5, 1.5, 1.6, 1.6, 1.6.1, 1.6.1, 1.7, 1.7, 1.8, 1.8, 2.0, 2.0, 3.0, 3.0, 3.1, 3.1, 3.2, 3.2, 3.2.1, 3.2.1, 3.3, 3.3, 3.4, 3.4, 3.5, 3.5, 4.0, 4.0, 4.1, 4.1, 4.2, 4.2, 4.3, 4.3, 4.4, 4.4, 4.5, 4.5, 4.5.1, 4.5.1, 4.6, 4.6, 4.7, 4.7, 4.8, 4.8, 4.9, 4.9, 4.9.1, 4.9.1, 4.9.2, 4.9.2, 4.10, 4.10, 4.10.1, 4.10.1, 4.10.2, 4.10.2, 4.11, 4.11, 4.12, 4.12, 5.0, 5.0, 6.0, 6.0, 6.1, 6.1, 6.2, 6.2, 6.2.1, 6.2.1, 6.3, 6.3 There are incompatible versions in the resolved dependencies. 
['Traceback (most recent call last):\n', ' File "/usr/local/.pyenv/versions/3.7.6/lib/python3.7/site-packages/pipenv/utils.py", line 501, in create_spinner\n yield sp\n', ' File "/usr/local/.pyenv/versions/3.7.6/lib/python3.7/site-packages/pipenv/utils.py", line 649, in venv_resolve_deps\n c = resolve(cmd, sp)\n', ' File "/usr/local/.pyenv/versions/3.7.6/lib/python3.7/site-packages/pipenv/utils.py", line 539, in resolve\n sys.exit(c.return_code)\n', 'SystemExit: 1\n'] ``` If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it. [View the update logs](https://app.dependabot.com/accounts/frankie567/update-logs/22892841).
closed
2020-02-10T04:50:47Z
2020-02-11T04:50:40Z
https://github.com/fastapi-users/fastapi-users/issues/102
[]
dependabot-preview[bot]
0
deezer/spleeter
deep-learning
198
Error when freezing model
## Description
I get an error when trying to freeze the model.
## Steps to reproduce
1. Checked using `tensorflow 1.14` and `tensorflow 1.15`
2. Using `output_node_names = "save_1/restore_all"`
3. Got the error `Input 0 of node import/save_1/AssignVariableOp was passed float from import/batch_normalization/beta:0 incompatible with expected resource.`
## Environment
| | |
| ----------------- | ----- |
| OS | MacOS |
| Installation type | pip |
closed
2019-12-26T09:09:03Z
2020-01-27T16:46:01Z
https://github.com/deezer/spleeter/issues/198
[ "bug", "invalid", "model", "training" ]
waqasakram117
1
opengeos/streamlit-geospatial
streamlit
90
Can't open app
closed
2022-10-26T04:16:57Z
2022-10-29T17:04:42Z
https://github.com/opengeos/streamlit-geospatial/issues/90
[]
haizhupan
0
sherlock-project/sherlock
python
2,434
False positive for: (several websites)
### Additional info Consistent: - Cults3D - GNOME VCS - LibraryThing - Mydramalist - NationStates Nation - NationStates Region - ProductHunt Inconsistent: - AllMyLinks (4/10) - Twitter/X (4/10) - HackerEarth (6/10) - TorrentGalaxy (2/10) - Reddit (2/10) Usernames tried: - [own username] - extremelymostlyfake - mostlylikelyfake - mostlymaybefake - extremelymostlyfakeu - extremelymostlyfakeus - audienceatm12 - devicemare03 - w3av3n3sc4f3 - w3av3n3sc4f Regions used to test: - Hungary - Turkey - USA - Sweden ### Code of Conduct - [x] I agree to follow this project's Code of Conduct
open
2025-03-15T00:05:54Z
2025-03-15T00:05:54Z
https://github.com/sherlock-project/sherlock/issues/2434
[ "false positive" ]
forestbitter
0
OFA-Sys/Chinese-CLIP
computer-vision
31
ๆ˜ฏๅฆๅฏไปฅไฝœไธบstable diffusion็š„text encoder?
ๅฐ่ฏ•ๅฐ†Chinese CLIPไฝœไธบstable diffusion็š„text encoder๏ผŒไฝ†ๆ˜ฏไธ€็›ด็”Ÿๆˆ็บฏ้ป‘ๅ›พๅƒ๏ผˆๅฎ‰ๅ…จๆฃ€ๆŸฅๅทฒ็ปๅ…ณ้—ญ๏ผ‰๏ผŒๆˆ‘ๆƒณ้—ฎไธ‹ๆ˜ฏๅฆๅฏไปฅไฝœไธบsd็š„text encoderๅ‘ข๏ผŸๅฎ˜ๆ–นๆ˜ฏๅฆๅš่ฟ‡ๆต‹่ฏ•ใ€‚
open
2022-12-13T10:42:05Z
2024-08-15T07:44:32Z
https://github.com/OFA-Sys/Chinese-CLIP/issues/31
[]
zhaop-l
9
jumpserver/jumpserver
django
14,341
[Question] With the jumpserver core component started locally from source, the default user cannot log in
### Product Version v4.20 ### Product Edition - [X] Community Edition - [ ] Enterprise Edition - [ ] Enterprise Trial Edition ### Installation Method - [ ] Online Installation (One-click command installation) - [ ] Offline Package Installation - [ ] All-in-One - [ ] 1Panel - [ ] Kubernetes - [ ] Source Code ### Environment Information mac, python 3.11, mysql and redis running in docker ### 🤔 Question Description In my local code repository I ran `python ./apps/manage migrate` to migrate the table structure, then started the project with `python ./apps/manage.py runserver`. The login page displays normally, but when I log in with the default user admin and password admin, the login does not go through. What additional configuration is still needed, or do other components have to be running alongside? This is my config file:

> // SECURITY WARNING: keep the secret key used in production secret!
> // Encryption key. In production change this to a random string and do not leak it; it can be generated with:
> // $ cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 49;echo
> SECRET_KEY: abcdefg
>
> // SECURITY WARNING: keep the bootstrap token used in production secret!
> // Pre-shared token used by coco and guacamole to register service accounts, replacing the original registration-acceptance mechanism
> BOOTSTRAP_TOKEN: abcdefg
>
> // Development env open this, when error occur display the full process track, Production disable it
> // DEBUG mode. With DEBUG on, more logs are shown when errors occur
> DEBUG: true
>
> // DEBUG, INFO, WARNING, ERROR, CRITICAL can set. See https://docs.djangoproject.com/en/1.10/topics/logging/
> // Log level
> LOG_LEVEL: DEBUG
> // LOG_DIR:
>
> // Session expiration setting, Default 1 hour, Also set expired on on browser close
> // Browser session expiry, default 1 hour; can also be set so the session expires when the browser closes
> SESSION_COOKIE_AGE: 3600
> SESSION_EXPIRE_AT_BROWSER_CLOSE: false
>
> // Database setting, Support sqlite3, mysql, postgres ....
> // Database settings
> // See https://docs.djangoproject.com/en/1.10/ref/settings/#databases
>
> // SQLite setting:
> // Use a single-file sqlite database
> // DB_ENGINE: sqlite3
> // DB_NAME:
> // MySQL or postgres setting like:
> // Use PostgreSQL as the database
> DB_ENGINE: mysql
> DB_HOST: 127.0.0.1
> DB_PORT: 3306
> DB_USER: root
> DB_PASSWORD: '123456'
> DB_NAME: jumpserver
>
> // When Django start it will bind this host and port
> // ./manage.py runserver 127.0.0.1:8080
> // Host and port bound at runtime
> HTTP_BIND_HOST: 0.0.0.0
> HTTP_LISTEN_PORT: 8080
> WS_LISTEN_PORT: 8070
>
> // Use Redis as broker for celery and web socket
> // Redis configuration
> REDIS_HOST: 127.0.0.1
> REDIS_PORT: 6379
> REDIS_PASSWORD: '123456'
> REDIS_DB_CELERY: 3
> REDIS_DB_CACHE: 4
>
> // LDAP/AD settings
> // LDAP search page size
> // AUTH_LDAP_SEARCH_PAGED_SIZE: 1000
> //
> // Periodically sync users
> // Enable / disable
> // AUTH_LDAP_SYNC_IS_PERIODIC: True
> // Sync interval (unit: hours) (takes precedence)
> // AUTH_LDAP_SYNC_INTERVAL: 12
> // Crontab expression
> // AUTH_LDAP_SYNC_CRONTAB: * 6 * * *
> //
> // During LDAP login, only allow users already in the user list to authenticate against the LDAP server
> AUTH_LDAP_USER_LOGIN_ONLY_IN_USERS: False
> //
> // If the following message appears in the logs during LDAP authentication, set this parameter to 0 (for details see: https://www.python-ldap.org/en/latest/faq.html)
> // In order to perform this operation a successful bind must be completed on the connection
> // AUTH_LDAP_OPTIONS_OPT_REFERRALS: -1
>
> // OTP settings
> // OTP/MFA configuration
> // OTP_VALID_WINDOW: 0
> // OTP_ISSUER_NAME: JumpServer
>
> // Enable periodic tasks
> // PERIOD_TASK_ENABLED: True
>
> // Whether to enable the Luna watermark
> // SECURITY_WATERMARK_ENABLED: False
>
> // Session expires after the browser page is closed
> // SESSION_EXPIRE_AT_BROWSER_CLOSE: False
>
> // Renew the session on every API request
> // SESSION_SAVE_EVERY_REQUEST: True
>
> // Only allow users to log in from their recorded source
> // ONLY_ALLOW_AUTH_FROM_SOURCE: False
>
> // Only allow existing users to log in; do not auto-create users after third-party authentication
> // ONLY_ALLOW_EXIST_USER_AUTH: False

### Expected Behavior _No response_ ### Additional Information _No response_
closed
2024-10-22T08:32:52Z
2024-11-28T08:30:06Z
https://github.com/jumpserver/jumpserver/issues/14341
[ "๐Ÿค” Question" ]
yxxchange
1
TarrySingh/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
matplotlib
11
The Jupyter Notebooks Links are not found
The following links, from Theano to Keras models, are not found: - http://nbviewer.jupyter.org/github/leriomaggio/deep-learning-keras-tensorflow/blob/master/1.2%20Introduction%20-%20Theano.ipynb - http://nbviewer.jupyter.org/github/leriomaggio/deep-learning-keras-tensorflow/blob/master/1.3%20Introduction%20-%20Keras.ipynb - http://nbviewer.jupyter.org/github/leriomaggio/deep-learning-keras-tensorflow/blob/master/1.4%20(Extra)%20A%20Simple%20Implementation%20of%20ANN%20for%20MNIST.ipynb - http://nbviewer.jupyter.org/github/leriomaggio/deep-learning-keras-tensorflow/blob/master/2.1%20Supervised%20Learning%20-%20ConvNets.ipynb - http://nbviewer.jupyter.org/github/leriomaggio/deep-learning-keras-tensorflow/blob/master/2.2.1%20Supervised%20Learning%20-%20ConvNet%20HandsOn%20Part%20I.ipynb - http://nbviewer.jupyter.org/github/leriomaggio/deep-learning-keras-tensorflow/blob/master/2.2.2%20Supervised%20Learning%20-%20ConvNet%20HandsOn%20Part%20II.ipynb - http://nbviewer.jupyter.org/github/leriomaggio/deep-learning-keras-tensorflow/blob/master/2.3%20Supervised%20Learning%20-%20Famous%20Models%20with%20Keras.ipynb
open
2018-11-17T08:17:16Z
2018-11-17T08:26:59Z
https://github.com/TarrySingh/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials/issues/11
[]
navidalvee
0
dynaconf/dynaconf
fastapi
640
Migrate to Github Actions
Move from Azure Pipelines to Github Actions - Keep the same job matrix - Move the tag/release process to GHA
closed
2021-08-12T23:35:54Z
2021-09-08T18:16:27Z
https://github.com/dynaconf/dynaconf/issues/640
[ "Not a Bug", "RFC", "HIGH" ]
rochacbruno
0
mljar/mljar-supervised
scikit-learn
739
error in docker installation
``` > [15/25] RUN pip install mljar-supervised: #0 0.914 Collecting mljar-supervised #0 0.926 Downloading mljar-supervised-1.1.9.tar.gz (127 kB) #0 0.974 Preparing metadata (setup.py): started #0 1.285 Preparing metadata (setup.py): finished with status 'done' #0 1.294 Requirement already satisfied: numpy>=1.19.5 in /usr/local/lib/python3.8/dist-packages (from mljar-supervised) (1.24.4) #0 1.296 Requirement already satisfied: pandas>=2.0.0 in /usr/local/lib/python3.8/dist-packages (from mljar-supervised) (2.0.3) #0 1.542 Collecting scipy<=1.11.4,>=1.6.1 (from mljar-supervised) #0 1.546 Downloading scipy-1.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (58 kB) #0 1.785 Collecting scikit-learn>=1.0 (from mljar-supervised) #0 1.793 Downloading scikit_learn-1.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) #0 1.855 Collecting xgboost>=2.0.0 (from mljar-supervised) #0 1.860 Downloading xgboost-2.1.0-py3-none-manylinux_2_28_x86_64.whl.metadata (2.1 kB) #0 1.905 Collecting lightgbm>=3.0.0 (from mljar-supervised) #0 1.910 Downloading lightgbm-4.5.0-py3-none-manylinux_2_28_x86_64.whl.metadata (17 kB) #0 2.051 Collecting catboost>=0.24.4 (from mljar-supervised) #0 2.056 Downloading catboost-1.2.5-cp38-cp38-manylinux2014_x86_64.whl.metadata (1.2 kB) #0 2.097 Collecting joblib>=1.0.1 (from mljar-supervised) #0 2.102 Downloading joblib-1.4.2-py3-none-any.whl.metadata (5.4 kB) #0 2.128 Collecting tabulate>=0.8.7 (from mljar-supervised) #0 2.133 Downloading tabulate-0.9.0-py3-none-any.whl.metadata (34 kB) #0 2.336 Collecting matplotlib>=3.2.2 (from mljar-supervised) #0 2.340 Downloading matplotlib-3.7.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.metadata (5.7 kB) #0 2.384 Collecting dtreeviz>=2.2.2 (from mljar-supervised) #0 2.389 Downloading dtreeviz-2.2.2-py3-none-any.whl.metadata (2.4 kB) #0 2.460 Collecting shap>=0.42.1 (from mljar-supervised) #0 2.466 Downloading shap-0.44.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (24 kB) #0 2.505 Collecting seaborn>=0.11.1 (from mljar-supervised) #0 2.510 Downloading seaborn-0.13.2-py3-none-any.whl.metadata (5.4 kB) #0 2.582 Collecting wordcloud>=1.8.1 (from mljar-supervised) #0 2.587 Downloading wordcloud-1.9.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.4 kB) #0 2.613 Collecting category_encoders>=2.2.2 (from mljar-supervised) #0 2.624 Downloading category_encoders-2.6.3-py2.py3-none-any.whl.metadata (8.0 kB) #0 2.709 Collecting optuna>=2.7.0 (from mljar-supervised) #0 2.714 Downloading optuna-3.6.1-py3-none-any.whl.metadata (17 kB) #0 2.746 INFO: pip is looking at multiple versions of mljar-supervised to determine which version is compatible with other requirements. This could take a while. #0 2.747 Collecting mljar-supervised #0 2.752 Downloading mljar-supervised-1.1.8.tar.gz (127 kB) #0 2.798 Preparing metadata (setup.py): started #0 3.097 Preparing metadata (setup.py): finished with status 'done' #0 3.106 Collecting mljar-scikit-plot>=0.3.8 (from mljar-supervised) #0 3.117 Downloading mljar-scikit-plot-0.3.10.tar.gz (25 kB) #0 3.139 Preparing metadata (setup.py): started #0 3.332 Preparing metadata (setup.py): finished with status 'error' #0 3.339 error: subprocess-exited-with-error #0 3.339 #0 3.339 ร— python setup.py egg_info did not run successfully. 
#0 3.339 │ exit code: 1 #0 3.339 ╰─> [6 lines of output] #0 3.339 Traceback (most recent call last): #0 3.339 File "<string>", line 2, in <module> #0 3.339 File "<pip-setuptools-caller>", line 34, in <module> #0 3.339 File "/tmp/pip-install-gq9olotk/mljar-scikit-plot_ddca47a3e5b3486eb05e8139f91ea0d8/setup.py", line 3, in <module> #0 3.339 from setuptools.command.test import test as TestCommand #0 3.339 ModuleNotFoundError: No module named 'setuptools.command.test' #0 3.339 [end of output] #0 3.339 #0 3.339 note: This error originates from a subprocess, and is likely not a problem with pip. #0 3.347 error: metadata-generation-failed #0 3.347 #0 3.347 × Encountered error while generating package metadata. #0 3.347 ╰─> See above for output. #0 3.347 #0 3.347 note: This is an issue with the package mentioned above, not pip. #0 3.347 hint: See above for details. ------ failed to solve: executor failed running [/bin/sh -c pip install mljar-supervised]: exit code: 1 ```
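The root cause in the log is `mljar-scikit-plot`'s `setup.py` importing `setuptools.command.test`, which newer setuptools releases (72+) removed. A hedged workaround sketch, pinning setuptools before the install, shown as the equivalent pip invocations driven from Python (in a Dockerfile these would simply be two `RUN pip install` steps):

```python
import subprocess
import sys

# Pin setuptools below the release that dropped setuptools.command.test,
# then install the package that still imports it.
subprocess.check_call([sys.executable, "-m", "pip", "install", "setuptools<72"])
subprocess.check_call([sys.executable, "-m", "pip", "install", "mljar-supervised"])
```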
closed
2024-07-29T07:24:09Z
2024-09-25T07:46:28Z
https://github.com/mljar/mljar-supervised/issues/739
[]
pplonski
1
ultralytics/yolov5
machine-learning
13,036
๐Ÿš€ Feature Request: Simplified Method for Changing Label Names in YOLOv5 Model
### Search before asking

- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.

### Description

Background: Many users have reported issues with misspelled label names in their trained YOLOv5 models. Often, they are unaware of how to update these labels easily and end up retraining the entire model, which is impractical and time-consuming. Issues such as [#12156](https://github.com/ultralytics/yolov5/issues/12156) and [#3577](https://github.com/ultralytics/yolov5/issues/3577) highlight the need for a straightforward solution to update label names in an existing YOLOv5 model.

Proposed Solution: I propose adding a guide and utility script to the YOLOv5 documentation that explains how to change label names without retraining the model. This solution leverages PyTorch to load, update, and save the model with new label names.

Implementation Guide: The following Python script demonstrates how to change the label names of a YOLOv5 model:

```python
import torch

# Load your trained YOLOv5 model
model = torch.load("path/to/best.pt")

# Define new label names
model['model'].names = ["name1", "name2", ...]

# Save the updated model
torch.save(model, "path/to/update_best.pt")
```

Adding this guide to the YOLOv5 documentation will greatly benefit users who face issues with label name errors. It addresses a common problem and provides a practical, efficient solution. Thank you for considering this feature request.

### Use case

_No response_

### Additional

_No response_

### Are you willing to submit a PR?

- [ ] Yes I'd like to help by submitting a PR!
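A quick sanity check to pair with the script, sketched under the assumption that it runs inside the yolov5 repo (so the checkpoint's pickled classes resolve) and that the paths match the example above:

```python
import torch

# reload the rewritten checkpoint and confirm the new class names stuck
model = torch.load("path/to/update_best.pt")
print(model["model"].names)
```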
closed
2024-05-21T10:04:31Z
2024-10-20T19:46:32Z
https://github.com/ultralytics/yolov5/issues/13036
[ "enhancement", "Stale" ]
osalhi-kali
3
amidaware/tacticalrmm
django
1,522
internal server error 500 - conversion between UTF8 and SQL_ASCII is not supported
**Server Info (please complete the following information):**
- OS: [Ubuntu 20.04, Debian 10, Debian 11]
- Browser: [chrome, edge]
- RMM Version (0.15.11): latest

**Installation Method:**
- [ ] Standard

**Agent Info (please complete the following information):**
- Agent version (2.4.8):
- Agent OS: [Win 11 22H2]

**Describe the bug**
After a clean standard install, I generated the Windows agent and added my first 2 computers. Double-clicking on any of the clients shows an editing window, but when I try to save (no matter whether anything is changed or not) it shows "internal server error 500" and nothing is saved. In django_debug.log I found this error:

django.db.utils.NotSupportedError: conversion between UTF8 and SQL_ASCII is not supported LINE 1: ...12:09:34.131767+00:00'::timestamptz, "services" = '[{"pid": ...

I tried clean installs on Debian 10 and 11 and Ubuntu 20.04; the problem is the same on all systems. (See the encoding check below.)

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
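The error usually means the PostgreSQL database was created with `SQL_ASCII` encoding rather than `UTF8`. A minimal check, assuming you can reach `psql` on the database host:

```sql
-- the tacticalrmm database should report UTF8 here
SELECT datname, pg_encoding_to_char(encoding) AS encoding
FROM pg_database;
```

If it reports SQL_ASCII, the usual remedy is to dump the database, re-initialize the cluster with UTF8 encoding, and restore.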
open
2023-05-27T12:18:42Z
2023-05-28T15:14:45Z
https://github.com/amidaware/tacticalrmm/issues/1522
[]
gogo0618
5
mwaskom/seaborn
pandas
3,338
Is there a way to set tick label rotations using the objects interface?
In the new objects interface, is there a way to set, say, the x tick label rotation? I read the [seaborn.objects.Plot.theme](https://seaborn.pydata.org/generated/seaborn.objects.Plot.theme.html#) docs and am not sure whether this belongs there but is simply not implemented yet. Thank you.
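One workaround that appears to work today, sketched on the assumption that compiling the plot onto an explicit matplotlib axes is acceptable (the `tips` dataset is purely illustrative):

```python
import matplotlib.pyplot as plt
import seaborn as sns
import seaborn.objects as so

tips = sns.load_dataset("tips")
fig, ax = plt.subplots()

# render the objects-interface plot onto a plain matplotlib axes...
so.Plot(tips, x="day", y="total_bill").add(so.Bar(), so.Agg()).on(ax).plot()

# ...then rotate the tick labels with the regular matplotlib API
ax.tick_params(axis="x", labelrotation=45)
```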
closed
2023-04-24T06:14:38Z
2023-04-25T21:39:44Z
https://github.com/mwaskom/seaborn/issues/3338
[]
frfeng
3
jupyter-incubator/sparkmagic
jupyter
744
[BUG] Executing pyspark notebook cells with magics in Intellij PyCharm would fail
**Describe the bug**
While this may be due to PyCharm's auto-insertion of a leading blank line for every cell, note that the regular Python kernel does not suffer from the same problem. So the pyspark kernel may be too restrictive about some execution environments? See https://youtrack.jetbrains.com/issue/PY-52486

**To Reproduce**
See https://youtrack.jetbrains.com/issue/PY-52486

**Expected behavior**
Cells to be executed successfully

**Screenshots**
See https://youtrack.jetbrains.com/issue/PY-52486

**Versions:**
- SparkMagic: 0.19.1
- Livy ?
- Spark 3.1.2
open
2022-01-12T00:16:42Z
2022-01-12T00:16:42Z
https://github.com/jupyter-incubator/sparkmagic/issues/744
[]
winston-zillow
0
RobertCraigie/prisma-client-py
asyncio
370
Potentially unnecessary binary files on generate
## Bug description

In a basic Docker container:

```
FROM python:3.8

WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt

COPY schema.prisma schema.prisma
RUN prisma generate

COPY . .

CMD ["python3", "main.py"]
```

I was inspecting the Docker container file size using `dive`. Looking in the `/tmp` folder, it looks like there may be some unused binaries. The binaries actually used are in `/tmp/prisma/binaries/engines/<version>` according to [constants.py](https://github.com/RobertCraigie/prisma-client-py/blob/main/src/prisma/binaries/constants.py#L41). There is another folder, `/tmp/prisma-binaries`, with 114MB of binaries with the same names; it is missing `prisma-cli-linux` but includes the others, e.g. `prisma-query-engine-debian-openssl-1.1.x`. If I remove this folder, the prisma query engine still appears to work. It may be that the prisma generate and binary download process is downloading duplicate files.

## How to reproduce

```
docker build --platform linux/amd64 -t test-prisma .
dive test-prisma
```

## Environment & setup

- OS: MacOS Docker building for linux/amd64
- Database:
- Python version: 3.8
- Prisma version: prisma==0.6.4
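If the duplication is confirmed, a stopgap sketch (assuming, per the observation above, that only `/tmp/prisma/binaries/engines/<version>` is needed at runtime) is to delete the extra folder in the same layer that generates the client:

```dockerfile
# generate the client, then drop the duplicate engine downloads so the
# extra ~114MB never lands in the image layer
RUN prisma generate && rm -rf /tmp/prisma-binaries
```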
closed
2022-04-22T07:10:40Z
2022-12-03T17:03:44Z
https://github.com/RobertCraigie/prisma-client-py/issues/370
[ "kind/improvement", "level/advanced", "priority/medium", "topic: binaries" ]
danfang
1
JaidedAI/EasyOCR
deep-learning
835
Variable length images
Can we train EasyOCR on variable length images?
open
2022-08-27T14:38:22Z
2022-08-28T09:20:06Z
https://github.com/JaidedAI/EasyOCR/issues/835
[]
sameearif88
2
pytest-dev/pytest-django
pytest
1,073
django.db.utils.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
I've docker-compose configuration for django and postgres, it works fine. However, when I'm trying to run pytest inside a django container it fails with an error: ```shell pytest apps/service/tests/test_api.py::TestCreate::test_new ====================================================================== test session starts ======================================================================= platform linux -- Python 3.8.18, pytest-7.4.1, pluggy-1.3.0 django: settings: project.settings.local (from env) rootdir: /app/code configfile: pytest.ini plugins: mock-3.11.1, django-4.5.2, Faker-19.6.1, celery-4.4.2 collected 1 item apps/service/tests/test_api.py E [100%] ============================================================================= ERRORS ============================================================================= __________________________________________________ ERROR at setup of TestCreate.test_new __________________________________________________ self = <django.db.backends.postgresql.base.DatabaseWrapper object at 0xffff71cedf70> @async_unsafe def ensure_connection(self): """Guarantee that a connection to the database is established.""" if self.connection is None: with self.wrap_database_errors: > self.connect() /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:219: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:200: in connect self.connection = self.get_new_connection(conn_params) /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:187: in get_new_connection connection = Database.connect(**conn_params) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ dsn = 'dbname=postgres', connection_factory = None, cursor_factory = None, kwargs = {'database': 'postgres'}, kwasync = {} def connect(dsn=None, connection_factory=None, cursor_factory=None, **kwargs): """ Create a new database connection. The connection parameters can be specified as a string: conn = psycopg2.connect("dbname=test user=postgres password=secret") or using a set of keyword arguments: conn = psycopg2.connect(database="test", user="postgres", password="secret") Or as a mix of both. The basic connection parameters are: - *dbname*: the database name - *database*: the database name (only as keyword argument) - *user*: user name used to authenticate - *password*: password used to authenticate - *host*: database host address (defaults to UNIX socket if not provided) - *port*: connection port number (defaults to 5432 if not provided) Using the *connection_factory* parameter a different class or connections factory can be specified. It should be a callable object taking a dsn argument. Using the *cursor_factory* parameter, a new default cursor factory will be used by cursor(). Using *async*=True an asynchronous connection will be created. *async_* is a valid alias (for Python versions where ``async`` is a keyword). Any other keyword parameter will be passed to the underlying client library: the list of supported parameters depends on the library version. 
""" kwasync = {} if 'async' in kwargs: kwasync['async'] = kwargs.pop('async') if 'async_' in kwargs: kwasync['async_'] = kwargs.pop('async_') if dsn is None and not kwargs: raise TypeError('missing dsn and no parameters') dsn = _ext.make_dsn(dsn, **kwargs) > conn = _connect(dsn, connection_factory=connection_factory, **kwasync) E psycopg2.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory E Is the server running locally and accepting connections on that socket? /usr/local/lib/python3.8/site-packages/psycopg2/__init__.py:127: OperationalError The above exception was the direct cause of the following exception: self = <django.db.backends.postgresql.base.DatabaseWrapper object at 0xffff7f398d30> @contextmanager def _nodb_cursor(self): try: > with super()._nodb_cursor() as cursor: /usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:301: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/contextlib.py:113: in __enter__ return next(self.gen) /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:620: in _nodb_cursor with conn.cursor() as cursor: /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:259: in cursor return self._cursor() /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:235: in _cursor self.ensure_connection() /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:219: in ensure_connection self.connect() /usr/local/lib/python3.8/site-packages/django/db/utils.py:90: in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:219: in ensure_connection self.connect() /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:200: in connect self.connection = self.get_new_connection(conn_params) /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:187: in get_new_connection connection = Database.connect(**conn_params) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ dsn = 'dbname=postgres', connection_factory = None, cursor_factory = None, kwargs = {'database': 'postgres'}, kwasync = {} def connect(dsn=None, connection_factory=None, cursor_factory=None, **kwargs): """ Create a new database connection. The connection parameters can be specified as a string: conn = psycopg2.connect("dbname=test user=postgres password=secret") or using a set of keyword arguments: conn = psycopg2.connect(database="test", user="postgres", password="secret") Or as a mix of both. 
The basic connection parameters are: - *dbname*: the database name - *database*: the database name (only as keyword argument) - *user*: user name used to authenticate - *password*: password used to authenticate - *host*: database host address (defaults to UNIX socket if not provided) - *port*: connection port number (defaults to 5432 if not provided) Using the *connection_factory* parameter a different class or connections factory can be specified. It should be a callable object taking a dsn argument. Using the *cursor_factory* parameter, a new default cursor factory will be used by cursor(). Using *async*=True an asynchronous connection will be created. *async_* is a valid alias (for Python versions where ``async`` is a keyword). Any other keyword parameter will be passed to the underlying client library: the list of supported parameters depends on the library version. """ kwasync = {} if 'async' in kwargs: kwasync['async'] = kwargs.pop('async') if 'async_' in kwargs: kwasync['async_'] = kwargs.pop('async_') if dsn is None and not kwargs: raise TypeError('missing dsn and no parameters') dsn = _ext.make_dsn(dsn, **kwargs) > conn = _connect(dsn, connection_factory=connection_factory, **kwasync) E django.db.utils.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory E Is the server running locally and accepting connections on that socket? /usr/local/lib/python3.8/site-packages/psycopg2/__init__.py:127: OperationalError During handling of the above exception, another exception occurred: self = <django.db.backends.postgresql.base.DatabaseWrapper object at 0xffff71593ca0> @async_unsafe def ensure_connection(self): """Guarantee that a connection to the database is established.""" if self.connection is None: with self.wrap_database_errors: > self.connect() /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:219: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:200: in connect self.connection = self.get_new_connection(conn_params) /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:187: in get_new_connection connection = Database.connect(**conn_params) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ dsn = 'dbname=test_project', connection_factory = None, cursor_factory = None, kwargs = {'database': 'test_project'}, kwasync = {} def connect(dsn=None, connection_factory=None, cursor_factory=None, **kwargs): """ Create a new database connection. The connection parameters can be specified as a string: conn = psycopg2.connect("dbname=test user=postgres password=secret") or using a set of keyword arguments: conn = psycopg2.connect(database="test", user="postgres", password="secret") Or as a mix of both. 
The basic connection parameters are: - *dbname*: the database name - *database*: the database name (only as keyword argument) - *user*: user name used to authenticate - *password*: password used to authenticate - *host*: database host address (defaults to UNIX socket if not provided) - *port*: connection port number (defaults to 5432 if not provided) Using the *connection_factory* parameter a different class or connections factory can be specified. It should be a callable object taking a dsn argument. Using the *cursor_factory* parameter, a new default cursor factory will be used by cursor(). Using *async*=True an asynchronous connection will be created. *async_* is a valid alias (for Python versions where ``async`` is a keyword). Any other keyword parameter will be passed to the underlying client library: the list of supported parameters depends on the library version. """ kwasync = {} if 'async' in kwargs: kwasync['async'] = kwargs.pop('async') if 'async_' in kwargs: kwasync['async_'] = kwargs.pop('async_') if dsn is None and not kwargs: raise TypeError('missing dsn and no parameters') dsn = _ext.make_dsn(dsn, **kwargs) > conn = _connect(dsn, connection_factory=connection_factory, **kwasync) E psycopg2.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory E Is the server running locally and accepting connections on that socket? /usr/local/lib/python3.8/site-packages/psycopg2/__init__.py:127: OperationalError The above exception was the direct cause of the following exception: request = <SubRequest '_django_db_marker' for <Function test_new>> @pytest.fixture(autouse=True) def _django_db_marker(request) -> None: """Implement the django_db marker, internal to pytest-django.""" marker = request.node.get_closest_marker("django_db") if marker: > request.getfixturevalue("_django_db_helper") /usr/local/lib/python3.8/site-packages/pytest_django/plugin.py:465: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.8/site-packages/pytest_django/fixtures.py:122: in django_db_setup db_cfg = setup_databases( /usr/local/lib/python3.8/site-packages/django/test/utils.py:179: in setup_databases connection.creation.create_test_db( /usr/local/lib/python3.8/site-packages/django/db/backends/base/creation.py:57: in create_test_db self._create_test_db(verbosity, autoclobber, keepdb) /usr/local/lib/python3.8/site-packages/django/db/backends/base/creation.py:191: in _create_test_db with self._nodb_cursor() as cursor: /usr/local/lib/python3.8/contextlib.py:113: in __enter__ return next(self.gen) /usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:319: in _nodb_cursor with conn.cursor() as cursor: /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:259: in cursor return self._cursor() /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:235: in _cursor self.ensure_connection() /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:219: in ensure_connection self.connect() /usr/local/lib/python3.8/site-packages/django/db/utils.py:90: in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value 
/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:219: in ensure_connection self.connect() /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py:200: in connect self.connection = self.get_new_connection(conn_params) /usr/local/lib/python3.8/site-packages/django/utils/asyncio.py:33: in inner return func(*args, **kwargs) /usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:187: in get_new_connection connection = Database.connect(**conn_params) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ dsn = 'dbname=test_project', connection_factory = None, cursor_factory = None, kwargs = {'database': 'test_project'}, kwasync = {} def connect(dsn=None, connection_factory=None, cursor_factory=None, **kwargs): """ Create a new database connection. The connection parameters can be specified as a string: conn = psycopg2.connect("dbname=test user=postgres password=secret") or using a set of keyword arguments: conn = psycopg2.connect(database="test", user="postgres", password="secret") Or as a mix of both. The basic connection parameters are: - *dbname*: the database name - *database*: the database name (only as keyword argument) - *user*: user name used to authenticate - *password*: password used to authenticate - *host*: database host address (defaults to UNIX socket if not provided) - *port*: connection port number (defaults to 5432 if not provided) Using the *connection_factory* parameter a different class or connections factory can be specified. It should be a callable object taking a dsn argument. Using the *cursor_factory* parameter, a new default cursor factory will be used by cursor(). Using *async*=True an asynchronous connection will be created. *async_* is a valid alias (for Python versions where ``async`` is a keyword). Any other keyword parameter will be passed to the underlying client library: the list of supported parameters depends on the library version. """ kwasync = {} if 'async' in kwargs: kwasync['async'] = kwargs.pop('async') if 'async_' in kwargs: kwasync['async_'] = kwargs.pop('async_') if dsn is None and not kwargs: raise TypeError('missing dsn and no parameters') dsn = _ext.make_dsn(dsn, **kwargs) > conn = _connect(dsn, connection_factory=connection_factory, **kwasync) E django.db.utils.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory E Is the server running locally and accepting connections on that socket? /usr/local/lib/python3.8/site-packages/psycopg2/__init__.py:127: OperationalError --------------------------------------------------------------------- Captured stderr setup ---------------------------------------------------------------------- /usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:304: RuntimeWarning: Normally Django will use a connection to the 'postgres' database to avoid running initialization queries against the production database when it's not needed (for example, when running tests). Django was unable to create a connection to the 'postgres' database and will use the first PostgreSQL database instead. 
    warnings.warn(
==================================================================== short test summary info =====================================================================
ERROR apps/service/tests/test_api.py::TestCreate::test_new - django.db.utils.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
======================================================================== 1 error in 5.25s ========================================================================
```

This is actually true: there is no such file in the django container (it's inside the psql container), but I don't understand what to do with this error. I checked that the test database is created. I can suppress the error if I add this fixture in `conftest.py`:

```python
import pytest


@pytest.fixture()
def django_db_setup():
    pass
```

But with this approach it uses my actual database, not the "test_*" one. I don't think this is a docker-compose problem, because everything other than the tests works fine.
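For what it's worth, this symptom usually means the test run sees an empty `HOST`, so psycopg2 falls back to the local unix socket instead of the compose network. A hedged settings sketch, assuming the compose service is named `db` and using placeholder credentials:

```python
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "project",       # placeholder
        "USER": "postgres",      # placeholder
        "PASSWORD": "postgres",  # placeholder
        "HOST": "db",  # compose service name; an empty HOST means unix socket
        "PORT": "5432",
    }
}
```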
open
2023-09-15T03:39:25Z
2023-09-15T05:19:47Z
https://github.com/pytest-dev/pytest-django/issues/1073
[]
karambaq
1
vitalik/django-ninja
django
1,306
[BUG] JSON payload not parsed when using upload with extra fields
**Describe the bug**
Ninja fails to parse the request body correctly and throws a validation error, contrary to the instructions under [Upload files with extra fields](https://django-ninja.dev/guides/input/file-params/).

**Versions (please complete the following information):**
- Python version: 3.11.7
- Django version: 5.1.1
- Django-Ninja version: 1.3.0
- Pydantic version: 2.9.2

Relevant schema

```python
class FileMetadata(Schema):
    posts: List[int] = []
    visibility: str = "public"
```

API Code

```python
@files_router.post(
    "/", response={200: FileDetails}, tags=["files"], auth=JWTAuth(permissions=StaffOnly)
)
def create_file(request: HttpRequest, metadata: FileMetadata, upload: NinjaFile[UploadedFile]):
    """
    Creates a file with or without post associations.
    """
    try:
        if metadata.visibility == "public":
            stored_name = PublicStorage().save(upload.name, upload.file)
            url = PublicStorage().url(stored_name)
        else:
            stored_name = PrivateStorage().save(upload.name, upload.file)
            url = PrivateStorage().url(stored_name)

        upload = File.objects.create(
            location=url,
            name=stored_name,
            content_type=upload.content_type,
            charset=upload.charset,
            size=upload.size,
            visibility=metadata.visibility,
        )
        if metadata:
            upload.posts.set(metadata.posts)

        return upload
    except Exception as err:
        logger.error("Error creating file", error=err)
        raise HttpError(500, "Fail to create file") from err
```

Request body

```
-----------------------------291645760626718691221248293984
Content-Disposition: form-data; name="upload"; filename="business card front.pdf"
Content-Type: application/pdf

file stuff

%%EOF
-----------------------------291645760626718691221248293984
Content-Disposition: form-data; name="metadata"; filename="blob"
Content-Type: application/json

{"posts":[2,4],"visibility":"public"}
-----------------------------291645760626718691221248293984--
```

`Content-Type` on the request is set to `multipart/form-data; boundary=---------------------------291645760626718691221248293984`

Response

```json
{
  "detail": [
    {
      "type": "missing",
      "loc": [
        "body",
        "metadata"
      ],
      "msg": "Field required"
    }
  ]
}
```
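A client-side workaround consistent with the documented curl usage (URL and filename below are assumptions): send `metadata` as a plain string form field rather than a file part with a filename, so Django parses it into `request.POST`, where Ninja looks for non-file multipart params.

```python
import json

import requests

# "metadata" goes in `data` (a plain form field), not in `files`,
# so it is not treated as an uploaded blob with a filename
files = {"upload": open("business card front.pdf", "rb")}
data = {"metadata": json.dumps({"posts": [2, 4], "visibility": "public"})}
resp = requests.post("http://localhost:8000/api/files/", files=files, data=data)
print(resp.status_code, resp.json())
```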
open
2024-09-27T21:17:32Z
2024-10-03T13:30:05Z
https://github.com/vitalik/django-ninja/issues/1306
[]
matt0x6F
8
explosion/spaCy
data-science
13,422
Converting into exe file through pyinstaller-> spacy cannot find factory for 'curated transformer'
```
import spacy
import spacy_curated_transformers
# import spacy_transformers
import curated_transformers
import spacy_alignments
import spacy_legacy
import spacy_loggers
import spacy_pkuseg
import os

nlp = spacy.load(os.getcwd() + '\\en_core_web_trf-3.7.3')
x = input()
doc = nlp(x)
result = []
for sent in doc.sents:
    result.append(sent.text)
print(result)
```

I wanted to turn the above code into an exe file. However, the error [valueerror: [e002] can't find factory for 'curated transformer' for language english (en)] occurs. I used PyInstaller to convert it into an exe file. In PyInstaller, I included spacy, spacy_curated_transformers, and curated_transformers in the hidden imports. I wonder how to make this executable configure the curated transformer factory. Please help me.

![screenshot](https://github.com/explosion/spaCy/assets/101243964/921a567a-1d5c-49ac-bfae-09bf13c1e4c6)

## My Environment

* Operating System: Windows 11
* Python Version Used: 3.11.8
* spaCy Version Used: 3.7.4
* Environment Information:
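spaCy resolves factories such as `curated_transformer` through entry points and package metadata, which PyInstaller does not bundle by default, so hidden imports alone are often not enough. A hedged `.spec` sketch (package names taken from the imports above; everything else is an assumption):

```python
from PyInstaller.utils.hooks import collect_all, copy_metadata

datas, binaries, hiddenimports = [], [], []
for pkg in ("spacy", "spacy_curated_transformers", "curated_transformers"):
    d, b, h = collect_all(pkg)  # data files, compiled libs, and submodules
    datas += d
    binaries += b
    hiddenimports += h

# entry-point lookup reads dist-info metadata at runtime
datas += copy_metadata("spacy_curated_transformers")
```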
closed
2024-04-09T05:19:53Z
2024-04-09T10:08:54Z
https://github.com/explosion/spaCy/issues/13422
[ "install", "feat / transformer" ]
estherkim083
1
aws/aws-sdk-pandas
pandas
2,435
RedshiftDataApi: Support temporary credentials auth via IAM
**Is your idea related to a problem? Please describe.**
I love the convenience of using `wr.data_api.redshift.read_sql_query()` to fetch data from a Redshift cluster using temporary credentials, without having to worry about VPCs and network accessibility. Currently, the authentication methods accepted in the `wr.data_api.redshift.RedshiftDataApi` class are restricted to either an explicit db user name or a link to Secrets Manager, and the call [fails if neither is passed](https://github.com/aws/aws-sdk-pandas/blob/906b4d2/awswrangler/data_api/redshift.py#L96) by the user. The underlying redshift-data `executeStatement` API call, however, [falls back to IAM](https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html) if neither is given, which I'd like to make use of in the wrangler calls as well. A direct mapping to IAM users makes it easier to implement role-based access control, as database users would be directly related to the roles already set up for the specific teams.

**Describe the solution you'd like**
If neither a `db_user` nor a `secret_arn` is given, the `RedshiftDataApi` class does not throw an error but passes on neither, which causes the `executeStatement` API call to use `getTemporaryCredentialsWithIAM` instead of `getTemporaryCredentials`. Alternatively, a `use_iam` flag (or similar) could be implemented if that's preferable.

Would you be willing to accept/merge a PR that changes this behaviour?
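For reference, a hedged sketch of the underlying boto3 call (identifiers are placeholders): when neither `DbUser` nor `SecretArn` is supplied, the Data API falls back to temporary IAM credentials.

```python
import boto3

client = boto3.client("redshift-data")
resp = client.execute_statement(
    ClusterIdentifier="my-cluster",  # placeholder
    Database="dev",                  # placeholder
    Sql="SELECT 1",
    # no DbUser / SecretArn -> temporary credentials via IAM
)
```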
closed
2023-08-18T10:43:55Z
2023-10-13T10:16:46Z
https://github.com/aws/aws-sdk-pandas/issues/2435
[ "enhancement" ]
theister
2
pydata/pandas-datareader
pandas
547
Unable to get data from google.
Df= web.DataReader('SPY', data_source='google')
Df=Df[['Open','High','Low','Close']]

/Users/vikas/anaconda3/lib/python3.6/site-packages/pandas_datareader/google/daily.py:40: UnstableAPIWarning: The Google Finance API has not been stable since late 2017. Requests seem to fail at random. Failure is especially common when bulk downloading.
  warnings.warn(UNSTABLE_WARNING, UnstableAPIWarning)
Traceback (most recent call last):
  File "<ipython-input-2-329433d59ce6>", line 1, in <module>
    Df= web.DataReader('SPY', data_source='google')
  File "/Users/vikas/anaconda3/lib/python3.6/site-packages/pandas_datareader/data.py", line 315, in DataReader
    session=session).read()
  File "/Users/vikas/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py", line 206, in read
    params=self._get_params(self.symbols))
  File "/Users/vikas/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py", line 84, in _read_one_data
    out = self._read_url_as_StringIO(url, params=params)
  File "/Users/vikas/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py", line 95, in _read_url_as_StringIO
    response = self._get_response(url, params=params)
  File "/Users/vikas/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py", line 155, in _get_response
    raise RemoteDataError(msg)
RemoteDataError: Unable to read URL: https://finance.google.com/finance/historical?q=SPY&startdate=Jan+01%2C+2010&enddate=Jun+23%2C+2018&output=csv
Response Text: b'<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"/><title>Sorry...</title><style> body { font-family: verdana, arial, sans-serif; background-color: #fff; color: #000; }</style></head><body><div><table><tr><td><b><font face=sans-serif size=10><font color=#4285f4>G</font><font color=#ea4335>o</font><font color=#fbbc05>o</font><font color=#4285f4>g</font><font color=#34a853>l</font><font color=#ea4335>e</font></font></b></td><td style="text-align: left; vertical-align: bottom; padding-bottom: 15px; width: 50%"><div style="border-bottom: 1px solid #dfdfdf;">Sorry...</div></td></tr></table></div><div style="margin-left: 4em;"><h1>We\'re sorry...</h1><p>... but your computer or network may be sending automated queries. To protect our users, we can\'t process your request right now.</p></div><div style="margin-left: 4em;">See <a href="https://support.google.com/websearch/answer/86640">Google Help</a> for more information.<br/><br/></div><div style="text-align: center; border-top: 1px solid #dfdfdf;"><a href="https://www.google.com">Google Home</a></div></body></html>'
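Google shut this endpoint down, so no client-side change will revive `data_source='google'`. A hedged drop-in alternative, assuming a keyless daily-OHLC source such as Stooq is acceptable:

```python
import pandas_datareader.data as web

# Stooq serves daily OHLC for US tickers without an API key
Df = web.DataReader('SPY', data_source='stooq')
Df = Df[['Open', 'High', 'Low', 'Close']]
print(Df.head())
```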
closed
2018-06-23T03:50:12Z
2018-09-12T07:56:52Z
https://github.com/pydata/pandas-datareader/issues/547
[]
Vikas125
4
flaskbb/flaskbb
flask
384
Does gevent WSGI really work?
Make sure gevent actually takes effect: the code only uses WSGIServer, without any monkey patching.
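For reference, gevent's `WSGIServer` only gives cooperative I/O if the standard library is patched before anything imports sockets. A minimal sketch, assuming FlaskBB's `create_app` factory:

```python
from gevent import monkey

monkey.patch_all()  # must run before flask/sqlalchemy/socket imports

from gevent.pywsgi import WSGIServer

from flaskbb import create_app  # assumption: FlaskBB's app factory

app = create_app()
WSGIServer(("0.0.0.0", 5000), app).serve_forever()
```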
closed
2017-12-25T15:20:05Z
2018-04-15T07:47:49Z
https://github.com/flaskbb/flaskbb/issues/384
[]
KyRionY
4
ets-labs/python-dependency-injector
asyncio
466
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
Hi, thank you so much for a wonderful project. I am using dependency injection in a Django side project of mine. I noticed a big problem with the way the container initializes. As you state in the example section for Django, we initiate the container in `__init__.py` at **project level**:

```
from .di_containers import DIContainer
from . import settings

di_container = DIContainer()  # -> creates dependencies before other apps load
...
```

https://python-dependency-injector.ets-labs.org/examples/django.html

However, if one of your providers (e.g. UserService needs the User model) requires a model to do some work, Django will throw **django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.** I am not so sure how to solve it. I am using Singleton in my container (required). Not sure if that is the reason?

```
class DIContainer(containers.DeclarativeContainer):
    user_service = providers.Singleton(UserService)
```

```
from api_core.apps.user.models import User  # <- error caused by this import

@inject
class UserService:
    def __init__(self):
        self.test = 'test'
```

**Update**: I solved the problem with local imports. But I still have to import locally in multiple places in different functions. The main root cause is still the initialization of the container in `__init__.py` at project level. Say we move it to a custom app that is registered at the end of the INSTALLED_APPS list; we still have the problem of how to wire it to the other apps. I don't know if you are familiar with ReactiveX; I think that package could help solve the wiring problem: https://github.com/ReactiveX/RxPY. We could have a subscriber in each AppConfig's `def ready(): ...`. Then, when we initiate the container, it would emit a signal telling them it is now safe to wire.
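One hedged pattern that keeps `Singleton` but avoids import-time model access: resolve the Django-dependent imports inside a factory function, so they happen on first use, after the app registry is ready (the service module path below is an assumption):

```python
from dependency_injector import containers, providers


def _make_user_service():
    # deferred imports: these run at first resolution, once apps are loaded
    from api_core.apps.user.services import UserService  # assumed location

    return UserService()


class DIContainer(containers.DeclarativeContainer):
    user_service = providers.Singleton(_make_user_service)
```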
open
2021-06-15T04:30:11Z
2023-04-10T06:02:31Z
https://github.com/ets-labs/python-dependency-injector/issues/466
[ "bug" ]
vuhi
4
jina-ai/clip-as-service
pytorch
112
bert-as-service
When I run bert-as-service, the response is "command not found". Why?
closed
2018-12-10T07:53:04Z
2018-12-11T01:50:22Z
https://github.com/jina-ai/clip-as-service/issues/112
[]
jiezouguihuafu
1
s3rius/FastAPI-template
fastapi
192
Taskiq Mypy validation error
After initializing a blank project with Taskiq, Mypy gives a validation error:

```python
Format with Black........................................................Passed
isort....................................................................Passed
Check with Flake8........................................................Passed
Validate types with MyPy.................................................Failed
- hook id: mypy
- exit code: 1

project_name\tkq.py:9: error: Incompatible types in assignment (expression has type "InMemoryBroker", variable has type "ZeroMQBroker")  [assignment]
        broker = InMemoryBroker()
                 ^~~~~~~~~~~~~~~~
Found 1 error in 1 file (checked 44 source files)
```

Initial `project_name/tkq.py`:

```python
import taskiq_fastapi
from taskiq import InMemoryBroker, ZeroMQBroker

from project_name.settings import settings

broker = ZeroMQBroker()

if settings.environment.lower() == "pytest":
    broker = InMemoryBroker()

taskiq_fastapi.init(
    broker,
    "project_name.web.application:get_app",
)
```
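A hedged fix that keeps both branches: annotate the variable with the common base class so mypy accepts either broker (assuming `AsyncBroker` is exported at the top level in the installed taskiq version):

```python
import taskiq_fastapi
from taskiq import AsyncBroker, InMemoryBroker, ZeroMQBroker

from project_name.settings import settings

broker: AsyncBroker = ZeroMQBroker()

if settings.environment.lower() == "pytest":
    broker = InMemoryBroker()  # now assignable to the declared base type

taskiq_fastapi.init(
    broker,
    "project_name.web.application:get_app",
)
```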
open
2023-10-02T12:57:29Z
2023-10-02T14:17:59Z
https://github.com/s3rius/FastAPI-template/issues/192
[]
RoyalGoose
1
MycroftAI/mycroft-core
nlp
2,148
Can't get pairing code in a VM
On a Dell Vostro 3000 Series I have a FreeBSD 13.0-CURRENT host with a Peppermint OS (Debian, 5.0.X) guest, running Mycroft on it. Everything's fine, except that I don't get any pairing code for the device. We already tried the curl command via Konsole, but it seems Mycroft doesn't recognize the online presence of the device in the API. Paired, unpaired, trying to pair again: nothing works. Matheus from the chat told me to open a new issue with this screenshot.

![Screenshot_20190608_105416](https://user-images.githubusercontent.com/43870491/59144946-564a5300-89dd-11e9-80f1-06e99f5dd4c0.png)

Maybe Mycroft doesn't want FreeBSD hosts? What'ya think?
closed
2019-06-08T09:06:30Z
2020-09-22T08:00:30Z
https://github.com/MycroftAI/mycroft-core/issues/2148
[]
vanbreukelingen
8
sergree/matchering
numpy
17
Give us a 👍
Hi, if you find our app and library useful, please give it a 👍 **[here](https://github.com/vinta/awesome-python/pull/1480)**.

https://github.com/vinta/awesome-python/pull/1480

Thanks!
closed
2020-02-14T10:54:09Z
2020-04-08T12:59:03Z
https://github.com/sergree/matchering/issues/17
[ "help wanted" ]
sergree
1
jumpserver/jumpserver
django
15,052
[Bug] Docker deployment accessed through intranet tunneling: the recorded login city is always "LAN"
### Product version
v4.6.0

### Edition
- [x] Community
- [ ] Enterprise
- [ ] Enterprise trial

### Installation method
- [ ] Online install (one-line command)
- [ ] Offline package
- [ ] All-in-One
- [x] 1Panel
- [ ] Kubernetes
- [ ] From source

### Environment
Ubuntu, 1Panel, Docker

### 🐛 Bug description
Deployed with Docker via 1Panel and accessed through intranet tunneling (NAT traversal). The recorded login city is always "LAN" and the recorded login address is always a LAN IP, so logins cannot be traced back to the real client. (A proxy-header sketch follows below.)

### Steps to reproduce
Access the site

### Expected result
_No response_

### Additional information
_No response_

### Attempted solutions
_No response_
open
2025-03-17T14:44:00Z
2025-03-21T10:46:27Z
https://github.com/jumpserver/jumpserver/issues/15052
[ "๐Ÿ› Bug", "โณ Pending feedback" ]
yzl321905
1
matplotlib/mplfinance
matplotlib
314
`tz_localize=True` fails to localize datetimes in `tlines` and `alines` specifications (and in the future also in xlim specification?)
Hi, I tried to implement tlines using mplfinance and generated the trendline data as per the example; however, I still get this error:

ValueError: tlines date pair (2021-01-11 09:35:00-05:00,2021-01-11 10:20:00-05:00) too close, or wrong order, or out of range!
df date range: [2021-01-11 09:35:00+00:00 , 2021-01-11 10:20:00+00:00]

Note that the tline pair carries a -05:00 offset while the DataFrame index is +00:00, so the pair falls outside the index range. If anyone can advise, it would be much appreciated. Regards.
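Given the offset mismatch called out above, a hedged sketch that converts the pair into the frame's UTC index before plotting (keyword layout follows the mplfinance trendline examples; `df` is your intraday frame, and the source timezone is an assumption):

```python
import pandas as pd
import mplfinance as mpf

tline_pair = (
    pd.Timestamp("2021-01-11 09:35:00", tz="US/Eastern").tz_convert("UTC"),
    pd.Timestamp("2021-01-11 10:20:00", tz="US/Eastern").tz_convert("UTC"),
)
mpf.plot(
    df,
    type="candle",
    tlines=[dict(tlines=[tline_pair], tline_use="close")],
)
```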
open
2021-01-12T13:04:49Z
2021-01-12T16:53:46Z
https://github.com/matplotlib/mplfinance/issues/314
[ "bug", "question" ]
ventek
4
pydata/xarray
numpy
9,179
Difference in time coordinate values in xarray tutorial dataset loaded with numpy v2
### What happened?

Time coordinate values are significantly different (at second precision) if numpy v2 is in the environment, for the xarray "air temperature" tutorial dataset. This leads to discrepancies and errors in selection by date strings.

Numpy v2.0.0
```
array(['2013-01-01T00:02:06.757437440', '2013-01-01T05:59:27.234179072',
       '2013-01-01T11:56:47.710920704', ..., '2014-12-31T05:58:10.831327232',
       '2014-12-31T11:55:31.308068864', '2014-12-31T18:02:01.540624384'],
      dtype='datetime64[ns]')
```

Numpy v1.26.4
```
array(['2013-01-01T00:00:00.000000000', '2013-01-01T06:00:00.000000000',
       '2013-01-01T12:00:00.000000000', ..., '2014-12-31T06:00:00.000000000',
       '2014-12-31T12:00:00.000000000', '2014-12-31T18:00:00.000000000'],
      dtype='datetime64[ns]')
```

### What did you expect to happen?

I expect time coordinates to be identical for different numpy versions.

### Minimal Complete Verifiable Example

```Python
# mamba create -n xarray2024.6.0 xarray ipython pooch netCDF4 numpy>2
import xarray as xr

ds = xr.tutorial.load_dataset("air_temperature")
print(ds.time.values)
dates = ['2013-07-09', '2013-10-11', '2013-12-24']
ds.sel(time=dates)
```

### MVCE confirmation

- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

### Relevant log output

```Python
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[21], line 2
      1 dates = ['2013-07-09', '2013-10-11', '2013-12-24']
----> 2 ds.sel(time=dates)

File ~/miniforge3/envs/xarray2024.6.0/lib/python3.12/site-packages/xarray/core/dataset.py:3126, in Dataset.sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   3058 """Returns a new dataset with each array indexed by tick labels
   3059 along the specified dimension(s).
   3060 (...)
   3123
   3124 """
   3125 indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
-> 3126 query_results = map_index_queries(
   3127     self, indexers=indexers, method=method, tolerance=tolerance
   3128 )
   3130 if drop:
   3131     no_scalar_variables = {}

File ~/miniforge3/envs/xarray2024.6.0/lib/python3.12/site-packages/xarray/core/indexing.py:192, in map_index_queries(obj, indexers, method, tolerance, **indexers_kwargs)
    190     results.append(IndexSelResult(labels))
    191 else:
--> 192     results.append(index.sel(labels, **options))
    194 merged = merge_sel_results(results)
    196 # drop dimension coordinates found in dimension indexers
    197 # (also drop multi-index if any)
    198 # (.sel() already ensures alignment)

File ~/miniforge3/envs/xarray2024.6.0/lib/python3.12/site-packages/xarray/core/indexes.py:801, in PandasIndex.sel(self, labels, method, tolerance)
    799 indexer = get_indexer_nd(self.index, label_array, method, tolerance)
    800 if np.any(indexer < 0):
--> 801     raise KeyError(f"not all values found in index {coord_name!r}")
    803 # attach dimension names and/or coordinates to positional indexer
    804 if isinstance(label, Variable):

KeyError: "not all values found in index 'time'"
```

### Anything else we need to know?
https://github.com/xarray-contrib/xarray-tutorial/issues/271

### Environment

<details>

INSTALLED VERSIONS
------------------
commit: None
python: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:13:44) [Clang 16.0.6 ]
python-bits: 64
OS: Darwin
OS-release: 23.5.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2

xarray: 2024.6.0
pandas: 2.2.2
numpy: 2.0.0
scipy: None
netCDF4: 1.7.1
pydap: None
h5netcdf: None
h5py: None
zarr: None
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 70.1.1
pip: 24.0
conda: None
pytest: None
mypy: None
IPython: 8.25.0
sphinx: None

</details>
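Until the decoding difference is resolved, a hedged stopgap for environments that need exact decoded times is simply to stay on the 1.x series:

```
pip install "numpy<2"
# or, matching the env above: mamba install "numpy<2"
```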
closed
2024-06-26T20:10:12Z
2024-06-28T08:18:56Z
https://github.com/pydata/xarray/issues/9179
[ "bug" ]
scottyhq
2
plotly/dash
dash
2,588
enable termination of 'outdated' long callbacks when page changes
**Is your feature request related to a problem? Please describe.** Long callbacks run for all Figures on all of my application's pages. When I switch between pages quickly the tasks in the backend Celery queue execute in-order regardless of old requests being outdated. Ex: If I'm on page 1, and I navigate to page 2, then quickly to page 3 then page 4, the long callbacks for figures corresponding to pages 2, then 3, then 4 will run in order, slowing the delivery of Figures pertinent to page 4. However, this isn't the case if I refresh a single page multiple times. If I'm on page 2 and I refresh the page 5 times, the long callbacks for the Figures associated with page 2 don't execute 5 times in order- the first 4 'render requests' are 'revoked' in the Celery queue, with the final request completing. **Describe the solution you'd like** I'd like a page switch to trigger the termination of newly 'outdated' long callbacks in the backend. **Describe alternatives you've considered** In the universe of Dash objects, I considered implementing a 'Sentry' that watches the URL and triggers a 'cancel' function that's bound to all long callbacks, cancelling them via the 'cancelable' interface of callbacks. In the Dash execution graph, however, I don't think I can guarantee that this callback would always precede the scheduling of new, up-to-date callbacks, so I don't think this is a reasonable solution. **Additional context** Here's an idea for a patch: The dash_renderer frontend is aware of when it's going to try to execute a callback for a job that is still running. If a to-be-called callback's output list matches the output list of the job that the frontend is waiting for, it issues an 'oldJob' ID to the backend via the request headers. [Source](https://github.com/plotly/dash/blob/a7a12d180e16eac0a84b88e2b8dc8f7c7601cbaf/dash/dash-renderer/src/actions/callbacks.ts#L662) In the backend, receiving these 'oldJob' ID's triggers that job's termination in the Celery backend. [Source](https://github.com/plotly/dash/blob/a7a12d180e16eac0a84b88e2b8dc8f7c7601cbaf/dash/_callback.py#L363) It's evident that the frontend is doing job bookkeeping, tracking the 'output' list of jobs that have been scheduled in the backend but that haven't returned for the frontend. If the frontend could also track the page that the job is intended for, and compare that to the window.location.pathname when the job cleanup is already happening, the 'oldJob' param could be set and the backend could clean up any currently running long callbacks for the previous page. A parameter could be set in the Dash python object to enable and disable this feature- it'd generally NOT be mutually exclusive of memoization because cancellations wouldn't always happen but it'd be more conservative, and memoization wouldn't always have an opportunity to happen.
open
2023-07-06T22:15:41Z
2024-08-13T19:35:01Z
https://github.com/plotly/dash/issues/2588
[ "feature", "P3" ]
JamesKunstle
16
vaexio/vaex
data-science
2,125
[BUG-REPORT] Python application won't shut down after calling Vaex df.sum or df.unique
**Description**

I am building a Python FastAPI application that uses Vaex. I noticed that when either df.sum or df.unique is called, I can no longer terminate the application by pressing Ctrl+C. Here are examples of how I use them:

```
df.unique(
    expression,
    return_inverse=False,
    dropna=True,
    dropnan=True,
    dropmissing=True,
    progress=False,
    selection=None,
    axis=None,
    delay=False,
    array_type="python",
)
```

and

`x = int(df.sum(f"sid__{s}"))`

How do I fix this so I am able to terminate the application manually without having to kill the console?

**Software information**
- Vaex version (`import vaex; vaex.__version__`): 4.9.1
- OS: Windows
open
2022-07-22T14:12:32Z
2022-08-04T13:58:10Z
https://github.com/vaexio/vaex/issues/2125
[]
abf7d
3
xonsh/xonsh
data-science
5,127
Unexpected exception while updating completions
<!--- Provide a general summary of the issue in the Title above -->

When I set $UPDATE_COMPLETIONS_ON_KEYPRESS = True and type, for instance, /usr/bin/ls -a in the terminal, the following exception is thrown: "Exception [Errno 13] Permission denied: '/usr/bin/ls.json'"

<!--- If you have a question along the lines of "How do I do this Bash command in xonsh" please first look over the Bash to Xonsh translation guide: https://xon.sh/bash_to_xsh.html If you don't find an answer there, please do open an issue! -->

## xonfig

<details>

```
+------------------+----------------------+
| xonsh            | 0.13.4               |
| Python           | 3.8.10               |
| PLY              | 3.11                 |
| have readline    | True                 |
| prompt toolkit   | 3.0.36               |
| shell type       | prompt_toolkit       |
| history backend  | json                 |
| pygments         | 2.14.0               |
| on posix         | True                 |
| on linux         | True                 |
| distro           | ubuntu               |
| on wsl           | False                |
| on darwin        | False                |
| on windows       | False                |
| on cygwin        | False                |
| on msys2         | False                |
| is superuser     | False                |
| default encoding | utf-8                |
| xonsh encoding   | utf-8                |
| encoding errors  | surrogateescape      |
| xontrib          | []                   |
| RC file 1        | /home/ralis/.xonshrc |
+------------------+----------------------+
```

</details>

## Expected Behavior

<!--- Tell us what should happen -->
The warning should be either more subtle, or no completion suggestions should be shown.

## Current Behavior

<!--- Tell us what happens instead of the expected behavior -->
A huge multi-line error is printed.

<!--- If part of your bug report is a traceback, please first enter debug mode before triggering the error To enter debug mode, set the environment variable `XONSH_DEBUG=1` _before_ starting `xonsh`. On Linux and OSX, an easy way to do this is to run `env XONSH_DEBUG=1 xonsh` -->

### Traceback (if applicable)

<details>

```
Unhandled exception in event loop:
  File "/home/ralis/.local/lib/python3.8/site-packages/prompt_toolkit/buffer.py", line 1939, in new_coroutine
    await coroutine(*a, **kw)
  File "/home/ralis/.local/lib/python3.8/site-packages/prompt_toolkit/buffer.py", line 1763, in async_completer
    async for completion in async_generator:
  File "/home/ralis/.local/lib/python3.8/site-packages/prompt_toolkit/completion/base.py", line 326, in get_completions_async
    async for completion in completer.get_completions_async(
  File "/home/ralis/.local/lib/python3.8/site-packages/prompt_toolkit/completion/base.py", line 202, in get_completions_async
    for item in self.get_completions(document, complete_event):
  File "/usr/local/lib/python3.8/dist-packages/xonsh/ptk_shell/completer.py", line 58, in get_completions
    completions, plen = self.completer.complete(
  File "/usr/local/lib/python3.8/dist-packages/xonsh/completer.py", line 121, in complete
    return self.complete_from_context(
  File "/usr/local/lib/python3.8/dist-packages/xonsh/completer.py", line 272, in complete_from_context
    for comp in self.generate_completions(
  File "/usr/local/lib/python3.8/dist-packages/xonsh/completer.py", line 233, in generate_completions
    for comp in res:
  File "/usr/local/lib/python3.8/dist-packages/xonsh/completers/man.py", line 137, in completions
    for desc, opts in _parse_man_page_options(cmd).items():
  File "/usr/local/lib/python3.8/dist-packages/xonsh/completers/man.py", line 121, in _parse_man_page_options
    path.write_text(json.dumps(options))
  File "/usr/lib/python3.8/pathlib.py", line 1255, in write_text
    with self.open(mode='w', encoding=encoding, errors=errors) as f:
  File "/usr/lib/python3.8/pathlib.py", line 1222, in open
    return io.open(self, mode, buffering, encoding, errors, newline,
  File "/usr/lib/python3.8/pathlib.py", line 1078, in _opener
    return self._accessor.open(self, flags, mode)
Exception [Errno 13] Permission denied: '/usr/bin/ls.json'
```

</details>

## Steps to Reproduce

<!--- Please try to write out a minimal reproducible snippet to trigger the bug, it will help us fix it! -->

```xsh
$UPDATE_COMPLETIONS_ON_KEYPRESS = True
/usr/bin/ls -  # exception after typing
```

## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
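A hedged patch sketch for `_parse_man_page_options` in `xonsh/completers/man.py` (line numbers as in the traceback above): make the cache write best-effort, so an unwritable target directory degrades to "no cache" instead of raising.

```python
# inside _parse_man_page_options, replacing the bare write_text call
try:
    path.write_text(json.dumps(options))
except OSError:
    # e.g. [Errno 13] when `path` resolves next to a system binary;
    # completions still work for this session, only the cache is skipped
    pass
```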
closed
2023-04-24T20:26:34Z
2024-04-10T05:48:47Z
https://github.com/xonsh/xonsh/issues/5127
[ "good first issue", "completion", "priority-high" ]
ralisv
3
pallets-eco/flask-sqlalchemy
sqlalchemy
622
Very poor performance of Pagination.{pages,iter_pages}
The performance of those two methods is really, really bad when you have thousands of pages. In my case, with 100k pages, painting the paginator takes over 300 ms.

`Pagination.iter_pages` is slow by itself. `Pagination.pages` is not that slow, but it's called once for every page from `Pagination.iter_pages` when you consume the iterator, and calling it 100k times really adds up. Python is a very slow language, and calling the same method so many times is undesirable even if [the method seems simple](https://github.com/mitsuhiko/flask-sqlalchemy/blob/381653489af8ee8aa26340a7c3fe0a0dced64258/flask_sqlalchemy/__init__.py#L325-L332).

The low-hanging fruit here is caching the result of `Pagination.pages`. That fixes more than half of the problem (only the call to `Pagination.iter_pages` remains). I did so with this subclass, which might be helpful to someone. (I'm far from an expert in Python, so this may hurt your eyes.)

```python
class MyPagination(Pagination):
    def __init__(self, query, page, per_page, total, items):
        super().__init__(query, page, per_page, total, items)
        self._pages = super().pages

    @property
    def pages(self):
        return self._pages
```

But of course a better solution is desirable.
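On Python 3.8+ the same caching can be had with less ceremony via `functools.cached_property`, sketched here under the assumption that `Pagination` instances carry a regular `__dict__`:

```python
from functools import cached_property

from flask_sqlalchemy import Pagination


class CachedPagination(Pagination):
    @cached_property
    def pages(self):
        # computed once on first access, then memoized on the instance
        return super().pages
```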
closed
2018-05-16T14:33:47Z
2022-10-03T00:21:59Z
https://github.com/pallets-eco/flask-sqlalchemy/issues/622
[ "pagination" ]
wodim
2
fbdesignpro/sweetviz
data-visualization
138
How to show chinese in ASSOCIATIONS
Hi, I have just tried to use Sweetviz to do EDA, and I have a problem: the ASSOCIATIONS plot doesn't render Chinese text. Is there anything I can do to solve it? Thanks!
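One avenue worth trying, sketched on the assumption that the associations plot is rendered through matplotlib and that a CJK font such as SimHei is installed on the machine:

```python
import matplotlib

# register a CJK-capable font before generating the report
matplotlib.rcParams["font.sans-serif"] = ["SimHei"]  # assumption: font installed
matplotlib.rcParams["axes.unicode_minus"] = False

import sweetviz as sv

report = sv.analyze(df)  # df: your DataFrame with Chinese values
report.show_html("report.html")
```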
closed
2023-03-27T10:18:45Z
2023-08-20T12:23:13Z
https://github.com/fbdesignpro/sweetviz/issues/138
[]
nealcha
2
jupyterlab/jupyter-ai
jupyter
1,263
Enforce strict version matches between `jupyter-ai` and `jupyter-ai-magics`
### Problem We currently lack an automated way to enforce a strict version match between `jupyter-ai-magics` and `jupyter-ai` when installed via `pip`. Fixes in `jupyter-ai` sometimes require downstream changes in `jupyter-ai-magics`, so users may need to update both packages to receive a bug fix. However, this is not done automatically by `pip`, which is really confusing to the end user. This has caused several bugs: - #1172 - #1253 - At least one more. Please feel free to comment below if you find another. ### Proposed Solution - Update `bump-version.sh` to somehow bump the version pin of `jupyter-ai-magics`. - Assert this in the release process somehow to prevent broken releases if this script gets broken. ### Additional context We do enforce this in our Conda Forge releases because that process is manual anyways. However, we should be doing this for PyPI releases too.
closed
2025-02-26T19:08:53Z
2025-03-20T21:27:17Z
https://github.com/jupyterlab/jupyter-ai/issues/1263
[ "enhancement", "scope:settings", "scope:releaser" ]
dlqqq
3
floodsung/Deep-Learning-Papers-Reading-Roadmap
deep-learning
74
Paper [36] link broken
The link for the paper "Sequence to sequence learning with neural networks" [36] appears to be broken. Can you fix it? Thanks.
open
2017-10-16T20:56:00Z
2017-10-23T12:24:43Z
https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap/issues/74
[]
d-henderson
2
MilesCranmer/PySR
scikit-learn
448
[BUG]: Julia interface fails on conda environments
### What happened?

I get the following error with a fresh install in a fresh conda environment with pysr installed with pip.

```python
ImportError: cannot import name 'Main' from 'julia' (C:\Users\ilyao\miniforge3\envs\pysr_std\Lib\site-packages\julia\__init__.py)
```

### Version

0.16.3

### Operating System

Windows

### Package Manager

Other (specify below)

### Interface

IPython Terminal

### Relevant log output

```shell
>>> model.fit(X, y)
Compiling Julia backend...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\ilyao\miniforge3\envs\pysr_std\Lib\site-packages\pysr\sr.py", line 1970, in fit
    self._run(X, y, mutated_params, weights=weights, seed=seed)
  File "C:\Users\ilyao\miniforge3\envs\pysr_std\Lib\site-packages\pysr\sr.py", line 1625, in _run
    Main = init_julia(self.julia_project, julia_kwargs=julia_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ilyao\miniforge3\envs\pysr_std\Lib\site-packages\pysr\julia_helpers.py", line 216, in init_julia
    from julia import Main as _Main
ImportError: cannot import name 'Main' from 'julia' (C:\Users\ilyao\miniforge3\envs\pysr_std\Lib\site-packages\julia\__init__.py)
```

### Extra Info

My julia version is 1.9.3, installed with juliaup. In the fresh environment I can get pyjulia working, but only with the lower-level interface:

```python
import julia
julia.core.Julia()  # works

from julia import Main  # fails
```
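Two hedged things to try in the fresh conda env (both are standard PyJulia remedies rather than anything PySR-specific):

```python
import julia

# (re)install the PyCall bridge for this environment's Python
julia.install()

# conda's statically linked libpython usually also needs this flag
from julia.api import Julia
jl = Julia(compiled_modules=False)

from julia import Main  # should now import
```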
closed
2023-10-26T16:50:26Z
2023-10-30T23:54:54Z
https://github.com/MilesCranmer/PySR/issues/448
[ "bug" ]
IlyaOrson
12
deepfakes/faceswap
machine-learning
1,207
ImportError: numpy.core.multiarray failed to import
D:\faceswap-master>python faceswap.py extract -i D:\faceswap-master\src\han_li.mp4 -o D:\faceswap-master\faces Setting Faceswap backend to AMD No GPU detected. Switching to CPU mode 01/29/2022 22:23:02 INFO Log level set to: INFO 01/29/2022 22:23:02 WARNING No GPU detected. Switching to CPU mode 01/29/2022 22:23:02 INFO Switching backend to CPU. Using Tensorflow for CPU operations. RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd 01/29/2022 22:23:06 ERROR Got Exception on main handler: Traceback (most recent call last): File "D:\faceswap-master\lib\cli\launcher.py", line 180, in execute_script script = self._import_script() File "D:\faceswap-master\lib\cli\launcher.py", line 46, in _import_script module = import_module(mod) File "C:\Users\44626\AppData\Local\Programs\Python\Python37\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "D:\faceswap-master\scripts\extract.py", line 14, in <module> from scripts.fsmedia import Alignments, PostProcess, finalize File "D:\faceswap-master\scripts\fsmedia.py", line 18, in <module> from lib.face_filter import FaceFilter as FilterFunc File "D:\faceswap-master\lib\face_filter.py", line 7, in <module> from lib.vgg_face import VGGFace File "D:\faceswap-master\lib\vgg_face.py", line 15, in <module> from fastcluster import linkage File "C:\Users\44626\AppData\Local\Programs\Python\Python37\lib\site-packages\fastcluster.py", line 37, in <module> from _fastcluster import linkage_wrap, linkage_vector_wrap ImportError: numpy.core.multiarray failed to import 01/29/2022 22:23:06 CRITICAL An unexpected crash has occurred. Crash report written to 'D:\faceswap-master\crash_report.2022.01.29.222305067754.log'. You MUST provide this file if seeking assistance. 
Please verify you are running the latest version of faceswap before reporting ----------------------------------- win10 no gpu my pip list: absl-py 0.15.0 astunparse 1.6.3 cached-property 1.5.2 cachetools 4.2.4 certifi 2021.10.8 cffi 1.15.0 charset-normalizer 2.0.10 clang 5.0 colorama 0.4.4 cycler 0.11.0 enum34 1.1.10 fastcluster 1.2.4 ffmpy 0.2.3 flatbuffers 1.12 gast 0.3.3 google-auth 1.35.0 google-auth-oauthlib 0.4.6 google-pasta 0.2.0 grpcio 1.43.0 h5py 2.10.0 idna 3.3 imageio 2.14.1 imageio-ffmpeg 0.4.5 importlib-metadata 4.10.1 joblib 1.1.0 Keras 2.2.4 Keras-Applications 1.0.8 Keras-Preprocessing 1.1.2 kiwisolver 1.3.2 Markdown 3.3.6 matplotlib 3.2.2 numpy 1.19.4 nvidia-ml-py 11.495.46 oauthlib 3.1.1 opencv-python 4.5.5.62 opt-einsum 3.3.0 Pillow 9.0.0 pip 21.3.1 plaidml 0.7.0 plaidml-keras 0.7.0 protobuf 3.19.4 psutil 5.9.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycparser 2.21 pyparsing 3.0.7 python-dateutil 2.8.2 pywin32 303 PyYAML 6.0 requests 2.27.1 requests-oauthlib 1.3.0 rsa 4.8 scikit-learn 1.0.2 scipy 1.7.3 setuptools 60.5.0 six 1.15.0 tensorboard 2.2.2 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tensorflow 2.2.3 tensorflow-estimator 2.2.0 termcolor 1.1.0 threadpoolctl 3.0.0 tqdm 4.62.3 typing-extensions 3.7.4.3 urllib3 1.26.8 Werkzeug 2.0.2 wheel 0.37.1 wrapt 1.12.1 zipp 3.7.0
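A hedged reading of the log, added for context: "module compiled against API version 0xe but this version of numpy is 0xd" means a C extension (here `fastcluster`) was built against numpy >= 1.20 (C-API 0xe) while numpy 1.19.4 (C-API 0xd) is installed. A sketch of the usual remedies — the pip command is a suggestion, not faceswap's documented procedure:

```python
# Confirm the version skew behind the C-API mismatch.
import numpy

print(numpy.__version__)  # 1.19.4 -> C-API 0xd; C-API 0xe starts with numpy 1.20
# Either upgrade numpy, or rebuild fastcluster against the installed numpy:
#   pip install --force-reinstall --no-binary :all: fastcluster
```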
closed
2022-01-29T14:30:24Z
2022-05-15T01:22:24Z
https://github.com/deepfakes/faceswap/issues/1207
[]
Odimmsun
3
fastapi/sqlmodel
fastapi
21
How to make a "timestamp with time zone"?
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the SQLModel documentation, with the integrated search. - [X] I already searched in Google "How to X in SQLModel" and didn't find any information. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy). ### Commit to Help - [X] I commit to help with one of those options ๐Ÿ‘† ### Example Code ```python import datetime from typing import List, Optional import sqlmodel class AuthUser(sqlmodel.SQLModel, table=True): __tablename__ = 'auth_user' id: Optional[int] = sqlmodel.Field(default=None, primary_key=True) password: str = sqlmodel.Field(max_length=128) last_login: datetime.datetime ``` ### Description I'm trying to make the `last_login` field become a "timestamp with time zone" field in Postgres. With the above code, it is a "timestamp without time zone". ### Operating System Linux, macOS ### Operating System Details _No response_ ### SQLModel Version 0.0.3 ### Python Version Python 3.8.1 ### Additional Context _No response_
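A minimal sketch of the usual answer, assuming PostgreSQL and the SQLAlchemy layer underneath SQLModel: pass an explicit column with `DateTime(timezone=True)` through `sa_column`, which maps to `timestamp with time zone`:

```python
import datetime
from typing import Optional

import sqlmodel
from sqlalchemy import Column, DateTime

class AuthUser(sqlmodel.SQLModel, table=True):
    __tablename__ = "auth_user"

    id: Optional[int] = sqlmodel.Field(default=None, primary_key=True)
    password: str = sqlmodel.Field(max_length=128)
    # timezone=True makes the Postgres column "timestamp with time zone"
    last_login: datetime.datetime = sqlmodel.Field(
        sa_column=Column(DateTime(timezone=True))
    )
```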
closed
2021-08-25T21:32:00Z
2021-08-26T19:55:37Z
https://github.com/fastapi/sqlmodel/issues/21
[ "question" ]
typeshige
3
deepset-ai/haystack
pytorch
8,086
Remove references to the removed `DynamicPromptBuilder` and `DynamicChatPromptBuilder` components
Removed in https://github.com/deepset-ai/haystack/pull/8085. - [x] Docs - [x] Tutorials - [x] Cookbooks - [x] Integrations
closed
2024-07-25T14:15:50Z
2024-08-04T21:37:22Z
https://github.com/deepset-ai/haystack/issues/8086
[ "breaking change", "type:documentation", "P1", "2.x" ]
shadeMe
0
xonsh/xonsh
data-science
4,756
distribute wheel files with new releases
It will speed up install time during CI for dependent projects.
closed
2022-04-15T06:15:04Z
2022-05-10T15:39:44Z
https://github.com/xonsh/xonsh/issues/4756
[ "development" ]
jnoortheen
4
holoviz/panel
plotly
6,850
Interactivity tutorial returns an error when the first cell is run.
Page: https://panel.holoviz.org/tutorials/intermediate/interactivity.html Result of pressing Play on first cell: pyodide.ffi.JsException: NetworkError: Failed to execute 'send' on 'XMLHttpRequest': Failed to load 'https://assets.holoviz.org/panel/tutorials/turbines.csv.gz'. The file downloads fine when the link is entered in a browser. Content of first cell: import panel as pn import pandas as pd pn.extension("tabulator") data_url = 'https://assets.holoviz.org/panel/tutorials/turbines.csv.gz' turbines = pn.cache(pd.read_csv)(data_url) cols = pn.widgets.MultiChoice( options=turbines.columns.to_list(), value=['p_name', 't_state', 't_county', 'p_year', 't_manu', 'p_cap'], width=500, height=100, name='Columns' )
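A hedged diagnostic, not part of the tutorial: since the page executes in Pyodide, the failure is most likely a browser-level network/CORS error rather than a pandas problem, which would explain why the same URL downloads fine in a regular browser tab. One way to check from inside the Pyodide session (this snippet only runs under Pyodide):

```python
# If this raises too, the browser request itself (CORS/network) is failing,
# not pd.read_csv.
import pyodide.http

resp = pyodide.http.open_url(
    "https://assets.holoviz.org/panel/tutorials/turbines.csv.gz"
)
print(resp.read(200))
```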
closed
2024-05-18T02:45:25Z
2024-05-21T14:34:39Z
https://github.com/holoviz/panel/issues/6850
[ "type: docs", "more info needed" ]
Coderambling
3
public-apis/public-apis
api
4,094
Apis
open
2024-12-30T07:46:50Z
2025-02-10T16:13:33Z
https://github.com/public-apis/public-apis/issues/4094
[]
Bakoorii
2
jupyter-incubator/sparkmagic
jupyter
343
Can't Connect
Guessing these will turn out to be newbie issues but I've installed sparkmagic and can't get it to connect to Spark. I installed sparkmagic and followed the instructions to enable widgetsnbextension. My environment is: **Distro**: centos 7 **Python**: 3.6.0 ``` $ pip freeze | egrep "jupyter|sparkmagic" -e git+git@github.com:jupyter-incubator/sparkmagic.git@ef55831fea0a45686f844b4f0efb4d05ea658a26#egg=autovizwidget&subdirectory=autovizwidget -e git+git@github.com:jupyter-incubator/sparkmagic.git@ef55831fea0a45686f844b4f0efb4d05ea658a26#egg=hdijupyterutils&subdirectory=hdijupyterutils jupyter==1.0.0 jupyter-client==4.4.0 jupyter-console==5.0.0 jupyter-core==4.2.1 sparkmagic==0.11.2 ``` Running the following code produces the error shown: ``` %%configure -f {"executorMemory": "1000M", "executorCores": 4} Current session configs: {'executorMemory': '1000M', 'executorCores': 4, 'kind': 'pyspark'} --output---> An internal error was encountered. Please file an issue at https://github.com/jupyter-incubator/sparkmagic Error: '>=' not supported between instances of 'NoneType' and 'int' ``` and this, with a PySpark kernel, produces: ``` print ('hi') x = sc.parallelize ([ 0, 8, 232, 9 ]) --output---> The code failed because of a fatal error: '>=' not supported between instances of 'NoneType' and 'int'. Some things to try: a) Make sure Spark has enough available resources for Jupyter to create a Spark context. b) Contact your Jupyter administrator to make sure the Spark magics library is configured correctly. c) Restart the kernel. ``` Output at the command line is: ``` $ jupyter notebook --ip=0.0.0.0 [I 22:36:01.173 NotebookApp] Serving notebooks from local directory: /projects/stars/app/greendatatranslator/src/greentranslator [I 22:32:50.681 NotebookApp] 0 active kernels [I 22:32:50.681 NotebookApp] The Jupyter Notebook is running at: http://0.0.0.0:8888/?token=7e4ac4e27b0b1602ab3c9db94addde895e7b5876b1823a11 [I 22:32:50.682 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). [W 22:32:50.682 NotebookApp] No web browser found: could not locate runnable browser. [C 22:32:50.682 NotebookApp] Copy/paste this URL into your browser when you connect for the first time, to login with a token: http://0.0.0.0:8888/?token=7e4ac4e27b0b1602ab3c9db94addde895e7b5876b1823a11 [W 22:32:59.283 NotebookApp] 404 GET /api/kernels/b0d889f3-7939-4494-8e85-b4d7a8002174/channels?session_id=8A08DAF5C8EF4ECF96897C3E150F3DD5 (152.54.4.37): Kernel does not exist: b0d889f3-7939-4494-8e85-b4d7a8002174 [W 22:32:59.364 NotebookApp] 404 GET /api/kernels/b0d889f3-7939-4494-8e85-b4d7a8002174/channels?session_id=8A08DAF5C8EF4ECF96897C3E150F3DD5 (152.54.4.37) 104.07ms referer=None [W 22:33:16.252 NotebookApp] Replacing stale connection: b0d889f3-7939-4494-8e85-b4d7a8002174:8A08DAF5C8EF4ECF96897C3E150F3DD5 [I 22:33:21.725 NotebookApp] Kernel started: db52791d-0d69-407c-9bea-3dbd4310755b [I 22:35:22.239 NotebookApp] Saving file at /Untitled.ipynb ``` What am I missing?
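A hedged hint, not from the original thread: sparkmagic ships code to an Apache Livy server, and this particular `'>=' not supported between instances of 'NoneType' and 'int'` crash is consistent with no reachable Livy endpoint being configured, leaving a status field as `None`. A minimal sketch of writing `~/.sparkmagic/config.json` — the endpoint URL is an assumption, point it at your Livy server:

```python
import json
import pathlib

cfg = {
    "kernel_python_credentials": {
        "username": "",
        "password": "",
        "url": "http://localhost:8998",  # hypothetical Livy endpoint
    }
}
path = pathlib.Path.home() / ".sparkmagic" / "config.json"
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(cfg, indent=2))
```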
closed
2017-04-11T02:48:13Z
2017-04-22T00:34:37Z
https://github.com/jupyter-incubator/sparkmagic/issues/343
[]
stevencox
2
ultralytics/ultralytics
pytorch
19,424
Failed to export yolonas-m
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### Ultralytics YOLO Component _No response_ ### Bug I updated ultralytics to the latest version and tried the following: from ultralytics import NAS model = NAS('yolo_nas_s.pt') model.export(format='onnx') The export fails. ### Environment >>> from ultralytics import NAS >>> model = NAS('yolo_nas_s.pt') The console stream is logged into /home/memryx/sg_logs/console.log [2025-02-25 11:32:31] INFO - crash_tips_setup.py - Crash tips is enabled. You can set your environment variable to CRASH_HANDLER=FALSE to disable it Downloading https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo_nas_s.pt to 'yolo_nas_s.pt'... 100%|████████████████████| 83.3M/83.3M [00:03<00:00, 26.1MB/s] >>> model.export(format='onnx') python3.10/site-packages/torch/nn/modules/module.py", line 1928, in __getattr__ raise AttributeError( AttributeError: 'YoloNAS_S' object has no attribute 'args' ### Minimal Reproducible Example from ultralytics import NAS model = NAS('yolo_nas_s.pt') model.export(format='onnx') ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
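A hedged fallback sketch, not the official Ultralytics export path: bypass the wrapper and call `torch.onnx.export` on the underlying torch module directly. The `model.model` attribute and the 640x640 input size are assumptions:

```python
import torch
from ultralytics import NAS

model = NAS("yolo_nas_s.pt")
net = model.model.eval()             # assumed: the wrapped torch.nn.Module
dummy = torch.zeros(1, 3, 640, 640)  # assumed input resolution
torch.onnx.export(net, dummy, "yolo_nas_s.onnx", opset_version=12)
```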
closed
2025-02-25T16:33:49Z
2025-02-26T12:09:50Z
https://github.com/ultralytics/ultralytics/issues/19424
[ "bug", "fixed", "exports" ]
poppyzy
4
davidteather/TikTok-Api
api
807
Does the URL signature calculation use `_get_acrawler` inside `get_acrawler.py`?
I see a lot of garbled code in it, and the JS code doesn't run properly.
closed
2022-01-26T15:40:16Z
2023-08-08T22:21:34Z
https://github.com/davidteather/TikTok-Api/issues/807
[]
wuliao6688
5
dask/dask
pandas
11,123
New CI failure showing up in fsspec
I am seeing the following consistently in fsspec's downstream CI runs: ``` FAILED ../../../micromamba/envs/test_env/lib/python3.9/site-packages/dask/bytes/tests/test_s3.py::test_parquet_append[pyarrow] - AssertionError: DataFrame are different DataFrame shape mismatch [left]: (2000, 4) [right]: (1000, 4) ```
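For context, a hedged local sketch of what `test_parquet_append` exercises — the real test runs against s3, so the local path and column values below are assumptions: write a dataset, append the same rows once, and expect exactly double the row count.

```python
import dask.dataframe as dd
import pandas as pd

pdf = pd.DataFrame({"a": range(1000), "b": 1.0, "c": "x", "d": True})
ddf = dd.from_pandas(pdf, npartitions=4)

ddf.to_parquet("out.parquet", engine="pyarrow")
ddf.to_parquet("out.parquet", engine="pyarrow", append=True, ignore_divisions=True)

result = dd.read_parquet("out.parquet", engine="pyarrow").compute()
assert len(result) == 2000  # the CI mismatch (2000 vs 1000) sits on this check
```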
closed
2024-05-15T14:32:54Z
2024-05-29T21:27:34Z
https://github.com/dask/dask/issues/11123
[ "needs triage" ]
martindurant
5
deeppavlov/DeepPavlov
tensorflow
933
Refactor Bert Classifier to output int classes instead of numpy.Int64
I have a code with a custom config for Bert Paraphraser: ``` DATA_PATH = BASE_PATH + "data/" para_config = { "dataset_reader": { "class_name": "csv_reader", "data_path": DATA_PATH, "do_lower_case": False, "delimiter": ",", "train_ds_fname": "micro_train.csv", "valid_ds_fname": "micro_valid.csv", "test_ds_fname": "micro_test.csv" }, "dataset_iterator": { "class_name": "siamese_iterator", "seed": 243, "len_valid": 500 }, "chainer": { "in": ["text_a", "text_b"], "in_y": ["y"], "pipe": [ { "class_name": "bert_preprocessor", "vocab_file": "{DOWNLOADS_PATH}/bert_models/rubert_cased_L-12_H-768_A-12_v1/vocab.txt", "do_lower_case": False, "max_seq_length": 160, "in": ["text_a", "text_b"], "out": ["bert_features"] }, { "class_name": "bert_classifier", "n_classes": 2, "one_hot_labels": False, "bert_config_file": "{DOWNLOADS_PATH}/bert_models/rubert_cased_L-12_H-768_A-12_v1/bert_config.json", "pretrained_bert": "{DOWNLOADS_PATH}/bert_models/rubert_cased_L-12_H-768_A-12_v1/bert_model.ckpt", "save_path": "{MODELS_PATH}/paraphraser_rubert/model_rubert", "load_path": "{MODELS_PATH}/paraphraser_rubert/model_rubert", "keep_prob": 0.5, "optimizer": "tf.train:AdamOptimizer", "learning_rate": 2e-05, "learning_rate_drop_patience": 3, "learning_rate_drop_div": 2.0, "in": ["bert_features"], "in_y": ["y"], "out": ["predictions"] } ], "out": ["predictions"] }, "train": { "batch_size": 32, "train_metrics": ["acc"], "metrics": ["acc"], "validation_patience": 7, "val_every_n_batches": 2, "log_every_n_batches": 1, "tensorboard_log_dir": "{MODELS_PATH}/paraphraser_rubert/logs", "show_examples": True, }, "metadata": { "variables": { "ROOT_PATH": "~/.deeppavlov", "DOWNLOADS_PATH": "{ROOT_PATH}/downloads", "MODELS_PATH": "{ROOT_PATH}/models", "DATA_PATH": DATA_PATH }, "requirements": [ "{DEEPPAVLOV_PATH}/requirements/tf.txt", "{DEEPPAVLOV_PATH}/requirements/bert_dp.txt" ], "download": [ { "url": "http://files.deeppavlov.ai/deeppavlov_data/bert/rubert_cased_L-12_H-768_A-12_v1.tar.gz", "subdir": "{DOWNLOADS_PATH}/bert_models" }, { "url": "http://files.deeppavlov.ai/deeppavlov_data/classifiers/paraphraser_rubert_v0.tar.gz", "subdir": "{ROOT_PATH}/models" } ] } } paraphraser_model = train_model(para_config) ``` The problem with this code is that it fails with error: ``` Traceback (most recent call last): File "runparaphraser_train.py", line 100, in <module> paraphraser_model = train_model(para_config) File "/home/alx/Cloud/dns/.venv3/lib/python3.6/site-packages/deeppavlov-0.4.0-py3.6.egg/deeppavlov/__init__.py", line 31, in train_model train_evaluate_model_from_config(config, download=download, recursive=recursive) File "/home/alx/Cloud/dns/.venv3/lib/python3.6/site-packages/deeppavlov-0.4.0-py3.6.egg/deeppavlov/core/commands/train.py", line 121, in train_evaluate_model_from_config trainer.train(iterator) File "/home/alx/Cloud/dns/.venv3/lib/python3.6/site-packages/deeppavlov-0.4.0-py3.6.egg/deeppavlov/core/trainers/nn_trainer.py", line 294, in train self.train_on_batches(iterator) File "/home/alx/Cloud/dns/.venv3/lib/python3.6/site-packages/deeppavlov-0.4.0-py3.6.egg/deeppavlov/core/trainers/nn_trainer.py", line 234, in train_on_batches self._validate(iterator) File "/home/alx/Cloud/dns/.venv3/lib/python3.6/site-packages/deeppavlov-0.4.0-py3.6.egg/deeppavlov/core/trainers/nn_trainer.py", line 178, in _validate print(json.dumps(report, ensure_ascii=False)) File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps **kw).encode(obj) File "/usr/lib/python3.6/json/encoder.py", line 199, in encode chunks = self.iterencode(o, 
_one_shot=True) File "/usr/lib/python3.6/json/encoder.py", line 257, in iterencode return _iterencode(o, 0) File "/usr/lib/python3.6/json/encoder.py", line 180, in default o.__class__.__name__) TypeError: Object of type 'int64' is not JSON serializable ``` The code fails because BertClassifier returns numpy.int64 elements as predictions (https://github.com/deepmipt/DeepPavlov/blob/master/deeppavlov/models/bert/bert_classifier.py#L242), while in evaluation the code constructs a report as json and then prints it to the output. @mu-arkhipov helped me to resolve the problem in my case by adding the following lines to BertClassifierModel.__call__ just before return: ``` if pred.ndim == 1: pred = [int(p) for p in pred] ``` This resolves the problem for my case only. Could anybody offer a more general solution? @dilyararimovna
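A more general pattern — a sketch, not something DeepPavlov ships: serialise numpy scalars and arrays at the json boundary with a custom encoder, instead of casting inside each model's `__call__`:

```python
import json
import numpy as np

class NumpyEncoder(json.JSONEncoder):
    """json encoder that degrades numpy types to plain Python ones."""
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)

print(json.dumps({"pred": np.int64(3)}, cls=NumpyEncoder, ensure_ascii=False))
```

The trainer's `json.dumps(report, ensure_ascii=False)` call would then pass `cls=NumpyEncoder` rather than relying on every component to emit plain ints.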
closed
2019-07-19T12:11:07Z
2023-07-07T09:13:06Z
https://github.com/deeppavlov/DeepPavlov/issues/933
[ "enhancement" ]
acriptis
2
mckinsey/vizro
plotly
313
Rename docs pages that include `_` to use `-`
Google doesn't recognise underscores as word separators when it indexes pages. So if we have a page called `first_dashboard` then Google will report that as `firstdashboard` to its algorithm. (If we had `first-dashboard` then it would go into the mix as `first dashboard` which earns more google juice for the keywords "dashboard"). [More explanation here](https://www.woorank.com/en/blog/underscores-in-urls-why-are-they-not-recommended) As we are at an early stage with Vizro, we can make some changes (and use RTD redirects to ensure we don't break anyone's links) that set the docs up for success later. SEO doesn't seem that important but every little helps. ## Solution 1. Rename pages 2. Set up redirects in readthedocs to redirect to newly hyphenated pages for external users who have bookmarks, and blog posts we can't update etc. 3. Change all existing internal linking within the docs to the new page names
closed
2024-02-15T12:14:45Z
2024-02-21T09:56:45Z
https://github.com/mckinsey/vizro/issues/313
[ "Docs :spiral_notepad:" ]
stichbury
2
tqdm/tqdm
pandas
827
Change colorhints
I'd like to use tqdm to show progress for an iterative algorithm with a max number of iterations. Since being interrupted then implies convergence, I would want to switch colorhints to show green if interrupted and red if it reaches total. Is that possible?
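One possible approach — a sketch assuming tqdm >= 4.46, which added the `colour` option; there is no official "colorhints" API: set the bar colour when breaking out early.

```python
import random
from tqdm import tqdm

def converged() -> bool:
    return random.random() < 0.05  # stand-in for a real convergence test

max_iters = 100
with tqdm(total=max_iters) as bar:
    bar.colour = "red"             # assume we will run to the iteration limit
    for _ in range(max_iters):
        bar.update(1)
        if converged():
            bar.colour = "green"   # early stop == convergence
            break
```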
open
2019-10-23T06:18:54Z
2019-10-29T12:18:14Z
https://github.com/tqdm/tqdm/issues/827
[ "question/docs โ€ฝ", "p4-enhancement-future ๐Ÿงจ", "submodule-notebook ๐Ÿ““" ]
bdch1234
3
keras-team/keras
tensorflow
20,341
LSTM not supporting channels_first data_format
tensorflow version = 2.12 Keras version = 3.6 This code works well: ``` nbr_frame = 10 img_width = 180 img_height = 150 img_size = (img_height, img_width) input_shape = img_size + (3,) tf.keras.backend.set_image_data_format( 'channels_last' ) full_input_shape = (nbr_frame,) + input_shape print(full_input_shape) np.random.seed(1234) num_classes = 2 #vg19 = tf.keras.applications.vgg19.VGG19 #base_model = vg19(include_top=False,weights='imagenet',input_shape=(img_width, img_height,3)) base_model = tf.keras.applications.MobileNetV2( include_top=False, weights='imagenet', input_tensor=None, input_shape = input_shape, pooling=None, ) for layer in base_model.layers: layer.trainable = False base_model.summary() cnn = models.Sequential() cnn.add(base_model) cnn.add(layers.GlobalAveragePooling2D()) cnn.add(layers.Dropout(0.2)) base_model.trainable = False # define LSTM model model = models.Sequential() print(full_input_shape) model.add(layers.TimeDistributed(cnn, input_shape=full_input_shape)) model.add(layers.LSTM(nbr_frame, return_sequences=True)) model.add(layers.TimeDistributed(layers.Dense(nbr_frame, activation='relu'))) model.add(layers.Flatten()) model.add(layers.Dense(164, activation='relu', name="filter")) model.add(layers.Dropout(0.2)) model.add(layers.Dense(24, activation='sigmoid', name="filter2")) model.add(layers.Dropout(0.1)) model.add(layers.Dense(num_classes, activation="sigmoid", name="last")) rms = optimizers.RMSprop() metrics = [tf.keras.metrics.CategoricalAccuracy('accuracy', dtype=tf.float32)] loss = tf.keras.losses.CategoricalCrossentropy() model.compile( loss=loss, optimizer= rms, metrics=metrics ) model.summary() ``` But I am interested in changing the set_image_data_format to `channels_first` by modifying: ``` input_shape = (3,) + img_size tf.keras.backend.set_image_data_format( 'channels_first' ) ``` I am getting this error: ``` File ~/miniconda3/envs/ai/lib/python3.11/site-packages/keras/backend.py:6780, in bias_add(x, bias, data_format) 6778 if len(bias_shape) == 1: 6779 if data_format == "channels_first": -> 6780 return tf.nn.bias_add(x, bias, data_format="NCHW") 6781 return tf.nn.bias_add(x, bias, data_format="NHWC") 6782 if ndim(x) in (3, 4, 5): ValueError: Exception encountered when calling layer "lstm_9" (type LSTM). Shape must be at least rank 3 but is rank 2 for '{{node BiasAdd}} = BiasAdd[T=DT_FLOAT, data_format="NCHW"](add, bias)' with input shapes: [?,40], [40]. Call arguments received by layer "lstm_9" (type LSTM): • inputs=tf.Tensor(shape=(None, 10, 1280), dtype=float32) • mask=None • training=None • initial_state=None ```
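A hedged workaround sketch, not a confirmed fix: keep the pretrained backbone and the LSTM head in `channels_last`, and convert each incoming `channels_first` frame with a `Permute` layer instead of flipping the global `image_data_format` (which routes the LSTM bias through the `NCHW` `bias_add` path that rejects its rank-2 bias):

```python
# Reuses img_height, img_width, and base_model from the snippet above; assumes
# base_model keeps channels_last input_shape=(img_height, img_width, 3) and
# the global image_data_format stays 'channels_last'.
from tensorflow.keras import layers, models

cnn = models.Sequential()
cnn.add(layers.Permute((2, 3, 1), input_shape=(3, img_height, img_width)))  # CHW -> HWC
cnn.add(base_model)
cnn.add(layers.GlobalAveragePooling2D())
cnn.add(layers.Dropout(0.2))
```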
closed
2024-10-12T12:41:17Z
2024-10-12T12:52:20Z
https://github.com/keras-team/keras/issues/20341
[]
nassimus26
1
kymatio/kymatio
numpy
996
Need for new `test_data_1d.npz` post-#984
After discussion with @janden et al. today: the reference `test_data_1d.npz` needs to be regenerated for v0.4 following #984.
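For reference, a hedged sketch of how the file might be regenerated — the signal length, `J`, seed, and archive keys are all assumptions, not the repository's actual recipe:

```python
import numpy as np
from kymatio.numpy import Scattering1D

x = np.random.RandomState(42).randn(2**13).astype(np.float32)
S = Scattering1D(J=6, shape=x.shape[-1])
np.savez("test_data_1d.npz", x=x, Sx=S(x))  # key names are hypothetical
```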
closed
2023-02-16T17:40:45Z
2023-03-04T17:04:18Z
https://github.com/kymatio/kymatio/issues/996
[ "tests" ]
lostanlen
1