Dataset schema (column: type, observed range):

- repo_name: string (length 9–75)
- topic: string (30 classes)
- issue_number: int64 (1–203k)
- title: string (length 1–976)
- body: string (length 0–254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38–105)
- labels: sequence (length 0–9)
- user_login: string (length 1–39)
- comments_count: int64 (0–452)
huggingface/datasets
numpy
7,225
Huggingface GIT returns null as Content-Type instead of application/x-git-receive-pack-result
### Describe the bug

We push changes to our datasets programmatically. Our git client jGit reports that the hf git server returns null as Content-Type after a push.

### Steps to reproduce the bug

A basic kotlin application:

```
val person = PersonIdent("padmalcom", "padmalcom@sth.com")
val cp = UsernamePasswordCredentialsProvider("padmalcom", "mysecrettoken")

val git = KGit.cloneRepository {
    setURI("https://huggingface.co/datasets/sth/images")
    setTimeout(60)
    setProgressMonitor(TextProgressMonitor())
    setCredentialsProvider(cp)
}

FileOutputStream("./images/images.csv").apply { writeCsv(images) }

git.add { addFilepattern("images.csv") }

for (i in images) {
    FileUtils.copyFile(
        File("./files/${i.id}"),
        File("./images/${i.id + File(i.fileName).extension}")
    )
    git.add { addFilepattern("${i.id + File(i.fileName).extension}") }
}

val revCommit = git.commit {
    author = person
    message = "Uploading images at " + LocalDateTime.now().format(DateTimeFormatter.ISO_DATE_TIME)
    setCredentialsProvider(cp)
}

val push = git.push { setCredentialsProvider(cp) }
```

### Expected behavior

The git server is expected to return the Content-Type _application/x-git-receive-pack-result_.

### Environment info

It is independent from the datasets library.
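For anyone wanting to observe the header outside of jGit, here is a minimal Python sketch; the repo URL and credentials are the placeholders from above, and the endpoint path follows the git smart-HTTP protocol:

```python
import requests

# Placeholder credentials and URL; substitute a real repo and access token.
repo = "https://huggingface.co/datasets/sth/images"
auth = ("padmalcom", "mysecrettoken")

# Per the git smart-HTTP protocol, the push ref advertisement is served at
# info/refs?service=git-receive-pack; the push itself POSTs to /git-receive-pack.
r = requests.get(f"{repo}/info/refs", params={"service": "git-receive-pack"}, auth=auth)
print(r.status_code, r.headers.get("Content-Type"))
# A conforming server answers application/x-git-receive-pack-advertisement here,
# and application/x-git-receive-pack-result for the POST that follows.
```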
open
2024-10-14T14:33:06Z
2024-10-14T14:33:06Z
https://github.com/huggingface/datasets/issues/7225
[]
padmalcom
0
serengil/deepface
machine-learning
904
About an RGB format question of load_image(img)
Hi, I am greatly inspired by your work, but one thing confuses me. In the function load_image(img) in deepface/commons/functions.py, I found that an image loaded from a URL is converted to RGB (something like .convert("RGB")). However, for an image encoded as base64 or given as a file path, I found nothing related to RGB conversion. To avoid getting the face in BGR format, should something like "img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)" be added for the base64 and file-path branches? Thanks in advance
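A minimal sketch of the conversion the question proposes (illustrative, not necessarily how deepface resolved it; OpenCV's imread/imdecode return BGR arrays):

```python
import base64

import cv2
import numpy as np


def load_image_rgb(src: str) -> np.ndarray:
    """Load an image from a file path or a base64 string and return it as RGB."""
    if src.startswith("data:") or len(src) > 500:  # crude base64 heuristic for the sketch
        payload = src.split(",")[-1]
        buf = np.frombuffer(base64.b64decode(payload), dtype=np.uint8)
        img = cv2.imdecode(buf, cv2.IMREAD_COLOR)  # decoded as BGR
    else:
        img = cv2.imread(src)  # OpenCV reads files as BGR
    return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
```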
closed
2023-12-05T04:12:30Z
2023-12-08T22:23:08Z
https://github.com/serengil/deepface/issues/904
[ "enhancement" ]
nigue3025
2
ipython/ipython
data-science
14,320
The `metadata` link points to [`importlib.metadata`](https://docs.python.org/3/library/importlib.metadata.html#metadata) which is a totally different thing right? (or am I just very confused!?)
The `metadata` link points to [`importlib.metadata`](https://docs.python.org/3/library/importlib.metadata.html#metadata), which seems to be wrong.

https://github.com/ipython/ipython/blob/8b1204b6c4489c786ada294088a3acef7bd53e90/docs/source/config/integrating.rst?plain=1#L134

https://ipython.readthedocs.io/en/stable/config/integrating.html#more-powerful-methods

_Originally posted by @dhirschfeld in https://github.com/ipython/ipython/issues/14319#issuecomment-1925501991_
open
2024-02-04T11:19:34Z
2024-02-04T11:19:34Z
https://github.com/ipython/ipython/issues/14320
[]
Carreau
0
nvbn/thefuck
python
1,424
Unhandled `apt` "Packages were downgraded and -y was used without --allow-downgrades" error
The output of `thefuck --version` (something like `The Fuck 3.1 using Python 3.5.0 and Bash 4.4.12(1)-release`):

The Fuck 3.29 using Python 3.11.4 and Bash 5.2.15(1)-release

Your system (Debian 7, ArchLinux, Windows, etc.):

Ubuntu Unity 23.04

How to reproduce the bug:

- Run `sudo apt upgrade -y`;
- Get the "Packages were downgraded and -y was used without --allow-downgrades" error;
- Run `fuck`;
- Expect `sudo apt upgrade -y --allow-downgrades` but get `sudo apt upgrade`.

The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):

I ran the expected command manually before going here so I can't try it anymore, sorry.

If the bug only appears with a specific application, the output of that application and its version:

apt 2.6.0ubuntu0.1 (amd64)

Anything else you think is relevant:

I'd also like to suggest using headings for questions in order not to need code blocks for replies, so that formatting would be possible. Thanks
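As context, a hedged sketch of what a rule covering this case might look like, following thefuck's match/get_new_command rule convention (`command.script` and `command.output` are the rule API's attributes; this is illustrative, not the project's actual fix):

```python
# Sketch of a thefuck-style rule module: rules expose match() and get_new_command().

def match(command):
    # Trigger only for apt commands that hit the downgrade guard.
    return (
        command.script.startswith("sudo apt")
        and "-y was used without --allow-downgrades" in command.output
    )


def get_new_command(command):
    # Re-run the same command with the missing flag appended.
    return command.script + " --allow-downgrades"
```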
open
2023-12-01T15:37:56Z
2023-12-01T15:37:56Z
https://github.com/nvbn/thefuck/issues/1424
[]
KaKi87
0
pyg-team/pytorch_geometric
deep-learning
9,138
Metapath2vec index out of bounds error
### 🐛 Describe the bug

Hi, I got `IndexError: index 1 is out of bounds for dimension 0 with size 1` when running the code below:

```
num_nodes_dict = {'a': 2, 'b': 2}
edge_index_dict = {
    ('a', 'to', 'b'): torch.tensor([[0], [0]]),
    ('b', 'to', 'a'): torch.tensor([[0], [0]]),
}
metapath = [('a', 'to', 'b'), ('b', 'to', 'a')]

model = MetaPath2Vec(
    edge_index_dict,
    embedding_dim=16,
    metapath=metapath,
    walk_length=2,
    context_size=2,
    walks_per_node=1,
    num_negative_samples=1,
    num_nodes_dict=num_nodes_dict,
)

loader = model.loader(batch_size=16, shuffle=True)
next(iter(loader))
```

### Versions

Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.5
[pip3] torch==2.1.2+cu121
[pip3] torch_geometric==2.5.2
[pip3] torcheval==0.0.7
[pip3] torchvision==0.16.2+cu121
[pip3] triton==2.1.0
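One plausible trigger, stated as an assumption rather than a confirmed diagnosis: `num_nodes_dict` declares two nodes per type while the edges only ever reference node 0, so a sampled walk that lands on node 1 finds no outgoing edge to index. A sketch where every declared node has an outgoing edge:

```python
import torch
from torch_geometric.nn import MetaPath2Vec

num_nodes_dict = {'a': 2, 'b': 2}
# Every node of each type now has at least one outgoing edge.
edge_index_dict = {
    ('a', 'to', 'b'): torch.tensor([[0, 1], [0, 1]]),
    ('b', 'to', 'a'): torch.tensor([[0, 1], [0, 1]]),
}
metapath = [('a', 'to', 'b'), ('b', 'to', 'a')]

model = MetaPath2Vec(
    edge_index_dict, embedding_dim=16, metapath=metapath,
    walk_length=2, context_size=2, walks_per_node=1,
    num_negative_samples=1, num_nodes_dict=num_nodes_dict,
)
print(next(iter(model.loader(batch_size=16, shuffle=True))))
```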
closed
2024-04-01T22:57:39Z
2024-04-03T16:29:01Z
https://github.com/pyg-team/pytorch_geometric/issues/9138
[ "bug" ]
qinglong233
1
litl/backoff
asyncio
149
Use monotonic time
Backoff uses `timedelta.total_seconds(datetime.datetime.now() - start)` to compute the `elapsed` time, which is used to figure out if `max_time` has been reached. This is flaky and doesn't work well when the system clock is being updated, for example when adjusting time from the network or for daylight saving time. Instead, `monotonic` time should be used, using for example https://docs.python.org/3/library/time.html#time.monotonic
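A minimal sketch of the suggested change (illustrative only, not backoff's actual code):

```python
import time

# Wall-clock version (flaky under NTP adjustments and DST changes):
#   start = datetime.datetime.now()
#   elapsed = (datetime.datetime.now() - start).total_seconds()

# Monotonic version: the clock only moves forward and ignores system clock changes.
start = time.monotonic()
# ... do the work being retried ...
elapsed = time.monotonic() - start

max_time = 30.0
if elapsed >= max_time:
    print(f"giving up after {elapsed:.1f}s")
```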
open
2022-01-26T04:39:11Z
2022-01-26T04:39:11Z
https://github.com/litl/backoff/issues/149
[]
wapiflapi
0
tqdm/tqdm
jupyter
1,103
AttributeError: 'tqdm' object has no attribute 'sp'
- [x] I have marked all applicable categories:
  + [x] exception-raising bug
  + [ ] visual output bug
  + [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
  + [ ] new feature request
- [x] I have visited the [source website], and in particular read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and environment, where applicable:

```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```

---

Hi, I'm getting the following error: `AttributeError: 'tqdm' object has no attribute 'sp'`

I realise that this has been reported before but wanted to open an extra issue since the other issues are about `tqdm_notebook`. (See #944 for example)

The full error:

```python
Exception ignored in: <function tqdm.__del__ at 0x7f9fe1a750d0>
Traceback (most recent call last):
  File "/Users/username/opt/anaconda3/envs/env-name/lib/python3.8/site-packages/tqdm/std.py", line 1124, in __del__
    self.close()
  File "/Users/username/opt/anaconda3/envs/env-name/lib/python3.8/site-packages/tqdm/notebook.py", line 271, in close
    self.sp(bar_style='danger')
AttributeError: 'tqdm' object has no attribute 'sp'
```

I'm not using tqdm directly but as a part of https://github.com/flairNLP/flair

The code is:

```python
from flair.data import Sentence
from flair.models import TextClassifier

def _get_sentiment(text):
    classifier = TextClassifier.load("sentiment")
    sentence = Sentence(text)
    classifier.predict(sentence)
    result = sentence.to_dict()
    sentiment = result.get("labels")
    return sentiment
```

My environment is:

```python
4.54.1 3.8.5 (default, Sep  4 2020, 02:22:02) [Clang 10.0.0 ] darwin
```

I'm using [Spyder](https://github.com/spyder-ide/spyder) and no idea why, but the error doesn't happen all the time. If there is anything else you need to know, just let me know!

[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
closed
2020-12-25T14:41:03Z
2020-12-26T16:42:11Z
https://github.com/tqdm/tqdm/issues/1103
[ "duplicate 🗐", "p2-bug-warning ⚠", "submodule-notebook 📓" ]
BastianZim
5
fastapi/sqlmodel
pydantic
150
How to create computed columns ?
### First Check

- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).

### Commit to Help

- [X] I commit to help with one of those options 👆

### Example Code

```python
class Album(SQLModel):
    title: str
    slug: str
    description: str

# what I've tried but doesn't work, e.g.:
# slug: str = column_property(slugify(title))
# E NameError: name 'title' is not defined

class Album(SQLModel):
    title: str
    slug: str = column_property(slugify(title))
    description: str
```

### Description

I'm trying to generate a column named "slug" that is a slugified version of "title", so it should be persisted in the database and be updated when the title changes. But so far no luck; I've looked at column_property but didn't manage to make it work with SQLModel. I saw that there are "before update" events... but I don't think this is the way to go.

### Operating System

Windows

### Operating System Details

_No response_

### SQLModel Version

0.0.4

### Python Version

3.9.2

### Additional Context

_No response_
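One hedged way to keep such a column in sync is a SQLAlchemy event listener, which works on SQLModel table models since they are SQLAlchemy models underneath (`make_slug` is a stand-in for a real slugifier; this is a sketch, not an official SQLModel recipe):

```python
from typing import Optional

from sqlalchemy import event
from sqlmodel import Field, SQLModel


def make_slug(text: str) -> str:
    # Stand-in slugifier for the sketch; swap in your real one.
    return text.lower().strip().replace(" ", "-")


class Album(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    title: str
    slug: str = ""
    description: str = ""


@event.listens_for(Album, "before_insert")
@event.listens_for(Album, "before_update")
def _sync_slug(mapper, connection, target: Album) -> None:
    # Recompute the slug whenever the row is written, so it follows the title.
    target.slug = make_slug(target.title)
```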
open
2021-10-30T11:35:51Z
2024-02-14T13:28:18Z
https://github.com/fastapi/sqlmodel/issues/150
[ "question" ]
sorasful
12
mirumee/ariadne
graphql
588
Documenting what websocket sub-protocol is used for GraphQL subscriptions
There seem to be multiple sub-protocols for GraphQL over WebSockets: the new(ish?) [graphql-ws](https://github.com/apollographql/subscriptions-transport-ws/blob/master/PROTOCOL.md) protocol and the (not actively maintained) [subscriptions-transport-ws](https://github.com/apollographql/subscriptions-transport-ws/blob/master/PROTOCOL.md) protocol. It looks like the second protocol is used; it would be great if that could be documented.
closed
2021-05-27T13:58:37Z
2022-04-12T14:17:47Z
https://github.com/mirumee/ariadne/issues/588
[ "docs" ]
bartenra
5
numpy/numpy
numpy
28,239
TYP: `union1d` outputs `dtype[floating[_64Bit]]` which is incompatible with `dtype[float64]]`
### Describe the issue:

While type checking, `numpy.union1d` outputs an array of type `ndarray[tuple[int, ...], dtype[floating[_64Bit]]]` even if both the arguments are of the type `ndarray[tuple[int, ...], dtype[float64]]` -- failing mypy checks.

### Reproduce the code example:

```python
import numpy.typing as npt
import numpy as np

def do_something(a: npt.NDArray[np.float64], b: npt.NDArray[np.float64]) -> npt.NDArray[np.float64]:
    return np.union1d(a, b)
```

### Error message:

```shell
% mypy mypy_rep.py
mypy_rep.py:6: error: Incompatible return value type (got "ndarray[tuple[int, ...], dtype[floating[_64Bit]]]", expected "ndarray[tuple[int, ...], dtype[float64]]")  [return-value]
Found 1 error in 1 file (checked 1 source file)
```

### Python and NumPy Versions:

2.2.2
3.13.1 (main, Dec  3 2024, 17:59:52) [Clang 16.0.0 (clang-1600.0.26.4)]

---

mypy 1.14.1 (compiled: yes)
No extra configuration

### Runtime Environment:

[{'numpy_version': '2.2.2',
  'python': '3.13.1 (main, Dec  3 2024, 17:59:52) [Clang 16.0.0 (clang-1600.0.26.4)]',
  'uname': uname_result(system='Darwin', node='Saranshs-MacBook-Pro.local', release='24.2.0', version='Darwin Kernel Version 24.2.0: Fri Dec  6 19:02:41 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6030', machine='arm64')},
 {'simd_extensions': {'baseline': ['NEON', 'NEON_FP16', 'NEON_VFPV4', 'ASIMD'],
                      'found': ['ASIMDHP'],
                      'not_found': ['ASIMDFHM']}}]

### Context for the issue:

_No response_
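A hedged local workaround until the stubs agree: `typing.cast` reasserts the element type for mypy without any runtime effect (`np.float64` subclasses `floating[_64Bit]`, so this is type-level noise rather than a numeric problem):

```python
from typing import cast

import numpy as np
import numpy.typing as npt


def do_something(a: npt.NDArray[np.float64], b: npt.NDArray[np.float64]) -> npt.NDArray[np.float64]:
    # cast() is a no-op at runtime; it only reasserts the element type for mypy.
    return cast(npt.NDArray[np.float64], np.union1d(a, b))
```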
closed
2025-01-28T15:22:48Z
2025-01-28T19:12:27Z
https://github.com/numpy/numpy/issues/28239
[ "00 - Bug", "41 - Static typing" ]
Saransh-cpp
4
erdewit/ib_insync
asyncio
150
Error handling fields when calling qualifyContracts
I recently did an update of the library and am now getting an error when running this code:

```
contractStock = Stock('AAPL', 'SMART', 'USD')
ib.qualifyContracts(contractStock)
```

Throws this:

```
Error handling fields: ['10', '8', '170032', 'AAPL', 'STK', '', '0', '', 'SMART', 'USD', 'AAPL', 'NMS', 'NMS', '265598', '0.01', '100', '', 'ACTIVETIM,ADJUST,ALERT,ALGO,ALLOC,AVGCOST,BASKET,BENCHPX,COND,CONDORDER,DARKONLY,DARKPOLL,DAY,DEACT,DEACTDIS,DEACTEOD,DIS,GAT,GTC,GTD,GTT,HID,IBKRATS,ICE,IMB,IOC,LIT,LMT,LOC,MIT,MKT,MOC,MTL,NGCOMB,NODARK,NONALGO,OCA,OPG,OPGREROUT,PEGBENCH,POSTONLY,PREOPGRTH,REL,RPI,RTH,SCALE,SCALEODD,SCALERST,SNAPMID,SNAPMKT,SNAPREL,STP,STPLMT,SWEEP,TRAIL,TRAILLIT,TRAILLMT,TRAILMIT,WHATIF', 'SMART,AMEX,NYSE,CBOE,PHLX,ISE,CHX,ARCA,ISLAND,DRCTEDGE,BEX,BATS,EDGEA,CSFBALGO,JEFFALGO,BYX,IEX,FOXRIVER,TPLUS1,NYSENAT,PSX', '1', '0', 'APPLE INC', 'NASDAQ', '', 'Technology', 'Computers', 'Computers', 'EST5EDT', '20190530:0400-2000;20190531:0400-2000', '20190530:0930-1600;20190531:0930-1600', '', '', '0']
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/ib_insync/decoder.py", line 188, in interpret
    handler(fields)
  File "/usr/local/lib/python3.6/site-packages/ib_insync/decoder.py", line 293, in contractDetails
    cd.realExpirationDate) = fields
ValueError: not enough values to unpack (expected 5, got 0)
Unknown contract: Stock(symbol='AAPL', exchange='SMART', currency='USD')
```
closed
2019-05-30T20:15:11Z
2019-05-30T21:04:06Z
https://github.com/erdewit/ib_insync/issues/150
[]
Laurentvw
2
plotly/dash-table
plotly
96
dropdown display incorrect positioning
the dropdown menu appears below where it should be: ![screen shot 2018-09-14 at 3 17 04 pm](https://user-images.githubusercontent.com/2789078/45570714-a969ed80-b831-11e8-8db0-b8a9c9bc6ed8.png) (note: not seeing the same behaviour in the local example) cc @Marc-Andre-Rivet @wbrgss
closed
2018-09-14T19:21:43Z
2018-09-19T02:48:19Z
https://github.com/plotly/dash-table/issues/96
[]
cldougl
5
plotly/dash-table
plotly
404
dash-table: for the table shown on the dash website, can we drag a cell value to overwrite values in other cells horizontally or vertically?
For the current version of data_table, like the table shown on the dash website, could we drag the value in a certain cell to overwrite the values in other cells horizontally or vertically? (I only know that in the 'dash_table_experiments' version, the value in a cell can be dragged vertically to overwrite the values of cells in the same column.) Could anyone help me with this question? Thank you in advance!
open
2019-03-29T20:01:03Z
2020-03-25T14:16:05Z
https://github.com/plotly/dash-table/issues/404
[ "dash-type-question" ]
WinnieFan-Xinxin
4
aimhubio/aim
tensorflow
2,977
More info in `aim runs ls`
## 🚀 Feature

It would be pretty nice to have more info than just run hashes when running `aim runs ls`.

### Motivation

In particular, this would allow me to write scripts which could remove aim runs as well as some data I store manually for convenience reasons.

### Pitch

It would be really nice to have something like a `--format` option for the CLI, where the format string would be filled with the run name or experiment, for example.
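A hedged interim sketch using aim's Python SDK instead of the CLI (this assumes the `Repo.iter_runs()` API and the `hash`/`name`/`experiment` run attributes; names may differ across aim versions):

```python
from aim import Repo

# Path to the directory containing the .aim repository (placeholder).
repo = Repo("/path/to/repo")

for run in repo.iter_runs():
    # Emulates something like: aim runs ls --format "{hash} {experiment} {name}"
    print(run.hash, run.experiment, run.name)
```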
open
2023-09-08T08:38:54Z
2023-09-08T12:36:54Z
https://github.com/aimhubio/aim/issues/2977
[ "type / enhancement" ]
Castavo
2
serengil/deepface
machine-learning
995
Using service.verify() on the same picture will result in different distances
I want to handle multiple photos in the same HTTP request. For example, upload 5 photos (img1, img2, img3, img4, img5) and compare img2, img3, img4, img5 with img1 respectively. I modified the routes.py code so that it can do this in a thread pool. However, using multi-threading causes the distance for some pictures to be unstable. For example, I tested the two pictures in the screenshot twice, but the results were inconsistent; the results for the other four and three pictures were consistent.

<img width="1918" alt="verify_result" src="https://github.com/serengil/deepface/assets/55915175/c164302b-17f8-48d4-a6ee-682771e4cab5">

Below is my modified code.

```python
import json
from flask import Blueprint, request
import service
from concurrent.futures import ThreadPoolExecutor
import logging
from logging.handlers import TimedRotatingFileHandler

log_path = "/home/shared/log/deepface_verify.log"
logger = logging.getLogger(__name__)
handler = TimedRotatingFileHandler(log_path, when='midnight', interval=1, backupCount=14)
handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S'))
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

blueprint = Blueprint("routes", __name__)
pool = ThreadPoolExecutor(max_workers=2)
ret_format = '{"code": %d, "message": "%s", "data": %s}'


@blueprint.route("/")
def home():
    return "<h1>Welcome to DeepFace API!</h1>"


@blueprint.route("/represent", methods=["POST"])
def represent():
    input_args = request.get_json()
    if input_args is None:
        return {"message": "empty input set passed"}

    img_path = input_args.get("img")
    if img_path is None:
        return {"message": "you must pass img_path input"}

    model_name = input_args.get("model_name", "VGG-Face")
    detector_backend = input_args.get("detector_backend", "opencv")
    enforce_detection = input_args.get("enforce_detection", True)
    align = input_args.get("align", True)

    obj = service.represent(
        img_path=img_path,
        model_name=model_name,
        detector_backend=detector_backend,
        enforce_detection=enforce_detection,
        align=align,
    )
    return obj


def verify_single_image(img1_path, img2_path, model_name, detector_backend, distance_metric, enforce_detection, align, threshold):
    result = 0
    try:
        res = service.verify(
            img1_path=img1_path,
            img2_path=img2_path,
            model_name=model_name,
            detector_backend=detector_backend,
            distance_metric=distance_metric,
            align=align,
            enforce_detection=enforce_detection,
        )
        distance = res['distance']
        logger.info(f'verify res:{res}')
        logger.info(f'{img2_path} distance={distance}')
        result = 1 if distance < threshold else 0
    except Exception:
        logger.error(f"exception raised.{img2_path}")
    return result


@blueprint.route("/verify", methods=["POST"])
def verify():
    input_args = request.get_json()
    if input_args is None:
        return {"message": "empty input set passed"}

    img1_path = input_args.get("img1_path")
    img2_paths = input_args.get("img2_paths")
    if img1_path is None:
        return {"message": "you must pass img1_path input"}
    if img2_paths is None:
        return {"message": "you must pass img2_path input"}

    model_name = input_args.get("model_name", "ArcFace")
    detector_backend = input_args.get("detector_backend", "yolov8")
    enforce_detection = input_args.get("enforce_detection", True)
    distance_metric = input_args.get("distance_metric", "euclidean_l2")
    align = input_args.get("align", True)
    threshold = input_args.get("threshold", 1.2)

    image_result = {}
    image_future = {}
    for img in img2_paths:
        image_future[img] = pool.submit(
            verify_single_image,
            img1_path=img1_path,
            img2_path=img,
            model_name=model_name,
            detector_backend=detector_backend,
            distance_metric=distance_metric,
            align=align,
            enforce_detection=enforce_detection,
            threshold=threshold,
        )
    for key, value in image_future.items():
        image_result[key] = value.result()

    img_res_json = json.dumps(image_result)
    logger.error(f'result: {img_res_json}')
    return ret_format % (0, 'success', img_res_json)


@blueprint.route("/analyze", methods=["POST"])
def analyze():
    input_args = request.get_json()
    if input_args is None:
        return {"message": "empty input set passed"}

    img_path = input_args.get("img_path")
    if img_path is None:
        return {"message": "you must pass img_path input"}

    detector_backend = input_args.get("detector_backend", "opencv")
    enforce_detection = input_args.get("enforce_detection", True)
    align = input_args.get("align", True)
    actions = input_args.get("actions", ["age", "gender", "emotion", "race"])

    demographies = service.analyze(
        img_path=img_path,
        actions=actions,
        detector_backend=detector_backend,
        enforce_detection=enforce_detection,
        align=align,
    )
    return demographies
```
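If the underlying model inference is not thread-safe (an assumption; shared TensorFlow/Keras state often is not), a conservative sketch is to serialize the verify calls with a lock, trading throughput for stable results:

```python
import threading

verify_lock = threading.Lock()


def verify_single_image_locked(*args, **kwargs):
    # Only one thread runs the model at a time, so results stay deterministic;
    # wraps the verify_single_image defined above.
    with verify_lock:
        return verify_single_image(*args, **kwargs)
```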
closed
2024-02-01T10:11:08Z
2024-02-25T14:38:48Z
https://github.com/serengil/deepface/issues/995
[ "dependencies" ]
helloliu01
4
igorbenav/fastcrud
pydantic
101
Cannot use get or get_multi filter with like string
Hello, I'm currently utilizing FastCRUD with the [FastAPI boilerplate](https://github.com/igorbenav/FastAPI-boilerplate), but I'm unsure how to implement a filter to retrieve a string from a field similar to the SQL 'LIKE' query, as shown below:

`count_projects = await crud_projects.count(db=db, project_code=f"{short_code}%")`

I need to count the projects whose project_code begins with a specific short_code (for example, "ABC"). Could you assist me in resolving this issue? Thank you!
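A hedged workaround sketch while no such filter exists in FastCRUD: drop to plain SQLAlchemy for the one query (the model and column are passed in as placeholders for whatever the boilerplate defines):

```python
from sqlalchemy import func, select
from sqlalchemy.ext.asyncio import AsyncSession


async def count_with_prefix(db: AsyncSession, model, column, short_code: str) -> int:
    # Plain SQLAlchemy LIKE with a trailing wildcard, e.g. 'ABC%'.
    stmt = select(func.count()).select_from(model).where(column.like(f"{short_code}%"))
    return (await db.execute(stmt)).scalar_one()

# Hypothetical usage, assuming a Project model with a project_code column:
# count = await count_with_prefix(db, Project, Project.project_code, short_code)
```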
closed
2024-06-04T09:49:56Z
2024-06-11T02:24:23Z
https://github.com/igorbenav/fastcrud/issues/101
[ "enhancement", "FastCRUD Methods" ]
Justinianus2001
4
coleifer/sqlite-web
flask
108
Feature Request - Query History
Firstly, thanks for the time you have put into this; I am using this on Home Assistant. It would be REALLY nice if the query box could remember its queries and save them into local storage, so that you can use the up/down keys to select previously entered queries. On Home Assistant the page refreshes after a while, and any query you have entered is removed, so I have to recreate it again, which can be quite frustrating. Adding the ability to use the up and down keys to navigate and select previous queries would, for me, be a huge time-saver. Thanks again
closed
2022-12-09T02:16:14Z
2022-12-09T13:29:15Z
https://github.com/coleifer/sqlite-web/issues/108
[]
diginfo
1
freqtrade/freqtrade
python
10,968
Advanced orderflow data history problem
* Operating system: Windows 10
* Python Version: Python 3.12.7
* Freqtrade Version: 2024.10

I'm trying to backtest an advanced orderflow strategy and I encounter a problem where freqtrade cannot find the history data, even though I have it downloaded and the bot recognizes it via `list-data --show --trades`. I have downloaded the trades for the specific pair using the `--dl-trades` flag, I have `"use_public_trades": true` in my config, and I also added everything that is recommended for orderflow, such as cache size etc. Backtesting works just fine on a normal history download. Please see the log below. Is there something I'm missing?

```
PS C:\Freqtrade\ft_userdata> docker-compose run --rm freqtrade backtesting --timerange 20241118-20241119 --strategy SampleStrategy --config user_data/config.json --export none --cache none --timeframe 5m
[+] Creating 1/0
 ✔ Network ft_userdata_default  Created  0.0s
2024-11-21 14:46:34,415 - freqtrade - INFO - freqtrade 2024.10
2024-11-21 14:46:35,078 - numexpr.utils - INFO - NumExpr defaulting to 8 threads.
2024-11-21 14:46:38,017 - freqtrade.configuration.load_config - INFO - Using config: user_data/config.json ...
2024-11-21 14:46:38,021 - freqtrade.loggers - INFO - Verbosity set to 0
2024-11-21 14:46:38,021 - freqtrade.configuration.configuration - INFO - Parameter -i/--timeframe detected ... Using timeframe: 5m ...
2024-11-21 14:46:38,021 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 3 ...
2024-11-21 14:46:38,021 - freqtrade.configuration.configuration - INFO - Parameter --timerange detected: 20241118-20241119 ...
2024-11-21 14:46:38,109 - freqtrade.configuration.configuration - INFO - Using user-data directory: /freqtrade/user_data ...
2024-11-21 14:46:38,110 - freqtrade.configuration.configuration - INFO - Using data directory: /freqtrade/user_data/data/binance ...
2024-11-21 14:46:38,110 - freqtrade.configuration.configuration - INFO - Overriding timeframe with Command line argument
2024-11-21 14:46:38,110 - freqtrade.configuration.configuration - INFO - Parameter --export detected: none ...
2024-11-21 14:46:38,110 - freqtrade.configuration.configuration - INFO - Parameter --cache=none detected ...
2024-11-21 14:46:38,110 - freqtrade.configuration.configuration - INFO - Filter trades by timerange: 20241118-20241119
2024-11-21 14:46:38,111 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
2024-11-21 14:46:38,126 - freqtrade.exchange.check_exchange - INFO - Exchange "binance" is officially supported by the Freqtrade development team.
2024-11-21 14:46:38,126 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
2024-11-21 14:46:38,126 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2024-11-21 14:46:38,130 - freqtrade.commands.optimize_commands - INFO - Starting freqtrade in Backtesting mode
2024-11-21 14:46:38,130 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
2024-11-21 14:46:38,130 - freqtrade.exchange.exchange - INFO - Using CCXT 4.4.24
2024-11-21 14:46:38,130 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}}
2024-11-21 14:46:38,144 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}}
2024-11-21 14:46:38,160 - freqtrade.exchange.exchange - INFO - Using Exchange "Binance"
2024-11-21 14:46:40,545 - freqtrade.resolvers.exchange_resolver - INFO - Using resolved exchange 'Binance'...
2024-11-21 14:46:40,590 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy SampleStrategy from '/freqtrade/user_data/strategies/sample_strategy.py'...
2024-11-21 14:46:40,591 - freqtrade.strategy.hyper - INFO - Found no parameter file.
2024-11-21 14:46:40,592 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'timeframe' with value in config file: 5m.
2024-11-21 14:46:40,592 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: USDT.
2024-11-21 14:46:40,592 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: unlimited.
2024-11-21 14:46:40,592 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'unfilledtimeout' with value in config file: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}.
2024-11-21 14:46:40,592 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'max_open_trades' with value in config file: 3.
2024-11-21 14:46:40,592 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {'0': 0.1}
2024-11-21 14:46:40,592 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using timeframe: 5m
2024-11-21 14:46:40,592 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.05
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: False
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_custom_stoploss: True
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: True
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'entry': 'limit', 'exit': 'limit', 'stoploss': 'market', 'stoploss_on_exchange': True}
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'entry': 'GTC', 'exit': 'GTC'}
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: USDT
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: unlimited
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 0
2024-11-21 14:46:40,593 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using unfilledtimeout: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}
2024-11-21 14:46:40,594 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_exit_signal: True
2024-11-21 14:46:40,594 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_only: False
2024-11-21 14:46:40,594 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_entry_signal: True
2024-11-21 14:46:40,594 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_offset: 0.0
2024-11-21 14:46:40,594 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using disable_dataframe_checks: False
2024-11-21 14:46:40,594 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_buying_expired_candle_after: 0
2024-11-21 14:46:40,594 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using position_adjustment_enable: False
2024-11-21 14:46:40,594 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_entry_position_adjustment: -1
2024-11-21 14:46:40,594 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_open_trades: 3
2024-11-21 14:46:40,594 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2024-11-21 14:46:40,600 - freqtrade.resolvers.iresolver - INFO - Using resolved pairlist StaticPairList from '/freqtrade/freqtrade/plugins/pairlist/StaticPairList.py'...
2024-11-21 14:46:40,612 - freqtrade.optimize.backtesting - INFO - Using fee 0.0500% - worst case fee from exchange (lowest tier).
2024-11-21 14:46:40,629 - freqtrade.data.history.datahandlers.idatahandler - WARNING - No history for SAND/USDT:USDT, futures, 5m found. Use `freqtrade download-data` to download the data
2024-11-21 14:46:40,629 - freqtrade - ERROR - No data found. Terminating.
```

```
PS C:\Freqtrade\ft_userdata> docker-compose run --rm freqtrade list-data --show --trades
2024-11-21 14:46:52,486 - freqtrade - INFO - freqtrade 2024.10
2024-11-21 14:46:52,964 - numexpr.utils - INFO - NumExpr defaulting to 8 threads.
2024-11-21 14:46:53,790 - freqtrade.configuration.load_config - INFO - Using config: user_data/config.json ...
2024-11-21 14:46:53,794 - freqtrade.loggers - INFO - Verbosity set to 0
2024-11-21 14:46:53,859 - freqtrade.configuration.configuration - INFO - Using user-data directory: /freqtrade/user_data ...
2024-11-21 14:46:53,860 - freqtrade.configuration.configuration - INFO - Using data directory: /freqtrade/user_data/data/binance ...
2024-11-21 14:46:53,860 - freqtrade.configuration.configuration - INFO - Detected --show-timerange
2024-11-21 14:46:53,862 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
2024-11-21 14:46:53,876 - freqtrade.exchange.check_exchange - INFO - Exchange "binance" is officially supported by the Freqtrade development team.
2024-11-21 14:46:53,877 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
2024-11-21 14:46:53,877 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
Found trades data for 1 pair.
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┓
┃ Pair           ┃ Type    ┃ From                ┃ To                  ┃ Trades  ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━┩
│ SAND/USDT:USDT │ futures │ 2024-11-17 00:00:02 │ 2024-11-21 13:16:44 │ 1307692 │
└────────────────┴─────────┴─────────────────────┴─────────────────────┴─────────┘
PS C:\Freqtrade\ft_userdata>
```
closed
2024-11-21T14:57:58Z
2024-11-21T17:15:17Z
https://github.com/freqtrade/freqtrade/issues/10968
[ "Question" ]
vladddy
6
pyro-ppl/numpyro
numpy
1,024
Scan fails with empty PYRO_STACK
When `scan` is taken outside the context of a non-empty `PYRO_STACK`, `scan` produces an error. The following code produces the error. I don't think it matters much what the scan actually does, but in this case it's iterative multiplication of a vector by a matrix with the addition of a second vector at each step.

```python
def matrix_rollout(K, c_in):
    def iteration(c_prev, c_in):
        c_next = jnp.dot(c_prev, K) + c_in
        return c_next, (c_next,)

    _, (ys,) = scan(iteration, init=jnp.asarray([1.0, 0.0]), xs=c_in)

    return ys

matrix_rollout(
    jnp.asarray([[0.7, 0.3],
                 [0.3, 0.7]]),
    c_in=jnp.asarray([
        [1.0, 0.0],
    ])
)
```

Produces

```
---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
<ipython-input-409-2df128ef5191> in <module>
     12     return ys
     13
---> 14 matrix_rollout(
     15     jnp.asarray([[0.7, 0.3],
     16                  [0.3, 0.7]]),

<ipython-input-409-2df128ef5191> in matrix_rollout(K, c_in)
      8         return c_next, (c_next,)
      9
---> 10     _, (ys,) = scan(iteration, init=jnp.asarray([1.0, 0.0]), xs=c_in)
     11
     12     return ys

numpyro/contrib/control_flow/scan.py in scan(f, init, xs, length, reverse, history)
    397     (length, rng_key, carry), (pytree_trace, ys) = msg['value']
    398
--> 399     if not msg["kwargs"].get("enum", False):
    400         for msg in pytree_trace.trace.values():
    401             apply_stack(msg)

UnboundLocalError: local variable 'msg' referenced before assignment
```

The relevant source code is [here](https://github.com/pyro-ppl/numpyro/blob/master/numpyro/contrib/control_flow/scan.py#L468). I don't really understand how messengers work, but it looks like it might be an easy fix. I'd be happy to do it myself, but I'm not sure what all this messengers code does...
closed
2021-04-26T06:28:03Z
2021-04-27T18:00:19Z
https://github.com/pyro-ppl/numpyro/issues/1024
[ "bug" ]
justinrporter
1
iperov/DeepFaceLab
machine-learning
487
cuda out of memory
I'm facing problems with CUDA running out of memory. Please help or suggest a solution for this.

![Screenshot (74)](https://user-images.githubusercontent.com/53683977/68545852-10177680-03f7-11ea-8f36-bec3342c76fe.png)
![Screenshot (75)](https://user-images.githubusercontent.com/53683977/68545853-10177680-03f7-11ea-9062-8772e3b8a0eb.png)
![Screenshot (76)](https://user-images.githubusercontent.com/53683977/68545854-10b00d00-03f7-11ea-87c5-3a5575127a51.png)
closed
2019-11-10T14:47:09Z
2020-03-28T05:41:46Z
https://github.com/iperov/DeepFaceLab/issues/487
[]
HrishikeshMahalle
0
sqlalchemy/alembic
sqlalchemy
612
Does not detect change in field length
Changes are not detected when I try to change the length of a field. I am using **PostgreSQL**. For example:

`field = db.Column(db.String(50))` to `field = db.Column(db.String(100))`

`INFO  [alembic.env] No changes in schema detected.`
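For context, autogenerate does not compare column types by default; `compare_type=True` is a documented `context.configure()` flag. A minimal sketch of enabling it in `env.py` (the surrounding names come from the standard alembic template):

```python
# env.py, inside run_migrations_online(); `connectable` and `target_metadata`
# come from the standard alembic template.
from alembic import context

with connectable.connect() as connection:
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        compare_type=True,  # also diff column types, e.g. String(50) -> String(100)
    )
    with context.begin_transaction():
        context.run_migrations()
```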
closed
2019-10-24T07:38:54Z
2019-10-24T18:24:38Z
https://github.com/sqlalchemy/alembic/issues/612
[ "question" ]
afr-dt
2
benbusby/whoogle-search
flask
784
[BUG] WHOOGLE_CONFIG_COUNTRY not changing country
**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behaviour:
1. Set WHOOGLE_CONFIG_COUNTRY to countryUK
2. Recreate container
3. Go to homepage, the country is set, but when searching it's still the country where the instance is hosted (for me, Germany)
4. If you go back to the homepage and just click "Apply" then it works.

**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]

**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure

**Desktop (please complete the following information):**
- OS: Windows 11
- Browser: Firefox
- Version: latest

[Right after container is made](https://i.imgur.com/HoS0gHv.png)
[Search results right after container is made](https://i.imgur.com/YSKrP3b.png)
[Search results after clicking "Apply"](https://i.imgur.com/vXN2Srj.png)
closed
2022-06-14T09:01:31Z
2022-06-16T18:13:45Z
https://github.com/benbusby/whoogle-search/issues/784
[ "bug" ]
jjakc-76
2
fastapi-users/fastapi-users
asyncio
171
Swagger issue for endpoints register & update
Hi,

First of all, great job. It's a very useful library. However, after having set up my project, I noticed a few issues in the generated Swagger documentation. Indeed, the request body is pre-filled with the following information:

```
{
  "id": "string",
  "email": "user@example.com",
  "is_active": true,
  "is_superuser": false,
  "password": "string"
}
```

However, according to your documentation, only the fields `email` & `password` are required. It can lead to some misunderstandings for someone wanting to use the API for the first time, since the Swagger (or redoc) page should describe how to use the API. I think it's a cheap fix that can be very useful for when you'll find a solution for adding auth in the Swagger. Indeed, after having had a look at your code, one solution could be to make the models `BaseUserCreate` and `BaseUserUpdate` not inherit from `BaseUser` but from `BaseModel` instead.

Looking forward to hearing from you :)
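A minimal sketch of the shape the suggestion implies (illustrative Pydantic models, not fastapi-users' actual classes):

```python
from pydantic import BaseModel, EmailStr


class BaseUser(BaseModel):
    id: str
    email: EmailStr
    is_active: bool = True
    is_superuser: bool = False


class BaseUserCreate(BaseModel):
    # Inherits from BaseModel rather than BaseUser, so the generated schema
    # only advertises the two fields a client actually has to send.
    email: EmailStr
    password: str
```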
closed
2020-04-30T10:15:08Z
2021-03-25T21:16:07Z
https://github.com/fastapi-users/fastapi-users/issues/171
[ "enhancement" ]
anancarv
11
miguelgrinberg/flasky
flask
49
Question about advanced API design
What is the best way to design the API so that it can be used by both the client and the server? My view methods are getting cluttered and I was thinking a lot of my logic should be moved into the API. I changed the existing flasky API blueprint so that it acts like a thin middleman.

```
@api.route('/posts/<int:id>')
def get_post(id):
    return apiLogic.get_post(id)
```

I did this with the assumption that if I needed to access a post from the client side I could use jQuery to send an HTTP request to the endpoint `/posts/<int:id>`. Additionally, if I wanted to render a page server side I could directly call `apiLogic.get_post(id)`. However, calling `apiLogic.get_post(id)` directly from a separate blueprint means I lose the permission-checking logic from blueprint_before_request. Would it be poor design to issue HTTP requests from the backend onto itself to get around this problem? Perhaps I'm thinking about API design in Flask incorrectly; do you have any advice?
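One common pattern, sketched here as a suggestion rather than the book's prescribed answer: move the permission check and the lookup into a shared service function that both the API route and the server-rendered view call, so the backend never has to issue HTTP requests against itself (`Post`, `g.current_user`, `current_user`, and the blueprint names are assumptions standing in for the app's own objects):

```python
from flask import abort, g, jsonify, render_template
from flask_login import current_user


def get_post_or_403(id, user):
    # Shared service layer: the permission check lives here instead of in a
    # blueprint's before_request hook, so every caller goes through it.
    post = Post.query.get_or_404(id)   # Post is the app's model (assumed)
    if not user.can_read(post):        # hypothetical permission helper
        abort(403)
    return post


@api.route('/posts/<int:id>')
def api_get_post(id):
    return jsonify(get_post_or_403(id, g.current_user).to_json())


@main.route('/posts/<int:id>')
def view_post(id):
    return render_template('post.html', post=get_post_or_403(id, current_user))
```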
closed
2015-06-02T19:21:42Z
2016-06-01T16:23:42Z
https://github.com/miguelgrinberg/flasky/issues/49
[ "question" ]
seanlaff
4
sammchardy/python-binance
api
1,239
User_socket with ThreadedWebsocketManager not working
Hey, I want to get info on User Data for my Spot account in the form of a stream. If I use twm.start_user_socket I get an error message that my API-key format is invalid (APIError(code=-2014): API-key format invalid). But every other command through that same API works as intended, for example asset balance, and also a TWM stream for BTCBUSD. This is on a TestNet account; I didn't test it on a real account. Is there any way to fix this or is this a TestNet issue?

**To Reproduce**

api_key = os.environ.get('binance_api')
api_secret = os.environ.get('binance_secret')

client = Client(api_key, api_secret, testnet=True)
twm = ThreadedWebsocketManager()

def main():
    def handle_socket_message(msg):
        print(f"message type: {msg['e']}")
        print(msg)

    twm.start()
    twm.start_user_socket(callback=handle_socket_message)
    twm.join()
    twm.stop()

**Expected behavior**
TWM stream for Spot account User Data will start working

**Environment (please complete the following information):**
- Python version: 3.9
- OS: Windows 10
- python-binance version v1.0.16

**Logs or Additional context**

Task exception was never retrieved
future: <Task finished name='Task-5' coro=<ThreadedApiManager.start_listener() done, defined at . . .
. . .
binance.exceptions.BinanceAPIException: APIError(code=-2014): API-key format invalid.
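One likely cause, offered as a hedged sketch: the `ThreadedWebsocketManager` above is constructed without credentials, so its listen-key request for the user stream goes out unauthenticated. Passing the key, secret, and testnet flag to the manager itself (constructor keywords the class accepts) is the usual fix:

```python
import os

from binance import ThreadedWebsocketManager

api_key = os.environ.get('binance_api')
api_secret = os.environ.get('binance_secret')

# The manager makes its own REST calls (e.g. to create the user-stream listen
# key), so it needs the credentials and the testnet flag directly.
twm = ThreadedWebsocketManager(api_key=api_key, api_secret=api_secret, testnet=True)
twm.start()
twm.start_user_socket(callback=print)
twm.join()
```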
closed
2022-08-26T23:19:37Z
2022-09-04T22:45:36Z
https://github.com/sammchardy/python-binance/issues/1239
[]
spackooo
0
roboflow/supervision
computer-vision
1,719
How to call YOLO's built-in API?
### Search before asking

- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.

### Question

I want to call yolo's model.track so that I can use the GPU or other yolo interfaces through yolo. Is this possible? Thank you.

![20241209110527](https://github.com/user-attachments/assets/372d6bcb-5b40-416d-b9c8-fd319cd946d8)

import cv2
import supervision
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
image = cv2.imread("videos/img.png")

# results = model(image)[0]
results = model.track(image, persist=True, classes=[0], verbose=False, device='0')[0]
detections = supervision.Detections.from_ultralytics(results)

### Additional

_No response_
closed
2024-12-09T03:09:02Z
2024-12-10T08:44:33Z
https://github.com/roboflow/supervision/issues/1719
[ "question" ]
DreamerYinYu
1
lepture/authlib
flask
330
authlib testing framework for external usage
**Is your feature request related to a problem? Please describe.**

In a project that uses authlib, I want to write tests without having to re-invent all the setup that authlib has for its own unit testing.

**Describe the solution you'd like**

In a project that uses authlib, I would like to reuse some of the features within authlib's "test" directory for my own tests. It would be nice if I could import the Python modules, and also have a way to reuse the included resource files. Maybe that latter part could be done using importlib.resources for Python 3.9 or later, and the backported importlib_resources for older versions of Python. https://importlib-resources.readthedocs.io/en/latest/

**Describe alternatives you've considered**

Create separate and duplicative tooling to create fixtures and factories for use in testing my project that uses authlib.

**Additional context**

Add any other context or screenshots about the feature request here.
open
2021-03-18T19:31:49Z
2025-02-20T20:56:15Z
https://github.com/lepture/authlib/issues/330
[ "feature request" ]
sheilatron
1
timkpaine/lantern
plotly
199
Lost lantern support for bokeh plots
**For the following syntax:**

l.plot(df, 'line', 'bokeh')

**Up until Jan. 15, 2020 the following warning was displayed:**

BokehDeprecationWarning: 'legend' keyword is deprecated, use explicit 'legend_label', 'legend_field', or 'legend_group' keywords instead

**Now the following error occurs:**

AttributeError                            Traceback (most recent call last)
<ipython-input-434-b3ce6db22026> in <module>
----> 1 l.plot(df, 'line', 'bokeh')

/opt/tljh/user/lib/python3.6/site-packages/lantern/plotting/__init__.py in plot(data, kind, backend, size, theme, **kwargs)
     51     if isinstance(kind, str):
     52         getattr(f, kind)(data, **kwargs)
---> 53         return f.show(**show_args)
     54     elif isinstance(kind, list):
     55         # TODO handle color, scatter, etc

/opt/tljh/user/lib/python3.6/site-packages/lantern/plotting/plot_bokeh.py in show(self, title, xlabel, ylabel, xaxis, yaxis, xticks, yticks, legend_field, grid, **kwargs)
     48
     49     if legend_field:
---> 50         self.figure.legend_field.location = (self.width+10, self.height+10)
     51         legend_field = Legend(items=self.legend_field, location=(10, 100))
     52         legend_field.items = self.legend_field

AttributeError: 'Figure' object has no attribute 'legend_field'
closed
2020-01-16T16:13:58Z
2020-01-16T18:09:16Z
https://github.com/timkpaine/lantern/issues/199
[]
rhamilton-bay4
0
quokkaproject/quokka
flask
125
use awesome-slugify in quokka.utils
Use this https://github.com/dimka665/awesome-slugify to replace this: https://github.com/pythonhub/quokka/blob/master/quokka/utils/text.py#L10-L19
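A minimal sketch of the replacement call (awesome-slugify exposes a `slugify` function and a configurable `Slugify` class; verify the exact import against the package's README):

```python
from slugify import slugify

# Drop-in replacement for a hand-rolled regex slugifier.
print(slugify("Quokka CMS: flask + mongo!"))  # e.g. "Quokka-CMS-flask-mongo"
```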
closed
2014-01-22T19:03:38Z
2015-07-16T02:56:34Z
https://github.com/quokkaproject/quokka/issues/125
[ "enhancement" ]
rochacbruno
1
LibrePhotos/librephotos
django
753
Add deduplication
**Describe the enhancement you'd like**

A menu to remove duplicate photos, possibly with a similar interface to the faces menu.

**Describe why this will benefit the LibrePhotos**

This will allow users to remove duplicate photos from their library.

**Additional context**

None needed.
open
2023-02-12T05:25:04Z
2023-09-19T18:50:15Z
https://github.com/LibrePhotos/librephotos/issues/753
[ "enhancement", "frontend", "backend", "mobile" ]
askiiart
1
tfranzel/drf-spectacular
rest-api
979
How to override `SerializerMethodField` to mark as not required
We have a `Serializer` such as:

```py
class MyModelSerializer(MyModelSerializer):
    field = SerializerMethodField()

    def get_field(self, instance: MyModel) -> str | None:
        if instance.other_field:
            return None
        return "Hello World"
```

This creates a schema such as:

```
field* | string
nullable: true
readOnly: true
```

So this is a required, nullable field. This is the correct output.

_But_... Some OpenAPI clients (such as the [Java generator in OpenAPI](https://openapi-generator.tech/docs/generators/java)) do not support fields that are both required _and_ nullable 😢.

Is there a way to override a `SerializerMethodField` to remove the `required` attribute? I didn't immediately see how [`@extend_schema_field`](https://drf-spectacular.readthedocs.io/en/latest/customization.html#step-3-extend-schema-field-and-type-hints) could be used for this.
closed
2023-04-25T19:17:31Z
2023-04-26T16:59:29Z
https://github.com/tfranzel/drf-spectacular/issues/979
[]
johnthagen
5
sammchardy/python-binance
api
1,143
Why I get different OHLCVs from ccxt and python-binance compared to the exchange
I get OHLCVs with ccxt and get the result below for two days:

![image](https://user-images.githubusercontent.com/77359545/153462029-6ac46903-c6a1-434a-9ff1-17bf19b44cb9.png)

I also get data from python-binance with `get_historical_klines`, and I get the result below for the last day:

![image](https://user-images.githubusercontent.com/77359545/153462374-c02a014d-8332-4684-9ddf-83db5c0b6813.png)

And Binance shows different OHLCVs than both ccxt and python-binance:

![Screenshot (1)](https://user-images.githubusercontent.com/77359545/153464024-1d07b2d7-0a0c-4737-b2f8-1e74257e029a.png)
![Screenshot (2)](https://user-images.githubusercontent.com/77359545/153464030-f3cae0ee-91ef-4bf8-ba04-42f26b67380c.png)

Can you help me? #11921
open
2022-02-10T17:36:53Z
2023-02-15T09:12:56Z
https://github.com/sammchardy/python-binance/issues/1143
[]
hossain93
1
ipython/ipython
jupyter
13,971
Bug in `Image` when using a local file with `embed=False`
The following code should show an image when using the `Image` object when the embed parameter is set to false. However, it does not do that.

```python
from IPython.display import Image
Image("assets/cute_dog.jpg", embed=False, width=300)
```

I could fix the problem by replacing `src=\"None\"` with `src=\"assets/cute_dog.jpg\"` in my local notebook file:

<img width="1195" alt="image" src="https://user-images.githubusercontent.com/44469195/224571250-92190ef9-d6eb-47b7-bb32-97907f7a680b.png">
<img width="734" alt="image" src="https://user-images.githubusercontent.com/44469195/224571320-5f20269a-e5da-47c7-8cec-faae7ff3666f.png">

I think that this line needs to be modified: https://github.com/ipython/ipython/blob/3339fad91de3e3da1f02b0b209afef20f021ca33/IPython/core/display.py#L958
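A hedged workaround sketch: when not embedding, `IPython.display.Image` also accepts the location via its `url` parameter, which flows into the `src` attribute directly:

```python
from IPython.display import Image

# With url= the notebook loads the file by reference instead of base64-embedding it.
Image(url="assets/cute_dog.jpg", width=300)
```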
open
2023-03-12T20:30:37Z
2023-03-12T20:30:37Z
https://github.com/ipython/ipython/issues/13971
[]
kolibril13
0
akfamily/akshare
data-science
5,500
Historical quote interfaces: the open price differs from what the trading software shows
I tried a few interfaces and the values differ somewhat.

stock_zh_a_daily_df = ak.stock_zh_a_daily(symbol="sz000025", start_date="20240103", end_date="20240104", adjust="")
print(stock_zh_a_daily_df)

         date   open   high  ...      amount  outstanding_share  turnover
0  2024-01-03  16.00  16.10  ...  33580877.0        392778320.0  0.005370
1  2024-01-04  15.93  15.99  ...  34663873.0        392778320.0  0.005551

stock_zh_a_hist_tx_df = ak.stock_zh_a_hist_tx(symbol="sz000025", start_date="20240103", end_date="20240104", adjust="")
print(stock_zh_a_hist_tx_df)

         date   open  close   high    low   amount
0  2024-01-03  16.00  15.87  16.10  15.81  21093.0
1  2024-01-04  15.93  15.86  15.99  15.80  21803.0

The interface returns an open of 16.00, but the 2024-01-03 open price I see in my stock-trading software is 16.37.

![777](https://github.com/user-attachments/assets/0fa61988-46b0-4194-a33f-4eb2a479366f)
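A hedged check worth running (an assumption about the cause, not a confirmed diagnosis): trading software typically displays price-adjusted series, while `adjust=""` returns raw prices; akshare exposes adjusted data through the `adjust` parameter:

```python
import akshare as ak

# Raw vs. forward-adjusted ("qfq") prices for the same window; if the software
# shows 16.37 while the raw open is 16.00, it is likely displaying an adjusted series.
raw = ak.stock_zh_a_daily(symbol="sz000025", start_date="20240103", end_date="20240104", adjust="")
qfq = ak.stock_zh_a_daily(symbol="sz000025", start_date="20240103", end_date="20240104", adjust="qfq")
print(raw[["date", "open"]])
print(qfq[["date", "open"]])
```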
closed
2025-01-04T01:22:23Z
2025-01-04T07:32:37Z
https://github.com/akfamily/akshare/issues/5500
[ "bug" ]
zouhan6806504
2
TencentARC/GFPGAN
deep-learning
515
Surojit
open
2024-02-14T11:25:55Z
2024-02-14T11:26:23Z
https://github.com/TencentARC/GFPGAN/issues/515
[]
surojithalder937
3
qubvel-org/segmentation_models.pytorch
computer-vision
492
How to show the Dice score when not using Dice loss?
As the title says.
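A hedged sketch of computing the Dice score directly from model outputs during validation, independent of whichever loss was used for training (plain PyTorch; no smp-specific API assumed):

```python
import torch


def dice_score(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Binary Dice = 2*|P∩G| / (|P|+|G|), computed over the batch from raw logits."""
    pred = (torch.sigmoid(logits) > 0.5).float()
    target = target.float()
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)


# Usage during validation, regardless of the training loss:
logits = torch.randn(4, 1, 64, 64)           # model output (placeholder)
mask = torch.randint(0, 2, (4, 1, 64, 64))   # ground-truth mask (placeholder)
print(dice_score(logits, mask))
```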
closed
2021-09-30T12:49:38Z
2021-10-08T07:14:09Z
https://github.com/qubvel-org/segmentation_models.pytorch/issues/492
[]
over-star
1
axnsan12/drf-yasg
django
798
Missing latest versions in changelog
Hello, the changelog file seems to be missing the last 2 releases. Is it still going to be updated, or should I look directly at the releases page (which seems to always be updated)? Thanks for your time and keep up the good work
closed
2022-07-19T09:48:13Z
2022-07-19T12:53:21Z
https://github.com/axnsan12/drf-yasg/issues/798
[]
marcosox
3
ydataai/ydata-profiling
jupyter
1,645
support for numpy >= 2.0
### Missing functionality

Numpy 2.0.0 was released on June 16: https://github.com/numpy/numpy/releases/tag/v2.0.0. The current version is 2.1.1. To date, ydata-profiling is not compatible with 2.0.x per `requirements.txt`. As such, it is currently a blocker for users to upgrade to 2.0.x if they have this package installed.

### Proposed feature

Loosen requirements to include 2.0.x unless there are specific reasons (e.g., usage of deprecated function names) not to.

### Alternatives considered

_No response_

### Additional context

_No response_
closed
2024-09-03T18:54:47Z
2025-01-04T19:58:09Z
https://github.com/ydataai/ydata-profiling/issues/1645
[ "dependencies 🔗" ]
quant5
9
ansible/awx
automation
15,819
Some examples in module descriptions reference awx.awx
### Please confirm the following

- [x] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [x] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [x] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [x] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)

### Bug Summary

Some examples have awx.awx.module_name and some don't. awx.awx will not work for the ansible.controller collection.

### AWX version

4.6.7

### Select the relevant components

- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [x] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other

### Installation method

N/A

### Modifications

no

### Ansible version

_No response_

### Operating system

_No response_

### Web browser

_No response_

### Steps to reproduce

Modules: instance, workflow_job_template

### Expected results

Follow the rest of the docs and specify the module name without the collection name.

### Actual results

Some modules have awx.awx in front of the module name.

### Additional information

_No response_
open
2025-02-07T20:59:14Z
2025-02-07T20:59:29Z
https://github.com/ansible/awx/issues/15819
[ "type:bug", "needs_triage", "community" ]
kk-at-redhat
0
pywinauto/pywinauto
automation
879
KeyError on Import with Python 3.8.1 64-bit
## Expected Behavior

You can import pywinauto and use it. In the example script below, "script ran successfully" will print.

## Actual Behavior

```
Traceback (most recent call last):
  File "E:\Python38_64\lib\ctypes\__init__.py", line 123, in WINFUNCTYPE
    return _win_functype_cache[(restype, argtypes, flags)]
KeyError: (<class 'ctypes.HRESULT'>, (<class 'ctypes.c_long'>, <class 'comtypes.automation.tagVARIANT'>, <class 'ctypes.c_long'>, <class 'comtypes.LP_POINTER(IUIAutomationTextRange)'>), 0)
```

## Steps to Reproduce the Problem

1. Have Python 3.8.1 64-bit
2. `pip install pywinauto`
3. Attempt to `import pywinauto`

## Short Example of Code to Demonstrate the Problem

```
import pywinauto
print("script ran successfully")
```

## Specifications

- Pywinauto version: 0.6.8 via `pip`
- Python version and bitness: 3.8.1, 64-bit
- Platform and OS: Windows 10 Pro, 64-bit, Version 1909, Build 18363.535
closed
2020-01-10T20:26:47Z
2020-01-11T13:56:25Z
https://github.com/pywinauto/pywinauto/issues/879
[ "duplicate", "3rd-party issue" ]
WinterPhoenix
3
desec-io/desec-stack
rest-api
168
Invalid IP Address Value Results in Server Error
Internal Server Error: /api/v1/unlock/user/████████████@gmail.com

PdnsException at /api/v1/unlock/user/████████████@gmail.com
Record n████████████m.dedyn.io./A '<3█████1███████.1>': Parsing record content (try 'pdnsutil check-zone'): while parsing IP address, expected digits at position 0 in '<3█████1███████.1>'

Request Method: POST
Request URL: https://desec.io/api/v1/unlock/user/████████████@gmail.com
Django Version: 2.1.7
Python Executable: /usr/local/bin/uwsgi
Python Version: 3.7.2
Python Path: ['.', '', '/usr/local/lib/python37.zip', '/usr/local/lib/python3.7', '/usr/local/lib/python3.7/lib-dynload', '/usr/local/lib/python3.7/site-packages']
Server time: ███████████████████████████████████

Installed Applications:
('django.contrib.auth',
 'django.contrib.contenttypes',
 'rest_framework',
 'djoser',
 'desecapi')

Installed Middleware:
('django.middleware.common.CommonMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware')

Traceback:

File "/usr/local/lib/python3.7/site-packages/django/core/handlers/exception.py" in inner
  34.             response = get_response(request)
File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py" in _get_response
  126.                 response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py" in _get_response
  124.                 response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "./desecapi/views.py" in unlock
  541.     user.unlock()
File "./desecapi/models.py" in unlock
  126.             domain.sync_to_pdns()
File "./desecapi/models.py" in sync_to_pdns
  192.         self._publish()
File "/usr/local/lib/python3.7/contextlib.py" in inner
  74.                 return func(*args, **kwds)
File "./desecapi/models.py" in _publish
  358.         pdns.set_rrsets(self, rrsets)
File "./desecapi/pdns.py" in set_rrsets
  140.     _pdns_patch('/zones/' + domain.pdns_id, data)
File "./desecapi/pdns.py" in _pdns_patch
  54.         raise PdnsException(r)

Exception Type: PdnsException at /api/v1/unlock/user/████████████@gmail.com
Exception Value: Record n████████████m.dedyn.io./A '<3█████1███████.1>': Parsing record content (try 'pdnsutil check-zone'): while parsing IP address, expected digits at position 0 in '<3█████1███████.1>'

Request information:
USER: [unable to retrieve the current user]

GET: No GET data

POST:
csrfmiddlewaretoken = '████████████████████████████'
g-recaptcha-response = '██████████████████████████████████████████'

FILES: No FILES data

COOKIES:
csrftoken = '██████████████████████████████████████████'

META:
CONTENT_LENGTH = '440'
CONTENT_TYPE = 'application/x-www-form-urlencoded'
CSRF_COOKIE = '███████████████████████████████████'
DOCUMENT_ROOT = '/etc/nginx/html'
HTTPS = 'on'
HTTP_ACCEPT = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'
HTTP_ACCEPT_ENCODING = 'gzip, deflate, br'
HTTP_ACCEPT_LANGUAGE = 'id-ID,id;q=0.9,en-US;q=0.8,en;q=0.7'
HTTP_CACHE_CONTROL = 'max-age=0'
HTTP_CONTENT_LENGTH = '440'
HTTP_CONTENT_TYPE = 'application/x-www-form-urlencoded'
HTTP_COOKIE = '████████████████████████████████████████████████████████'
HTTP_HOST = 'desec.io'
HTTP_ORIGIN = 'https://desec.io'
HTTP_REFERER = 'https://desec.io/api/v1/unlock/user/████████████@gmail.com'
HTTP_UPGRADE_INSECURE_REQUESTS = '1'
HTTP_USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/████████ (KHTML, like Gecko) Chrome/72.████████ Safari/████████'
PATH_INFO = '/api/v1/unlock/user/████████████@gmail.com'
QUERY_STRING = ''
REMOTE_ADDR = '███████████████████████████████████'
REMOTE_PORT = '███████'
REQUEST_METHOD = 'POST'
REQUEST_SCHEME = 'https'
REQUEST_URI = '/api/v1/unlock/user/████████████@gmail.com'
SCRIPT_NAME = ''
SERVER_NAME = 'desec.io'
SERVER_PORT = '443'
SERVER_PROTOCOL = 'HTTP/2.0'
closed
2019-04-25T09:15:12Z
2019-04-25T09:35:59Z
https://github.com/desec-io/desec-stack/issues/168
[ "bug" ]
nils-wisiol
1
jacobgil/pytorch-grad-cam
computer-vision
3
test_caffe_model
Thanks for sharing this nice work, @jacobgil! How can I get gradient class activation maps from a model trained with Caffe rather than with PyTorch? Thank you very much!
closed
2017-08-30T03:07:33Z
2018-07-19T13:52:16Z
https://github.com/jacobgil/pytorch-grad-cam/issues/3
[]
3DMM-ICME2023
1
streamlit/streamlit
streamlit
9,906
Add 'key' to static widgets like st.link_button, st.popover
### Checklist

- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.

### Summary

Most widgets have a `key` parameter - but not `st.link_button`. I can see why - it is mostly "static" - but it would still be useful to be able to give it an explicit key, especially to target a specific element for styling now that the key can be used in the element's class.

### Why?

_No response_

### How?

_No response_

### Additional Context

See the usage sketch below.
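For illustration, a sketch of the proposed usage (hypothetical - the `key` argument does not exist on `st.link_button` today, so this currently raises a `TypeError`):

```python
import streamlit as st

# Hypothetical API: `key` is the requested addition, not an existing parameter.
st.link_button(
    "Open docs",
    "https://docs.streamlit.io",
    key="docs-link",  # would expose a stable identifier, e.g. for CSS targeting
)
```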
open
2024-11-22T15:14:15Z
2025-01-03T15:39:23Z
https://github.com/streamlit/streamlit/issues/9906
[ "type:enhancement", "feature:st.link_button" ]
arnaudmiribel
4
gradio-app/gradio
python
10,501
gr.ChatInterface within gr.Tabs title and history fields misplaced
### Describe the bug

I put `gr.ChatInterface` under `gr.Tab` as shown below, but the title field (which should be at the top of the page) and the history field (which should be at the left of the page) are placed at the bottom of the page.

```python
with gr.Blocks() as iface:
    with gr.Tab("tab1"):
        gr.ChatInterface()
    with gr.Tab("tab2"):
        gr.ChatInterface()
```

### Have you searched existing issues? 🔎

- [x] I have searched and found no existing issues

### Reproduction

```python
import gradio as gr

with gr.Blocks() as iface:
    with gr.Tab("tab1"):
        gr.ChatInterface(....)
    with gr.Tab("tab2"):
        gr.ChatInterface(....)

iface.launch()
```

### Screenshot

_No response_

### Logs

```shell

```

### System Info

```shell
gradio==5.12.0
```

### Severity

Blocking usage of gradio
closed
2025-02-05T02:17:15Z
2025-02-05T07:14:10Z
https://github.com/gradio-app/gradio/issues/10501
[ "bug", "needs repro" ]
justinzyw
6
ultralytics/ultralytics
machine-learning
18,706
Can I take a video file as an API input (using FastAPI's UploadFile) and stream it directly into a YOLOv11n model for object detection without saving the file or processing it into frames or fixed-size chunks?
### Search before asking

- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.

### Question

Hi, I am using YOLOv11n as part of a FastAPI app. Can I take a video file as an API input (using FastAPI's `UploadFile`) and stream it directly into a YOLOv11n model for object detection, without saving the file or splitting it into frames or fixed-size chunks first?

### Additional

#### Desired Usage

```python
from fastapi import FastAPI, UploadFile
from fastapi.responses import JSONResponse
from ultralytics import YOLO

app = FastAPI()

# Load the YOLO model
model = YOLO("/path/to/yolo11n.pt")

@app.post("/upload-and-detect")
async def upload_and_detect(file: UploadFile):
    """
    Accept a video file and pass it directly to the YOLO model for detection.
    """
    try:
        # YOLO accepts a video path, so we pass the file object directly
        results = model(file.file, stream=True)  # use stream=True for generator processing
        detections = []
        for result in results:
            frame_detections = []
            for box, conf, label in zip(result.boxes.xyxy, result.boxes.conf, result.boxes.cls):
                frame_detections.append({
                    "box": box.tolist(),
                    "confidence": float(conf),
                    "label": model.names[int(label)]
                })
            detections.append(frame_detections)
        return JSONResponse(content={"detections": detections})
    except Exception as e:
        return JSONResponse(content={"error": str(e)}, status_code=500)
```
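Update: one approach I'm considering (a sketch under my own assumptions, not an official Ultralytics API - it relies on PyAV accepting file-like objects and on the model accepting numpy arrays):

```python
import av  # PyAV: pip install av
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # placeholder path

def detect_from_fileobj(fileobj):
    """Sketch: yield per-frame results from an in-memory video file object."""
    container = av.open(fileobj)  # decodes directly from the file object, no temp file
    for frame in container.decode(video=0):
        img = frame.to_ndarray(format="bgr24")  # HxWx3 numpy array
        yield model(img, verbose=False)[0]      # per-frame inference
```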
open
2025-01-16T04:21:09Z
2025-01-16T11:37:14Z
https://github.com/ultralytics/ultralytics/issues/18706
[ "question", "detect" ]
hariv0
2
sqlalchemy/sqlalchemy
sqlalchemy
9,808
Committing two or more ORM objects with UUID columns causes DBAPIError
### Describe the bug

This appears to be another regression caused by https://github.com/sqlalchemy/sqlalchemy/issues/9618. Similar (yet not identical) to https://github.com/sqlalchemy/sqlalchemy/issues/9739.

The issue occurs with the asyncpg driver, for models that have a column of type `UUID`. Adding one entry at a time works fine; here is an example SQL statement:

```sql
INSERT INTO b (a_id) VALUES ($1) RETURNING b.id
```

When adding two or more objects, this is the SQL that is generated:

```sql
INSERT INTO b (a_id) SELECT p0::UUID FROM (VALUES ($1, 0), ($2, 1)) AS imp_sen(p0, sen_counter) ORDER BY sen_counter RETURNING b.id, b.id AS id__1
```

This second statement generates the following error:

```
sqlalchemy.exc.DBAPIError: (sqlalchemy.dialects.postgresql.asyncpg.Error) <class 'asyncpg.exceptions.DataError'>: invalid input for query argument $1: UUID('6fa30cae-d519-45f6-8f2e-69dde86a94... (expected str, got UUID)
```

For comparison, this is the (working) SQL generated when adding two rows using the psycopg2 driver:

```sql
INSERT INTO b (a_id) SELECT p0::UUID FROM (VALUES (%(a_id__0)s, 0), (%(a_id__1)s, 1)) AS imp_sen(p0, sen_counter) ORDER BY sen_counter RETURNING b.id, b.id AS id__1
```

### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected

_No response_

### SQLAlchemy Version in Use

2.0.13

### DBAPI (i.e. the database driver)

asyncpg

### Database Vendor and Major Version

PostgreSQL 15.1

### Python Version

3.11

### Operating system

OSX

### To Reproduce

```python
import os
from dotenv import load_dotenv
import sqlalchemy as sa
import sqlalchemy.orm as so
from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker
import asyncio
from uuid import uuid4, UUID


class Model(so.DeclarativeBase):
    pass


load_dotenv()
engine = create_async_engine(os.environ['DATABASE_URL'], echo=True)
Session = async_sessionmaker(engine, expire_on_commit=False)


class A(Model):
    __tablename__ = 'a'
    id: so.Mapped[UUID] = so.mapped_column(default=uuid4, primary_key=True)


class B(Model):
    __tablename__ = 'b'
    id: so.Mapped[int] = so.mapped_column(primary_key=True)
    a_id: so.Mapped[UUID] = so.mapped_column(sa.ForeignKey('a.id'))
    a = so.relationship(A)


async def main():
    async with engine.begin() as conn:
        await conn.run_sync(Model.metadata.drop_all)
        await conn.run_sync(Model.metadata.create_all)

    async with Session() as session:
        a = A(id=uuid4())
        #bs = [B(a=a)]
        bs = [B(a=a), B(a=a)]
        session.add_all(bs)
        await session.commit()

    async with engine.begin() as conn:
        await conn.run_sync(Model.metadata.drop_all)


asyncio.run(main())
```

### Error

```
2023-05-19 19:13:47,305 INFO sqlalchemy.engine.Engine select pg_catalog.version()
2023-05-19 19:13:47,305 INFO sqlalchemy.engine.Engine [raw sql] ()
2023-05-19 19:13:47,311 INFO sqlalchemy.engine.Engine select current_schema()
2023-05-19 19:13:47,311 INFO sqlalchemy.engine.Engine [raw sql] ()
2023-05-19 19:13:47,315 INFO sqlalchemy.engine.Engine show standard_conforming_strings
2023-05-19 19:13:47,316 INFO sqlalchemy.engine.Engine [raw sql] ()
2023-05-19 19:13:47,319 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-05-19 19:13:47,323 INFO sqlalchemy.engine.Engine SELECT pg_catalog.pg_class.relname FROM pg_catalog.pg_class JOIN pg_catalog.pg_namespace ON pg_catalog.pg_namespace.oid = pg_catalog.pg_class.relnamespace WHERE pg_catalog.pg_class.relname = $1::VARCHAR AND pg_catalog.pg_class.relkind = ANY (ARRAY[$2::VARCHAR, $3::VARCHAR, $4::VARCHAR, $5::VARCHAR, $6::VARCHAR]) AND pg_catalog.pg_table_is_visible(pg_catalog.pg_class.oid) AND pg_catalog.pg_namespace.nspname != $7::VARCHAR
2023-05-19 19:13:47,323 INFO sqlalchemy.engine.Engine [generated in 0.00026s] ('a', 'r', 'p', 'f', 'v', 'm', 'pg_catalog')
2023-05-19 19:13:47,329 INFO sqlalchemy.engine.Engine SELECT pg_catalog.pg_class.relname FROM pg_catalog.pg_class JOIN pg_catalog.pg_namespace ON pg_catalog.pg_namespace.oid = pg_catalog.pg_class.relnamespace WHERE pg_catalog.pg_class.relname = $1::VARCHAR AND pg_catalog.pg_class.relkind = ANY (ARRAY[$2::VARCHAR, $3::VARCHAR, $4::VARCHAR, $5::VARCHAR, $6::VARCHAR]) AND pg_catalog.pg_table_is_visible(pg_catalog.pg_class.oid) AND pg_catalog.pg_namespace.nspname != $7::VARCHAR
2023-05-19 19:13:47,329 INFO sqlalchemy.engine.Engine [cached since 0.006091s ago] ('b', 'r', 'p', 'f', 'v', 'm', 'pg_catalog')
2023-05-19 19:13:47,330 INFO sqlalchemy.engine.Engine SELECT pg_catalog.pg_class.relname FROM pg_catalog.pg_class JOIN pg_catalog.pg_namespace ON pg_catalog.pg_namespace.oid = pg_catalog.pg_class.relnamespace WHERE pg_catalog.pg_class.relname = $1::VARCHAR AND pg_catalog.pg_class.relkind = ANY (ARRAY[$2::VARCHAR, $3::VARCHAR, $4::VARCHAR, $5::VARCHAR, $6::VARCHAR]) AND pg_catalog.pg_table_is_visible(pg_catalog.pg_class.oid) AND pg_catalog.pg_namespace.nspname != $7::VARCHAR
2023-05-19 19:13:47,330 INFO sqlalchemy.engine.Engine [cached since 0.007731s ago] ('a', 'r', 'p', 'f', 'v', 'm', 'pg_catalog')
2023-05-19 19:13:47,332 INFO sqlalchemy.engine.Engine SELECT pg_catalog.pg_class.relname FROM pg_catalog.pg_class JOIN pg_catalog.pg_namespace ON pg_catalog.pg_namespace.oid = pg_catalog.pg_class.relnamespace WHERE pg_catalog.pg_class.relname = $1::VARCHAR AND pg_catalog.pg_class.relkind = ANY (ARRAY[$2::VARCHAR, $3::VARCHAR, $4::VARCHAR, $5::VARCHAR, $6::VARCHAR]) AND pg_catalog.pg_table_is_visible(pg_catalog.pg_class.oid) AND pg_catalog.pg_namespace.nspname != $7::VARCHAR
2023-05-19 19:13:47,332 INFO sqlalchemy.engine.Engine [cached since 0.00911s ago] ('b', 'r', 'p', 'f', 'v', 'm', 'pg_catalog')
2023-05-19 19:13:47,333 INFO sqlalchemy.engine.Engine
CREATE TABLE a (
        id UUID NOT NULL,
        PRIMARY KEY (id)
)

2023-05-19 19:13:47,333 INFO sqlalchemy.engine.Engine [no key 0.00009s] ()
2023-05-19 19:13:47,340 INFO sqlalchemy.engine.Engine
CREATE TABLE b (
        id SERIAL NOT NULL,
        a_id UUID NOT NULL,
        PRIMARY KEY (id),
        FOREIGN KEY(a_id) REFERENCES a (id)
)

2023-05-19 19:13:47,340 INFO sqlalchemy.engine.Engine [no key 0.00013s] ()
2023-05-19 19:13:47,346 INFO sqlalchemy.engine.Engine COMMIT
2023-05-19 19:13:47,351 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-05-19 19:13:47,352 INFO sqlalchemy.engine.Engine INSERT INTO a (id) VALUES ($1)
2023-05-19 19:13:47,352 INFO sqlalchemy.engine.Engine [generated in 0.00015s] (UUID('6fa30cae-d519-45f6-8f2e-69dde86a94c2'),)
2023-05-19 19:13:47,358 INFO sqlalchemy.engine.Engine INSERT INTO b (a_id) SELECT p0::UUID FROM (VALUES ($1, 0), ($2, 1)) AS imp_sen(p0, sen_counter) ORDER BY sen_counter RETURNING b.id, b.id AS id__1
2023-05-19 19:13:47,358 INFO sqlalchemy.engine.Engine [generated in 0.00007s (insertmanyvalues) 1/1 (ordered)] (UUID('6fa30cae-d519-45f6-8f2e-69dde86a94c2'), UUID('6fa30cae-d519-45f6-8f2e-69dde86a94c2'))
2023-05-19 19:13:47,360 INFO sqlalchemy.engine.Engine ROLLBACK
Traceback (most recent call last):
  File "asyncpg/protocol/prepared_stmt.pyx", line 168, in asyncpg.protocol.protocol.PreparedStatementState._encode_bind_msg
  File "asyncpg/protocol/codecs/base.pyx", line 206, in asyncpg.protocol.protocol.Codec.encode
  File "asyncpg/protocol/codecs/base.pyx", line 111, in asyncpg.protocol.protocol.Codec.encode_scalar
  File "asyncpg/pgproto/./codecs/text.pyx", line 29, in asyncpg.pgproto.pgproto.text_encode
  File "asyncpg/pgproto/./codecs/text.pyx", line 12, in asyncpg.pgproto.pgproto.as_pg_string_and_size
TypeError: expected str, got UUID

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 531, in _prepare_and_execute
    self._rows = await prepared_stmt.fetch(*parameters)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/asyncpg/prepared_stmt.py", line 176, in fetch
    data = await self.__bind_execute(args, 0, timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/asyncpg/prepared_stmt.py", line 241, in __bind_execute
    data, status, _ = await self.__do_execute(
                      ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/asyncpg/prepared_stmt.py", line 230, in __do_execute
    return await executor(protocol)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "asyncpg/protocol/protocol.pyx", line 183, in bind_execute
  File "asyncpg/protocol/prepared_stmt.pyx", line 197, in asyncpg.protocol.protocol.PreparedStatementState._encode_bind_msg
asyncpg.exceptions.DataError: invalid input for query argument $1: UUID('6fa30cae-d519-45f6-8f2e-69dde86a94... (expected str, got UUID)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2108, in _exec_insertmany_context
    dialect.do_execute(
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 920, in do_execute
    cursor.execute(statement, parameters)
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 564, in execute
    self._adapt_connection.await_(
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 126, in await_only
    return current.driver.switch(awaitable)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 187, in greenlet_spawn
    value = await result
            ^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 543, in _prepare_and_execute
    self._handle_exception(error)
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 493, in _handle_exception
    self._adapt_connection._handle_exception(error)
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 782, in _handle_exception
    raise translated_error from error
sqlalchemy.dialects.postgresql.asyncpg.AsyncAdapt_asyncpg_dbapi.Error: <class 'asyncpg.exceptions.DataError'>: invalid input for query argument $1: UUID('6fa30cae-d519-45f6-8f2e-69dde86a94... (expected str, got UUID)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/mgrinberg/Documents/dev/python/retrofun/practice/postgresql/uuid-bug/models.py", line 45, in <module>
    asyncio.run(main())
  File "/Users/mgrinberg/.pyenv/versions/3.11.0/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/.pyenv/versions/3.11.0/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/.pyenv/versions/3.11.0/lib/python3.11/asyncio/base_events.py", line 650, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/practice/postgresql/uuid-bug/models.py", line 40, in main
    await session.commit()
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/ext/asyncio/session.py", line 932, in commit
    await greenlet_spawn(self.sync_session.commit)
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 194, in greenlet_spawn
    result = context.switch(value)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 1906, in commit
    trans.commit(_to_root=True)
  File "<string>", line 2, in commit
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/state_changes.py", line 137, in _go
    ret_value = fn(self, *arg, **kw)
                ^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 1221, in commit
    self._prepare_impl()
  File "<string>", line 2, in _prepare_impl
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/state_changes.py", line 137, in _go
    ret_value = fn(self, *arg, **kw)
                ^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 1196, in _prepare_impl
    self.session.flush()
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 4154, in flush
    self._flush(objects)
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 4290, in _flush
    with util.safe_reraise():
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 4251, in _flush
    flush_context.execute()
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/unitofwork.py", line 467, in execute
    rec.execute(self)
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/unitofwork.py", line 644, in execute
    util.preloaded.orm_persistence.save_obj(
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/persistence.py", line 93, in save_obj
    _emit_insert_statements(
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/orm/persistence.py", line 1133, in _emit_insert_statements
    result = connection.execute(
             ^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1413, in execute
    return meth(
           ^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 483, in _execute_on_connection
    return connection._execute_clauseelement(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1637, in _execute_clauseelement
    ret = self._execute_context(
          ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1836, in _execute_context
    return self._exec_insertmany_context(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2116, in _exec_insertmany_context
    self._handle_dbapi_exception(
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2339, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2108, in _exec_insertmany_context
    dialect.do_execute(
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 920, in do_execute
    cursor.execute(statement, parameters)
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 564, in execute
    self._adapt_connection.await_(
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 126, in await_only
    return current.driver.switch(awaitable)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 187, in greenlet_spawn
    value = await result
            ^^^^^^^^^^^^
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 543, in _prepare_and_execute
    self._handle_exception(error)
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 493, in _handle_exception
    self._adapt_connection._handle_exception(error)
  File "/Users/mgrinberg/Documents/dev/python/retrofun/venv/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 782, in _handle_exception
    raise translated_error from error
sqlalchemy.exc.DBAPIError: (sqlalchemy.dialects.postgresql.asyncpg.Error) <class 'asyncpg.exceptions.DataError'>: invalid input for query argument $1: UUID('6fa30cae-d519-45f6-8f2e-69dde86a94... (expected str, got UUID)
[SQL: INSERT INTO b (a_id) SELECT p0::UUID FROM (VALUES ($1, 0), ($2, 1)) AS imp_sen(p0, sen_counter) ORDER BY sen_counter RETURNING b.id, b.id AS id__1]
[parameters: (UUID('6fa30cae-d519-45f6-8f2e-69dde86a94c2'), UUID('6fa30cae-d519-45f6-8f2e-69dde86a94c2'))]
(Background on this error at: https://sqlalche.me/e/20/dbapi)
```

### Additional context

_No response_
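As a possible temporary workaround (an assumption on my part, not a confirmed fix): disabling the `insertmanyvalues` feature should make the ORM fall back to plain executemany-style INSERTs and avoid the `SELECT ... FROM (VALUES ...)` rewrite shown above:

```python
from sqlalchemy.ext.asyncio import create_async_engine

# Assumption: with insertmanyvalues disabled, the failing statement is not generated.
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost/db",  # placeholder URL
    use_insertmanyvalues=False,
)
```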
closed
2023-05-19T18:18:31Z
2023-05-19T23:52:29Z
https://github.com/sqlalchemy/sqlalchemy/issues/9808
[ "bug", "postgresql", "orm", "near-term release" ]
miguelgrinberg
2
quokkaproject/quokka
flask
606
Docker
Update docker image https://github.com/rochacbruno/quokka_ng/issues/65
closed
2018-02-07T01:45:24Z
2018-10-02T20:23:44Z
https://github.com/quokkaproject/quokka/issues/606
[ "1.0.0", "hacktoberfest" ]
rochacbruno
0
deeppavlov/DeepPavlov
tensorflow
719
sklearn_component crashed when a large (ovr 4GiB) model saving
```
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/user/.virtualenvs/support/lib/python3.6/site-packages/deeppavlov/__main__.py", line 3, in <module>
    main()
  File "/home/user/.virtualenvs/support/lib/python3.6/site-packages/deeppavlov/deep.py", line 86, in main
    start_epoch_num=start_epoch_num)
  File "/home/user/.virtualenvs/support/lib/python3.6/site-packages/deeppavlov/core/commands/train.py", line 225, in train_evaluate_model_from_config
    model = fit_chainer(config, iterator)
  File "/home/user/.virtualenvs/support/lib/python3.6/site-packages/deeppavlov/core/commands/train.py", line 100, in fit_chainer
    component.save()
  File "/home/user/.virtualenvs/support/lib/python3.6/site-packages/deeppavlov/models/sklearn/sklearn_component.py", line 241, in save
    pickle.dump(self.model, f)
OverflowError: cannot serialize a bytes object larger than 4 GiB
```

I suggest that **pickle.dump** should use pickle protocol version **4**. Please see the link below:
https://stackoverflow.com/questions/29704139/pickle-in-python3-doesnt-work-for-large-data-saving
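For reference, a minimal sketch of the suggested change (pickle protocol 4 is available since Python 3.4 and supports objects larger than 4 GiB; `model` below is just a placeholder standing in for `self.model`):

```python
import pickle

model = {"weights": b"placeholder standing in for self.model"}

# Protocol 4 can serialize bytes objects larger than 4 GiB.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f, protocol=4)
```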
closed
2019-02-20T12:32:15Z
2019-03-18T08:41:39Z
https://github.com/deeppavlov/DeepPavlov/issues/719
[]
ismaslov
2
ipython/ipython
jupyter
14,273
Fix prompt backward compat.
see #14223
closed
2023-12-27T08:56:13Z
2024-01-08T09:34:56Z
https://github.com/ipython/ipython/issues/14273
[]
Carreau
0
albumentations-team/albumentations
deep-learning
2,299
[Speed up] PlasmaShadow
Benchmarks show that `kornia` has a faster `PlasmaShadow` implementation => we need to learn from it and fix ours.
open
2025-01-24T16:01:01Z
2025-01-24T16:03:37Z
https://github.com/albumentations-team/albumentations/issues/2299
[ "Speed Improvements" ]
ternaus
0
neuml/txtai
nlp
768
Notebook 42 error
![CleanShot 2024-08-22 at 00 16 57@2x](https://github.com/user-attachments/assets/c44a95d5-c4ef-4555-b1d6-e7e5ab9a16b6)
closed
2024-08-21T22:17:21Z
2024-08-22T08:59:48Z
https://github.com/neuml/txtai/issues/768
[ "bug" ]
heijligers
2
davidsandberg/facenet
tensorflow
739
Transfer learning on custom dataset (Asian faces)?
This is a follow-up to this [thread](https://github.com/davidsandberg/facenet/issues/591).

I'm not talking about the custom classifier from @davidsandberg (i.e. training a classifier on known classes) - the problem I'm referring to is mainly computing L2 distances on Asian faces. I tried all the models from the past (20170131-005910, 20170131-234652, 20180408-102900, and 20180402-114759)... and the L2 distances for the same face seem to be too high, and vice versa (different faces have L2 distances that are too low).

https://s7.postimg.cc/yi28ve5rv/image.png

1) Are there notes / repos / tutorials specifically on transfer learning for a custom dataset (particularly Asian faces)?
2) Is anyone working on this? I'm looking for collaborators and would gladly open-source this work with some guidance & help. (Ideally Keras-based, but I guess TF would work too.)
3) What is the dataset supposed to look like (i.e. specific angles, labels, etc.)? I would appreciate it if someone could point me to a sample dataset.
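For context, this is how I'm computing the distances (a standard L2 norm between embeddings; the 1.1 cutoff below is just an assumption on my side, not a recommended value):

```python
import numpy as np

def l2_distance(emb1: np.ndarray, emb2: np.ndarray) -> float:
    """Euclidean distance between two face embeddings."""
    return float(np.linalg.norm(emb1 - emb2))

# Placeholder embeddings; real ones come from the facenet model.
a, b = np.random.rand(512), np.random.rand(512)
print(l2_distance(a, b) < 1.1)  # True would mean "same person" under my cutoff
```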
open
2018-05-09T09:19:19Z
2024-10-19T12:37:42Z
https://github.com/davidsandberg/facenet/issues/739
[]
taewookim
22
KaiyangZhou/deep-person-reid
computer-vision
500
Load ONNX model
I managed to export some models from the model zoo into ONNX format. However, I have difficulties getting them to work with torchreid. In `torchtools.py`, instead of `torch.load()`, I added `checkpoint = onnx.load(fpath)`. This resulted in the following error:

```
File "yolov5_deepsort\reid models\deep-person-reid\torchreid\utils\torchtools.py", line 280, in load_pretrained_weights
    if 'state_dict' in checkpoint:
TypeError: argument of type 'ModelProto' is not iterable
```

Any advice?
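For context, my current understanding (an assumption, not verified against torchreid internals) is that an ONNX graph cannot be loaded back as a PyTorch `state_dict`, so inference would have to go through a runtime such as `onnxruntime`, roughly like this:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("osnet.onnx")  # placeholder model path

# NCHW float32 input; the shape must match what the model was exported with.
dummy = np.random.rand(1, 3, 256, 128).astype(np.float32)
input_name = session.get_inputs()[0].name
features = session.run(None, {input_name: dummy})[0]
print(features.shape)
```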
open
2022-04-06T02:15:19Z
2022-08-11T12:46:08Z
https://github.com/KaiyangZhou/deep-person-reid/issues/500
[]
HeChengHui
9
ultralytics/yolov5
pytorch
12,448
How benchmark.py is actually works?
### Search before asking

- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.

### Question

I am curious to know how the `benchmark.py` script actually works. I have used the script many times to get insight into the inference speed of my custom-trained models. Is the benchmarking done by batch inference over the validation image set? And what does the reported inference time tell you - is it the overall processing time or the per-image time? Can you explain how the process works? I need to know. Thanks in advance!

### Additional

_No response_
closed
2023-11-30T03:18:18Z
2024-01-11T00:21:23Z
https://github.com/ultralytics/yolov5/issues/12448
[ "question", "Stale" ]
itsjustrafli
2
microsoft/nni
pytorch
5,659
How to start a remote experiment without sftp?
open
2023-08-09T06:43:21Z
2023-08-11T07:00:10Z
https://github.com/microsoft/nni/issues/5659
[]
liuanhua110
0
voila-dashboards/voila
jupyter
1,449
Remove usage of `ADMIN_GITHUB_TOKEN` for releasing with the Jupyter Releaser
The Jupyter Releaser updated its example workflows to use a GitHub App instead of relying on the `ADMIN_GITHUB_TOKEN`: https://github.com/jupyter-server/jupyter_releaser/blob/89be0fbdc13f47011c95a870218c18d94c4f193b/example-workflows/publish-release.yml#L24

For reference, this is already in use in some other Jupyter projects, for example Jupyter Server: https://github.com/jupyter-server/jupyter_server/blob/69361eeb2702572faca6e6cc8ac1003e393eb5aa/.github/workflows/publish-release.yml#L24

We could start doing the same for the repos under the `voila-dashboards` organization, following the guide: https://jupyter-releaser.readthedocs.io/en/latest/how_to_guides/convert_repo_from_repo.html#checklist-for-adoption
closed
2024-03-11T14:36:47Z
2024-03-27T08:56:12Z
https://github.com/voila-dashboards/voila/issues/1449
[ "maintenance" ]
jtpio
4
globaleaks/globaleaks-whistleblowing-software
sqlalchemy
3,607
Automatic certificate configuration - Let's Encrypt
### What version of GlobaLeaks are you using?

4.12.8

### What browser(s) are you seeing the problem on?

Chrome, Firefox

### What operating system(s) are you seeing the problem on?

Linux

### Describe the issue

We tried to start the automatic configuration to generate the HTTPS certificate using the Let's Encrypt certification authority. The configuration remains stuck because the following error (found in the logs) is raised:

```
Platform: Host: whistleblowing.xxxxx.cloud (a5mdcz3l2smxumbwqyu3kblnuk3k5isikd2uxnu2xluirtl3j7ngmyyd.onion)
Version: 4.12.8

AttributeError
Attribute not found.

Traceback (most recent call last):
  File "/root/GlobaLeaks/backend/env/lib/python3.10/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "/root/GlobaLeaks/backend/globaleaks/rest/api.py", line 425, in concludeHandlerFailure
    self.handle_exception(err, request)
  File "/root/GlobaLeaks/backend/globaleaks/rest/api.py", line 252, in handle_exception
    e.tid = request.tid
  File "/root/GlobaLeaks/backend/env/lib/python3.10/site-packages/josepy/util.py", line 191, in __setattr__
    raise AttributeError("can't set attribute")
AttributeError: can't set attribute
```

Could you give us some information about this? Thanks.

### Proposed solution

_No response_
closed
2023-09-06T13:21:51Z
2023-09-06T13:24:38Z
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3607
[]
LorenzoPetraroli
0
benbusby/whoogle-search
flask
245
[BUG] <whoogle not Opening in Tor Browser>
**Describe the bug**
I'm trying to open the Whoogle search engine in Chrome, where it works fine, but when I try to open it in the Tor Browser it shows "Unable to connect".

**To Reproduce**
Steps to reproduce the behaviour:
1. Open the Tor Browser
2. Enter the URL, then press Enter
![Tor_Browser](https://user-images.githubusercontent.com/55937175/113004432-39be7280-9191-11eb-9324-a31f30839965.png)
3. But it works in the Chrome browser
![chrom_browser](https://user-images.githubusercontent.com/55937175/113005068-c10be600-9191-11eb-933d-49fabda302ed.png)

**Deployment Method**
- [ ] Heroku (one-click deploy)
- [ ] Docker
- [x] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]

**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure

**Desktop (please complete the following information):**
- OS: Ubuntu 20.04
- Browser: Tor
- Version: 10.0.15
closed
2021-03-30T14:27:36Z
2021-04-03T03:45:14Z
https://github.com/benbusby/whoogle-search/issues/245
[ "bug" ]
KK-Repos
1
aleju/imgaug
deep-learning
138
Scale short/long side
Please add an option to resize the shorter / longer side to a specific size with `keep-aspect-ratio`.

```py
# example
iaa.Scale({"short": 500, "long": "keep-aspect-ratio"})
```
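Until something like this exists, my workaround is to compute the scale factor manually outside imgaug (a sketch with plain OpenCV):

```python
import cv2

def resize_short_side(image, target=500):
    """Resize so the shorter side equals `target`, keeping the aspect ratio."""
    h, w = image.shape[:2]
    scale = target / min(h, w)
    return cv2.resize(image, (round(w * scale), round(h * scale)))
```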
open
2018-06-09T07:09:51Z
2019-03-01T10:34:14Z
https://github.com/aleju/imgaug/issues/138
[]
elbaro
2
quantumlib/Cirq
api
7,007
Wrapped qasm gate definition throws `QasmException`
**Description of the issue**

Passing an OpenQASM 2.0 string like:

```
OPENQASM 2.0;
include "qelib1.inc";
gate gate_PauliEvolution(param0) q0,q1,q2,q3 {
    rzz(8.227302641391761) q2,q3;
    rzz(8.227302641391761) q0,q1;
    rzz(8.227302641391761) q0,q3;
}
gate gate_PauliEvolution_140345447972352(param0) q0,q1,q2,q3 {
    rx(4.508626411417277) q3;
    rx(4.508626411417277) q2;
    rx(4.508626411417277) q1;
    rx(4.508626411417277) q0;
}
gate gate_PauliEvolution_140344494262464(param0) q0,q1,q2,q3 {
    rzz(2.935954138718162) q2,q3;
    rzz(2.935954138718162) q0,q1;
    rzz(2.935954138718162) q0,q3;
}
gate gate_PauliEvolution_140344494417552(param0) q0,q1,q2,q3 {
    rx(1.865563952607601) q3;
    rx(1.865563952607601) q2;
    rx(1.865563952607601) q1;
    rx(1.865563952607601) q0;
}
gate gate_QAOA(param0,param1,param2,param3) q0,q1,q2,q3 {
    h q0;
    h q1;
    h q2;
    h q3;
    gate_PauliEvolution(4.1136513206958805) q0,q1,q2,q3;
    gate_PauliEvolution_140345447972352(2.2543132057086384) q0,q1,q2,q3;
    gate_PauliEvolution_140344494262464(1.467977069359081) q0,q1,q2,q3;
    gate_PauliEvolution_140344494417552(0.9327819763038006) q0,q1,q2,q3;
}
qreg q[4];
gate_QAOA(4.1136513206958805,2.2543132057086384,1.467977069359081,0.9327819763038006) q[0],q[1],q[2],q[3];
```

to `circuit_from_qasm()` throws:

```
QasmException: Syntax error: 'gate'
...gate gate_PauliEvolution(param0) q0,q1,q2,q3 {
rzz(8.227302641391761) q2,q3;
rzz(8.227302641391761) q0,q1;
rzz(8.227302641391761) q0,q3;
}
gate gate_PauliEvolution_140343572895824(param0) q0,q1,q2,q3 {
rx(4.508626411417277) q3;
rx(4.508626411417277) q2;
rx(4.508626411417277) q1;
rx(4.508626411417277) q0;
}
gate gate_PauliEvolution_140343573297872(param0) q0,q1,q2,q3 {
rzz(2.935954138718162) q2,q3;
rzz(2.935954138718162) q0,q1;
rzz(2.935954138718162) q0,q3;
}
gate gate_PauliEvolution_140343555602864(param0) q0,q1,q2,q3 {
rx(1.865563952607601) q3;
rx(1.865563952607601) q2;
rx(1.865563952607601) q1;
rx(1.865563952607601) q0;
}
gate gate_QAOA(param0,param1,param2,param3) q0,q1,q2,q3 {
h q0;
h q1;
h q2;
h q3;
gate_PauliEvolution(4.1136513206958805) q0,q1,q2,q3;
gate_PauliEvolution_140343572895824(2.2543132057086384) q0,q1,q2,q3;
gate_PauliEvolution_140343573297872(1.467977069359081) q0,q1,q2,q3;
gate_PauliEvolution_140343555602864(0.9327819763038006) q0,q1,q2,q3;
}
qreg q[4];
gate_QAOA(4.1136513206958805,2.2543132057086384,1.467977069359081,0.9327819763038006) q[0],q[1],q[2],q[3]
^ at line 3, column 6
```

**How to reproduce the issue**

This can be reproduced via the following code snippet:

```python
from cirq.contrib.qasm_import import circuit_from_qasm
import numpy as np
import qiskit
from qiskit import qasm2
from qiskit.circuit.library import QAOAAnsatz
from qiskit.quantum_info import SparsePauliOp

depth = 2
cost_operator = SparsePauliOp(["ZZII", "IIZZ", "ZIIZ"])
qaoa_circuit = QAOAAnsatz(cost_operator, reps=depth)

main_circuit = qiskit.QuantumCircuit(4)
main_circuit.compose(qaoa_circuit, inplace=True)

beta_val = np.pi * np.random.rand(depth)
gamma_val = 2 * np.pi * np.random.rand(depth)
initial_parameters = list(beta_val) + list(gamma_val)
main_circuit = main_circuit.assign_parameters(initial_parameters)

circuit_from_qasm(qasm2.dumps(main_circuit))
```

**Cirq version**

```
cirq-core==1.5.0.dev20250128015450
```
closed
2025-01-30T23:13:39Z
2025-02-19T00:17:59Z
https://github.com/quantumlib/Cirq/issues/7007
[ "kind/bug-report" ]
bharat-thotakura
4
strawberry-graphql/strawberry
django
2,813
Exception handling with status codes - graphql-strawberry always returns status code 200 OK
## Feature Request Type

- [ ] Core functionality
- [ ] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior

## Description

I am new to GraphQL - how do I handle status codes with graphql-strawberry? It always returns status code 200 OK, even for failures.

```python
def default_resolver(root, field):
    """resolver"""
    try:
        return operator.getitem(root, field)
    except KeyError:
        return getattr(root, field)

config = StrawberryConfig(default_resolver=default_resolver)

schema = strawberry.Schema(query=Query, config=config)
graphql_app = GraphQLRouter(schema, graphiql=env != 'production')
```
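For anyone else landing here: as far as I understand (please correct me if wrong), GraphQL deliberately returns HTTP 200 and reports failures in the `errors` array of the response body, so a failing resolver looks roughly like this rather than changing the status code:

```python
import strawberry

@strawberry.type
class Query:
    @strawberry.field
    def item(self) -> str:
        # The message ends up in the "errors" array of the JSON response,
        # while the HTTP status code stays 200.
        raise Exception("item not found")

schema = strawberry.Schema(query=Query)
result = schema.execute_sync("{ item }")
print(result.errors)  # -> [GraphQLError('item not found', ...)]
```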
closed
2023-06-06T08:08:09Z
2025-03-20T15:56:12Z
https://github.com/strawberry-graphql/strawberry/issues/2813
[]
itsckguru
1
paperless-ngx/paperless-ngx
django
8,051
[BUG] Some issues with custom field queries
### Description

I've been testing the great new feature of custom field filtering and stumbled on a few issues.

When creating a complex filter, it is not possible to delete some parts of it, even if they are still empty. This can be reproduced by creating the following outline:

![grafik](https://github.com/user-attachments/assets/d028e596-14fe-4db1-9544-98305a1e7d8f)

It is then not possible to delete the last added area with the red cross delete button (while it is possible to delete the one above it). Even when filling it with real queries, this issue persists. So adding another "{ }" block by mistake is an issue, as it cannot be deleted anymore without deleting other, already configured parts. This also happens in more complex scenarios, e.g. with more sub-filters:

![grafik](https://github.com/user-attachments/assets/08f1bf99-f81a-42f0-9e4e-30e4405a9663)

In this scenario, the lower two sections (of a total of 5 sections) cannot be deleted. Only the first part can be removed, and afterwards other areas can also be deleted. It seems that only the uppermost two areas can be deleted; everything below cannot be deleted.

The other issue I found might not be a bug, but it seems illogical. It is about the "Not" selection, which is only available for whole subquery blocks, but not for single queries. So for a single query:

![grafik](https://github.com/user-attachments/assets/3fcf142d-9367-49c3-b9f3-30e0ecd0ff1e)

it is not possible to use "Not", but when adding a "{ }" block:

![grafik](https://github.com/user-attachments/assets/7cc8f74f-a695-44df-9481-6b9c9c8f3e7c)

it is possible. As there are "Is null" and "Has value", it would sometimes be nice to use "Has not value" rather than using both "Is null" and "Has value". By adding another subquery it is possible, but a little more complicated than seems necessary.

The last issue is about the "exists" query, which does not seem to work as expected:

![grafik](https://github.com/user-attachments/assets/1ee98d74-e091-4040-9cec-a2cb7c5de3e8)

This results in many hits of the form:

![grafik](https://github.com/user-attachments/assets/f0499b9d-b0cf-4125-9e8d-b155ddfbc97a)

So obviously, the fields do exist for the documents. Some documents without the fields are also shown, but it does not seem to be consistent at all. With "true", only documents with the fields seem to appear, but with "false" lots of documents that have the field also appear in the list...

### Steps to reproduce

See above for examples.

### Webserver logs

```bash
No logs seem helpful here. If they do, please tell me!
```

### Browser logs

_No response_

### Paperless-ngx version

2.13.0

### Host OS

Ubuntu 24.04

### Installation method

Docker - official image

### System status

_No response_

### Browser

Mozilla Firefox 131.0.3

### Configuration changes

_No response_

### Please confirm the following

- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
closed
2024-10-27T11:13:09Z
2024-12-03T03:18:29Z
https://github.com/paperless-ngx/paperless-ngx/issues/8051
[ "bug", "frontend" ]
B5JohnSheridan
9
ageitgey/face_recognition
python
1,119
C++?
Hi, and thank you for making this code available. I would like to convert it to C++, as I know dlib is a C++ library. Specifically this function:

```
import face_recognition

known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")

biden_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
```

Have you written custom approaches for the face re-identification, or is it all dlib functionality? Thanks!
closed
2020-04-22T14:19:35Z
2020-04-22T14:31:32Z
https://github.com/ageitgey/face_recognition/issues/1119
[]
antithing
1
junyanz/pytorch-CycleGAN-and-pix2pix
computer-vision
927
Still get this error--module 'torch._C' has no attribute '_cuda_setDevice'
I tried to reproduce the code from [https://github.com/samet-akcay/ganomaly](url) and ran the commands in Git Bash, but I hit the following problem and it seems difficult for me to fix it by myself:

![image](https://user-images.githubusercontent.com/39890732/75091679-2367df80-55ab-11ea-8097-5617abd1910a.png)

**The main error is "AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'".**

1. I tried to fix this by referring to #360 and #67 - that is, I changed the code **torch.cuda.set_device(self.opt.gpu_ids[0])** to **torch.cuda.set_device(self.opt.gpu_ids[-1])** and **torch._C._cuda_setDevice(device)** to **torch._C._cuda_setDevice(-1)**, but it still does not work.
2. I tried to reinstall PyTorch and update it to the newest version (1.4.0); the error still occurs.
3. I read the PyTorch Q&A and there may be a problem with my CUDA setup. I tried adding **--gpu_ids -1** to my command (that is, `sh experiments/run_mnist.sh --gpu_ids -1`, see the following picture), but the error still occurs.

![image](https://user-images.githubusercontent.com/39890732/75091866-f9afb800-55ac-11ea-90bc-7f45dced0c60.png)

I have been stuck on this problem for a few days and I hope someone can help me. Thanks a lot!
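For anyone debugging the same thing: as far as I can tell, this error usually means the installed torch build is CPU-only, so `gpu_ids` can't be honored. A quick generic check (nothing repo-specific):

```python
import torch

print(torch.__version__)          # e.g. "1.4.0" or "1.4.0+cpu"
print(torch.cuda.is_available())  # False means a CPU-only build or a driver issue
# If this prints False, either install a CUDA-enabled torch build
# or run the scripts with --gpu_ids -1 to stay on the CPU.
```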
closed
2020-02-22T11:53:32Z
2020-02-25T09:24:44Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/927
[]
Minqi824
1
opengeos/leafmap
plotly
32
Export map to html
Add a one-liner function for exporting the map to HTML.
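Until a one-liner exists, a workaround sketch via ipywidgets embedding (assuming the leafmap `Map` is an ipyleaflet widget underneath):

```python
import leafmap
from ipywidgets.embed import embed_minimal_html

m = leafmap.Map(center=(40, -100), zoom=4)
# Serializes the widget state into a standalone HTML page.
embed_minimal_html("map.html", views=[m], title="leafmap export")
```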
closed
2021-06-12T00:31:25Z
2021-06-12T01:42:35Z
https://github.com/opengeos/leafmap/issues/32
[ "Feature Request" ]
giswqs
1
3b1b/manim
python
1,424
name 'Scene' is not defined
```
$ manim example_scenes.py
Traceback (most recent call last):
  File "/home/yyslush/.local/bin/manim", line 8, in <module>
    sys.exit(main())
  File "/home/yyslush/.local/lib/python3.8/site-packages/manimlib/__init__.py", line 9, in main
    config = manimlib.config.get_configuration(args)
  File "/home/yyslush/.local/lib/python3.8/site-packages/manimlib/config.py", line 155, in get_configuration
    module = get_module(args.file)
  File "/home/yyslush/.local/lib/python3.8/site-packages/manimlib/config.py", line 150, in get_module
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "example_scenes.py", line 12, in <module>
    class OpeningManimExample(Scene):
NameError: name 'Scene' is not defined
```

Python 3.8
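For what it's worth, the error suggests `example_scenes.py` is missing (or no longer matches) the wildcard import that brings `Scene` into scope; a minimal file that defines it would start like this (which import line is correct depends on the installed version - this is an assumption, not a confirmed fix):

```python
from manimlib import *  # newer layout; older releases used: from manimlib.imports import *

class OpeningManimExample(Scene):
    def construct(self):
        self.wait()
```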
closed
2021-03-02T08:13:08Z
2021-06-12T10:24:54Z
https://github.com/3b1b/manim/issues/1424
[]
yyslush
2
slackapi/bolt-python
fastapi
1,252
Action listener payload typing support
I'm currently working with Block Kit interactive components in Bolt, and have an action listener within which I'd like to make use of `body` in a type-safe manner. As detailed in [the docs for `slack_bolt.kwargs_injection.args`](https://tools.slack.dev/bolt-python/api-docs/slack_bolt/kwargs_injection/args.html), both `payload` and `body` have a type of `Dict[str, Any]`. I'm making use of Pyright static type checking throughout my project, but this unfortunately feels like a bit of a blind spot due to the use of `Any`.

Is the intent that SDK users manually narrow the type themselves by referring to the documentation/samples within ex. [block action payloads](https://api.slack.com/reference/interaction-payloads/block-actions) and checking fields/values as needed, or is there formal support for Python types that cover each payload type variation that can be passed to the action listener?

I noticed [`slack_bolt.request.payload_utils`](https://tools.slack.dev/bolt-python/api-docs/slack_bolt/request/payload_utils.html#slack_bolt.request.payload_utils.to_action) provides some helper functions for inspecting payloads at runtime, but ideally there are also types available to complement these type guards for static type checking tools like Pyright. Are types for listener payloads currently provided within the Slack Bolt Python SDK?

### Reproducible in:

#### The `slack_bolt` version

`1.21.2`

#### Python runtime version

`Python 3.12.9`

#### OS info

N/A (OS-independent)

#### Steps to reproduce:

A simple example based on a sample in the [Bolt docs](https://tools.slack.dev/bolt-python/concepts/actions/#listening-to-actions-using-a-constraint-object), with added typing:

```python
from slack_bolt import Ack, App
from slack_sdk import WebClient

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.action("my_action")
def handle_action(ack: Ack, body: dict[str, Any], client: WebClient):
    ack()
    # this approach gives us runtime guardrails, but no help when running static type checkers
    if "container" in body and "message_ts" in body["container"]:
        client.reactions_add(
            name="white_check_mark",
            channel=body["channel"]["id"],
            timestamp=body["container"]["message_ts"],
        )
```

### Expected result:

Body/payload can be narrowed to some type other than `dict[str, Any]` that has knowledge of what keys and values should exist in the dictionary.

### Actual result:

There is seemingly no way (that I can find in the docs) to narrow the type of `body`, meaning fields are treated as raw strings and values are treated as `Any`.

## Requirements

> Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules. ✅
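In the meantime, the narrowing I've been doing by hand looks roughly like this (a sketch - these `TypedDict`s are my own, not shipped by `slack_bolt`):

```python
from typing import Any, TypedDict, cast

class Container(TypedDict, total=False):
    message_ts: str

class Channel(TypedDict):
    id: str

class BlockActionsBody(TypedDict, total=False):
    container: Container
    channel: Channel

def handle(body: dict[str, Any]) -> None:
    narrowed = cast(BlockActionsBody, body)  # hand-rolled, unverified narrowing
    container = narrowed.get("container", {})
    if "message_ts" in container:
        print(narrowed["channel"]["id"], container["message_ts"])
```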
closed
2025-02-21T17:12:02Z
2025-02-21T20:44:00Z
https://github.com/slackapi/bolt-python/issues/1252
[ "question" ]
jstol
2
newpanjing/simpleui
django
2
Character encoding error when installing on Windows 10
My environment: Windows 10, Python 3.6.4, Django 1.10.6. The detailed error output is as follows:

```
$ pip install django-simpleui==1.4.3
Collecting django-simpleui==1.4.3
  Using cached https://files.pythonhosted.org/packages/a1/64/9a031a3573e25ba51e3069ddd08f53b4b54846f952e18e6e09f5a03b49e7/django-simpleui-1.4.3.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "D:\Users\wt\AppData\Local\Temp\pip-install-tmq8k1db\django-simpleui\setup.py", line 4, in <module>
        open('README.md', 'r').read(),
    UnicodeDecodeError: 'gbk' codec can't decode byte 0xa3 in position 112: illegal multibyte sequence
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in D:\Users\wt\AppData\Local\Temp\pip-install-tmq8k1db\django-simpleui\
```
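The root cause seems to be `setup.py` opening `README.md` with the platform's default encoding (GBK on a Chinese-locale Windows); a sketch of the fix in `setup.py`:

```python
# setup.py - pass the encoding explicitly instead of relying on the locale default
long_description = open('README.md', 'r', encoding='utf-8').read()
```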
closed
2018-12-13T13:54:37Z
2018-12-14T00:37:45Z
https://github.com/newpanjing/simpleui/issues/2
[]
wthahaha
4
samuelcolvin/watchfiles
asyncio
255
.pyd ignored pattern is added back by PythonFilter.extensions
### Description

Not sure if this is an oversight or by design:

```
...
ignore_entity_patterns: Sequence[str] = (
    r'\.py[cod]$',
...
self.extensions = ('.py', '.pyx', '.pyd') + tuple(extra_extensions)
```

### Example Code

_No response_

### Watchfiles Output

```Text
.
```

### Operating System & Architecture

.

### Environment

.

### Python & Watchfiles Version

.

### Rust & Cargo Version

.
closed
2023-12-11T14:49:27Z
2024-08-07T11:27:16Z
https://github.com/samuelcolvin/watchfiles/issues/255
[ "bug" ]
minusf
1
Lightning-AI/pytorch-lightning
deep-learning
20,153
Confusing recommendation to use sync_dist=True even with TorchMetrics
### Bug description

Hello! When I train and validate a model in a multi-GPU setting (HPC, sbatch job that requests multiple GPUs on a single node), I use `self.log(..., sync_dist=True)` when logging PyTorch losses, and don't specify any value for `sync_dist` when logging metrics from the TorchMetrics library. However, I still get warnings like:

```
... It is recommended to use `self.log('val_mean_recall', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
... It is recommended to use `self.log('val_bg_recall', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
```

These specific messages correspond to logging `tmc.MulticlassRecall(len(self.task.class_names), average="macro", ignore_index=self.metric_ignore_index)` and individual components of `tmc.MulticlassRecall(len(self.task.class_names), average="none", ignore_index=self.metric_ignore_index)`. A full code listing for the metric object definitions and logging is provided in the "How to reproduce the bug" section.

As I understand from a note [here](https://lightning.ai/docs/torchmetrics/stable/pages/lightning.html#logging-torchmetrics), and from the discussion [here](https://github.com/Lightning-AI/pytorch-lightning/discussions/6501#discussioncomment-553152), one doesn't typically need to explicitly use `sync_dist` when using TorchMetrics. I wonder if I still need to enable `sync_dist=True` as advised in the warnings due to some special case that I am not aware of, or should I follow the docs and keep it as is? In any case, this is probably a bug, either in the documentation or in the warning code. Thank you!

### What version are you seeing the problem on?

2.3.0

### How to reproduce the bug

```python
self.val_metric_funs = tm.MetricCollection(
    {
        "cm_normalize_all": tmc.MulticlassConfusionMatrix(
            len(self.task.class_names),
            ignore_index=self.metric_ignore_index,
            normalize="all",
        ),
        "recall_average_macro": tmc.MulticlassRecall(
            len(self.task.class_names),
            average="macro",
            ignore_index=self.metric_ignore_index,
        ),
        "recall_average_none": tmc.MulticlassRecall(
            len(self.task.class_names),
            average="none",
            ignore_index=self.metric_ignore_index,
        ),
        "precision_average_macro": tmc.MulticlassPrecision(
            len(self.task.class_names),
            average="macro",
            ignore_index=self.metric_ignore_index,
        ),
        "precision_average_none": tmc.MulticlassPrecision(
            len(self.task.class_names),
            average="none",
            ignore_index=self.metric_ignore_index,
        ),
        "f1_average_macro": tmc.MulticlassF1Score(
            len(self.task.class_names),
            average="macro",
            ignore_index=self.metric_ignore_index,
        ),
        "f1_average_none": tmc.MulticlassF1Score(
            len(self.task.class_names),
            average="none",
            ignore_index=self.metric_ignore_index,
        ),
    }
)
```

```python
if not sanity_check:
    for metric_name, metric in metrics.items():
        metric_fun = self.val_metric_funs[metric_name]
        metric_name_ = metric_name.split("_")[0]
        if isinstance(metric_fun, tmc.MulticlassConfusionMatrix):
            for true_class_num in range(metric.shape[0]):
                true_class_name = self.task.class_names[true_class_num]
                for pred_class_num in range(metric.shape[1]):
                    pred_class_name = self.task.class_names[pred_class_num]
                    self.log(
                        f"val_true_{true_class_name}_pred_{pred_class_name}_cm",
                        metric[true_class_num, pred_class_num].item(),
                        on_step=False,
                        on_epoch=True,
                        logger=True,
                    )
        elif isinstance(
            metric_fun,
            (
                tmc.MulticlassRecall,
                tmc.MulticlassPrecision,
                tmc.MulticlassF1Score,
            ),
        ):
            if metric_fun.average == "macro":
                self.log(
                    f"val_mean_{metric_name_}",
                    metric.item(),
                    on_step=False,
                    on_epoch=True,
                    logger=True,
                )
            elif metric_fun.average == "none":
                for class_num, metric_ in enumerate(metric):
                    class_name = self.task.class_names[class_num]
                    self.log(
                        f"val_{class_name}_{metric_name_}",
                        metric_.item(),
                        on_step=False,
                        on_epoch=True,
                        logger=True,
                    )
            else:
                raise NotImplementedError(
                    f"Code for logging metric {metric_name} is not implemented"
                )
        else:
            raise NotImplementedError(
                f"Code for logging metric {metric_name} is not implemented"
            )
```

### Error messages and logs

```
... It is recommended to use `self.log('val_mean_recall', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
... It is recommended to use `self.log('val_bg_recall', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
```

### Environment

<details>
<summary>Current environment</summary>

```
#- PyTorch Lightning Version: 2.3.0
#- PyTorch Version: 2.3.1
#- Python version: 3.11.9
#- OS: Linux
#- CUDA/cuDNN version: 11.8
#- How you installed Lightning: conda-forge
```

</details>

### More info

_No response_

cc @carmocca
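For comparison, the pattern I believe the docs intend - logging the metric *object* so Lightning/TorchMetrics handle the cross-device accumulation, which should make `sync_dist` unnecessary - is roughly (a sketch, not my full module):

```python
import torchmetrics.classification as tmc
from lightning.pytorch import LightningModule  # or: import pytorch_lightning

class LitModel(LightningModule):
    def __init__(self, num_classes: int):
        super().__init__()
        self.val_recall = tmc.MulticlassRecall(num_classes, average="macro")

    def validation_step(self, batch, batch_idx):
        preds, target = batch  # placeholder: real code would run the model here
        self.val_recall.update(preds, target)
        # Passing the Metric object itself lets TorchMetrics do the reduction,
        # so sync_dist is not needed for this call.
        self.log("val_mean_recall", self.val_recall, on_step=False, on_epoch=True)
```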
open
2024-08-02T09:33:59Z
2024-11-13T16:55:30Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20153
[ "bug", "help wanted", "logging", "ver: 2.2.x" ]
srprca
9
blacklanternsecurity/bbot
automation
1,639
A consistent method to identify service-record-type names is needed
**Description**

A consistent way in which to test a host name as being an SRV or related service record is needed. These types of names do (should) not have A/AAAA/CNAME or similar records, and are simply used to advertise configuration and/or policy information for different Internet-facing services.

We need a consistent way in which to perform this test, rather than having duplicated code in multiple places in different modules. The function should simply return a boolean so that modules can quickly test whether a host name will be relevant and worth inspecting or using in context for what the module does; a sketch follows below.

NOTE: While underscores are technically not supposed to exist in DNS names as per the RFCs, they can be used, so we can't assume that a discovered name that contains or starts with an underscore is a service record, and should check for specific strings instead.
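A sketch of what I have in mind (the prefix list is illustrative, not exhaustive, and would live in one shared helper module):

```python
# Illustrative labels only; the real set needs curating.
SERVICE_RECORD_LABELS = {
    "_dmarc", "_domainkey", "_mta-sts", "_smtp._tls",
    "_sip._tcp", "_sip._udp", "_xmpp-client._tcp",
}

def is_service_record(host: str) -> bool:
    """Return True if `host` begins with a known service-record label."""
    host = host.lower().rstrip(".")
    return any(
        host == label or host.startswith(label + ".")
        for label in SERVICE_RECORD_LABELS
    )
```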
closed
2024-08-07T16:43:58Z
2024-08-29T21:35:03Z
https://github.com/blacklanternsecurity/bbot/issues/1639
[ "enhancement" ]
colin-stubbs
0
ydataai/ydata-profiling
jupyter
1,358
'config_explorative.yaml' NOT FOUND
### Current Behaviour

![image](https://github.com/ydataai/ydata-profiling/assets/84823680/68e35c96-aa6e-4a16-85a4-a223812beafb)

### Expected Behaviour

file should display

### Data Description

file should display

### Code that reproduces the bug

_No response_

### pandas-profiling version

NA

### Dependencies

```Text
NA
```

### OS

_No response_

### Checklist

- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
closed
2023-06-06T18:31:49Z
2023-06-17T02:25:28Z
https://github.com/ydataai/ydata-profiling/issues/1358
[ "question/discussion ❓" ]
SamuelDevdas
1
gee-community/geemap
streamlit
2,086
SecretNotFoundError when initializing geemap.Map() despite proper ee initialization
### Environment Information

Please run the following code on your computer and share the output with us so that we can better debug your issue:

```python
import geemap
geemap.Report()
```

![image](https://github.com/user-attachments/assets/416f6e2e-9097-451b-9861-5e59bad86f64)

### Description

Recently, when running `map = geemap.Map()`, I encounter a `SecretNotFoundError` despite having properly initialized Earth Engine. This error occurs even after successfully running the authentication and initialization steps for Earth Engine.

## Steps to Reproduce

```python
import ee
import geemap

ee.Authenticate()
ee.Initialize(project='my-project')

map = geemap.Map()  # Raises error here
```

The `geemap.Map()` function should initialize without errors after Earth Engine authentication and initialization. This issue occurs despite running the authentication and initialization steps as per the documentation. This was working fine previously but started occurring recently.

### What I Did

```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```

![image](https://github.com/user-attachments/assets/a2634996-ad5c-4b19-a5ec-8f880a4f107f)
closed
2024-07-19T09:16:38Z
2024-07-19T11:48:48Z
https://github.com/gee-community/geemap/issues/2086
[ "bug" ]
CristinaMarsh
3
sgl-project/sglang
pytorch
4,143
[Bug] HiRadixCache.__init__() got an unexpected keyword argument 'token_to_kv_pool_allocator'
### Checklist

- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 5. Please use English, otherwise it will be closed.

### Describe the bug

```shell
docker run --gpus all --shm-size 32g -p 30000:30000 \
  -e HF_TOKEN=... \
  -v /teamspace/studios/this_studio/my-models:/model \
  --ipc=host --network=host --privileged $1 \
  python3 -m sglang.launch_server --model /model/deepseek-ai/DeepSeek-R1 \
    --tp 8 \
    --context-length 128000 \
    --trust-remote-code \
    --port 30000 \
    --speculative-algo NEXTN \
    --speculative-draft-model lmsys/DeepSeek-R1-NextN \
    --speculative-num-steps 2 \
    --speculative-eagle-topk 4 \
    --speculative-num-draft-tokens 4 \
    --enable-hierarchical-cache \
    --host 0.0.0.0
```

```
[2025-03-06 13:56:52 TP3] Capture draft cuda graph end. Time elapsed: 21.19 s. avail mem=18.07 GB
[2025-03-06 13:56:52 TP3] max_total_num_tokens=483506, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=32, context_len=128000
[2025-03-06 13:56:52 TP3] Scheduler hit an exception: Traceback (most recent call last):
  File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 2255, in run_scheduler_process
    scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
  File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 269, in __init__
    self.init_memory_pool_and_cache()
  File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 431, in init_memory_pool_and_cache
    self.tree_cache = HiRadixCache(
TypeError: HiRadixCache.__init__() got an unexpected keyword argument 'token_to_kv_pool_allocator'
```

### Reproduction

Shared above

### Environment

H200
open
2025-03-06T14:59:04Z
2025-03-11T14:10:23Z
https://github.com/sgl-project/sglang/issues/4143
[]
tchaton
4
CorentinJ/Real-Time-Voice-Cloning
pytorch
762
Error when running synthesizer_train.py > line 35, in <module>
So I have used this repo, plus my own additions based on padmalcom's branch, to build a setup for my German datasets. Anaconda environment with Python 3.7 and all the prerequisites installed. I have successfully trained it on the German books padmalcom uses too, but converted them to FLAC instead of WAV via batch processing in FlicFlac.

**I went first with**
```
python encoder_preprocess.py
python encoder_train.py GermanBookMale
python synthesizer_preprocess_audio.py
python synthesizer_preprocess_embeds.py
```
**successfully.**

Now I am failing at:
```
(GerVoice3.7) E:\DeepFakes\cleanRTVC>python synthesizer_train.py GermanBookMale
E:\DeepFakes\cleanRTVC\SV2TTS\synthesizer\
Arguments:
    run_id:        GermanBookMale
    syn_dir:       E:\DeepFakes\cleanRTVC\SV2TTS\synthesizer\
    models_dir:    synthesizer/saved_models/
    save_every:    1000
    backup_every:  25000
    force_restart: False
    hparams:

Checkpoint path: synthesizer\saved_models\GermanBookMale\GermanBookMale.pt    <- EXISTS, 117 MB
Loading training data from: E:\DeepFakes\cleanRTVC\SV2TTS\synthesizer\train.txt    <- EXISTS, 2.85 MB
Using model: Tacotron
Using device: cuda    <- 1080 Ti 11 GB
Initialising Tacotron Model...
Trainable Parameters: 30.874M
Loading weights at synthesizer\saved_models\GermanBookMale\GermanBookMale.pt
Tacotron weights loaded from step 0
Using inputs from:
E:\DeepFakes\cleanRTVC\SV2TTS\synthesizer\train.txt    <- EXISTS, 2.85 MB
E:\DeepFakes\cleanRTVC\SV2TTS\synthesizer\mels    <- EXISTS, 2.10 GB
E:\DeepFakes\cleanRTVC\SV2TTS\synthesizer\embeds    <- EXISTS, 54 MB
Found 13847 samples
+----------------+------------+---------------+------------------+
| Steps with r=2 | Batch Size | Learning Rate | Outputs/Step (r) |
+----------------+------------+---------------+------------------+
|   20k Steps    |     12     |     0.001     |        2         |
+----------------+------------+---------------+------------------+
Traceback (most recent call last):
  File "synthesizer_train.py", line 35, in <module>
    train(**vars(args))
  File "E:\DeepFakes\cleanRTVC\synthesizer\train.py", line 158, in train
    for i, (texts, mels, embeds, idx) in enumerate(data_loader, 1):
  File "C:\Users\User\anaconda3\envs\GerVoice3.7\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\User\anaconda3\envs\GerVoice3.7\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
    w.start()
  File "C:\Users\User\anaconda3\envs\GerVoice3.7\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\User\anaconda3\envs\GerVoice3.7\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\User\anaconda3\envs\GerVoice3.7\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\User\anaconda3\envs\GerVoice3.7\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\User\anaconda3\envs\GerVoice3.7\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'train.<locals>.<lambda>'

(GerVoice3.7) E:\DeepFakes\cleanRTVC>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\User\anaconda3\envs\GerVoice3.7\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\User\anaconda3\envs\GerVoice3.7\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
```

As additional info:
```
(GerVoice3.7) E:\DeepFakes\cleanRTVC>conda list torch
Name          Version    Build                     Channel
pytorch       1.2.0      py3.7_cuda100_cudnn7_1    pytorch
torchfile     0.1.0      pypi_0                    pypi
torchvision   0.4.0      py37_cu100                pytorch
```

Any hint of what could be going wrong here? I can't pinpoint the error.
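For what it's worth, `Can't pickle local object 'train.<locals>.<lambda>'` is the usual Windows spawn-mode limitation: DataLoader worker processes must pickle the `collate_fn`, and a lambda defined inside `train()` cannot be pickled. A hedged sketch of the standard workaround (function and parameter names here are hypothetical, not the repo's actual code):

```python
from functools import partial
from torch.utils.data import DataLoader

# Stand-in for the lambda defined inside train(): on Windows, DataLoader
# workers are spawned and must pickle the collate_fn, so it has to be a
# module-level function (a partial of one is fine), not a local lambda.
def collate_synthesizer(batch, r):
    ...  # pad and stack texts/mels/embeds here

def make_data_loader(dataset, r=2):
    return DataLoader(
        dataset,
        batch_size=12,
        num_workers=2,                                # or 0 to sidestep pickling entirely
        collate_fn=partial(collate_synthesizer, r=r), # picklable, unlike a lambda
    )
```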
closed
2021-05-23T23:28:15Z
2021-06-04T00:16:34Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/762
[]
Highpressure
1
oegedijk/explainerdashboard
dash
251
TypeError: __init__() got an unexpected keyword argument 'unbound_message'
Hi, I am somehow unable to import the SDK:
```python
from explainerdashboard import ClassifierExplainer, ExplainerDashboard
```
This throws an error. Is this still expected to work in a Jupyter notebook?
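One assumption worth checking: `unbound_message` is a keyword that newer Flask passes to `werkzeug.local.LocalProxy`, so this error usually points at a Flask/Werkzeug version mismatch rather than at explainerdashboard itself. A quick diagnostic sketch:

```python
# Print the installed trio before importing; if flask and werkzeug versions
# disagree, aligning them (e.g. `pip install -U flask werkzeug`) is the
# commonly reported workaround — treat this diagnosis as an assumption.
import flask
import werkzeug
import dash

print("flask", flask.__version__)
print("werkzeug", werkzeug.__version__)
print("dash", dash.__version__)

from explainerdashboard import ClassifierExplainer, ExplainerDashboard
```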
closed
2023-01-19T05:57:00Z
2023-07-06T14:50:14Z
https://github.com/oegedijk/explainerdashboard/issues/251
[]
Jpnewmenji
1
microsoft/nni
data-science
5,064
nni cannot use tensorboard
When I use NNI for hyperparameter search, I cannot use TensorBoard. When I select the experiment and click the TensorBoard button, a Python window quickly pops up (fig. 1), and then the TensorBoard page shows an error (fig. 2). For the log path in the trial source code I use:

```python
log_dir = os.path.join(os.environ["NNI_OUTPUT_DIR"], 'tensorboard')
```

Indeed, I can see a TensorBoard log file in each experiment directory, but I cannot open it via the NNI platform. However, when I open a terminal, start TensorBoard from there, and point it at the file in the trial output dir, TensorBoard works fine.

![Screenshot 2022-08-12 173918](https://user-images.githubusercontent.com/76143149/184328582-880460da-6cc0-4c09-a016-fec9f2a1f385.jpg)
![Screenshot 2022-08-12 174009](https://user-images.githubusercontent.com/76143149/184328607-5c011589-4cd0-447e-889b-d76b26e89099.jpg)

**Environment**:
- NNI version: v2.8
- Training service (local|remote|pai|aml|etc): local
- Client OS: Windows
- Server OS (for remote mode only):
- Python version: 3.9.12
- PyTorch/TensorFlow version: PyTorch
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no

**Configuration**:
- Experiment config: hyperparameter tuning
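For completeness, this is the trial-side logging pattern the path above implies — a minimal sketch using torch's `SummaryWriter` (the scalar tag and loop are placeholders):

```python
import os
from torch.utils.tensorboard import SummaryWriter

# Write event files into the per-trial directory that NNI's
# TensorBoard button is supposed to open.
log_dir = os.path.join(os.environ["NNI_OUTPUT_DIR"], "tensorboard")
writer = SummaryWriter(log_dir=log_dir)

for step in range(100):
    writer.add_scalar("train/loss", 1.0 / (step + 1), step)
writer.close()
```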
closed
2022-08-12T09:48:59Z
2023-05-12T02:34:38Z
https://github.com/microsoft/nni/issues/5064
[ "bug", "user raised", "support", "tensorboard" ]
cehw
2
ml-tooling/opyrator
fastapi
37
How to run ui and api on same port once?
Great framework, really user-friendly! I have a suggestion from using it. Currently the UI and API need to run on different ports:
```
opyrator launch-ui conversion:convert
opyrator launch-api conversion:convert
```
Is there a way to run them together,
```
opyrator launch conversion:convert
```
so that I can access `GET /` for the UI, `GET /docs` for the docs, and `POST /call` for the API?
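As a sketch of what "one port" could look like — generic FastAPI code, not opyrator's actual API; all route names, the port, and the `convert` placeholder are assumptions:

```python
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
import uvicorn

app = FastAPI()  # FastAPI serves /docs automatically

@app.get("/", response_class=HTMLResponse)
def ui() -> str:
    # In a real integration the generated UI would be mounted here.
    return "<html><body>UI would be served here</body></html>"

@app.post("/call")
def call(text: str) -> dict:
    # Placeholder for the wrapped operation, e.g. conversion:convert.
    return {"result": text.upper()}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8080)
```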
closed
2021-05-26T05:31:11Z
2021-11-06T02:07:02Z
https://github.com/ml-tooling/opyrator/issues/37
[ "feature", "stale" ]
loongmxbt
2
ultralytics/yolov5
deep-learning
12,923
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 16 but got size 32 for tensor number 1 in the list.
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
_No response_
### Bug
I added a "CA" module to YOLOv5 v7.0 (segmentation), inserting it after each C3 block at the end of the .yaml backbone. When I start training, it fails immediately with:
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 16 but got size 32 for tensor number 1 in the list.
### Environment
```
Traceback (most recent call last):
  File "D:/YOLO/yolov5-v7.0/models/yolo.py", line 377, in <module>
    model = Model(opt.cfg).to(device)
  File "D:/YOLO/yolov5-v7.0/models/yolo.py", line 195, in __init__
    m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))])  # forward
  File "D:/YOLO/yolov5-v7.0/models/yolo.py", line 194, in <lambda>
    forward = lambda x: self.forward(x)[0] if isinstance(m, Segment) else self.forward(x)
  File "D:/YOLO/yolov5-v7.0/models/yolo.py", line 209, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "D:/YOLO/yolov5-v7.0/models/yolo.py", line 121, in _forward_once
    x = m(x)  # run
  File "D:\Anaconda\envs\yolov5-7.0\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\YOLO\yolov5-v7.0\models\common.py", line 313, in forward
    return torch.cat(x, self.d)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 16 but got size 32 for tensor number 1 in the list.

Process finished with exit code 1
```
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR!
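For context, the `Concat` failure can be reproduced in isolation. A common cause in modified configs (an assumption about this particular .yaml) is that inserting extra layers shifts the `from:` indices in the head, so feature maps from different strides reach the same `Concat`:

```python
import torch

# Concat along dim 1 requires all other dims to match; a 16x16 map and a
# 32x32 map at the same Concat reproduces the reported error exactly.
a = torch.randn(1, 128, 16, 16)
b = torch.randn(1, 128, 32, 32)
try:
    torch.cat([a, b], dim=1)
except RuntimeError as e:
    print(e)  # Sizes of tensors must match except in dimension 1 ...
```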
closed
2024-04-15T09:01:19Z
2024-05-26T00:23:58Z
https://github.com/ultralytics/yolov5/issues/12923
[ "bug", "Stale" ]
zhangsanliisi
2
raphaelvallat/pingouin
pandas
115
plot_paired with many subjects: boxplot not decipherable
Hi! First of all many thanks for providing such a great tool! While using `plot_paired`, I stumbled upon the problem, that having many subjects makes the boxplot indecipherable, since it is displayed behind the lines and markers. I could make a pull request implementing an argument like `boxplot_foreground=False` (default), which adjusts the boxplot `zorder` to put the boxplot in front of the lines if set to `True`. I'd combine this with setting the transparency of the boxes to `0.75`, either with `boxplot_kwargs={'boxprops': {'alpha': .75}}` (also sets box boundaries to transparent) or by altering the patch artist after plotting, only setting the face alpha, as in [this reply](https://github.com/mwaskom/seaborn/issues/979). While doing so, I'd also implement that setting one of the kwargs f.i. in `boxplot_kwargs` does **not overwrite** other default kwargs, such as the facecolors. Do you want me to make a pull request for anything of the above mentioned things?
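To make the proposal concrete, here is a standalone matplotlib sketch (not pingouin code) of the suggested behaviour — only the box faces become translucent, and the boxes are raised above the paired lines:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
data = [rng.normal(0, 1, 50), rng.normal(0.5, 1, 50)]

fig, ax = plt.subplots()
# Paired lines for the first 10 subjects, drawn at the default zorder.
ax.plot([1, 2], np.column_stack(data).T[:, :10], color="grey", zorder=2)

bp = ax.boxplot(data, patch_artist=True)
for patch in bp["boxes"]:
    patch.set_facecolor((0.3, 0.5, 0.8, 0.75))  # alpha on the face only,
                                                # so the box edges stay opaque
    patch.set_zorder(3)                         # boxes in front of the lines
plt.show()
```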
closed
2020-08-11T11:03:25Z
2020-08-18T09:05:18Z
https://github.com/raphaelvallat/pingouin/issues/115
[ "feature request :construction:" ]
JoElfner
7
Anjok07/ultimatevocalremovergui
pytorch
561
closed
closed
2023-05-19T10:10:59Z
2023-05-24T03:48:26Z
https://github.com/Anjok07/ultimatevocalremovergui/issues/561
[]
tricky61
0
deezer/spleeter
deep-learning
48
Stopped working.
At the end, it outputs the same file without separating the vocals and the accompaniment.
closed
2019-11-07T11:02:15Z
2019-11-07T11:16:24Z
https://github.com/deezer/spleeter/issues/48
[ "bug", "invalid", "RTMP" ]
Drawmahs
0
modelscope/modelscope
nlp
426
All images in the tutorial documentation archive are displayed incompletely
Tutorial archive path: http://modelscope-ipynb-hz.oss-cn-hangzhou-internal.aliyuncs.com/default/tutorial_documentation/tutorial_documentation.tar.gz
![截图_选择区域_20230728170546](https://github.com/modelscope/modelscope/assets/40030020/17d5bc67-49d8-4027-bc5d-467cb2348154)
I downloaded the archive and extracted it, but the images are still displayed incompletely:
![image](https://github.com/modelscope/modelscope/assets/40030020/3667e08b-bb04-4bb8-b70a-f4aeddeb37c8)
![image](https://github.com/modelscope/modelscope/assets/40030020/dc215a0b-b32d-4147-a6ed-dd48b3acbf52)
closed
2023-07-28T13:22:34Z
2024-07-14T01:57:41Z
https://github.com/modelscope/modelscope/issues/426
[ "Stale" ]
seawenc
3
ShishirPatil/gorilla
api
75
Leveraging Llama 2
I don’t see any existing discussion about leveraging Meta’s new Llama 2 model. Curious if you guys have any plans in the making for using this new base model in gorilla.
open
2023-07-27T13:18:12Z
2023-08-03T09:22:18Z
https://github.com/ShishirPatil/gorilla/issues/75
[]
tmc
7
autokey/autokey
automation
700
Setting a Keyboard Shortcut is not possible, because no Keyboard signal is recognized.
### Has this issue already been reported? - [X] I have searched through the existing issues. ### Is this a question rather than an issue? - [X] This is not a question. ### What type of issue is this? Crash/Hang/Data loss ### Which Linux distribution did you use? Fedora Workstation 36 ### Which AutoKey GUI did you use? Both ### Which AutoKey version did you use? I have tried it in the newest (0.96.0), in the newest that the package manager offers (0.95.10) and custom install (0.95.6). ### How did you install AutoKey? Tried everything. Git, pip3 install, using alien to make deb files rpm files and installed through this. ### Can you briefly describe the issue? ![Screenshot from 2022-06-14 17-31-00](https://user-images.githubusercontent.com/104461856/173620268-42d4d32e-a671-485a-8b56-756ba18590a1.png) ### Can the issue be reproduced? _No response_ ### What are the steps to reproduce the issue? Install in the favourite way. Create new folder/Script or change hotkey of existing one. click "press to set" ### What should have happened? recognize the keyboard input ### What actually happened? error message and the program gets kinda stuck in the loop of wanting an input ### Do you have screenshots? ![Screenshot from 2022-06-14 17-31-00](https://user-images.githubusercontent.com/104461856/173620750-5c392803-a251-4cdd-8bf9-394d15de6b38.png) ### Can you provide the output of the AutoKey command? ```bash 2022-06-14 17:50:19,158 INFO - root - Initialising application 2022-06-14 17:50:19,161 INFO - root - Initialise global hotkeys 2022-06-14 17:50:19,161 INFO - config-manager - Loading config from existing file: /home/elias/.config/autokey/autokey.json 2022-06-14 17:50:19,161 DEBUG - config-manager - Loading folder at '/home/elias/.config/autokey/data/My Phrases' 2022-06-14 17:50:19,162 DEBUG - config-manager - Loading folder at '/home/elias/.config/autokey/data/Sample Scripts' 2022-06-14 17:50:19,163 DEBUG - config-manager - Loading folder at '/home/elias/.config/autokey/data/hallo' 2022-06-14 17:50:19,163 INFO - config-manager - Configuration changed - rebuilding in-memory structures 2022-06-14 17:50:19,163 DEBUG - inotify - Adding watch for /home/elias/.config/autokey/data/My Phrases 2022-06-14 17:50:19,163 DEBUG - inotify - Adding watch for /home/elias/.config/autokey/data/My Phrases/Addresses 2022-06-14 17:50:19,163 DEBUG - inotify - Adding watch for /home/elias/.config/autokey/data/Sample Scripts 2022-06-14 17:50:19,164 DEBUG - inotify - Adding watch for /home/elias/.config/autokey/data/hallo 2022-06-14 17:50:19,164 INFO - config-manager - Successfully loaded configuration 2022-06-14 17:50:19,164 DEBUG - inotify - Adding watch for /home/elias/.config/autokey/data 2022-06-14 17:50:19,164 DEBUG - inotify - Adding watch for /home/elias/.config/autokey 2022-06-14 17:50:19,164 DEBUG - config-manager - Global settings: {'isFirstRun': True, 'serviceRunning': True, 'menuTakesFocus': False, 'showTrayIcon': True, 'sortByUsageCount': True, 'promptToSave': False, 'enableQT4Workaround': False, 'interfaceType': 'XRecord', 'undoUsingBackspace': True, 'windowDefaultSize': [1920, 1011], 'hPanePosition': 1028, 'columnWidths': [150, 50, 100], 'showToolbar': True, 'notificationIcon': 'autokey-status', 'workAroundApps': '.*VirtualBox.*|krdc.Krdc', 'triggerItemByInitial': False, 'disabledModifiers': [], 'scriptGlobals': {}} 2022-06-14 17:50:19,164 INFO - service - Starting service 2022-06-14 17:50:19,179 DEBUG - interface - Enabling sending using Alt-Grid 2022-06-14 17:50:19,179 DEBUG - interface - Modifier masks: 
{<Key.SHIFT: '<shift>'>: 1, <Key.CAPSLOCK: '<capslock>'>: 2, <Key.CONTROL: '<ctrl>'>: 4, <Key.ALT: '<alt>'>: 8, <Key.ALT_GR: '<alt_gr>'>: 128, <Key.SUPER: '<super>'>: 64, <Key.HYPER: '<hyper>'>: 64, <Key.META: '<meta>'>: 8, <Key.NUMLOCK: '<numlock>'>: 16}
2022-06-14 17:50:19,200 DEBUG - interface - Alt-Grid: XK_ISO_Level3_Shift, 65027
2022-06-14 17:50:19,200 DEBUG - interface - X Server Keymap, listing unmapped keys.
2022-06-14 17:50:19,201 DEBUG - iomediator - Set modifier Key.CAPSLOCK to False
2022-06-14 17:50:19,201 DEBUG - iomediator - Set modifier Key.NUMLOCK to True
2022-06-14 17:50:19,201 DEBUG - interface - Grabbing hotkey: ['<super>'] 'k'
2022-06-14 17:50:19,202 DEBUG - interface - __flushEvents: Entering event loop.
2022-06-14 17:50:19,202 INFO - iomediator - Created IoMediator instance, current interface is: <XRecordInterface(XInterface-thread, initial daemon)>
2022-06-14 17:50:19,210 INFO - interface - XRecord interface thread starting
2022-06-14 17:50:19,210 INFO - service - Service now marked as running
[... "Grabbing hotkey: ['<super>'] 'k'" repeated many times, then "Grabbing hotkey: ['<shift>', '<super>'] 'k'" repeated many times ...]
2022-06-14 17:50:19,229 DEBUG - interface - Grabbing hotkey: ['<ctrl>'] '<f7>'
2022-06-14 17:50:19,238 DEBUG - phrase-menu - Sorting phrase menu by usage count
2022-06-14 17:50:19,238 DEBUG - phrase-menu - Triggering menu item by position in list
2022-06-14 17:50:19,240 DEBUG - root - Created DBus service
2022-06-14 17:50:19,240 INFO - root - Entering main()
2022-06-14 17:50:19,245 DEBUG - interface - Recorded keymap change event
2022-06-14 17:50:19,445 DEBUG - interface - Ungrabbing hotkey: ['<super>'] 'k'
(autokey-gtk:12322): Gtk-CRITICAL **: 17:50:19.451: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
[... "Ungrabbing hotkey" repeated many times for ['<super>'] 'k' and then for ['<shift>', '<super>'] 'k' ...]
2022-06-14 17:50:19,485 DEBUG - interface - Ungrabbing hotkey: ['<ctrl>'] '<f7>'
2022-06-14 17:50:19,496 DEBUG - interface - Enabling sending using Alt-Grid
2022-06-14 17:50:19,497 DEBUG - interface - Modifier masks: {<Key.SHIFT: '<shift>'>: 1, <Key.CAPSLOCK: '<capslock>'>: 2, <Key.CONTROL: '<ctrl>'>: 4, <Key.ALT: '<alt>'>: 8, <Key.ALT_GR: '<alt_gr>'>: 128, <Key.SUPER: '<super>'>: 64, <Key.HYPER: '<hyper>'>: 64, <Key.META: '<meta>'>: 8, <Key.NUMLOCK: '<numlock>'>: 16}
2022-06-14 17:50:19,521 DEBUG - interface - Alt-Grid: XK_ISO_Level3_Shift, 65027
2022-06-14 17:50:19,521 DEBUG - interface - X Server Keymap, listing unmapped keys.
[... the same "Grabbing hotkey" sequence for ['<super>'] 'k' and ['<shift>', '<super>'] 'k' repeats ...]
2022-06-14 17:50:19,554 DEBUG - interface - Grabbing hotkey: ['<ctrl>'] '<f7>'
2022-06-14 17:50:31,460 INFO - root - Displaying configuration window
2022-06-14 17:50:38,574 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
  File "/usr/lib/python3.10/site-packages/autokey/interface.py", line 242, in __eventLoop
    method(*args)
  File "/usr/lib/python3.10/site-packages/autokey/interface.py", line 674, in __grab_keyboard
    focus.grab_keyboard(True, X.GrabModeAsync, X.GrabModeAsync, X.CurrentTime)
AttributeError: 'int' object has no attribute 'grab_keyboard'
2022-06-14 17:50:45,320 INFO - config-manager - Configuration changed - rebuilding in-memory structures
2022-06-14 17:50:45,321 INFO - config-manager - Persisting configuration
2022-06-14 17:50:45,322 INFO - config-manager - Backing up existing config file
2022-06-14 17:50:45,322 DEBUG - inotify - Reporting IN_MODIFY event at /home/elias/.config/autokey/autokey.json~
2022-06-14 17:50:45,323 INFO - config-manager - Finished persisting configuration - no errors
2022-06-14 17:50:45,323 DEBUG - inotify - Reporting IN_MODIFY event at /home/elias/.config/autokey/autokey.json
2022-06-14 17:50:45,324 DEBUG - phrase-menu - Sorting phrase menu by usage count
2022-06-14 17:50:45,324 DEBUG - phrase-menu - Triggering menu item by position in list
^CTraceback (most recent call last):
  File "/usr/bin/autokey-gtk", line 33, in <module>
    sys.exit(load_entry_point('autokey==0.95.10', 'console_scripts', 'autokey-gtk')())
  File "/usr/lib/python3.10/site-packages/autokey/gtkui/__main__.py", line 9, in main
    a.main()
  File "/usr/lib/python3.10/site-packages/autokey/gtkapp.py", line 273, in main
    Gtk.main()
  File "/usr/lib/python3.10/site-packages/gi/overrides/Gtk.py", line 1687, in main
    with register_sigint_fallback(Gtk.main_quit):
  File "/usr/lib64/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/usr/lib64/python3.10/site-packages/gi/_ossighelper.py", line 237, in register_sigint_fallback
    signal.default_int_handler(signal.SIGINT, None)
KeyboardInterrupt
```
### Anything else?
I've tried it with every GTK and Qt subversion.
closed
2022-06-14T15:53:03Z
2022-06-16T13:19:37Z
https://github.com/autokey/autokey/issues/700
[ "duplicate", "help-wanted", "upstream bug", "installation/configuration" ]
TravisPasta
6
tortoise/tortoise-orm
asyncio
1,572
model relationship bugged
The data comes back in an unknown format, and after a while the relationship itself breaks.

```python
msgx: fields.ManyToManyRelation["Messagex"] = fields.ManyToManyField(
    "tortmodels.Messagex", related_name="personxs", through="personx_messagex")

data = await Personx.get(id=id).prefetch_related("msgx")
async for dx in data.msgx:
    print(dx.id)
```
![image](https://github.com/tortoise/tortoise-orm/assets/103671642/c870b66a-f4a6-4c03-9a0d-fac3a0e962dc)
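If I read the Tortoise API correctly (an assumption, so verify against the docs), the two iteration styles behave differently: after `prefetch_related` the relation is already populated and plain iteration uses the cached objects, while `async for` issues a fresh query. A minimal sketch using the model names from the report:

```python
# Personx/Messagex are the reporter's models, not library classes.
async def show(person_id: int) -> None:
    data = await Personx.get(id=person_id).prefetch_related("msgx")
    for dx in data.msgx:          # prefetched: plain, synchronous iteration
        print(dx.id)

    person = await Personx.get(id=person_id)
    async for dx in person.msgx:  # lazy: runs the M2M query at this point
        print(dx.id)
```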
closed
2024-03-18T18:39:15Z
2024-04-25T14:41:23Z
https://github.com/tortoise/tortoise-orm/issues/1572
[]
xalteropsx
8
X-PLUG/MobileAgent
automation
37
perception_infos index out of range
![image](https://github.com/user-attachments/assets/6dcca0ed-9e02-4a40-bcfc-7e49ecab60c2)
Errors like this occur frequently.
open
2024-07-29T08:42:04Z
2025-03-15T09:07:02Z
https://github.com/X-PLUG/MobileAgent/issues/37
[]
1270645409
2
youfou/wxpy
api
60
Why can't a function be registered successfully when it is defined outside main?
Putting the function inside main makes it work, but how can I keep it outside main? Sorry to bother you, and thanks.

```python
from wxpy import *

def print_others(msg):
    print(msg)
    return msg.text

if __name__ == "__main__":
    bot = Bot()
    bot.register(print_others)
    embed(shell='ipython')
```
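One thing worth noting (my reading of the wxpy API, so treat it as an assumption): `bot.register()` is a decorator factory, so calling `bot.register(print_others)` passes the function as the `chats` filter and registers no handler. A module-level sketch using the decorator form:

```python
from wxpy import Bot, embed

bot = Bot()

# The handler is attached to an already-created bot before embed() starts,
# so it can live at module level rather than inside main.
@bot.register()
def print_others(msg):
    print(msg)
    return msg.text

embed(shell='ipython')
```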
closed
2017-05-22T08:57:13Z
2017-05-22T12:26:55Z
https://github.com/youfou/wxpy/issues/60
[]
vicliu6
1
Lightning-AI/pytorch-lightning
data-science
19,569
Importing DeepSpeedStrategy fails
### Bug description
I try to initialize `pl.Trainer` with the DeepSpeed strategy. However, I get a strange error, shown below.
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
strategy = DeepSpeedStrategy(config="/nvme/zecheng/modelzipper/projects/state-space-model/configs/deepspeed/stage2.json")
trainer = pl.Trainer(
    default_root_dir=os.path.join(tb_logger.log_dir, "checkpoints"),
    logger=tb_logger,
    callbacks=[lr_monitor, ckpt_monitor],
    check_val_every_n_epoch=1 if data_module.val_dataloader is not None else 1000000,  # set a large number if no validation set
    # strategy=DDPStrategy(find_unused_parameters=False),
    strategy=strategy,
    precision="bf16-mixed",
    max_steps=config.experiment.num_training_steps,
    devices=config.experiment.device_num,
    gradient_clip_val=1,
    enable_model_summary=True,
    num_sanity_val_steps=20,
    fast_dev_run=5  # for debugging
)
```
### Error messages and logs
```
Traceback (most recent call last):
  File "/nvme/zecheng/modelzipper/projects/state-space-model/mamba/train.py", line 397, in main
    trainer = pl.Trainer(
  File "/home/amax/anaconda3/envs/zecheng/lib/python3.10/site-packages/pytorch_lightning/utilities/argparse.py", line 70, in insert_env_defaults
    return fn(self, **kwargs)
  File "/home/amax/anaconda3/envs/zecheng/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 400, in __init__
    self._accelerator_connector = _AcceleratorConnector(
  File "/home/amax/anaconda3/envs/zecheng/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 134, in __init__
    self._check_config_and_set_final_flags(
  File "/home/amax/anaconda3/envs/zecheng/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 203, in _check_config_and_set_final_flags
    raise ValueError(
ValueError: You selected an invalid strategy name: `strategy=<lightning.pytorch.strategies.deepspeed.DeepSpeedStrategy object at 0x7fb932781750>`. It must be either a string or an instance of `pytorch_lightning.strategies.Strategy`. Example choices: auto, ddp, ddp_spawn, deepspeed, ... Find a complete list of options in our documentation at https://lightning.ai
```
### Environment
pytorch-lightning version 2.2.0.post0
deepspeed version 0.11.2
pytorch version 2.1.2
### More info
None
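The ValueError itself points at the likely cause: the strategy object is a `lightning.pytorch` class while the Trainer checks against `pytorch_lightning.strategies.Strategy`, so the two sibling packages are being mixed. A sketch importing both from the same package (the config path and Trainer kwargs are placeholders):

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DeepSpeedStrategy

# Strategy and Trainer now come from the same package, so the
# isinstance check inside the accelerator connector passes.
strategy = DeepSpeedStrategy(config="configs/deepspeed/stage2.json")
trainer = pl.Trainer(strategy=strategy, precision="bf16-mixed", devices=1)
```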
closed
2024-03-04T14:43:36Z
2024-03-05T09:03:15Z
https://github.com/Lightning-AI/pytorch-lightning/issues/19569
[ "question" ]
ZetangForward
3
huggingface/diffusers
deep-learning
10,231
ImportError when running diffusers tutorial code in Ubuntu 20.04
### Describe the bug When I try to run: `from diffusers import DDPMPipeline ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") image = ddpm(num_inference_steps=25).images[0] image` > (habitat) /home/zhangyiqing/IROS2025/habitat-lab $ /home/zhangyiqing/miniconda3/envs/habitat/bin/python /home/zhangyiqing/IROS2025/habitat-lab/dirtyHand/PPO_topdownmap.py Traceback (most recent call last): File "/home/zhangyiqing/IROS2025/habitat-lab/dirtyHand/PPO_topdownmap.py", line 7, in <module> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/diffusers/pipelines/pipeline_utils.py", line 725, in from_pretrained cached_folder = cls.download( File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/diffusers/pipelines/pipeline_utils.py", line 1390, in download ignore_patterns = _get_ignore_patterns( File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 906, in _get_ignore_patterns raise EnvironmentError( OSError: Could not find the necessary `safetensors` weights in {'diffusion_pytorch_model.safetensors', 'diffusion_pytorch_model.bin'} (variant=None) Change "use_safetensors" to None to download the model: > Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 30.22it/s] Traceback (most recent call last): File "/home/zhangyiqing/IROS2025/habitat-lab/dirtyHand/PPO_topdownmap.py", line 7, in <module> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=None).to("cuda") File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/diffusers/pipelines/pipeline_utils.py", line 948, in from_pretrained model = pipeline_class(**init_kwargs) File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/diffusers/pipelines/ddpm/pipeline_ddpm.py", line 43, in __init__ self.register_modules(unet=unet, scheduler=scheduler) File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/diffusers/pipelines/pipeline_utils.py", line 165, in register_modules library, class_name = _fetch_class_library_tuple(module) File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 734, in _fetch_class_library_tuple not_compiled_module = _unwrap_model(module) File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 234, in _unwrap_model if is_compiled_module(model): File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/diffusers/utils/torch_utils.py", line 88, in is_compiled_module if is_torch_version("<", "2.0.0") or not hasattr(torch, "_dynamo"): File 
"/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/__init__.py", line 2214, in __getattr__ return importlib.import_module(f".{name}", __name__) File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/_dynamo/__init__.py", line 2, in <module> from . import convert_frame, eval_frame, resume_execution File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 48, in <module> from . import config, exc, trace_rules File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py", line 52, in <module> from .variables import ( File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/_dynamo/variables/__init__.py", line 38, in <module> from .higher_order_ops import ( File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 14, in <module> import torch.onnx.operators File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/onnx/__init__.py", line 49, in <module> from ._internal.exporter import ( # usort:skip. needs to be last to avoid circular import File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/onnx/_internal/exporter/__init__.py", line 13, in <module> from ._analysis import analyze File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_analysis.py", line 14, in <module> import torch._export.serde.schema File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/_export/__init__.py", line 33, in <module> from torch._export.non_strict_utils import make_constraints File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/_export/non_strict_utils.py", line 16, in <module> from torch._dynamo.variables.builder import TrackedFake File "/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 73, in <module> from ..trace_rules import ( ImportError: cannot import name 'is_callable_allowed' from partially initialized module 'torch._dynamo.trace_rules' (most likely due to a circular import) (/home/zhangyiqing/miniconda3/envs/habitat/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py) ### Reproduction from diffusers import DDPMPipeline import matplotlib.pyplot as plt from huggingface_hub.hf_api import HfFolder; HfFolder.save_token("token") ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") image = ddpm(num_inference_steps=25).images[0] plt.imshow(image) ### Logs _No response_ ### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. 
- 🤗 Diffusers version: 0.31.0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31 - Running on Google Colab?: No - Python version: 3.9.20 - PyTorch version (GPU?): 2.4.0.post301 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Huggingface_hub version: 0.26.5 - Transformers version: not installed - Accelerate version: 1.2.1 - PEFT version: not installed - Bitsandbytes version: not installed - Safetensors version: 0.4.5 - xFormers version: not installed - Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_
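The bottom frames of the traceback are entirely inside torch itself (a `torch._dynamo` ↔ `torch.onnx` circular import), so it is worth checking whether the PyTorch install imports cleanly with diffusers out of the picture — a diagnostic sketch, not a fix:

```python
# If importing torch._dynamo alone raises the same ImportError,
# the PyTorch installation (not diffusers) is broken.
import torch
print(torch.__version__)  # reported as 2.4.0.post301

import torch._dynamo  # isolate the circular-import failure
```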
open
2024-12-16T03:06:31Z
2025-02-21T15:03:44Z
https://github.com/huggingface/diffusers/issues/10231
[ "bug", "stale" ]
Benson722
6
onnx/onnx
machine-learning
6,294
Shape inference segfaults when adding tensors of shapes `(0,)` and `(1,)`
# Bug Report ### Is the issue related to model conversion? No ### Describe the bug The example graph consists of a single add node with two `int64` inputs with initializers. One is initialized to `[]` and the other to `[0]`. According to broadcasting rules these should broadcast: > In ONNX, a set of tensors are multidirectional broadcastable to the same shape if one of the following is true: > * The tensors all have exactly the same shape. > * **The tensors all have the same number of dimensions and the length of each dimensions is either a common length or 1.** > * The tensors that have too few dimensions can have their shapes prepended with a dimension of length 1 to satisfy property 2. A = [], shape(A) = (0,) B = [0], shape(B) = (1,) shape(result) = (0,) Running shape inference on this graph results in segmentation fault. The error only occurs with integer dtypes and `data_prop=True` ### System information OS Platform and Distribution (e.g. Linux Ubuntu 20.04): macOS 13.6.7 ONNX version (e.g. 1.13): 1.16.1 Python version: 3.12.4 ### Reproduction instructions ```Python import onnx model = onnx.load('model.onnx') onnx.shape_inference.infer_shapes(model, data_prop=True) ``` [model.onnx.zip](https://github.com/user-attachments/files/16601828/model.onnx.zip) Alternatively using spox ```Python import spox.opset.ai.onnx.v21 as op import numpy as np a = op.const([], dtype=np.int64) b = op.const([0], dtype=np.int64) result = op.add(a, b) # segfault ``` ### Expected behavior <!-- A clear and concise description of what you expected to happen. --> ### Notes <!-- Any additional information -->
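For reference, NumPy agrees with the broadcasting rules quoted above — a quick sketch of the expected (non-crashing) semantics:

```python
import numpy as np

# Shapes (0,) and (1,) broadcast; the result is empty with shape (0,).
a = np.array([], dtype=np.int64)   # shape (0,)
b = np.array([0], dtype=np.int64)  # shape (1,)
print((a + b).shape)               # (0,)
```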
closed
2024-08-13T15:28:17Z
2024-09-03T23:44:36Z
https://github.com/onnx/onnx/issues/6294
[ "bug", "module: shape inference", "contributions welcome" ]
MatejUrbanQC
1
chezou/tabula-py
pandas
247
Tabula app is able to extract PDF while tabula-py can't
# Summary of your issue

When using the Tabula app, it can extract all the information on the page. When using tabula-py with templates, it extracts only a single line. The template was generated with and downloaded from the Tabula app.

# Check list before submit

<!--- Write and check the following questionaries. -->

- [x] Did you read [FAQ](https://tabula-py.readthedocs.io/en/latest/faq.html)?
- [x] (Optional, but really helpful) Your PDF URL: [PDF link](https://github.com/chezou/tabula-py/files/4880925/pdf_paginas_interesse15_ocr.pdf) [Tabula Template](https://github.com/chezou/tabula-py/files/4880930/tabula_template_jc.zip)
- [x] Paste the output of `import tabula; tabula.environment_info()` on Python REPL: ?

```
import tabula; tabula.environment_info()
Python version: 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
Java version: java version "1.8.0_251"
Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)
tabula-py version: 1.3.0
platform: Windows-10-10.0.18362-SP0
uname: uname_result(system='Windows', node='JESSICA-PC', release='10', version='10.0.18362', machine='AMD64', processor='Intel64 Family 6 Model 158 Stepping 10, GenuineIntel')
linux_distribution: ('MSYS_NT-10.0', '2.10.0', '')
mac_ver: ('', ('', '', ''), '')
```

If not possible to execute `tabula.environment_info()`, please answer the following questions manually.

- [ ] Paste the output of `python --version` command on your terminal: ?
- [ ] Paste the output of `java -version` command on your terminal: ?
- [ ] Does `java -h` command work well?; Ensure your java command is included in `PATH`
- [ ] Write your OS and its version: ?

# What did you do when you faced the problem?

I've also tried to use guess=True, lattice=True, multiple_tables=True, and without a template, but nothing improved.

## Code:

```
df = tabula.read_pdf_with_template(path_pdf, template_file, guess=False, stream=True)
```

## Expected behavior:

![tabulapp](https://user-images.githubusercontent.com/25270393/86633814-816bf100-bfa7-11ea-93bf-bf019e706f99.PNG)

## Actual behavior:

![tabulapy](https://user-images.githubusercontent.com/25270393/86633734-6ac59a00-bfa7-11ea-8e6e-8e2575c9d426.PNG)

```
Only NISSAN/KICKS is extracted
```

## Related Issues:
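In case it helps narrow things down, here is a sketch of a fallback worth trying: bypassing the template file and passing one region from the template JSON directly. The `area` numbers below are placeholders in PDF points (`[top, left, bottom, right]`), to be copied from one rectangle in the Tabula-app template:

```python
import tabula

# Placeholder region: substitute the y1/x1/y2/x2 of one template rectangle.
dfs = tabula.read_pdf(
    "pdf_paginas_interesse15_ocr.pdf",
    pages=1,
    area=[100.0, 20.0, 700.0, 580.0],  # hypothetical coordinates
    stream=True,
    guess=False,
    multiple_tables=True,
)
print(dfs[0].head())
```

If this extracts the full table while the template path does not, the problem is likely in how the template regions are being applied rather than in the extraction itself.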
closed
2020-07-06T19:47:41Z
2020-07-07T23:24:57Z
https://github.com/chezou/tabula-py/issues/247
[ "not a bug" ]
jcabralc
2
open-mmlab/mmdetection
pytorch
11,468
Abnormal GPU memory surge during RTMDet training (memory dramatic increase or leak)
When training RTMDet on a large dataset like O365v1 (Objects365), I found that GPU memory usage far exceeds what I see on COCO. Using `torch.cuda.max_memory_allocated()` to locate it, the memory surge happens in `DynamicSoftLabelAssigner`, at:

soft_cls_cost = F.binary_cross_entropy_with_logits(
    valid_pred_scores, soft_label, reduction='none')

I don't yet understand why this line causes the memory surge, but I temporarily mitigated the problem with `gpu_assign_thr`. Anyone who has hit the same thing can refer to [PR](https://github.com/open-mmlab/mmdetection/pull/11467), and I'd appreciate it if someone knowledgeable could explain what is going on.

My environment:
torch 2.1.0
numpy 1.23.0
mmdet 3.3.0
mmcv 2.1.0
mmengine 0.10.3
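A back-of-envelope sketch of why this line could blow up on Objects365 but not on COCO. This is my reading of the shapes inside `DynamicSoftLabelAssigner`; the counts below are illustrative assumptions, not measured values:

```python
import torch
import torch.nn.functional as F

# The assigner expands valid_pred_scores and soft_label to
# (num_valid_priors, num_gt, num_classes) before the elementwise BCE, so the
# cost tensor alone holds num_valid_priors * num_gt * num_classes fp32 values.
num_priors, num_gt, num_classes = 8000, 400, 365  # hypothetical O365-scale image
elems = num_priors * num_gt * num_classes
print(f"one fp32 intermediate: {elems * 4 / 2**30:.1f} GiB")  # ~4.3 GiB

# Tiny shape demo of the same computation (small sizes so it runs anywhere):
scores = torch.randn(16, 4, 365)
labels = torch.rand(16, 4, 365)
cost = F.binary_cross_entropy_with_logits(scores, labels, reduction='none')
print(cost.shape)  # torch.Size([16, 4, 365])
```

With 365 classes and many ground-truth boxes per image, that product is an order of magnitude larger than the COCO case (80 classes, fewer GT), which would also explain why `gpu_assign_thr` (capping the priors assigned on GPU) relieves the pressure.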
open
2024-02-08T17:24:27Z
2024-02-08T17:24:43Z
https://github.com/open-mmlab/mmdetection/issues/11468
[]
Baboom-l
0
docarray/docarray
pydantic
1,583
Provide SubIndex feature on ExactNNSearchIndexer
closed
2023-05-26T15:39:33Z
2023-06-08T07:47:07Z
https://github.com/docarray/docarray/issues/1583
[]
JoanFM
0
marcomusy/vedo
numpy
1,026
Behavior of `vedo.show` when called twice with different arguments
Hi @marcomusy

When running this script:

```python
import vedo

mesh = vedo.Mesh(vedo.dataurl+"spider.ply")

vedo.show(mesh.c('b'), offscreen=True)
vedo.show(mesh.c('r'), offscreen=False)
```

I expected the second call to `show()` to create a new plotter with `offscreen=False`. But the parameter `offscreen` kept its previous value. Looking at the code, I spotted that the behavior I wanted was reachable by adding `new=True` to the second call to `show`, as by default `show` reuses the previous plotter instance.

I would find it more natural to have `new` automatically set to `True` if I explicitly ask for different behavior than the actual `vedo.plotter_instance`. My suggestion would be to add something like

```python
if vedo.plotter_instance and vedo.plotter_instance.offscreen != offscreen:
    new = True
```

at the beginning of the function `vedo.plotter.show()`. I added this on my local fork and it worked as I expected for my example.

What do you think about adding these lines to vedo? Maybe other arguments (interactive, ...?) should be treated the same way? If you think it's a good idea, I can open a PR to integrate it.
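For anyone landing here before a fix, the workaround mentioned above spelled out; this relies only on the existing `new` argument of `show()`:

```python
import vedo

mesh = vedo.Mesh(vedo.dataurl + "spider.ply")

vedo.show(mesh.c('b'), offscreen=True)
# Forcing a fresh Plotter makes the second call honor offscreen=False:
vedo.show(mesh.c('r'), offscreen=False, new=True)
```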
open
2024-01-18T11:42:56Z
2024-01-18T12:20:51Z
https://github.com/marcomusy/vedo/issues/1026
[]
Louis-Pujol
1
pyg-team/pytorch_geometric
pytorch
9,496
OSError: torch_scatter/_version_cpu.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSs
### 😵 Describe the installation problem

I have pytorch 2.3.1 cpu_generic_py310ha4c588e_0 conda-forge installed and I want to use torchdrug. I have installed

pip install torch_geometric
pip install ninja wheel
pip install git+https://github.com/pyg-team/pyg-lib.git
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.3.0+cpu.html
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.3.0+cpu.html
pip install torch-cluster -f https://data.pyg.org/whl/torch-2.3.0+cpu.html
pip install torch-spline-conv -f https://data.pyg.org/whl/torch-2.3.0+cpu.html
pip install torchdrug

and I get the OSError when importing it. Could you please tell me how to fix it?

![image](https://github.com/pyg-team/pytorch_geometric/assets/76455307/81c17219-fea7-42a9-823b-bbdab4badbee)

### Environment

* PyG version:
* PyTorch version:
* OS:
* Python version:
* CUDA/cuDNN version:
* How you installed PyTorch and PyG (`conda`, `pip`, source):
* Any other relevant information (*e.g.*, version of `torch-scatter`):
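A quick sketch to test one hypothesis: that this is an ABI mismatch between conda-forge's `cpu_generic` PyTorch build and the prebuilt wheels from data.pyg.org (an assumption on my part, not a confirmed diagnosis):

```python
import importlib.metadata as md

import torch

print("torch:", torch.__version__)                    # conda-forge 2.3.1 build
print("torch-scatter:", md.version("torch-scatter"))  # wheel targeting pip torch 2.3.0+cpu
# If the wheel was compiled against a different libtorch, importing torch_scatter
# fails with the undefined-symbol OSError above. The usual fix is to rebuild the
# extension against the locally installed torch, e.g.:
#   pip install --no-build-isolation git+https://github.com/rusty1s/pytorch_scatter.git
```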
open
2024-07-09T12:13:09Z
2024-08-19T10:50:50Z
https://github.com/pyg-team/pytorch_geometric/issues/9496
[ "installation" ]
Lili-Cao
2
erdewit/ib_insync
asyncio
418
Error 322
When I try to connect to the gateway:

```
ib = IB()
ib.connect('127.0.0.1', 4001, clientId=11)
```

I get this error:

```
Error 322, reqId 2: Error processing request.-'cc' : cause - jextend.cc.n(cc.java:310)
<IB connected to 127.0.0.1:4001 clientId=11>
```

It was working before; it started all of a sudden, with no change to anything! Anyone had this issue?

Thanks
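A sketch for gathering more context around the error, using only documented ib_insync hooks; the handler just prints whatever the gateway sends back, including the reqId that error 322 is attached to:

```python
import logging

from ib_insync import IB, util

util.logToConsole(logging.DEBUG)  # dump every message exchanged with the gateway

def on_error(reqId, errorCode, errorString, contract):
    print("error:", reqId, errorCode, errorString, contract)

ib = IB()
ib.errorEvent += on_error
ib.connect('127.0.0.1', 4001, clientId=11)
```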
closed
2021-12-08T22:35:57Z
2022-09-29T15:13:58Z
https://github.com/erdewit/ib_insync/issues/418
[]
rezaa1
6
BayesWitnesses/m2cgen
scikit-learn
267
Travis 50min limit... again
Today I saw our jobs at `master` hit the 50-minute Travis limit per job 3 times. Guess it's time to either review #243 or reorganize jobs at Travis. Refer to #125 for the past experience and to #114 for some further ideas.

cc @izeigerman
closed
2020-07-08T18:30:53Z
2020-07-26T11:47:57Z
https://github.com/BayesWitnesses/m2cgen/issues/267
[]
StrikerRUS
2
plotly/dash
dash
3,150
Are hooks correctly saving assigned priority?
From `_hooks.py` we see that hook decorators, e.g. `@hooks.layout`, are actually just wrappers around `self.add_hook`.

Looking at `self.add_hook` and how it handles the input parameter `priority`: how is the value given to `priority` passed down to the hook? It is only used in the check `if not priority`, which is skipped if a non-zero value is given, and is otherwise not used. The value given to `_Hook` is `priority=p`, and `p=0` is set.

Is this a bug, or am I misunderstanding how hooks are later used/called?

See: https://github.com/plotly/dash/blob/v3.0.0rc1/dash/_hooks.py#L58-L81
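To make the question concrete, here is a minimal standalone sketch of what I believe the current logic amounts to. This is a hypothetical mirror of `_hooks.py`, not the real dash source:

```python
def add_hook(hook_id, func, priority=None):
    # `priority` is only ever truth-tested...
    if not priority:
        pass  # ...no priority-assignment logic runs here
    p = 0  # ...and the stored hook always receives p=0
    return {"id": hook_id, "func": func, "priority": p}

# The caller's value is silently dropped:
print(add_hook("layout", lambda: None, priority=5))
# -> {'id': 'layout', 'func': <function <lambda> at ...>, 'priority': 0}
```

If this reading is right, every hook ends up with priority 0 regardless of the value passed to the decorator.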
closed
2025-02-06T13:03:04Z
2025-02-13T22:10:59Z
https://github.com/plotly/dash/issues/3150
[]
aGitForEveryone
3
Significant-Gravitas/AutoGPT
python
8,738
Prevent use of webhook-triggered blocks if `PLATFORM_BASE_URL` is not set
* Part of #8352
* Implement on #8358

## TODO

- [X] Disable webhook-triggered blocks if `PLATFORM_BASE_URL` is not set
- [X] Raise error in `BaseWebhooksManager` on attempt to create webhook if `PLATFORM_BASE_URL` is not set
- [X] Add field validator for `PLATFORM_BASE_URL` (a sketch below)
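A sketch of what the third item could look like, assuming a pydantic-v2 settings class; all names here are illustrative, not AutoGPT's real config:

```python
from pydantic import field_validator
from pydantic_settings import BaseSettings

class PlatformSettings(BaseSettings):
    platform_base_url: str = ""  # empty string means "not set"

    @field_validator("platform_base_url")
    @classmethod
    def require_http_scheme(cls, v: str) -> str:
        # Allow unset, but reject malformed values at startup instead of at
        # webhook-registration time.
        if v and not v.startswith(("http://", "https://")):
            raise ValueError("PLATFORM_BASE_URL must start with http:// or https://")
        return v.rstrip("/")
```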
closed
2024-11-21T17:28:42Z
2024-11-22T00:13:12Z
https://github.com/Significant-Gravitas/AutoGPT/issues/8738
[]
Pwuts
0