repo_name
stringlengths 9
75
| topic
stringclasses 30
values | issue_number
int64 1
203k
| title
stringlengths 1
976
| body
stringlengths 0
254k
| state
stringclasses 2
values | created_at
stringlengths 20
20
| updated_at
stringlengths 20
20
| url
stringlengths 38
105
| labels
sequencelengths 0
9
| user_login
stringlengths 1
39
| comments_count
int64 0
452
|
---|---|---|---|---|---|---|---|---|---|---|---|
FactoryBoy/factory_boy | sqlalchemy | 215 | Generic Foreign Keys and Rails-like polymorphic associations | I can't seem to figure out how to implement a GenericForeignKey with FactoryBoy. I have a model `Comment` and a model `Post`, but `Comment` can be attached to mostly anything. In Python (django), these look like:
``` python
class Comment(models.Model):
"""Comment for any other object."""
#: Message for the Comment
message = models.TextField()
#: Date of creation, automatically set upon creation
created_at = models.DateTimeField(auto_now=True)
#: Type of class this is associated to
owner_type = models.ForeignKey(ContentType)
#: ID of the associated model
owner_id = models.PositiveIntegerField()
# Object to associate with
content_object = GenericForeignKey('owner_type', 'owner_id')
class Meta:
ordering = ['-created_at']
```
I was about to implement this using the @factory.post_generation decorator but I wasn't quite sure how to associate the keys properly there either. Any more information on this?
| closed | 2015-07-09T14:52:31Z | 2015-07-25T12:10:34Z | https://github.com/FactoryBoy/factory_boy/issues/215 | [
"Q&A"
] | Amnesthesia | 4 |
lukas-blecher/LaTeX-OCR | pytorch | 177 | [feature] Support readline hotkey? | That is input `<C-B>/<C-F>/...` can move the cursor in `Predict LaTeX code for image ("?"/"h" for help).` Thanks. | closed | 2022-09-12T11:03:52Z | 2022-09-20T09:05:07Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/177 | [] | Freed-Wu | 1 |
vitalik/django-ninja | rest-api | 1,063 | query parameter variabel | some applications request data with url query, and one of the variable call "pass"
ex:
http://my_url/api?pass=testvalue
when i try to gaet the parameter
@api.get("/user")
def list_weapons(request, pass:str):
return pass
it's be a problem because pass can't use as variable in python. how to handle it?
| open | 2024-01-25T17:00:16Z | 2024-01-26T12:31:20Z | https://github.com/vitalik/django-ninja/issues/1063 | [] | lutfyneutron | 1 |
plotly/dash-table | plotly | 231 | Feature request: Ability to remove table borders completely | I'm trying to convert some tables that I rendered in my app through convoluted HTML into the new `DataTable`s, but in order to reproduce the format I need to get rid of some of the table borders and that doesn't seem possible right now. I have used `style_as_list_view` but that doesn't help me with the horizontal borders. Basically, I'm trying to style my table like this:

Just as there is the `style_as_list_view` flag, could there be one called something like `style_without_borders` so that we can then add borders as needed through normal cell styling?
If this is acceptable and with some guidance, I'm happy to submit a PR for this. | closed | 2018-11-07T17:23:36Z | 2021-10-05T09:06:54Z | https://github.com/plotly/dash-table/issues/231 | [] | oriolmirosa | 7 |
xonsh/xonsh | data-science | 5,610 | conda and mamba: `DeprecationWarning: Use xonsh.lib.lazyasd instead of xonsh.lazyasd.` | Just want to pin here the related PRs in downstream tools
The fix for this in downstream tools:
```xsh
try:
# xonsh >= 0.18.0
from xonsh.lib.lazyasd import lazyobject
except:
# xonsh < 0.18.0
from xonsh.lazyasd import lazyobject
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| closed | 2024-07-18T15:36:31Z | 2024-07-18T15:38:54Z | https://github.com/xonsh/xonsh/issues/5610 | [
"refactoring"
] | anki-code | 1 |
flairNLP/flair | nlp | 3,307 | [Question]: Fail to install allennlp=0.9.0 | ### Question
When I install allennlp without defining the version, elmo embedding does not work.
But when I install with allennlp 0.9.0 it cannot install successful in colab.
Collecting spacy<2.2,>=2.1.0 (from allennlp==0.9.0)
Using cached spacy-2.1.9.tar.gz (30.7 MB)
Installing build dependencies ... error
error: subprocess-exited-with-error | closed | 2023-08-30T18:25:22Z | 2023-09-04T09:57:06Z | https://github.com/flairNLP/flair/issues/3307 | [
"question"
] | nonameisagoodname | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,676 | [Bug]: Networks with errors lora | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
When I try to create an image using Lora, I get
error network lora name(32) and the lora does not come out correctly.
Example: When trying to use lora diana-prince-dcau-ponyxl-lora-nochekaiser
This is what the error message will look like
Network with error: diana-prince-dcau-ponyxl-lora-nochekaiser (32)
The same goes for other loras.

### Steps to reproduce the problem
1. Select Lora.
2. Create a prompt for that Lora.
3. Start the task.
### What should have happened?
The Lora (character) must function properly and generate outputs successfully.
Additionally, the error message "error network lora name (32)" should no longer appear.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-11-24-02-54.json](https://github.com/user-attachments/files/17891574/sysinfo-2024-11-24-02-54.json)
### Console logs
```Shell
Loading network C:\stable-diffusion-webui\models\Lora\loradiana-prince-dcau-ponyxl-lora-nochekaiser.safetensors: FileNotFoundError
Traceback (at the end of the most recent call):
File “C:\stable-diffusion-webui\extensions-builtin\Lora\networks.py”, line 321, in load_networks.
NET = load_networks(name, network_on_disk)
In File “C:\stable-diffusion-webui\extensions-builtin\Lora\networks.py”, Line 160, load_network
net.mtime = os.path.getmtime(network_on_disk.filename)
File “C:\Python310\lib\genericpath.py”, line 55, in getmtime
Return os.stat(filename).st_mtime
File not found error: [WinError 2] The specified file could not be found: 'C:\\stable-diffusion-webui\\models\\Lora\\loradiana-prince-dcau-ponyxl-lora-nochekaiser.safetensors'
```
### Additional information
This is the version I'm using.
version:v1.10.1 • python: 3.10.11 • torch: 2.1.2+cu121 • xformers: 0.0.23.post1 • gradio: 3.41.2 • checkpoint: ac006fdd7e
The path to lora.
C:\stable-diffusion-webui\models\Lora
I used this model.
autismmixSDXL_autismmixConfetti.safetensors
Steps I have tried to resolve the issue:
1. Reinstalled Stable Diffusion.
2. Deleted and reinstalled the Lora file.
3. Renamed the Lora file.
4. Reinstalled the checkpoint (model).
I still have no idea why this error occurs. I would greatly appreciate your help. | closed | 2024-11-24T03:02:37Z | 2025-01-17T12:57:32Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16676 | [
"bug-report"
] | Musess | 0 |
PaddlePaddle/ERNIE | nlp | 713 | ERNIE的词表中的##开头的token的用途? | ERNIE的词表文件中有很多##开头的字,但ernie本身就是基于字的模型,那为什么还需要这些##开头的字呢? | closed | 2021-07-08T12:48:03Z | 2021-10-02T06:09:43Z | https://github.com/PaddlePaddle/ERNIE/issues/713 | [
"wontfix"
] | KinvenW | 2 |
HumanSignal/labelImg | deep-learning | 724 | CreateML annotation format does not work. | The CreateML annotation format is currently not supported in the tool. It can be selected, but saving and loading does not work. I assume that this is a temporary fix for some incompatibility, but I would suggest either making it work or getting rid of the format altogether to not have unexpected behavior in the tool
_Originally posted by @Cerno-b in https://github.com/tzutalin/labelImg/pull/723#r598457996_ | closed | 2021-03-22T06:46:54Z | 2022-09-26T03:41:50Z | https://github.com/HumanSignal/labelImg/issues/724 | [] | Cerno-b | 1 |
deezer/spleeter | deep-learning | 792 | [Feature] speaker/singer/vocalist separation | ## Description
speaker separation, so when we hear different speakers or singers or vocalists,
every different voice gets a separated audio voice track
## Additional information
| closed | 2022-09-27T02:45:38Z | 2022-10-07T10:33:54Z | https://github.com/deezer/spleeter/issues/792 | [
"enhancement",
"feature"
] | bartman081523 | 2 |
CPJKU/madmom | numpy | 264 | Spectrul flux vs median deviation | Hello,
I notice in the you have changed the way you compute positive differences in the Mel filterbank in comparison to your paper from 2011. Seems you no longer look at the deviations from the median, but rather utilize flux.
Would this affect the results from 2011 or should I go ahead with the median deviations?
Also,
Could you tell me what the difference is in the BEATS_BLSTM models?
Cheers | closed | 2017-03-07T19:59:34Z | 2017-04-12T06:45:14Z | https://github.com/CPJKU/madmom/issues/264 | [
"question"
] | ghost | 2 |
matterport/Mask_RCNN | tensorflow | 2,517 | Why applying biggest anchor sizes on smallest feature maps? | Hi everyone,
In model, it is called
```
a = utils.generate_pyramid_anchors(
self.config.RPN_ANCHOR_SCALES,
self.config.RPN_ANCHOR_RATIOS,
backbone_shapes,
self.config.BACKBONE_STRIDES,
self.config.RPN_ANCHOR_STRIDE)
```
for the anchor generation. In utils.generate_pyramid_anchors, we can find this line
```
anchors.append(generate_anchors(scales[i], ratios, feature_shapes[i],
feature_strides[i], anchor_stride))
```
meaning that we apply the scales[i] size to generate anchor on the feature map of size feature_shapes[i]. We have on one hand
`config.RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512)`
and
`backbone_shapes = compute_backbone_shapes(self.config, image_shape)`
on the other hand, that is in fact the array containing the shapes of the feature maps, the biggest being first. To be clear, we have for an input image of 1024x1024x1
`backbone_shapes = np.array( [256,256], [128,128], [64,64], [32,32], [16,16] )`
It means that we apply the smallest anchor size to the biggest feature maps (may be ok) and the biggest anchor sizes on the smallest feature maps. My question is why? I would have considered to apply the smallest anchor sizes to the smallest features maps? Am I missing something here?
Thank you in advance. | closed | 2021-03-24T15:30:09Z | 2025-01-06T12:33:33Z | https://github.com/matterport/Mask_RCNN/issues/2517 | [] | gdavid57 | 1 |
jupyterhub/jupyterhub-deploy-docker | jupyter | 19 | How to spawn user container with DOCKER_NOTEBOOK_DIR | suppose a user named bob signed in with github, how to spwan a bob container with DOCKER_NOTEBOOK_DIR to /home/bob/work?
| closed | 2016-08-18T04:21:52Z | 2022-12-05T00:55:52Z | https://github.com/jupyterhub/jupyterhub-deploy-docker/issues/19 | [
"enhancement",
"help wanted"
] | z333d | 7 |
Ehco1996/django-sspanel | django | 22 | 嗯,,,提个建议 | 单端口多用户,,感觉比较重要。。。
形势 | closed | 2017-10-26T05:29:03Z | 2017-10-28T18:53:49Z | https://github.com/Ehco1996/django-sspanel/issues/22 | [] | cheapssr | 4 |
ludwig-ai/ludwig | computer-vision | 3,881 | Cannot run/install finetuning colab notebook | **Describe the bug**
The [demo colab notebook](https://colab.research.google.com/drive/1r4oSEwRJpYKBPM0M0RSh0pBEYK_gBKbe) for finetuning Llama-2-7b is crashing at the third runnable cell when trying to import torch.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-3-dac5961b998e>](https://localhost:8080/#) in <cell line: 5>()
3 import logging
4 import os
----> 5 import torch
6 import yaml
7
12 frames
[/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through)
2043 encoding = "utf-8"
2044 else:
-> 2045 encoding = locale.getpreferredencoding(False)
2046
2047 if not isinstance(encoding, str):
TypeError: <lambda>() takes 0 positional arguments but 1 was given
```
**To Reproduce**
1. Go to https://colab.research.google.com/drive/1r4oSEwRJpYKBPM0M0RSh0pBEYK_gBKbe
2. Connect T4 GPU
3. Run the first three cells
4. Last cell should fail with the error message
**Expected behavior**
It should work!
**Environment (please complete the following information):**
(not sure if relevant)
| closed | 2024-01-15T17:15:32Z | 2024-01-15T21:29:12Z | https://github.com/ludwig-ai/ludwig/issues/3881 | [] | dotXem | 3 |
xlwings/xlwings | automation | 2,267 | test_markdown.py on macOS | #### OS (e.g. Windows 10 or macOS Sierra)
macOS
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
xlwings - 0.30.7
Python 3.11
#### Describe your issue (incl. Traceback!)
```python
pytest tests/test_markdown.py
```
All tests fail on macOS. As is also noted in a comment within the module
> `test_markdown.py`
> Characters are currently not properly supported
> on macOS due to an Excel/AppleScript bug
Knowing this is the case for macOS, would it be an idea to wrap theses tests within a pytest class and skip the test if the platform is macOS. As is being done for some other individual tests?
Skipping a tests has my preference as this gives me as a developer the most ease of mind. Seeing failing tests kinda makes me nervous 🙂.
```python
/Users/hayer/xlwings/xlwings/pro/reports/markdown.py:214: UserWarning: Markdown formatting is currently ignored on macOS.
warnings.warn("Markdown formatting is currently ignored on macOS.")
----
FAILED tests/test_markdown.py::test_markdown_cell_defaults_formatting - assert False is True
FAILED tests/test_markdown.py::test_markdown_cell_defaults_value - AssertionError: assert 'Title\nText ...eak\nnew line' == 'Title\n\nTex...eak\nnew line'
FAILED tests/test_markdown.py::test_markdown_cell_h1 - assert None == (255, 0, 0)
FAILED tests/test_markdown.py::test_markdown_cell_strong - assert None == (255, 0, 0)
FAILED tests/test_markdown.py::test_markdown_cell_emphasis - assert None == (255, 0, 0)
FAILED tests/test_markdown.py::test_markdown_cell_unordered_list - AssertionError: assert ' a first bul... a second bul' == '-'
FAILED tests/test_markdown.py::test_markdown_shape_defaults_formatting - AttributeError: Characters isn't supported on macOS with shapes.
FAILED tests/test_markdown.py::test_markdown_shape_defaults_value - AssertionError: assert 'Title\nText ...eak\nnew line' == 'Title\n\nTex...eak\nnew line'
FAILED tests/test_markdown.py::test_markdown_shape_h1 - AttributeError: Characters isn't supported on macOS with shapes.
FAILED tests/test_markdown.py::test_markdown_shape_strong - AttributeError: Characters isn't supported on macOS with shapes.
FAILED tests/test_markdown.py::test_markdown_shape_emphasis - AttributeError: Characters isn't supported on macOS with shapes.
FAILED tests/test_markdown.py::test_markdown_shape_unordered_list - AttributeError: Characters isn't supported on macOS with shapes.
``` | closed | 2023-05-24T11:44:10Z | 2023-05-25T09:34:00Z | https://github.com/xlwings/xlwings/issues/2267 | [] | Jeroendevr | 0 |
huggingface/datasets | nlp | 6,941 | Supporting FFCV: Fast Forward Computer Vision | ### Feature request
Supporting FFCV, https://github.com/libffcv/ffcv
### Motivation
According to the benchmark, FFCV seems to be fastest image loading method.
### Your contribution
no | open | 2024-06-01T05:34:52Z | 2024-06-01T05:34:52Z | https://github.com/huggingface/datasets/issues/6941 | [
"enhancement"
] | Luciennnnnnn | 0 |
nalepae/pandarallel | pandas | 77 | Hangs on Completion When nb_workers is Too High | Debian 9.9
pandarallel 1.4.5
pyhton 3.7.5
I'm applying in parallel some string comparisons:
```
track_matches = isrcs_to_match_by_title["title_cleaned"].parallel_apply(
lambda title_cleaned: tracks.index[tracks["name_cleaned"].values == title_cleaned])
```
It's not always reproducible. In some runs it will work and others it won't.
Setting `progress_bar=True` or `False` doesn't seem to affect it.
The higher the number of processes, the less likely it seems to complete. When I set `nb_workers=8`, it always completes. 24 sometimes completes, 96 never completes.
On completion the processes all die out (none of them are being used), and the program never continues. | closed | 2020-02-14T00:04:04Z | 2022-09-07T11:17:17Z | https://github.com/nalepae/pandarallel/issues/77 | [] | xanderdunn | 2 |
explosion/spaCy | machine-learning | 12,738 | Custom spaCy NER model not making expected predictions | ## Issue
A custom NER model (trained to identify certain code numbers) does not produce predictions in certain documents.
The tables below show text snippets of pairs of two documents in row#1 and row#2 respectively, where row#1 represents text on which model returns correct prediction and row#2 represents another text on which model does not return correct prediction, despite text1 and text2 being very similar.
I have tried multiple variations of these texts from row#3 onwards, in an attempt to pinpoint what piece of text is causing differences in predictions. The issue here is- if the model recognizes a given code as the correct inference in one document, why is it not able to identify another similar looking code number as the correct inference, in another similar document.
### Model Inputs and Corresponding Outputs
Example1:
<img width="1162" alt="PC spaCy model input and output" src="https://github.com/explosion/spaCy/assets/15351802/8755fdd3-2c92-45cf-b4f2-203aca4e0c6d">
Example2:
<img width="1193" alt="Screenshot 2023-06-18 at 6 59 41 PM" src="https://github.com/explosion/spaCy/assets/15351802/8b5220dd-7a7a-45f2-b156-57595519d346">
## Environment
* spaCy version: 3.0.6
* Platform: Linux-5.10.178-162.673.amzn2.x86_64-x86_64-with-glibc2.26
* Python version: 3.10.10
| closed | 2023-06-18T23:14:18Z | 2023-07-20T00:02:26Z | https://github.com/explosion/spaCy/issues/12738 | [] | MisbahKhan789 | 3 |
yzhao062/pyod | data-science | 150 | callbacks in autoencoder | How can i implement callback parameter in fit moder Autoencoder ?
There is not parameter.
from keras.callbacks.callbacks import EarlyStopping
cb_earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0,
mode='auto', baseline=None, restore_best_weights=False)
pyod_model.fit(scaler, callbacks=[cb_earlystop])
TypeError: fit() got an unexpected keyword argument 'callbacks'
Can you implement this parameter? Its very usefull for monitor, early stop and another cases.
| open | 2019-12-16T16:58:31Z | 2024-05-07T13:05:38Z | https://github.com/yzhao062/pyod/issues/150 | [
"enhancement",
"help wanted",
"good first issue"
] | satrum | 6 |
pydantic/pydantic-settings | pydantic | 446 | Validation is only performed during debugging | I have an environment file that contains several key variables necessary for my application's proper functioning. However, I accidentally added a comment in front of one of these variables, which caused some confusion during debugging. While my code runs smoothly without any apparent issues, I encounter a validation error related to that specific variable when I debug it.
Here are sample codes:
```python
# config.py
from typing import Literal
from pydantic_settings import BaseSettings, SettingsConfigDict
class Config(BaseSettings):
ENVIRONMENT: Literal['debug', 'testing', 'production'] = 'debug'
model_config = SettingsConfigDict(
env_file='.env',,
env_ignore_empty=True,
extra='ignore',
)
config = Config()
```
``` python
# main.py
from config import config
```
``` python
# .env
ENVIRONMENT=debug # some comments here
```
Enclosing the `debug` statement in double quotes doesn't seem to resolve the issue, but removing the comment does.
Am I missing something? | closed | 2024-10-18T09:58:35Z | 2024-10-18T16:14:17Z | https://github.com/pydantic/pydantic-settings/issues/446 | [
"unconfirmed"
] | BabakAmini | 11 |
ipython/ipython | data-science | 14,202 | Syntax Highlighting 0b and 0x Prefix | <!-- This is the repository for IPython command line, if you can try to make sure this question/bug/feature belong here and not on one of the Jupyter repositories.
If it's a generic Python/Jupyter question, try other forums or discourse.jupyter.org.
If you are unsure, it's ok to post here, though, there are few maintainer so you might not get a fast response.
-->
The prefixes for binary ```0b``` and hex ```0x``` are not highlighted in a different colour in ipython like they are in vscode. Having the prefix in a different colour makes the code a bit more readable, emphasising that it is not a decimal number:
```
0b01101111
0x6f
f'binary {0b01101111}, hexadecimal {0x6f}'
```

| open | 2023-10-02T10:30:05Z | 2023-10-02T10:30:05Z | https://github.com/ipython/ipython/issues/14202 | [] | PhilipYip1988 | 0 |
microsoft/nni | data-science | 5,488 | It is recommended to combine quantification of NNI with onnxruntime | NNI is a great project and its quantification function is very easy.
However, I find that the quantified results can only be converted to tensorrt acceleration for the GPU within the framework of nni. This will be the biggest resistance for developers like mobile or cross-language platforms to choose nni.
So if you can combine the quantification function of NNI with the reasoning function of onnxruntime, it will be very convenient for project deployment. I believe it will make NNI the most popular project. | open | 2023-03-29T07:54:13Z | 2023-03-31T02:35:49Z | https://github.com/microsoft/nni/issues/5488 | [
"feature request"
] | Z-yq | 0 |
drivendataorg/erdantic | pydantic | 139 | Add support for msgspec.Struct | [msgspec](https://jcristharif.com/msgspec/) is a serialization and validation library, and it has a `Struct` class for declaring typed dataclass-like classes.
https://jcristharif.com/msgspec/structs.html | open | 2025-03-24T01:10:21Z | 2025-03-24T01:10:21Z | https://github.com/drivendataorg/erdantic/issues/139 | [
"enhancement"
] | jayqi | 0 |
ading2210/poe-api | graphql | 144 | Run temp_message error | - I know this question may be stupid, but I struggled for a long time.
- Is this library working now?
- I used version `0.4.8.`
- And this is my first time run it, then I got this error
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 233, in get_bot_thread
chat_data = self.get_bot(bot["node"]["displayName"])
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
KeyError: 'chatOfBotDisplayName'
Exception in thread Thread-3:
Traceback (most recent call last):
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 233, in get_bot_thread
chat_data = self.get_bot(bot["node"]["displayName"])
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
KeyError: 'chatOfBotDisplayName'
Exception in thread Thread-5:
Traceback (most recent call last):
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 233, in get_bot_thread
chat_data = self.get_bot(bot["node"]["displayName"])
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
KeyError: 'chatOfBotDisplayName'
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 233, in get_bot_thread
chat_data = self.get_bot(bot["node"]["displayName"])
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
KeyError: 'chatOfBotDisplayName'
Exception in thread Thread-7:
Traceback (most recent call last):
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 233, in get_bot_thread
chat_data = self.get_bot(bot["node"]["displayName"])
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
KeyError: 'chatOfBotDisplayName'
Exception in thread Thread-4:
Traceback (most recent call last):
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 233, in get_bot_thread
chat_data = self.get_bot(bot["node"]["displayName"])
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
KeyError: 'chatOfBotDisplayName'
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 233, in get_bot_thread
chat_data = self.get_bot(bot["node"]["displayName"])
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 214, in get_bot
chat_data = data["pageProps"]["data"]["chatOfBotDisplayName"]
KeyError: 'chatOfBotDisplayName'
INFO:root:Subscribing to mutations
WARNING:websocket:websocket connected
INFO:root:Sending message to capybara: Who are you?
WARNING:root:Server returned a status code of 404 while downloading https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json. Retrying (1/10)...
WARNING:root:Server returned a status code of 404 while downloading https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json. Retrying (2/10)...
WARNING:root:Server returned a status code of 404 while downloading https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json. Retrying (3/10)...
WARNING:root:Server returned a status code of 404 while downloading https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json. Retrying (4/10)...
WARNING:root:Server returned a status code of 404 while downloading https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json. Retrying (5/10)...
WARNING:root:Server returned a status code of 404 while downloading https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json. Retrying (6/10)...
WARNING:root:Server returned a status code of 404 while downloading https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json. Retrying (7/10)...
WARNING:root:Server returned a status code of 404 while downloading https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json. Retrying (8/10)...
WARNING:root:Server returned a status code of 404 while downloading https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json. Retrying (9/10)...
WARNING:root:Server returned a status code of 404 while downloading https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json. Retrying (10/10)...
Traceback (most recent call last):
File "D:\Git_project\Duolingo_auto_answer\temp.py", line 20, in <module>
for chunk in client.send_message("capybara", message, with_chat_break=True):
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 484, in send_message
chat_id = self.get_bot_by_codename(chatbot)["chatId"]
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 254, in get_bot_by_codename
return self.get_bot(bot_codename)
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 210, in get_bot
data = request_with_retries(self.session.get, url).json()
File "C:\Users\111\AppData\Local\Programs\Python\Python38\lib\site-packages\poe.py", line 53, in request_with_retries
raise RuntimeError(f"Failed to download {url} too many times.")
RuntimeError: Failed to download https://poe.com/_next/data/2WUKJiaLlyAttUI7qPryw/capybara.json too many times. | closed | 2023-07-04T02:20:00Z | 2023-07-04T09:08:26Z | https://github.com/ading2210/poe-api/issues/144 | [
"bug"
] | LiMingchen159 | 3 |
flaskbb/flaskbb | flask | 218 | Activation email error | Hello, I have issue with email activation token sending.
[2016-09-23 23:56:21,050: INFO/MainProcess] Received task: flaskbb.email.send_activation_token[811ca7d8-7f1b-4440-8e89-3a5f5e9c266d]
[2016-09-23 23:56:21,061: ERROR/MainProcess] Task flaskbb.email.send_activation_token[811ca7d8-7f1b-4440-8e89-3a5f5e9c266d] raised unexpected: DetachedInstanceError('Instance <User at 0x7fc6a76e1590> is not bound to a Session; attribute refresh operation cannot proceed',)
Traceback (most recent call last):
File "/var/www/virtenv/flaskbb/local/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(_args, *_kwargs)
File "/var/www/flaskbb/app.py", line 92, in **call**
return TaskBase.**call**(self, _args, *_kwargs)
File "/var/www/virtenv/flaskbb/local/lib/python2.7/site-packages/celery/app/trace.py", line 438, in **protected_call**
return self.run(_args, *_kwargs)
File "/var/www/flaskbb/email.py", line 48, in send_activation_token
token = make_token(user=user, operation="activate_account")
File "/var/www/flaskbb/utils/tokens.py", line 37, in make_token
data = {"id": user.id, "op": operation}
File "/var/www/virtenv/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 237, in **get**
return self.impl.get(instance_state(instance), dict_)
File "/var/www/virtenv/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 578, in get
value = state._load_expired(state, passive)
File "/var/www/virtenv/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/state.py", line 474, in _load_expired
self.manager.deferred_scalar_loader(self, toload)
File "/var/www/virtenv/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 610, in load_scalar_attributes
(state_str(state)))
DetachedInstanceError: Instance <User at 0x7fc6a76e1590> is not bound to a Session; attribute refresh operation cannot proceed
Have any ideas?
| closed | 2016-09-23T21:03:36Z | 2018-04-15T07:47:40Z | https://github.com/flaskbb/flaskbb/issues/218 | [] | JustOnce | 3 |
modelscope/modelscope | nlp | 828 | 下载数据集失败了 | Thanks for your error report and we appreciate it a lot.
**Checklist**
* I have searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs)
* I have searched related issues but cannot get the expected help.
* The bug has not been fixed in the latest version.
**Describe the bug**
```
Traceback (most recent call last):
File "/Users/wangxiaoxin/Downloads/test/download.py", line 6, in <module>
ds = MsDataset.load('modelscope/Youku-AliceMind', namespace='tany0699', split='train')
File "/opt/homebrew/lib/python3.9/site-packages/modelscope/msdatasets/ms_dataset.py", line 315, in load
dataset_inst = remote_dataloader_manager.load_dataset(
File "/opt/homebrew/lib/python3.9/site-packages/modelscope/msdatasets/data_loader/data_loader_manager.py", line 132, in load_dataset
oss_downloader.process()
File "/opt/homebrew/lib/python3.9/site-packages/modelscope/msdatasets/data_loader/data_loader.py", line 82, in process
self._build()
File "/opt/homebrew/lib/python3.9/site-packages/modelscope/msdatasets/data_loader/data_loader.py", line 109, in _build
meta_manager.parse_dataset_structure()
File "/opt/homebrew/lib/python3.9/site-packages/modelscope/msdatasets/meta/data_meta_manager.py", line 119, in parse_dataset_structure
raise 'Cannot find dataset meta-files, please fetch meta from modelscope hub.'
TypeError: exceptions must derive from BaseException
```
**To Reproduce**
```
from modelscope.msdatasets.ms_dataset import MsDataset
from modelscope import HubApi
api=HubApi()
api.login('')
ds = MsDataset.load('modelscope/Youku-AliceMind', namespace='tany0699', split='train')
print(next(iter(ds)))
```
**Your Environments (__required__)**
* OS: `uname -a`
* CPU: `lscpu`
* Commit id (e.g. `a3ffc7d8`)
* You may add addition that may be helpful for locating the problem, such as
* How you installed PyTorch [e.g., pip, conda, source]
* Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)
Please @ corresponding people according to your problem:
Model related: @wenmengzhou @tastelikefeet
Model hub related: @liuyhwangyh
Dataset releated: @wangxingjun778
Finetune related: @tastelikefeet @Jintao-Huang
Pipeline related: @Firmament-cyou @wenmengzhou
Contribute your model: @zzclynn
| closed | 2024-04-12T12:41:49Z | 2024-04-17T08:13:47Z | https://github.com/modelscope/modelscope/issues/828 | [] | 459737087 | 2 |
pydata/pandas-datareader | pandas | 111 | Additional data for yahoo finance data reader | Can additional data be added to the `Datareader` other than OHLC and volume data.
Data such as can be obtained:
- Dividend information (dates, yield, ex-dividend date...etc)
- EPS
- Market cap
- Price to book ratio
- P/E ratio
- Financial information such as quarter earnings.
The yahoo API supports streaming some of these through special tags as seen here: https://greenido.wordpress.com/2009/12/22/yahoo-finance-hidden-api/
This can be done in request but it would be nice if the Datareader had some option to scrape this information.
Thoughts?
| closed | 2015-10-19T08:11:41Z | 2018-01-18T16:32:37Z | https://github.com/pydata/pandas-datareader/issues/111 | [] | ccsv | 5 |
deezer/spleeter | tensorflow | 77 | [Discussion] WARNING:spleeter:[WinError 2] The system cannot find the file specified | Hello guys,
I'm trying spleeter on a Win10 laptop. When running ``spleeter separate -i spleeter\audio_example.mp3 -p spleeter:2stems -o output`` I got the following error:
``WARNING:spleeter:[WinError 2] The system cannot find the file specified``
I'm sure the file path is correct, so I'm confused about this error. Anyone has idea?
For your information, I didnt install spleeter using "conda" but using "pip" because when using conda I always had the problem of "simplejson finished with status error". | closed | 2019-11-12T10:32:50Z | 2019-11-16T23:21:21Z | https://github.com/deezer/spleeter/issues/77 | [
"question"
] | gladys0313 | 9 |
google-research/bert | tensorflow | 888 | how use the pretrain checkpoint to continue train on my own corpus? | I want to load the pretrain checkpoint to continue train on my own corpus, I use the `run_pretrain.py` code and set the init_checkpoint to the pretrain dir, while I run the code, it raise error?
```
ERROR:tensorflow:Error recorded from training_loop: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
From /job:worker/replica:0/task:0:
Key bert/embeddings/LayerNorm/beta/adam_m not found in checkpoint
[[node save/RestoreV2 (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
```
I know that when finish training, it is better to remove `adam_m` and `adam_v` parameter to reduce the size of the checkpoint file, but I while want to continue train on the pretrain checkpoint, how to sovle this problem, may be I can recovert adam reference variable name in the checkpoint file ?thank you | open | 2019-10-25T13:27:04Z | 2021-11-10T03:27:30Z | https://github.com/google-research/bert/issues/888 | [] | RyanHuangNLP | 7 |
databricks/koalas | pandas | 2,213 | read_excel's parameter - mangle_dupe_cols is used to handle duplicate columns but fails if the duplicate columns are case sensitive. | mangle_dupe_cols - default is True
So ideally it should have handled duplicate columns, but in case if the columns are case sensitive it fails as below.
AnalysisException: Reference '`Sheet.col`' is ambiguous, could be: Sheet.col, Sheet.col.
Where two columns are Col and cOL
In the best practices, there is a mention of not to use case sensitive columns - https://koalas.readthedocs.io/en/latest/user_guide/best_practices.html#do-not-use-duplicated-column-names
Either the docs for read_excel/mangle_dupe_cols has to be updated about this or it has to be handled. | open | 2022-01-17T18:39:26Z | 2022-01-25T15:10:34Z | https://github.com/databricks/koalas/issues/2213 | [
"docs"
] | saikrishnapujari102087 | 2 |
robusta-dev/robusta | automation | 1,601 | enable cpu limit for "robusta-forwarder" service as current config cause cpu hogging | **Describe the bug**
Current helm config for `cpu limit` of `robusta-forwarder` caused the pods to consume available cpu of the Node.
**To Reproduce**
NA
**Expected behavior**
`robusta-forwarder` pod should run in specified cpu limits.
**Screenshots**
```
pratikraj@Pratiks-MacBook-Pro ~ %
pratikraj@Pratiks-MacBook-Pro ~ % oc adm top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
worker7.copper.cp.xxxxx.xxxx.com 42648m 98% 34248Mi 14%
pratikraj@Pratiks-MacBook-Pro ~ %
pratikraj@Pratiks-MacBook-Pro ~ % oc adm top po -n robusta
NAME CPU(cores) MEMORY(bytes)
robusta-forwarder-6ddb7758f7-xm42p 53400m 502Mi
robusta-runner-6cb648c696-44sqp 9m 838Mi
pratikraj@Pratiks-MacBook-Pro ~ %
pratikraj@Pratiks-MacBook-Pro ~ % oc delete -n robusta po robusta-forwarder-6ddb7758f7-xm42p
pod "robusta-forwarder-6ddb7758f7-xm42p" deleted
pratikraj@Pratiks-MacBook-Pro ~ %
pratikraj@Pratiks-MacBook-Pro ~ % oc get po -n robusta
NAME READY STATUS RESTARTS AGE
robusta-forwarder-6ddb7758f7-b2l69 1/1 Running 0 36s
robusta-runner-6cb648c696-44sqp 2/2 Running 2 (31h ago) 3d22h
pratikraj@Pratiks-MacBook-Pro ~ %
pratikraj@Pratiks-MacBook-Pro ~ % oc version
Client Version: 4.15.15
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: 4.14.36
Kubernetes Version: v1.27.16+03a907c
pratikraj@Pratiks-MacBook-Pro ~ %
pratikraj@Pratiks-MacBook-Pro ~ % oc get po -n robusta
NAME READY STATUS RESTARTS AGE
robusta-forwarder-6ddb7758f7-b2l69 1/1 Running 0 17h
robusta-runner-6cb648c696-44sqp 2/2 Running 2 (2d1h ago) 4d16h
pratikraj@Pratiks-MacBook-Pro ~ %
pratikraj@Pratiks-MacBook-Pro ~ % oc adm top po -n robusta
NAME CPU(cores) MEMORY(bytes)
robusta-forwarder-6ddb7758f7-b2l69 29m 286Mi
robusta-runner-6cb648c696-44sqp 880m 991Mi
pratikraj@Pratiks-MacBook-Pro ~ %
pratikraj@Pratiks-MacBook-Pro ~ % oc adm top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
worker7.copper.cp.xxxxx.xxxx.com 1284m 2% 34086Mi 14%
pratikraj@Pratiks-MacBook-Pro ~ %
```
**Environment Info (please complete the following information):**
```
Client Version: 4.15.15
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: 4.14.36
Kubernetes Version: v1.27.16+03a907c
```
**Additional context**
Add any other context about the problem here.
| open | 2024-10-22T06:03:59Z | 2024-10-29T06:57:15Z | https://github.com/robusta-dev/robusta/issues/1601 | [] | Rajpratik71 | 3 |
QuivrHQ/quivr | api | 3,378 | fix: Linking brain syncs return as root | File list should filter on local only | closed | 2024-10-16T07:33:39Z | 2024-10-16T08:19:30Z | https://github.com/QuivrHQ/quivr/issues/3378 | [
"bug"
] | linear[bot] | 1 |
neuml/txtai | nlp | 816 | Fix error with inferring function parameters in agents | Currently, when passing a function as a tool, it's not correctly inferring the name and input parameters. This should be fixed.
In the meantime, callable objects are working as expected. | closed | 2024-11-21T19:01:47Z | 2024-11-21T19:16:22Z | https://github.com/neuml/txtai/issues/816 | [
"bug"
] | davidmezzetti | 0 |
holoviz/panel | matplotlib | 7,428 | Bubble map at each well location represented by pie charts. | In the oil industry we typically plot a bubble plot at each well location on a map where the size of the bubble might be scaled by daily oil production. In our applications the bubbles are actually pie charts at each well location where the pie could be divided up by total oil, water or gas production at each well.
I would think that this application might be useful for other mapping applications too. | closed | 2024-10-21T18:51:30Z | 2025-01-21T13:43:48Z | https://github.com/holoviz/panel/issues/7428 | [] | Philliec459 | 4 |
python-restx/flask-restx | api | 358 | Apply method_decorators globally | Is there a way to apply `method_decorators` globally without having to define them on every Resource? | closed | 2021-07-22T16:34:26Z | 2021-07-22T19:07:27Z | https://github.com/python-restx/flask-restx/issues/358 | [
"question"
] | santalvarez | 1 |
scikit-hep/awkward | numpy | 3,036 | awkward._nplikes.typetracer.TypeTracerArray._new needs further optimization, it is hot code | ### Version of Awkward Array
2.6.1
### Description and code to reproduce
The present code is below:
```python3
@classmethod
def _new(
cls,
dtype: DType,
shape: tuple[ShapeItem, ...],
form_key: str | None = None,
report: TypeTracerReport | None = None,
):
self = super().__new__(cls)
self.form_key = form_key
self.report = report
if not isinstance(shape, tuple):
raise TypeError("typetracer shape must be a tuple")
if not all(isinstance(x, int) or x is unknown_length for x in shape):
raise TypeError("typetracer shape must be integers or unknown-length")
if not isinstance(dtype, np.dtype):
raise TypeError("typetracer dtype must be an instance of np.dtype")
self._shape = shape
self._dtype = dtype
return self
```
in particular:
```python3
if not isinstance(shape, tuple):
raise TypeError("typetracer shape must be a tuple")
if not all(isinstance(x, int) or x is unknown_length for x in shape):
raise TypeError("typetracer shape must be integers or unknown-length")
if not isinstance(dtype, np.dtype):
raise TypeError("typetracer dtype must be an instance of np.dtype")
```
is quite expensive at the scale of thousands of awkward array kernel calls and in normal, non-development, operation none of these if statements ever evaluate to True. They are relatively expensive to evaluate when there is no data processing, see for example typetracing in dask-awkward.
We know already that commenting out these lines brings significant performance improvements.
A reasonable alternative to that would be wrapping this code in an if block.
```python3
@classmethod
def _new(
cls,
dtype: DType,
shape: tuple[ShapeItem, ...],
form_key: str | None = None,
report: TypeTracerReport | None = None,
):
self = super().__new__(cls)
self.form_key = form_key
self.report = report
if __development_build__:
if not isinstance(shape, tuple):
raise TypeError("typetracer shape must be a tuple")
if not all(isinstance(x, int) or x is unknown_length for x in shape):
raise TypeError("typetracer shape must be integers or unknown-length")
if not isinstance(dtype, np.dtype):
raise TypeError("typetracer dtype must be an instance of np.dtype")
self._shape = shape
self._dtype = dtype
return self
```
Where `__development_build__` is just a stand in name. I think the only requirements would be that the parameter to the if statement is a simple bool, no lookups (etc.), as this is known to be pretty hot code.
Savings are O(1s) for a complex HEP analysis when building a single few-thousand layer task graph. This corresponds to many minutes saved in processing for full-scale analysis.
@jpivarski @agoose77 @martindurant | closed | 2024-03-01T16:03:08Z | 2024-03-21T23:09:00Z | https://github.com/scikit-hep/awkward/issues/3036 | [
"performance"
] | lgray | 10 |
Ehco1996/django-sspanel | django | 207 | 还更吗? | 还更吗? | closed | 2019-02-01T10:53:50Z | 2019-02-25T13:47:57Z | https://github.com/Ehco1996/django-sspanel/issues/207 | [] | perfect-network | 2 |
unit8co/darts | data-science | 2,111 | How to use darts libraries with PySpark? | Hello all,
I'm trying to integrate darts library with PySpark session to make my application perform time-series prediction in parallelism.
I referred this blog [Time-Series Forecasting with Spark and Prophet](https://medium.com/@y.s.yoon/scalable-time-series-forecasting-in-spark-prophet-cnn-lstm-and-sarima-a5306153711e) inplace of Prophet i was trying using darts.
Is this a good approch? or any other suggestion for prediction in parallelism ? | closed | 2023-12-07T14:26:12Z | 2024-01-10T07:44:56Z | https://github.com/unit8co/darts/issues/2111 | [
"question"
] | AyushBhardwaj321 | 1 |
sinaptik-ai/pandas-ai | data-visualization | 712 | Using pandasai with llama2 70B. Error: "Unfortunately, I was not able to answer your question, because of the following error:\n\nCSVFormatter.__init__() got an unexpected keyword argument 'line_terminator'\n" | ### System Info
pandasai version 1.4.1
python version 3.10.6
llm - llama2 70B
environment - sagemaker
### 🐛 Describe the bug
code:
``` python
from pandasai.llm import HuggingFaceTextGen
from pandasai import SmartDataframe
llm = HuggingFaceTextGen(
inference_server_url="ENDPOINT_URL"
)
df = SmartDataframe("data.csv", config={"llm": llm})
df.chat("plot a chart of col_1 by col_2")
```
Error:
"Unfortunately, I was not able to answer your question, because of the following error:\n\nCSVFormatter.__init__() got an unexpected keyword argument 'line_terminator'\n" | closed | 2023-10-31T02:24:14Z | 2024-06-01T00:19:41Z | https://github.com/sinaptik-ai/pandas-ai/issues/712 | [] | aparnakesarkar | 2 |
ultralytics/yolov5 | pytorch | 13,127 | YOLOv5 receptive range size | Hello
I am currently doing small defect detection. The length or width of the defect occupies at most 0.02-0.08 of the original image. I am using YOLOv5s. I have found in other previous questions that YOLO default uses P3 P4 P5 as the detection head. In other questions and answers, you can Knowing that the P5 is a large detector, I thought the listing could be deleted in my case. In your previous answer I noticed that you mentioned the receptive field.
I want to ask about this. I calculated the receptive field size in 5s to skip "skip connection".
(The only modules that affect perception and calculation are CNN and bottleneck in CSP)
P1 is 6
P2+CSP is 18
P3+CSP is 66
P4+CSP is 194
P5+CSP is 322
What I want to ask you about is about receptive field correspondence.
In my example, my input size is 640 multiplied by the overall defect size to cover the maximum range of 0.08, and the defective pixels account for 52. So I think I only need to cover the receptive field with the tiny defect size and expand it to twice the receptive field size in another layer. To make another area partial, taking the above 52 as an example, I should select 66 of P3 and reduce the bottleneck in P4 to twice the size of 66 and delete P5.
(Since P5 is aimed at large objects and the larger receptive field may cause the object to be blurred in the depth of the rolling machine), I think that the reason for modifying this part from the backbone is because the backbone is feature extraction.
What I want to ask is, is the idea of deleting the bottleneck correct?
In addition, does this idea correspond to the receptive field you mentioned (I am doing research on alignment) | closed | 2024-06-25T13:06:54Z | 2024-10-20T19:48:52Z | https://github.com/ultralytics/yolov5/issues/13127 | [
"question"
] | Heaven0612 | 8 |
viewflow/viewflow | django | 81 | django 1.7+ support? | whats the roadmap/plans for supporting atleast django 1.7?
| closed | 2015-03-19T16:47:34Z | 2015-07-31T08:24:57Z | https://github.com/viewflow/viewflow/issues/81 | [
"request/question"
] | Bashar | 1 |
davidteather/TikTok-Api | api | 378 | [BUG] - Your Error Here | # Read Below!!! If this doesn't fix your issue delete these two lines
**You may need to install chromedriver for your machine globally. Download it [here](https://sites.google.com/a/chromium.org/chromedriver/) and add it to your path.**
**Describe the bug**
A clear and concise description of what the bug is.
**The buggy code**
Please insert the code that is throwing errors or is giving you weird unexpected results.
```
# Code Goes Here
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Error Trace (if any)**
Put the error trace below if there's any error thrown.
```
# Error Trace Here
```
**Desktop (please complete the following information):**
- OS: [e.g. Windows 10]
- TikTokApi Version [e.g. 3.3.1] - if out of date upgrade before posting an issue
**Additional context**
Add any other context about the problem here.
| closed | 2020-11-20T10:31:11Z | 2020-11-20T10:40:46Z | https://github.com/davidteather/TikTok-Api/issues/378 | [
"bug"
] | handole | 1 |
sunscrapers/djoser | rest-api | 353 | It's unclear what is meant by SERIALIZERS.current_user | I'm getting a deprecation warning "Current user endpoints now use their own serializer setting. For more information, see: https://djoser.readthedocs.io/en/latest/settings.html#serializers".
The referred documentation mentions that "Current user endpoints now use the serializer specified by SERIALIZERS.current_user". However, it's unclear what `SERIALIZERS.current_user` exactly refers to. Does it mean `settings.DJOSER['SERIALIZERS']['current_user']`? Or does it mean that `settings.SERIALIZERS` should point to an object that has a `current_user` member? In the latter case, I suppose `settings.SERIALIZERS.current_user` is a string that points to a serializer class?
In any case, I'm confused. Could this be clarified? | closed | 2019-02-12T10:37:59Z | 2019-05-15T18:08:39Z | https://github.com/sunscrapers/djoser/issues/353 | [] | mnieber | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 1,402 | Model weights changed? Won't run past 1% | I'm getting this error every time I try to use MDX23C_D1581 modelI. I didn't change any settings (that I'm aware of) and I even uninstalled/reinstalled the GUI but the same thing is happening. I'm completely lost.
Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: "Error(s) in loading state_dict for TFC_TDF_net:
size mismatch for final_conv.2.weight: copying a param with shape torch.Size([32, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 128, 1, 1])."
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 652, in seperate
File "separate.py", line 741, in demix
File "torch\nn\modules\module.py", line 1667, in load_state_dict
"
Error Time Stamp [2024-06-11 17:21:37]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: True
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: MDX-Net: UVR-MDX-NET-Voc_FT
vr_other_secondary_model: Demucs: v3 | mdx_extra_q
vr_bass_secondary_model: Demucs: v4 | htdemucs_ft
vr_drums_secondary_model: Demucs: v4 | htdemucs_6s
vr_is_secondary_model_activate: True
vr_voc_inst_secondary_model_scale: 0.80
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: MDX-Net: UVR-MDX-NET-Voc_FT
demucs_other_secondary_model: Demucs: v3 | repro_mdx_a_hybrid
demucs_bass_secondary_model: Demucs: v4 | htdemucs_ft
demucs_drums_secondary_model: Demucs: v4 | htdemucs_6s
demucs_is_secondary_model_activate: True
demucs_voc_inst_secondary_model_scale: 0.80
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: MDX-Net: MDX23C-InstVoc HQ 2
is_demucs_pre_proc_model_activate: True
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: MDX23C_D1581.ckpt
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: False
is_spec_match: True
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: True
deverb_vocal_opt: All Vocal Types
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: MDX-Net: UVR-MDX-NET Karaoke 2
mdx_other_secondary_model: Demucs: v3 | repro_mdx_a_hybrid
mdx_bass_secondary_model: Demucs: v4 | htdemucs_ft
mdx_drums_secondary_model: Demucs: v4 | htdemucs_6s
mdx_is_secondary_model_activate: True
mdx_voc_inst_secondary_model_scale: 0.80
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: Quadro M4000:0
help_hints_var: True
set_vocal_splitter: VR Arc: 6_HP-Karaoke-UVR
is_set_vocal_splitter: True
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: Vocals | open | 2024-06-11T21:35:22Z | 2024-06-11T21:49:21Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1402 | [] | dirtdoggy | 0 |
deezer/spleeter | deep-learning | 116 | [Discussion] FileNotFoundError When Trying to Train | ### **Steps:**
1. Installed using anaconda
2. Run as: spleeter train -p sirens_config.json -d sirens_train.csv
3. FileNotFoundError: [Errno 2] File b'configs/sirens_train.csv' does not exist: b'configs/sirens_train.csv'
sirens _train.csv definitely exists so I'm not sure what the issue is. any help would be great thanks! | closed | 2019-11-18T22:43:11Z | 2020-01-23T09:38:13Z | https://github.com/deezer/spleeter/issues/116 | [
"question"
] | clockworkred458 | 1 |
kizniche/Mycodo | automation | 670 | TH10 w/ AM2301. Mycodo reporting C as F | ## Mycodo Issue Report:
- Specific Mycodo Version:
Mycodo Version: 7.5.10
Python Version: 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516]
Database Version: 6333b0832b3d
Daemon Status: Running
Daemon Process ID: 759
Daemon RAM Usage: 68.756 MB
Daemon Virtualenv: Yes
Frontend RAM Usage: 53.084 MB
Frontend Virtualenv: Yes
#### Problem Description
TH10 + AM2301 with Tasmota 6.6.0. Mycodo reports humidity correctly but displays incorrect data for Temperature. When set to read C and Do Not Convert it lists the C temp much lower than actual. When Input set to Convert to Fahrenheit, it lists the correct temperature in Celcius but displays it as Fahrenheit.
### Errors
- List any errors you encountered.
- Copy and pasting crash logs, or link to any specific
code lines you've isolated (please use GitHub permalinks for this)
### Steps to Reproduce the issue:
How can this issue be reproduced?
1. Add TH10 w/ AM2301 w/ Tasmota as input
2. Live data lists incorrect C temperature
3. Set input to Convert to Fahrenheit
4. Live data displays correct C but displayed as F
### Additional Notes
Images attached. Thanks for the great product!




| closed | 2019-07-09T21:16:18Z | 2019-07-17T11:09:17Z | https://github.com/kizniche/Mycodo/issues/670 | [
"bug"
] | toposwope | 49 |
collerek/ormar | pydantic | 885 | save_related doesn't work if id is uuid | **Describe the bug**
If you use Model.save_related and the model has a uuid pk instead of an int it doent save anything
(The save itself returns the numer of rows saved correctly)
**To Reproduce**
Copy code example of https://collerek.github.io/ormar/models/methods/#save_related
Code works as expected.
replace id with `id: UUID4 = ormar.UUID(primary_key=True, default=uuid4)`
Now the get() raises `ormar.exceptions.NoMatch`
**Expected behavior**
Expect the same behavior as with int pk.
**Versions (please complete the following information):**
- Database backend used postgress
- Python version python:3.10.8-slim-bullseye (docker image)
- `ormar` version 0.11.3
- `pydantic` version 1.10.2
- if applicable `fastapi` version 0.85.0
| closed | 2022-10-18T18:09:02Z | 2022-10-31T16:45:39Z | https://github.com/collerek/ormar/issues/885 | [
"bug"
] | eloi-martinez-qida | 2 |
vvbbnn00/WARP-Clash-API | flask | 152 | 你的ip是什么ip啊 | http://你的IP:21001
你的ip是什么ip啊
小白看不太懂
http://127.0.0.1:21001/api/clash?best=false&randomName=true&proxyFormat=full&ipv6=true
我得到这个链接,但是无法下载节点,把上面的链接打开提示{"code":403,"message":"Unauthorized"}
请问这是什么意思 | closed | 2024-03-16T06:19:47Z | 2024-11-24T06:07:01Z | https://github.com/vvbbnn00/WARP-Clash-API/issues/152 | [
"enhancement"
] | hicaicai | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,127 | When training the GAN model, how many epochs do we need to train | Hello,
In your article, Cyclegan is trained a total of 200 epochs.
But when I read the latest GAN article-"Reusing Discriminators for Encoding: Towards Unsupervised Image-to-Image Translation", the author compares some gan models, including Cyclegan. In the article, All models are trained over 100K iterations.
so,i want to konw , When training the GAN model, how many epochs do we need to train | closed | 2020-08-19T08:13:03Z | 2021-04-15T14:24:56Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1127 | [] | hankhaohao | 6 |
psf/requests | python | 6,145 | timeout parameter not applied to initial connection | If specifying the timeout parameter to requests.get(), an exception is raised after the specified timeout. If the code is executed without an internet connection, an exception is raised immediately. Python is at 3.10.
## Expected Result
It feels like the timeout should apply to this scenario as well.
## Actual Result
<!-- What happened instead. -->
## Reproduction Steps
```python
import requests
```
## System Information
$ python -m requests.help
```json
{
"paste": "here"
}
```
<!-- This command is only available on Requests v2.16.4 and greater. Otherwise,
please provide some basic information about your system (Python version,
operating system, &c). -->
| closed | 2022-05-29T18:56:10Z | 2023-05-30T00:02:52Z | https://github.com/psf/requests/issues/6145 | [] | chatumao | 2 |
huggingface/pytorch-image-models | pytorch | 1,878 | [BUG] EfficientFormer TypeError: expected TensorOptions | **Describe the Bug**
TypeError occurred when implementing the EfficientFormer. The remaining tested models work well
**To Reproduce**
Run the below code in the training pipeline
```
class EfficientFormer(torch.nn.Module):
def __init__(self):
super(EfficientFormer, self).__init__()
self.model = timm.create_model('efficientformer_l1.snap_dist_in1k', num_classes=2)
def forward(self, x):
for name, param in self.model.named_parameters():
if 'head' not in name:
param.requires_grad = False
print(name, param.requires_grad)
return self.model(x)
```
**Expected Behavior**
```
C:\Users\e0575844\Anaconda3\envs\ET\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\TensorShape.cpp:3191.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Traceback (most recent call last):
File "C:\Users\A\Desktop\CCCC\train.py", line 116, in <module>
main()
File "C:\Users\A\Desktop\CCCC\train.py", line 112, in main
train(**vars(args))
File "C:\Users\A\Desktop\CCCC\train.py", line 14, in train
model = model_sel(model, device)
File "C:\Users\A\Desktop\CCCC\utils\selection.py", line 35, in model_sel
model = model_dict[model_name]()
File "C:\Users\A\Desktop\CCCC\utils\models.py", line 352, in __init__
self.model = timm.create_model('efficientformer_l1.snap_dist_in1k', num_classes=2)
File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\_factory.py", line 114, in create_model
model = create_fn(
File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 551, in efficientformer_l1
return _create_efficientformer('efficientformer_l1', pretrained=pretrained, **dict(model_args, **kwargs))
File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 537, in _create_efficientformer
model = build_model_with_cfg(
File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\_builder.py", line 381, in build_model_with_cfg
model = model_cls(**kwargs)
File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 389, in __init__
stage = EfficientFormerStage(
File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 320, in __init__
MetaBlock1d(
File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 220, in __init__
self.token_mixer = Attention(dim)
File "C:\Users\A\Anaconda3\envs\ET\lib\site-packages\timm\models\efficientformer.py", line 70, in __init__
self.register_buffer('attention_bias_idxs', torch.LongTensor(rel_pos))
TypeError: expected TensorOptions(dtype=__int64, device=cpu, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)) (got TensorOptions(dtype=__int64, device=cuda:0, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))
```
**Desktop**
- Windows 10
- timm==0.9.2
- PyTorch==1.13.1+cu116, cuDNN==8302 | closed | 2023-07-22T07:07:24Z | 2023-08-03T23:39:47Z | https://github.com/huggingface/pytorch-image-models/issues/1878 | [
"bug"
] | NUS-Tim | 1 |
jupyter-book/jupyter-book | jupyter | 2,150 | Can't build book on mac when folder name is not project name. | ### Describe the bug
I have a repository whose folder name is different than the package name. When building the book it fails because the virtual environment name is wrong (see below).
### Reproduce the bug
Have a repo with a different folder name than the package, and the build will fail. Changing the folder name to match the package, and the build succeeds. The bug only manifests on mac, on linux it works as expected.
### List your environment
```
# Platform: darwin; (macOS-14.1-arm64-arm-64bit)
# Sphinx version: 7.3.7
# Python version: 3.11.9 (CPython)
# Docutils version: 0.20.1
# Jinja2 version: 3.1.3
# Pygments version: 2.18.0
# Last messages:
#
#
# reading sources... [ 50%]
# markdown
#
#
# reading sources... [ 75%]
# markdown-notebooks
#
# /Users/bothg/Documents/flybody/flybody_internal/docs/markdown-notebooks.md: Executing notebook using local CWD [mystnb]
# Loaded extensions:
# sphinx.ext.mathjax (7.3.7)
# alabaster (0.7.16)
# sphinxcontrib.applehelp (1.0.8)
# sphinxcontrib.devhelp (1.0.6)
# sphinxcontrib.htmlhelp (2.0.5)
# sphinxcontrib.serializinghtml (1.1.10)
# sphinxcontrib.qthelp (1.0.7)
# sphinx_togglebutton (0.3.2)
# sphinx_copybutton (0.5.2)
# myst_nb (1.1.0)
# jupyter_book (1.0.0)
# sphinx_thebe (0.3.1)
# sphinx_comments (0.0.3)
# sphinx_external_toc (1.0.1)
# sphinx.ext.intersphinx (7.3.7)
# sphinx_design (0.5.0)
# sphinx_book_theme (unknown version)
# sphinxcontrib.bibtex (2.6.2)
# sphinx_jupyterbook_latex (unknown version)
# sphinx_multitoc_numbering (unknown version)
# pydata_sphinx_theme (unknown version)
# Traceback:
Traceback (most recent call last):
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_book/sphinx.py", line 167, in build_sphinx
app.build(force_all, filenames)
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/sphinx/application.py", line 351, in build
self.builder.build_update()
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/sphinx/builders/__init__.py", line 293, in build_update
self.build(to_build,
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/sphinx/builders/__init__.py", line 313, in build
updated_docnames = set(self.read())
^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/sphinx/builders/__init__.py", line 419, in read
self._read_serial(docnames)
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/sphinx/builders/__init__.py", line 440, in _read_serial
self.read_doc(docname)
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/sphinx/builders/__init__.py", line 497, in read_doc
publisher.publish()
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/docutils/core.py", line 234, in publish
self.document = self.reader.read(self.source, self.parser,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/sphinx/io.py", line 107, in read
self.parse()
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/docutils/readers/__init__.py", line 76, in parse
self.parser.parse(self.input, document)
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/myst_nb/sphinx_.py", line 152, in parse
with create_client(
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/myst_nb/core/execute/base.py", line 79, in __enter__
self.start_client()
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/myst_nb/core/execute/direct.py", line 40, in start_client
result = single_nb_execution(
^^^^^^^^^^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_cache/executors/utils.py", line 58, in single_nb_execution
executenb(
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/nbclient/client.py", line 1314, in execute
return NotebookClient(nb=nb, resources=resources, km=km, **kwargs).execute()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_core/utils/__init__.py", line 165, in wrapped
return loop.run_until_complete(inner)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/nbclient/client.py", line 693, in async_execute
async with self.async_setup_kernel(**kwargs):
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/nbclient/client.py", line 648, in async_setup_kernel
await self.async_start_new_kernel(**kwargs)
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/nbclient/client.py", line 550, in async_start_new_kernel
await ensure_async(self.km.start_kernel(extra_arguments=self.extra_arguments, **kwargs))
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_core/utils/__init__.py", line 198, in ensure_async
result = await obj
^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_client/manager.py", line 96, in wrapper
raise e
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_client/manager.py", line 87, in wrapper
out = await method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_client/manager.py", line 439, in _async_start_kernel
await self._async_launch_kernel(kernel_cmd, **kw)
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_client/manager.py", line 354, in _async_launch_kernel
connection_info = await self.provisioner.launch_kernel(kernel_cmd, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_client/provisioning/local_provisioner.py", line 210, in launch_kernel
self.process = launch_kernel(cmd, **scrubbed_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_client/launcher.py", line 170, in launch_kernel
raise ex
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/site-packages/jupyter_client/launcher.py", line 155, in launch_kernel
proc = Popen(cmd, **kwargs) # noqa
^^^^^^^^^^^^^^^^^^^^
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/subprocess.py", line 1026, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/Users/bothg/.pyenv/versions/3.11.9/lib/python3.11/subprocess.py", line 1955, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/Users/bothg/Documents/flybody/flybody/.venv/bin/python'
``` | open | 2024-05-13T20:58:20Z | 2024-05-13T20:58:50Z | https://github.com/jupyter-book/jupyter-book/issues/2150 | [
"bug"
] | GJBoth | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 735 | run_clm_sft_with_peft.py脚本是不是不支持shareGPT那种形式的多轮数据训练? | ### 提交前必须检查以下项目
- [X] 请确保使用的是仓库最新代码(git pull),一些问题已被解决和修复。
- [X] 由于相关依赖频繁更新,请确保按照[Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)中的相关步骤执行
- [X] 我已阅读[FAQ章节](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题)并且已在Issue中对问题进行了搜索,没有找到相似问题和解决方案
- [X] 第三方插件问题:例如[llama.cpp](https://github.com/ggerganov/llama.cpp)、[text-generation-webui](https://github.com/oobabooga/text-generation-webui)、[LlamaChat](https://github.com/alexrozanski/LlamaChat)等,同时建议到对应的项目中查找解决方案
- [X] 模型正确性检查:务必检查模型的[SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md),模型不对的情况下无法保证效果和正常运行
### 问题类型
模型训练与精调
### 基础模型
LLaMA-Plus-13B
### 操作系统
Linux
### 详细描述问题
run_clm_sft_with_peft.py脚本是不是不支持shareGPT那种形式的多轮数据训练?
### 依赖情况(代码类问题务必提供)
```
# 请在此处粘贴依赖情况
```
### 运行日志或截图
```
# 请在此处粘贴运行日志
``` | closed | 2023-07-11T05:51:49Z | 2023-07-21T22:02:09Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/735 | [
"stale"
] | xyfZzz | 4 |
netbox-community/netbox | django | 18,373 | Cannot Assign devices to Cluster | ### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v.4.2.1
### Python Version
3.11
### Steps to Reproduce
1. Create a new Cluster, fill out only required fields
2. Click on Assign Device to Cluster, and fill out only required fields
3. Click Add Devices
### Expected Behavior
Device(s) are assigned to Cluster
### Observed Behavior
Server Error from NetBox
```
<class 'AttributeError'>
'Cluster' object has no attribute 'site'
Python version: 3.11.2
NetBox version: 4.2.1
Plugins:
netbox_branching: 0.5.2
netbox_device_view: 0.1.7
netbox_reorder_rack: 1.1.3
``` | closed | 2025-01-09T21:45:21Z | 2025-01-17T13:35:18Z | https://github.com/netbox-community/netbox/issues/18373 | [
"type: bug",
"status: accepted",
"severity: medium"
] | joeladria | 0 |
pydantic/pydantic-ai | pydantic | 667 | Ollama: Stream Always Fails | Hi,
Using stream, structured, text with Ollama always fails.
With any models with tools.
**Code:**
```
from datetime import date
from pydantic import ValidationError
from typing_extensions import TypedDict
from pydantic_ai import Agent
class UserProfile(TypedDict, total=False):
name: str
dob: date
bio: str
# Any models with tools
#
agent = Agent('ollama:llama3.2', result_type=UserProfile)
async def main():
user_input = 'My name is Ben, I was born on January 28th 1990, I like the chain the dog and the pyramid.'
async with agent.run_stream(user_input) as result:
async for message, last in result.stream():
# ....
```
**Error:**
```
09:22:36.733 preparing model and tools run_step=1
09:22:36.734 model request run_step=1
09:22:38.064 handle model response
09:22:38.066 preparing model and tools run_step=2
09:22:38.067 model request run_step=2
09:22:39.605 handle model response
```
```
---> [21](vscode-notebook-cell:?execution_count=18&line=21) async with agent.run_stream(user_input) as result:
[22](vscode-notebook-cell:?execution_count=18&line=22) async for message in result.stream():
[23](vscode-notebook-cell:?execution_count=18&line=23) print(message)
File ~/.pyenv/versions/3.12.0/lib/python3.12/contextlib.py:204, in _AsyncGeneratorContextManager.__aenter__(self)
[202](~/.pyenv/versions/3.12.0/lib/python3.12/contextlib.py:202) del self.args, self.kwds, self.func
[203](~/.pyenv/versions/3.12.0/lib/python3.12/contextlib.py:203) try:
--> [204](~/.pyenv/versions/3.12.0/lib/python3.12/contextlib.py:204) return await anext(self.gen)
[205](~/.pyenv/versions/3.12.0/lib/python3.12/contextlib.py:205) except StopAsyncIteration:
[206](~/.pyenv/versions/3.12.0/lib/python3.12/contextlib.py:206) raise RuntimeError("generator didn't yield") from None
File ~:538, in Agent.run_stream(self, user_prompt, result_type, message_history, model, deps, model_settings, usage_limits, usage, infer_name)
[535](:535) model_req_span.__exit__(None, None, None)
[537](:537) with _logfire.span('handle model response') as handle_span:
--> [538](:538) maybe_final_result = await self._handle_streamed_model_response(
[539](:539) model_response, run_context, result_schema
[540](:540) )
[542](:542) # Check if we got a final result
[543](:543) if isinstance(maybe_final_result, _MarkFinalResult):
File ~:1202, in Agent._handle_streamed_model_response(self, model_response, run_context, result_schema)
[1200](:1200) return _MarkFinalResult(model_response, None)
[1201](:1201) else:
-> [1202](:1202) self._incr_result_retry(run_context)
[1203](:1203) response = _messages.RetryPromptPart(
[1204](:1204) content='Plain text responses are not permitted, please call one of the functions instead.',
[1205](:1205) )
[1206](:1206) # stream the response, so usage is correct
File ~:1270, in Agent._incr_result_retry(self, run_context)
[1268](:1268) run_context.retry += 1
[1269](:1269) if run_context.retry > self._max_result_retries:
-> [1270](:1270) raise exceptions.UnexpectedModelBehavior(
[1271](:1271) f'Exceeded maximum retries ({self._max_result_retries}) for result validation'
[1272](:1272) )
``` | closed | 2025-01-13T09:26:52Z | 2025-01-16T22:18:54Z | https://github.com/pydantic/pydantic-ai/issues/667 | [] | YanSte | 9 |
lux-org/lux | pandas | 111 | Display warning when no recommendations are generated | When no recommendations are generated (e.g., when [dataframe is small but not preaggregated](https://github.com/lux-org/lux/blob/master/lux/core/frame.py#L153), possibly other cases), we should display a warning that explains why the Lux view is not showing up.
Add an advanced ReadTheDoc page explaining default recommendation logic, including when recommendations are *not* displayed.
_Originally posted by @akanz1 in https://github.com/lux-org/lux/issues/110#issuecomment-706659586_ | closed | 2020-10-11T08:13:32Z | 2021-03-03T09:13:16Z | https://github.com/lux-org/lux/issues/111 | [
"easy"
] | dorisjlee | 2 |
desec-io/desec-stack | rest-api | 805 | webapp: in dev mode, reload does not work on /domains/foo.bar | This is because vite thinks the user wanted to navigate to a file.
Fixed in vite 5.0.0 (to be released): https://github.com/vitejs/vite/commit/1ae4cbd | closed | 2023-09-13T09:56:04Z | 2024-01-08T14:35:13Z | https://github.com/desec-io/desec-stack/issues/805 | [
"bug",
"prio: low",
"gui"
] | peterthomassen | 1 |
Significant-Gravitas/AutoGPT | python | 8,722 | Github Comment Block doesn't work for pull request urls | To reproduce, Put the following input into the block for the PR url `https://github.com/ntindle/gridfinity-space-optimizer/pull/2` and any comment text like "test"
Then try `https://github.com/ntindle/gridfinity-space-optimizer/issues/2` and any comment text like "test" | closed | 2024-11-19T17:30:12Z | 2024-12-17T22:46:30Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8722 | [] | ntindle | 1 |
JaidedAI/EasyOCR | machine-learning | 808 | RuntimeError: Error(s) in loading state_dict for CRAFT: | i have train detection model (CRAFT) for my custom dataset and in exp/custom_dataset folder i got weights ( train on google colab using GPU). when i am trying to use those train weights in my system with EasyOCR i got errors like this
RuntimeError: Error(s) in loading state_dict for CRAFT:
Missing key(s) in state_dict: "basenet.slice1.0.weight", "basenet.slice1.0.bias", "basenet.slice1.1.weight" Etc
Unexpected key(s) in state_dict: "iter", "craft", "optimizer", "scaler".

i have used the following code
```
import easyocr
import os
import pandas as pd
fol_path = "/home/satyam/PycharmProjects/PivotProject/NP_Images"
## reader = easyocr.Reader(['en'], model_storage_directory="/home/satyam/PycharmProjects/Custom_EasyOcr/model")
reader = easyocr.Reader(['en'])
```
| open | 2022-08-03T10:40:47Z | 2022-08-24T06:10:34Z | https://github.com/JaidedAI/EasyOCR/issues/808 | [] | ghost | 4 |
Lightning-AI/LitServe | rest-api | 344 | Improve the debugging experience of LitServe | <!--
⚠️ BEFORE SUBMITTING, READ:
We're excited for your request! However, here are things we are not interested in:
- Decorators.
- Doing the same thing in multiple ways.
- Adding more layers of abstraction... tree-depth should be 1 at most.
- Features that over-engineer or complicate the code internals.
- Linters, and crud that complicates projects.
-->
Right now, it is challenging to debug when the server is failing.
This arises from 2 main reasons:
1. It is not possible to put a breakpoint within the server
2. The error are quite critic and hard to follow.
Example: I copied a server from the docs and put together a client and got this. Quite unclear where the error is originated from.
```python
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
raise exc
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
await app(scope, receive, sender)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/litserve/server.py", line 644, in predict
load_and_raise(response)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/litserve/utils.py", line 41, in load_and_raise
raise exception
RuntimeError: a Tensor with 2 elements cannot be converted to Scalar
```
This was happening within the `encode_response` but directed me to the predict.
Here are 2 potential improvements direction to ease adoption:
1. This breakpoint enables to debug in LitServe
```python
import multiprocessing
import os
import pdb
import sys
_stdin = [None]
_stdin_lock = multiprocessing.Lock()
try:
_stdin_fd = sys.stdin.fileno()
except Exception:
_stdin_fd = None
# Taken from https://github.com/facebookresearch/metaseq/blob/main/metaseq/pdb.py
class MPPdb(pdb.Pdb):
"""A Pdb wrapper that works in a multiprocessing environment."""
def __init__(self) -> None:
pdb.Pdb.__init__(self, nosigint=True)
def _cmdloop(self) -> None:
stdin_back = sys.stdin
with _stdin_lock:
try:
if _stdin_fd is not None:
if not _stdin[0]:
_stdin[0] = os.fdopen(_stdin_fd)
sys.stdin = _stdin[0]
self.cmdloop()
finally:
sys.stdin = stdin_back
def set_trace() -> None:
pdb = MPPdb()
pdb.set_trace(sys._getframe().f_back)
```
2. Capturing the cleaned error trace would ease debugging
----
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
### Motivation
<!--
Please outline the motivation for the proposal.
Is your feature request related to a problem? e.g., I'm always frustrated when [...].
If this is related to another GitHub issue, please link here too...
-->
### Pitch
<!-- A clear and concise description of what you want to happen. -->
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| closed | 2024-10-25T10:37:11Z | 2024-12-09T17:42:49Z | https://github.com/Lightning-AI/LitServe/issues/344 | [
"enhancement"
] | tchaton | 1 |
allenai/allennlp | data-science | 5,131 | Coreference Resolution Model - Long sequence | Thank you for such a wonderful tool for coreference resolution!
I have been testing your coreference resolution implementation on different texts of varied lengths of around 5000+ words and coreferences are being identified and resolved. How does this work? If my understanding is right, SpanBERT works only till 512 tokens right?
Are we using [sliding window](https://github.com/allenai/allennlp/pull/2537) for embedding here as well?
P.S. I am new in this domain and trying to understand how coreference resolution works under the hood. | closed | 2021-04-19T14:37:52Z | 2022-06-13T16:15:00Z | https://github.com/allenai/allennlp/issues/5131 | [
"question",
"stale"
] | aakashb95 | 4 |
ClimbsRocks/auto_ml | scikit-learn | 156 | handle strings passed in as output for binary classification | ```
File "fraud_automl.py", line 20, in <module>
ml_predictor.score(df, df.is_fraud)
auto_ml/predictor.py", line 861, in score
score, probas = self._scorer(self.trained_pipeline, X_test, y_test, advanced_scoring=advanced_scoring)
auto_ml/utils_scoring.py", line 165, in brier_score_loss_wrapper
val = int(val)
ValueError: invalid literal for int() with base 10: 'Fraud'
``` | open | 2017-01-14T00:32:17Z | 2017-01-14T00:32:17Z | https://github.com/ClimbsRocks/auto_ml/issues/156 | [] | ClimbsRocks | 0 |
huggingface/diffusers | deep-learning | 11,041 | WAN2.1 apply_group_offloading **ERROR** result | ### Describe the bug
I am attempting to use the WAN 2.1 model from the diffusers library to complete an image-to-video task on an NVIDIA RTX 4090. To optimize memory usage, I chose the group offload method and intended to compare resource consumption across different configurations. However, during testing, I encountered two main issues:
1. When using the group_offload_leaf_stream method:
I received warnings that some layers were not executed during the forward pass:
```
It seems like some layers were not executed during the forward pass. This may lead to problems when applying lazy prefetching with automatic tracing and lead to device-mismatch related errors. Please make sure that all layers are executed during the forward pass. The following layers were not executed:
unexecuted_layers=['blocks.25.attn2.norm_added_q', 'blocks.10.attn2.norm_added_q', 'blocks.13.attn2.norm_added_q', 'blocks.11.attn2.norm_added_q', 'blocks.34.attn2.norm_added_q', 'blocks.0.attn2.norm_added_q', 'blocks.35.attn2.norm_added_q', 'blocks.33.attn2.norm_added_q', 'blocks.21.attn2.norm_added_q', 'blocks.20.attn2.norm_added_q', 'blocks.3.attn2.norm_added_q', 'blocks.7.attn2.norm_added_q', 'blocks.22.attn2.norm_added_q', 'blocks.14.attn2.norm_added_q', 'blocks.29.attn2.norm_added_q', 'blocks.9.attn2.norm_added_q', 'blocks.1.attn2.norm_added_q', 'blocks.37.attn2.norm_added_q', 'blocks.18.attn2.norm_added_q', 'blocks.30.attn2.norm_added_q', 'blocks.4.attn2.norm_added_q', 'blocks.32.attn2.norm_added_q', 'blocks.36.attn2.norm_added_q', 'blocks.26.attn2.norm_added_q', 'blocks.6.attn2.norm_added_q', 'blocks.38.attn2.norm_added_q', 'blocks.17.attn2.norm_added_q', 'blocks.12.attn2.norm_added_q', 'blocks.19.attn2.norm_added_q', 'blocks.16.attn2.norm_added_q', 'blocks.15.attn2.norm_added_q', 'blocks.28.attn2.norm_added_q', 'blocks.24.attn2.norm_added_q', 'blocks.31.attn2.norm_added_q', 'blocks.8.attn2.norm_added_q', 'blocks.5.attn2.norm_added_q', 'blocks.27.attn2.norm_added_q', 'blocks.2.attn2.norm_added_q', 'blocks.39.attn2.norm_added_q', 'blocks.23.attn2.norm_added_q']
```

This issue resulted in severe degradation of the generated output.
这是我选择的图像:

我得到了错误的视频:
https://github.com/user-attachments/assets/7a8b55a2-6a71-493a-b7ae-64566b321954
当我使用默认pipe即不采用 group_offload_leaf_stream我得到了正确的结果:
https://github.com/user-attachments/assets/9b54c2f2-fa93-422f-b3df-619ee96bb3c8
2.When using the group_offload_block_1_stream method:
I encountered a runtime error: "RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same". It appears that the VAE module was not correctly assigned to the GPU device.
```
Traceback (most recent call last):
File "/maindata/data/shared/public/haobang.geng/code/video-generate/i2v-baseline/wanx-all-profile.py", line 171, in <module>
main(args)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/maindata/data/shared/public/haobang.geng/code/video-generate/i2v-baseline/wanx-all-profile.py", line 143, in main
run_inference()
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/memory_profiler.py", line 1188, in wrapper
val = prof(func)(*args, **kwargs)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/memory_profiler.py", line 761, in f
return func(*args, **kwds)
File "/maindata/data/shared/public/haobang.geng/code/video-generate/i2v-baseline/wanx-all-profile.py", line 130, in run_inference
output = pipe(
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py", line 587, in __call__
latents, condition = self.prepare_latents(
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py", line 392, in prepare_latents
latent_condition = retrieve_latents(self.vae.encode(video_condition), generator)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 795, in encode
h = self._encode(x)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 762, in _encode
out = self.encoder(x[:, :, :1, :, :], feat_cache=self._enc_feat_map, feat_idx=self._enc_conv_idx)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 439, in forward
x = self.conv_in(x, feat_cache[idx])
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 78, in forward
return super().forward(x)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 725, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 720, in _conv_forward
return F.conv3d(
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
```
Request for Help:
Are there recommended approaches to ensure all layers are properly executed, especially for the group_offload_leaf_stream method?
How can I resolve the device mismatch issue related to the VAE?
Any suggestions or guidance would be greatly appreciated!
### Reproduction
here is my code
```python
import argparse
import functools
import json
import os
import pathlib
import psutil
import time
import torch
from diffusers import FluxPipeline
from diffusers.hooks import apply_group_offloading
from memory_profiler import profile
import torch
import numpy as np
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel
from diffusers import FlowMatchEulerDiscreteScheduler, UniPCMultistepScheduler, WanPipeline
def get_memory_usage():
process = psutil.Process(os.getpid())
mem_bytes = process.memory_info().rss
return mem_bytes
@profile(precision=2)
def apply_offload(pipe: FluxPipeline, method: str) -> None:
if method == "full_cuda":
pipe.to("cuda")
elif method == "model_offload":
pipe.enable_model_cpu_offload()
elif method == "sequential_offload":
pipe.enable_sequential_cpu_offload()
elif method == "group_offload_block_1":
offloader_fn = functools.partial(
apply_group_offloading,
onload_device=torch.device("cuda"),
offload_device=torch.device("cpu"),
offload_type="block_level",
num_blocks_per_group=1,
use_stream=False,
)
list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))
elif method == "group_offload_leaf":
offloader_fn = functools.partial(
apply_group_offloading,
onload_device=torch.device("cuda"),
offload_device=torch.device("cpu"),
offload_type="leaf_level",
use_stream=False,
)
list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))
elif method == "group_offload_block_1_stream":
offloader_fn = functools.partial(
apply_group_offloading,
onload_device=torch.device("cuda"),
offload_device=torch.device("cpu"),
offload_type="block_level",
num_blocks_per_group=1,
use_stream=True,
)
list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))
elif method == "group_offload_leaf_stream":
offloader_fn = functools.partial(
apply_group_offloading,
onload_device=torch.device("cuda"),
offload_device=torch.device("cpu"),
offload_type="leaf_level",
use_stream=True,
)
list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))
@profile(precision=2)
def load_pipeline():
model_id = "Wan2.1-I2V-14B-480P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(
model_id, subfolder="image_encoder", torch_dtype=torch.float32
)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
scheduler_b = UniPCMultistepScheduler(prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=3.0)
pipe = WanImageToVideoPipeline.from_pretrained(
model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16, scheduler=scheduler_b
)
return pipe
@torch.no_grad()
def main(args):
os.makedirs(args.output_dir, exist_ok=True)
os.makedirs(f"./results/check-wanmulti-framework/{args.method}/", exist_ok=True)
pipe = load_pipeline()
apply_offload(pipe, args.method)
apply_offload_memory_usage = get_memory_usage()
torch.cuda.reset_peak_memory_stats()
cuda_model_memory = torch.cuda.max_memory_reserved()
output_dir = pathlib.Path(args.output_dir)
output_dir.mkdir(exist_ok=True, parents=True)
run_inference_memory_usage_list = []
def cpu_mem_callback():
nonlocal run_inference_memory_usage_list
run_inference_memory_usage_list.append(get_memory_usage())
@profile(precision=2)
def run_inference():
image = load_image("./dataset/character-img/imgs3/1.jpeg")
max_area = 480 * 832
aspect_ratio = image.height / image.width
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
prompt = (
"A person smile."
)
negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
generator = torch.Generator("cuda").manual_seed(100)
output = pipe(
image=image,
prompt=prompt,
negative_prompt=negative_prompt,
height=height,
width=width,
num_frames=81,
guidance_scale=5.0,
generator=generator,
).frames[0]
export_to_video(output, f"./results/check-wanmulti-framework/{args.method}/wanx_diffusers.mp4", fps=16)
t1 = time.time()
run_inference()
torch.cuda.synchronize()
t2 = time.time()
cuda_inference_memory = torch.cuda.max_memory_reserved()
time_required = t2 - t1
# run_inference_memory_usage = sum(run_inference_memory_usage_list) / len(run_inference_memory_usage_list)
# print(f"Run inference memory usage list: {run_inference_memory_usage_list}")
info = {
"time": round(time_required, 2),
"cuda_model_memory": round(cuda_model_memory / 1024**3, 2),
"cuda_inference_memory": round(cuda_inference_memory / 1024**3, 2),
"cpu_offload_memory": round(apply_offload_memory_usage / 1024**3, 2),
}
with open(output_dir / f"memory_usage_{args.method}.json", "w") as f:
json.dump(info, f, indent=4)
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument("--method", type=str, default="full_cuda", choices=["full_cuda", "model_offload", "sequential_offload", "group_offload_block_1", "group_offload_leaf", "group_offload_block_1_stream", "group_offload_leaf_stream"])
parser.add_argument("--output_dir", type=str, default="./results/offload_profiling")
return parser.parse_args()
if __name__ == "__main__":
args = get_args()
main(args)
```
here is my environment
```
Package Version
--------------------------------- --------------------
absl-py 2.1.0
accelerate 1.4.0
addict 2.4.0
aiofiles 23.2.1
aiohappyeyeballs 2.4.3
aiohttp 3.10.10
aiosignal 1.3.1
airportsdata 20241001
albucore 0.0.17
albumentations 1.4.18
aliyun-python-sdk-core 2.16.0
aliyun-python-sdk-kms 2.16.5
altair 5.4.1
annotated-types 0.7.0
antlr4-python3-runtime 4.9.3
anyio 4.6.2.post1
astor 0.8.1
asttokens 2.4.1
astunparse 1.6.3
async-timeout 4.0.3
attrs 24.2.0
av 13.1.0
beautifulsoup4 4.12.3
blake3 1.0.4
blinker 1.9.0
boto3 1.35.60
botocore 1.35.60
braceexpand 0.1.7
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.4.0
click 8.1.7
clip 0.2.0
cloudpickle 3.1.0
coloredlogs 15.0.1
comm 0.2.2
compressed-tensors 0.8.0
ConfigArgParse 1.7
contourpy 1.3.0
controlnet_aux 0.0.7
cpm-kernels 1.0.11
crcmod 1.7
cryptography 44.0.1
cupy-cuda12x 13.3.0
cycler 0.12.1
Cython 3.0.12
dash 2.18.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dashscope 1.22.2
datasets 3.0.1
debugpy 1.8.10
decorator 4.4.2
decord 0.6.0
deepspeed 0.15.2
depyf 0.18.0
diffsynth 1.1.2
diffusers 0.33.0.dev0
dill 0.3.8
diskcache 5.6.3
distro 1.9.0
dnspython 2.7.0
docker-pycreds 0.4.0
easydict 1.13
einops 0.8.0
email_validator 2.2.0
eval_type_backport 0.2.0
exceptiongroup 1.2.2
executing 2.1.0
facexlib 0.3.0
fairscale 0.4.13
fastapi 0.115.2
fastjsonschema 2.20.0
fastrlock 0.8.3
ffmpy 0.4.0
filelock 3.16.1
filterpy 1.4.5
flash-attn 2.6.3
Flask 3.0.3
flatbuffers 24.3.25
fonttools 4.54.1
frozenlist 1.4.1
fsspec 2024.6.1
ftfy 6.3.0
func_timeout 4.3.5
future 1.0.0
fvcore 0.1.5.post20221221
gast 0.6.0
gguf 0.10.0
gitdb 4.0.11
GitPython 3.1.43
google-pasta 0.2.0
gradio 5.5.0
gradio_client 1.4.2
grpcio 1.66.2
h11 0.14.0
h5py 3.12.1
hjson 3.1.0
httpcore 1.0.6
httptools 0.6.4
httpx 0.27.2
huggingface-hub 0.29.1
humanfriendly 10.0
idna 3.10
imageio 2.36.0
imageio-ffmpeg 0.5.1
imgaug 0.4.0
importlib_metadata 8.5.0
iniconfig 2.0.0
interegular 0.3.3
iopath 0.1.10
ipykernel 6.29.5
ipython 8.29.0
ipywidgets 8.1.5
itsdangerous 2.2.0
jaxtyping 0.2.34
jedi 0.19.1
Jinja2 3.1.4
jiter 0.7.0
jmespath 0.10.0
joblib 1.4.2
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
jupyter_client 8.6.3
jupyter_core 5.7.2
jupyterlab_widgets 3.0.13
keras 3.7.0
kiwisolver 1.4.7
lark 1.2.2
lazy_loader 0.4
libclang 18.1.1
libigl 2.5.1
linkify-it-py 2.0.3
llvmlite 0.43.0
lm-format-enforcer 0.10.9
lmdb 1.6.2
loguru 0.7.3
lvis 0.5.3
Markdown 3.7
markdown-it-py 2.2.0
MarkupSafe 2.1.5
matplotlib 3.9.2
matplotlib-inline 0.1.7
mdit-py-plugins 0.3.3
mdurl 0.1.2
memory-profiler 0.61.0
mistral_common 1.5.1
ml-dtypes 0.4.1
modelscope 1.23.2
moviepy 1.0.3
mpmath 1.3.0
msgpack 1.1.0
msgspec 0.18.6
multidict 6.1.0
multiprocess 0.70.16
namex 0.0.8
narwhals 1.10.0
natsort 8.4.0
nbformat 5.10.4
nest-asyncio 1.6.0
networkx 3.4.1
ninja 1.11.1.3
numba 0.60.0
numpy 1.26.4
nvdiffrast 0.3.3
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-cusparselt-cu12 0.6.2
nvidia-ml-py 12.560.30
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
omegaconf 2.3.0
onnxruntime 1.20.0
open3d 0.18.0
openai 1.54.4
openai-clip 1.0.1
opencv-python 4.10.0.84
opencv-python-headless 4.10.0.84
opt_einsum 3.4.0
optree 0.13.1
orjson 3.10.7
oss2 2.19.1
outlines 0.0.46
packaging 24.1
pandas 2.2.3
parso 0.8.4
partial-json-parser 0.2.1.1.post4
peft 0.13.2
pexpect 4.9.0
pillow 10.4.0
pip 24.2
platformdirs 4.3.6
plotly 5.24.1
pluggy 1.5.0
pooch 1.8.2
portalocker 2.10.1
proglog 0.1.10
prometheus_client 0.21.0
prometheus-fastapi-instrumentator 7.0.0
prompt_toolkit 3.0.48
propcache 0.2.0
protobuf 5.28.2
psutil 6.0.0
ptyprocess 0.7.0
pudb 2024.1.2
pure_eval 0.2.3
py-cpuinfo 9.0.0
pyairports 2.1.1
pyarrow 17.0.0
pybind11 2.13.6
pycocoevalcap 1.2
pycocotools 2.0.8
pycountry 24.6.1
pycparser 2.22
pycryptodome 3.21.0
pydantic 2.9.2
pydantic_core 2.23.4
pydub 0.25.1
Pygments 2.18.0
pyiqa 0.1.10
PyMatting 1.1.12
PyMCubes 0.1.6
pyparsing 3.2.0
pyquaternion 0.9.9
pytest 8.3.4
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-multipart 0.0.12
pytorch3d 0.7.8
pytz 2024.2
PyYAML 6.0.2
pyzmq 26.2.0
qwen-vl-utils 0.0.10
ray 2.37.0
referencing 0.35.1
regex 2024.9.11
rembg 2.0.59
requests 2.32.3
requests-toolbelt 1.0.0
retrying 1.3.4
rich 13.9.2
rpds-py 0.20.0
ruff 0.6.9
s3transfer 0.10.3
safehttpx 0.1.1
safetensors 0.4.5
scikit-image 0.24.0
scikit-learn 1.5.2
scikit-video 1.1.11
scipy 1.14.1
semantic-version 2.10.0
sentencepiece 0.2.0
sentry-sdk 2.18.0
setproctitle 1.3.3
setuptools 75.2.0
shapely 2.0.7
shellingham 1.5.4
six 1.16.0
sk-video 1.1.10
smmap 5.0.1
sniffio 1.3.1
soupsieve 2.6
stack-data 0.6.3
starlette 0.40.0
SwissArmyTransformer 0.4.12
sympy 1.13.1
tabulate 0.9.0
tenacity 9.0.0
tensorboard 2.18.0
tensorboard-data-server 0.7.2
tensorboardX 2.6.2.2
tensorflow-io-gcs-filesystem 0.37.1
termcolor 2.5.0
thop 0.1.1.post2209072238
threadpoolctl 3.5.0
tifffile 2024.9.20
tiktoken 0.7.0
timm 1.0.11
tokenizers 0.20.3
tomesd 0.1.3
tomli 2.2.1
tomlkit 0.12.0
torch 2.6.0
torchaudio 2.6.0
torchdiffeq 0.2.4
torchsde 0.2.6
torchvision 0.21.0
tornado 6.4.2
tqdm 4.66.5
traitlets 5.14.3
trampoline 0.1.2
transformers 4.46.2
transformers-stream-generator 0.0.4
trimesh 4.5.2
triton 3.2.0
typeguard 2.13.3
typer 0.12.5
typing_extensions 4.12.2
tzdata 2024.2
uc-micro-py 1.0.3
urllib3 2.2.3
urwid 2.6.16
urwid_readline 0.15.1
uvicorn 0.32.0
uvloop 0.21.0
wandb 0.18.7
watchfiles 0.24.0
wcwidth 0.2.13
webdataset 0.2.100
websocket-client 1.8.0
websockets 12.0
Werkzeug 3.0.4
wheel 0.44.0
widgetsnbextension 4.0.13
wrapt 1.17.0
xatlas 0.0.9
xxhash 3.5.0
yacs 0.1.8
yapf 0.43.0
yarl 1.15.3
zipp 3.20.2
```
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.15
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.1
- Transformers version: 4.46.2
- Accelerate version: 1.4.0
- PEFT version: 0.13.2
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA A800-SXM4-80GB, 81251 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@DN6 @a-r-r-o-w | closed | 2025-03-12T08:49:48Z | 2025-03-18T09:14:11Z | https://github.com/huggingface/diffusers/issues/11041 | [
"bug"
] | Passenger12138 | 6 |
mljar/mljar-supervised | scikit-learn | 263 | Add more information to main Readme.md | - add optimized metric
- add total run time
- add validation metric
- add parameters used in AutoML | closed | 2020-12-07T08:52:30Z | 2021-04-17T12:36:25Z | https://github.com/mljar/mljar-supervised/issues/263 | [
"enhancement",
"help wanted",
"good first issue"
] | pplonski | 0 |
pallets/quart | asyncio | 292 | TypeError: cannot use a string pattern on a bytes-like object | I am having trouble storing flask session with `werkzeug v.3.0.1`. My production code was working fine till update the latest version. Now, I am encounter the following problem.
```
[ERROR] Error in ASGI Framework
Traceback (most recent call last):
File ".../lib/python3.11/site-packages/hypercorn/asyncio/task_group.py", line 27, in _handle
await app(scope, receive, send, sync_spawn, call_soon)
File ".../lib/python3.11/site-packages/hypercorn/app_wrappers.py", line 33, in __call__
await self.app(scope, receive, send)
File ".../lib/python3.11/site-packages/quart/app.py", line 1621, in __call__
await self.asgi_app(scope, receive, send)
File ".../lib/python3.11/site-packages/quart/app.py", line 1647, in asgi_app
await asgi_handler(receive, send)
File ".../lib/python3.11/site-packages/quart/asgi.py", line 52, in __call__
raise_task_exceptions(done)
File ".../lib/python3.11/site-packages/quart/utils.py", line 187, in raise_task_exceptions
raise task.exception()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/tasks.py", line 267, in __step
result = coro.send(None)
^^^^^^^^^^^^^^^
File ".../lib/python3.11/site-packages/quart/asgi.py", line 100, in handle_request
response = await _handle_exception(self.app, error)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../lib/python3.11/site-packages/quart/asgi.py", line 357, in _handle_exception
raise error
File ".../lib/python3.11/site-packages/quart/asgi.py", line 98, in handle_request
response = await self.app.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../lib/python3.11/site-packages/quart/app.py", line 1363, in handle_request
return await self.handle_exception(error)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../lib/python3.11/site-packages/quart/app.py", line 1359, in handle_request
return await self.full_dispatch_request(request_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../lib/python3.11/site-packages/quart_flask_patch/app.py", line 28, in new_full_dispatch_request
return await old_full_dispatch_request(self, request_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../lib/python3.11/site-packages/quart/app.py", line 1398, in full_dispatch_request
return await self.finalize_request(result, request_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../lib/python3.11/site-packages/quart/app.py", line 1522, in finalize_request
response = await self.process_response(response, request_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../lib/python3.11/site-packages/quart/app.py", line 1583, in process_response
await self.ensure_async(self.session_interface.save_session)(self, session_, response)
File ".../lib/python3.11/site-packages/quart_flask_patch/app.py", line 43, in _wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File ".../lib/python3.11/site-packages/flask_session/sessions.py", line 362, in save_session
response.set_cookie(app.config["SESSION_COOKIE_NAME"], session_id,
File ".../lib/python3.11/site-packages/werkzeug/sansio/response.py", line 224, in set_cookie
dump_cookie(
File ".../lib/python3.11/site-packages/werkzeug/http.py", line 1303, in dump_cookie
if not _cookie_no_quote_re.fullmatch(value):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: cannot use a string pattern on a bytes-like object
```
Environment:
- Python version: 3.11
- Werkzeug version: 3.0.1
- Quart version: 0.19.3
- Flask-Session version: 0.5.0
Can anybody help me resolving the issue?
| closed | 2023-11-13T22:32:43Z | 2023-12-11T00:19:21Z | https://github.com/pallets/quart/issues/292 | [] | mmreza79 | 1 |
youfou/wxpy | api | 112 | 我想问一下怎么获取公众号的推送,有个article,但是不清楚怎么使用 | 想要获取推送,或者模拟点击阅读推送 | open | 2017-07-05T03:58:39Z | 2017-08-04T00:15:07Z | https://github.com/youfou/wxpy/issues/112 | [] | lucksufe | 2 |
wkentaro/labelme | computer-vision | 891 | [Feature]windows system | I use labelme.exe in win8 system(32bit), but it has a problem.

The labelme.exe is created by win10(64bit), how can i slove the issue.
| closed | 2021-07-20T05:37:07Z | 2022-11-21T10:51:23Z | https://github.com/wkentaro/labelme/issues/891 | [
"issue::bug"
] | Enn29 | 1 |
rougier/from-python-to-numpy | numpy | 85 | Glumpy section incomplete | The Glumpy section is missing some text:
**Glumpy**
Glumpy is an OpenGL-based interactive visualization library in Python. Its goal is to make it easy to create fast, scalable, beautiful, interactive and dynamic visualizations. The main documentation for the site is organized into a couple of sections:
**7.4 Conclusion** | open | 2019-07-15T14:25:15Z | 2019-07-15T20:31:15Z | https://github.com/rougier/from-python-to-numpy/issues/85 | [] | jmmcd | 1 |
robinhood/faust | asyncio | 294 | Using Faust with read-only access to Kafka? | ## Checklist
- [x] I have included information about relevant versions
- [x] I have verified that the issue persists when using the `master` branch of Faust.
I apologize in advance if this is not the right forum for this issue as I am not reporting a bug as much as asking advise. Say the word and I'll gladly move to your prefered medium (Slack, stack overflow, etc.)
I am **trying to use Faust as an async kafka consumer without creating any new topics**. (I've used [kafka-python](https://pypi.org/project/kafka-python/) in the past, but the consumer is not asynchronous. I wanted to run a servlet for `:9000/healthz/` for kubernetes and felt like I was reinventing the wheel a bit working with threads and flask.)
My problem is classic ETL: the records I am consuming can be processed individually, and their order does not matter-- I am simply filtering them, transforming them, batching them, and writing to elasticsearch. I do not need replicas of my service to communicate or share state with one another (so no real need for the `service-__assignor-__leader` topic). The topic I am consuming is on a Kafka cluster that I am"not supposed to" create new topics in. (I may not even be able to because of ACL restrictions). Is Faust the right tool for me, or is there something else I should use for such a simple task?
# Versions
* Python version 3.6
* Faust version 1.4.6
* Operating system Mac OSX
* Kafka version 2.11-2.1.0
| closed | 2019-02-14T15:46:47Z | 2019-02-21T18:31:20Z | https://github.com/robinhood/faust/issues/294 | [] | mckeown12 | 1 |
tableau/server-client-python | rest-api | 1,100 | Domain of the user is not populated | **Describe the bug**
I have a bunch of users imported into the server v2022.1.4. They are imported from an AD tree with multiple domains (multiforest).
**Versions**
Details of your environment, including:
- Tableau Server version = 2022.1.4
- Python version = 3.8
- TSC library version = 0.19
**To Reproduce**
import tableauserverclient as TSC
tableau_auth = TSC.TableauAuth('USERNAME', 'PASSWORD')
server = TSC.Server('https://SERVERURL')
with server.auth.sign_in(tableau_auth):
all_users, pagination_item = server.users.get()
print([user.domain_name for user in all_users])
**Results**
All rows have blank domain names, although querying the repo via the system_users table returns domain ids for the same users.
**NOTE:** Be careful not to post user names, passwords, auth tokens or any other private or sensitive information.
| open | 2022-09-02T01:34:26Z | 2022-10-07T23:31:11Z | https://github.com/tableau/server-client-python/issues/1100 | [
"docs"
] | ashterenberg-paramount | 2 |
yezyilomo/django-restql | graphql | 297 | Querying data not working as expected | While trying out querying data on the [playground](https://django-restql-playground.yezyilomo.me/#/get), I could not get the excepted result. For example, I chose **Course** model on the playground and tried to query by book ID, like this:
`{
id,
name,
code,
books(id:3) {
id,
title,
author,
genre {
id,
title,
description
}
}
}`
It returns all courses. But if I try to query by course id, like this:
`(id:1) {
id,
name,
code,
books {
id,
title,
author,
genre {
id,
title,
description
}
}
}`
It works as excepted. Tried it out on some of the offered models, but I could not get consistent result...sometimes it works as excepted, and in some cases (like this one) it does not. How does this exactly work? | open | 2021-12-16T15:18:42Z | 2022-01-07T05:09:19Z | https://github.com/yezyilomo/django-restql/issues/297 | [] | mario-maistra | 1 |
sigmavirus24/github3.py | rest-api | 970 | Add vulnerability alerts and automated security fixes | The github rest API has support for enabling [vulnerability alerts](https://developer.github.com/v3/repos/#check-if-vulnerability-alerts-are-enabled-for-a-repository) and [automated security fixes](https://developer.github.com/v3/repos/#enable-automated-security-fixes). | open | 2019-09-08T20:34:14Z | 2019-09-08T21:08:06Z | https://github.com/sigmavirus24/github3.py/issues/970 | [] | tedivm | 0 |
ansible/awx | django | 15,628 | Duplicate database entry on job launch | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
When launching two jobs (of two different job templates) on the same inventory, as it happens occasionally through scheduled tasks, both jobs seem to try to create the same database entry ("main_jobevent_20241108_17"), but only one can win and continue. The other job is marked as failed and can be relaunched.
### AWX version
24.6.1
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [X] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
Schedule two jobs for the exact same moment in time or call them through external tools via the API.
### Expected results
I would like to see both jobs to run through.
### Actual results
The following error message is displayed in the failing job:
```
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/django/db/backends/utils.py", line 87, in _execute
return self.cursor.execute(sql)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/psycopg/cursor.py", line 732, in execute
raise ex.with_traceback(None)
psycopg.errors.DuplicateObject: type "main_jobevent_20241108_17" already exists
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/tasks/jobs.py", line 499, in run
self.pre_run_hook(self.instance, private_data_dir)
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/tasks/jobs.py", line 1066, in pre_run_hook
super(RunJob, self).pre_run_hook(job, private_data_dir)
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/tasks/jobs.py", line 427, in pre_run_hook
create_partition(instance.event_class._meta.db_table, start=instance.created)
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/utils/common.py", line 1154, in create_partition
cursor.execute(
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/django/db/backends/utils.py", line 87, in _execute
return self.cursor.execute(sql)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/psycopg/cursor.py", line 732, in execute
raise ex.with_traceback(None)
django.db.utils.ProgrammingError: type "main_jobevent_20241108_17" already exists
```
### Additional information
There seems to have been a somewhat related issue (fixed), but the message was slightly different. I leave it to the experts whether this is a regression or something new: #14563. | open | 2024-11-11T09:53:07Z | 2025-01-19T14:04:23Z | https://github.com/ansible/awx/issues/15628 | [
"type:bug",
"needs_triage",
"community"
] | gendergap | 1 |
autokey/autokey | automation | 257 | Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'> | ## Classification:
Bug
## Reproducibility:
Always
## Version
AutoKey version: 0.95.6-1
Used GUI (Gtk, Qt, or both): both have this issue
If the problem is known to be present in more than one version, please list all of those.
Installed via: (PPA, pip3, …). Arch AUR
Linux Distribution: Arch
## Summary
Summary of the problem.
Program launches, icon in system tray. --verbose indicates crash in X loop. I will be attaching the output shortly. I use the GTK version primarily, but it appears the problem exists in the Qt version as well. The problem looks to be with the python-xlib package (I have 0.24-1 installed).
## Steps to Reproduce (if applicable)
Merely launching the program is enough to trigger it.
## Expected Results
The X loop shouldn't crash, and my abbreviations should work.
## Actual Results
The abbreviations don't work, and there's no indication why (unless I start the program from the console and use the --verbose option). Here's an example of that output:
```2019-02-10 12:02:52,383 INFO - root - Initialising application
2019-02-10 12:02:52,387 INFO - root - Initialise global hotkeys
2019-02-10 12:02:52,387 INFO - config-manager - Loading config from existing file: /home/trey/.config/autokey/autokey.json
2019-02-10 12:02:52,388 DEBUG - config-manager - Loading folder at '/home/trey/.config/autokey/data/My Phrases'
2019-02-10 12:02:52,389 DEBUG - config-manager - Loading folder at '/home/trey/.config/autokey/data/Sample Scripts'
2019-02-10 12:02:52,390 DEBUG - config-manager - Loading folder at '/home/trey/.config/autokey/data/Scripts'
2019-02-10 12:02:52,394 INFO - config-manager - Configuration changed - rebuilding in-memory structures
2019-02-10 12:02:52,394 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/My Phrases
2019-02-10 12:02:52,394 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/My Phrases/Addresses
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/My Phrases/Email
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Sample Scripts
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/Signatures
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/Intro notes
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/Timetracker
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/docs
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/docs/highlighters
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/brackets
2019-02-10 12:02:52,395 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/brackets/test-data
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/cursor
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/line
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/main
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/main/test-data
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/pattern
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/pattern/test-data
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/regexp
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/regexp/test-data
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/highlighters/root
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/images
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data/Scripts/upstream/tests
2019-02-10 12:02:52,396 INFO - config-manager - Successfully loaded configuration
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey/data
2019-02-10 12:02:52,396 DEBUG - inotify - Adding watch for /home/trey/.config/autokey
2019-02-10 12:02:52,397 DEBUG - config-manager - Global settings: {'isFirstRun': True, 'serviceRunning': True, 'menuTakesFocus': False, 'showTrayIcon': True, 'sortByUsageCount': True, 'promptToSave': True, 'enableQT4Workaround': False, 'interfaceType': 'XRecord', 'undoUsingBackspace': True, 'windowDefaultSize': [1918, 1054], 'hPanePosition': 707, 'columnWidths': [150, 50, 100], 'showToolbar': True, 'notificationIcon': 'autokey-status', 'workAroundApps': '.*VirtualBox.*|krdc.Krdc', 'triggerItemByInitial': True, 'scriptGlobals': {}}
2019-02-10 12:02:52,397 INFO - service - Starting service
2019-02-10 12:02:52,411 DEBUG - interface - Modifier masks: {<Key.SHIFT: '<shift>'>: 1, <Key.CONTROL: '<ctrl>'>: 4, <Key.ALT: '<alt>'>: 8, <Key.ALT_GR: '<alt_gr>'>: 128, <Key.SUPER: '<super>'>: 64, <Key.HYPER: '<hyper>'>: 64, <Key.META: '<meta>'>: 8, <Key.NUMLOCK: '<numlock>'>: 16}
2019-02-10 12:02:52,440 DEBUG - interface - Alt-Grid: XK_Multi_key, 65312
2019-02-10 12:02:52,440 DEBUG - interface - <map object at 0x7f0b3836d080>
2019-02-10 12:02:52,440 DEBUG - interface - X Server Keymap
2019-02-10 12:02:52,440 DEBUG - interface - [\]: [(51, 0), (51, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [|]: [(51, 1), (51, 3), (94, 4), (94, 6)]
2019-02-10 12:02:52,441 DEBUG - interface - [`]: [(49, 0), (49, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [1]: [(10, 0), (10, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [2]: [(11, 0), (11, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [3]: [(12, 0), (12, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [4]: [(13, 0), (13, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [5]: [(14, 0), (14, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [6]: [(15, 0), (15, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [7]: [(16, 0), (16, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [8]: [(17, 0), (17, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [9]: [(18, 0), (18, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [0]: [(19, 0), (19, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [-]: [(20, 0), (20, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [=]: [(21, 0), (21, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [~]: [(49, 1), (49, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [!]: [(10, 1), (10, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [@]: [(11, 1), (11, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [#]: [(12, 1), (12, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [$]: [(13, 1), (13, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [%]: [(14, 1), (14, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [^]: [(15, 1), (15, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [&]: [(16, 1), (16, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [*]: [(17, 1), (17, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [(]: [(187, 0), (18, 1), (187, 2), (18, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [)]: [(188, 0), (19, 1), (188, 2), (19, 3)]
2019-02-10 12:02:52,441 DEBUG - interface - [q]: [(24, 0), (24, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [w]: [(25, 0), (25, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [e]: [(26, 0), (26, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [r]: [(27, 0), (27, 2)]
2019-02-10 12:02:52,441 DEBUG - interface - [t]: [(28, 0), (28, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [y]: [(29, 0), (29, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [u]: [(30, 0), (30, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [i]: [(31, 0), (31, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [o]: [(32, 0), (32, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [p]: [(33, 0), (33, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [[]: [(34, 0), (34, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - []]: [(35, 0), (35, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [a]: [(38, 0), (38, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [s]: [(39, 0), (39, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [d]: [(40, 0), (40, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [f]: [(41, 0), (41, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [g]: [(42, 0), (42, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [h]: [(43, 0), (43, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [j]: [(44, 0), (44, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [k]: [(45, 0), (45, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [l]: [(46, 0), (46, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [;]: [(47, 0), (47, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [']: [(48, 0), (48, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [z]: [(52, 0), (52, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [x]: [(53, 0), (53, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [c]: [(54, 0), (54, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [v]: [(55, 0), (55, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [b]: [(56, 0), (56, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [n]: [(57, 0), (57, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [m]: [(58, 0), (58, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [,]: [(59, 0), (59, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [.]: [(60, 0), (60, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [/]: [(61, 0), (61, 2)]
2019-02-10 12:02:52,442 DEBUG - interface - [Q]: [(24, 1), (24, 3)]
2019-02-10 12:02:52,442 DEBUG - interface - [W]: [(25, 1), (25, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [E]: [(26, 1), (26, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [R]: [(27, 1), (27, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [T]: [(28, 1), (28, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [Y]: [(29, 1), (29, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [U]: [(30, 1), (30, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [I]: [(31, 1), (31, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [O]: [(32, 1), (32, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [P]: [(33, 1), (33, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [{]: [(34, 1), (34, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [}]: [(35, 1), (35, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [A]: [(38, 1), (38, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [S]: [(39, 1), (39, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [D]: [(40, 1), (40, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [F]: [(41, 1), (41, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [G]: [(42, 1), (42, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [H]: [(43, 1), (43, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [J]: [(44, 1), (44, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [K]: [(45, 1), (45, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [L]: [(46, 1), (46, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [:]: [(47, 1), (47, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - ["]: [(48, 1), (48, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [Z]: [(52, 1), (52, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [X]: [(53, 1), (53, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [C]: [(54, 1), (54, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [V]: [(55, 1), (55, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [B]: [(56, 1), (56, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [N]: [(57, 1), (57, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [M]: [(58, 1), (58, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [<]: [(94, 0), (59, 1), (94, 2), (59, 3)]
2019-02-10 12:02:52,443 DEBUG - interface - [>]: [(60, 1), (94, 1), (60, 3), (94, 3)]
2019-02-10 12:02:52,444 DEBUG - interface - [?]: [(61, 1), (61, 3)]
2019-02-10 12:02:52,444 DEBUG - iomediator - Set modifier Key.CAPSLOCK to False
2019-02-10 12:02:52,444 DEBUG - iomediator - Set modifier Key.NUMLOCK to False
2019-02-10 12:02:52,445 DEBUG - interface - Grabbing hotkey: ['<shift>', '<super>'] 'd'
2019-02-10 12:02:52,445 DEBUG - interface - Grabbing hotkey: ['<ctrl>', '<super>'] 'd'
2019-02-10 12:02:52,445 DEBUG - interface - Grabbing hotkey: ['<ctrl>', '<super>'] 'y'
2019-02-10 12:02:52,445 DEBUG - interface - Grabbing hotkey: ['<ctrl>'] '<f7>'
2019-02-10 12:02:52,445 DEBUG - interface - Grabbing hotkey: ['<ctrl>', '<shift>'] 'd'
2019-02-10 12:02:52,446 DEBUG - interface - __flushEvents: Entering event loop.
2019-02-10 12:02:52,446 INFO - iomediator - Created IoMediator instance, current interface is: <XRecordInterface(XInterface-thread, initial daemon)>
2019-02-10 12:02:52,446 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 30, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,450 INFO - interface - XRecord interface thread starting
2019-02-10 12:02:52,450 INFO - service - Service now marked as running
2019-02-10 12:02:52,452 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 31, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,454 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 32, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,455 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 33, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,455 DEBUG - phrase-menu - Sorting phrase menu by usage count
2019-02-10 12:02:52,455 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 34, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,455 DEBUG - phrase-menu - Triggering menu item by first initial
2019-02-10 12:02:52,455 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 35, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,456 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 36, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,456 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 37, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,457 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 38, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,457 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 39, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,457 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 40, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,457 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 41, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,458 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 42, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,458 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 43, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,458 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 44, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,458 DEBUG - root - Created DBus service
2019-02-10 12:02:52,458 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 45, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,458 INFO - root - Entering main()
2019-02-10 12:02:52,459 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 46, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,459 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 47, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,460 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 48, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,460 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 49, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,460 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 50, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,460 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 51, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,461 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 52, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,461 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 53, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,461 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 54, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,461 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 55, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,461 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 56, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,462 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 57, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,462 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 58, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:52,462 ERROR - interface - grab on window failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 389, in __recurseTree
window_info = self.get_window_info(window, False)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 59, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:53,451 ERROR - interface - Got BadWindow error while requesting window information.
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadWindow: <class 'Xlib.error.BadWindow'>: code = 3, resource_id = <class 'Xlib.xobject.resource.Resource'>(0x02800003), sequence_number = 60, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:53,520 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 453, in __grabHotkeysForWindow
window_info = self.get_window_info(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 61, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:53,520 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 453, in __grabHotkeysForWindow
window_info = self.get_window_info(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 62, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:55,163 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 64, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:55,329 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 66, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:55,735 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 68, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:56,143 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 70, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:56,339 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 72, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:56,545 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 74, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:56,847 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 76, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:58,981 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 78, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:59,337 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 80, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:59,553 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 82, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:59,747 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 84, major_opcode = 20, minor_opcode = 0
2019-02-10 12:02:59,957 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 86, major_opcode = 20, minor_opcode = 0
2019-02-10 12:03:00,165 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 88, major_opcode = 20, minor_opcode = 0
2019-02-10 12:03:05,570 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 90, major_opcode = 20, minor_opcode = 0
2019-02-10 12:03:06,153 ERROR - interface - Error in X event loop thread
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 242, in __eventLoop
method(*args)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 953, in __handleKeyPress
window_info = self.get_window_info(focus)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1078, in get_window_info
return self._get_window_info(window, traverse)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1084, in _get_window_info
new_wm_title = self._try_get_window_title(window)
File "/usr/lib/python3.7/site-packages/autokey/interface.py", line 1132, in _try_get_window_title
atom = window.get_property(self.__VisibleNameAtom, 0, 0, 255)
File "/usr/lib/python3.7/site-packages/Xlib/xobject/drawable.py", line 461, in get_property
long_length = length)
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1369, in __init__
self.reply()
File "/usr/lib/python3.7/site-packages/Xlib/protocol/rq.py", line 1389, in reply
raise self._error
Xlib.error.BadAtom: <class 'Xlib.error.BadAtom'>: code = 5, resource_id = 0, sequence_number = 92, major_opcode = 20, minor_opcode = 0
2019-02-10 12:03:08,007 DEBUG - iomediator - Key.CONTROL pressed
```
| closed | 2019-02-10T20:12:43Z | 2019-04-29T12:52:39Z | https://github.com/autokey/autokey/issues/257 | [] | tblancher | 17 |
plotly/dash-table | plotly | 170 | v2 sorting UI | What does the next generation of the sorting UI look like?
- With multi-sort, the ability to determine the order in which columns were sorted
- Anything else? | open | 2018-10-24T19:50:06Z | 2019-10-18T08:21:35Z | https://github.com/plotly/dash-table/issues/170 | [
"dash-type-epic"
] | chriddyp | 2 |
huggingface/datasets | pandas | 7,372 | Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets | ### Description
I encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue:
#### Code 1: Using `load_dataset`
```python
from datasets import Dataset, load_dataset
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_dataset("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 1350 samples.
- `test` has 150 samples.
#### Code 2: Using `load_from_disk`
```python
from datasets import Dataset, load_from_disk
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_from_disk("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 450 samples.
- `test` has 50 samples.
### Expected Behavior
I expected both `load_dataset` and `load_from_disk` to load the same dataset, as they are pointing to the same directory. However, the results differ significantly:
- `load_dataset` seems to merge all shards, resulting in a combined dataset.
- `load_from_disk` only loads the last saved dataset, ignoring previous shards.
### Questions
1. Is this behavior intentional? If so, could you clarify the difference between `load_dataset` and `load_from_disk` in the documentation?
2. If this is not intentional, could this be considered a bug?
3. What is the recommended way to handle cases where multiple datasets are saved to the same directory?
Thank you for your time and effort in maintaining this great library! I look forward to your feedback. | open | 2025-01-16T05:47:20Z | 2025-01-16T05:47:20Z | https://github.com/huggingface/datasets/issues/7372 | [] | gaohongkui | 0 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 78 | windows4位量化 |
一步对FP16模型进行4位量化,生成量化模型文件路径为zh-models/7B/ggml-model-q4_0.bin。
./quantize ./zh-models/7B/ggml-model-f16.bin ./zh-models/7B/ggml-model-q4_0.bin 2
这一步在win操作系统该如何执行,问ChatGPT他要我自己写量化……前面都可以通过ChatGPT把命令转换成win可执行的,这里我真的看不懂了 | closed | 2023-04-07T22:20:02Z | 2023-05-19T22:02:46Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/78 | [
"stale"
] | ghost | 5 |
huggingface/transformers | deep-learning | 36,481 | Add type checking to CI | ### Feature request
Use, e.g., `pyright` in your CI.
### Motivation
We run `pyright` as part of our CI and find ourselves having to do a lot of `# type:ignore` because of type errors in `transformers`. It would be amazing if you would consider incrementally adopting something like `pyright` in your own CI. It could help catch some type errors for you (looks like there are quite a few open issues around type errors) and would help downstream developers with `transformers` as a dependency keep their projects type safe.
Overview of 30,105 errors and 228 warnings found by pyright:
|Rule | Number of Violations|
|------|---------------------|
|reportArgumentType |5738|
|reportAttributeAccessIssue |4248|
|reportPossiblyUnboundVariable |4055|
|reportMissingImports |3998|
|reportOptionalMemberAccess |3425|
|reportOperatorIssue |1289|
|reportReturnType |1117|
|reportOptionalOperand |1091|
|reportIncompatibleMethodOverride |1026|
|reportAssignmentType |873|
|reportOptionalSubscript |854|
|reportCallIssue |585|
|reportIndexIssue |580|
|reportGeneralTypeIssues |578|
|reportInvalidTypeForm |283|
|reportOptionalCall |224|
|reportMissingModuleSource |189|
|reportOptionalIterable |69|
|reportIncompatibleVariableOverride |32|
|reportSelfClsParameterName |19|
|reportUnusedExpression |12|
|reportInvalidTypeArguments |12|
|reportFunctionMemberAccess |11|
|reportUndefinedVariable |9|
|reportUnsupportedDunderAll |8|
|reportRedeclaration |5|
|reportUnboundVariable |2|
|reportUnhashable |1|
### Your contribution
I'd be happy to help if there were buy-in from the maintainers. I imagine this would be many PRs and done incrementally. By that I mean: we should
- decide on some subdirectories we care most about, and rules we care most about
- fix type errors there
- add pyright config to check only those rules and directories
- add pyright to the CI
- slowly expand coverage of directories and rules
This could also be a nice `good-first-issue` to encourage new contributions.
Finally, one way to reduce the amount of work is to first make your _linting_ much more aggressive by updating your `ruff` version and running autofixes. When I turn on a bunch of lint rules and run all fixes (and unsafe fixes) it already removes ~1000 `pyright` errors. | open | 2025-02-28T19:02:39Z | 2025-03-03T15:14:00Z | https://github.com/huggingface/transformers/issues/36481 | [
"Feature request"
] | dylwil3 | 1 |
kennethreitz/responder | flask | 134 | Method not allowed for CBV | I expect 405 status code when requesting `get` method for this view, but I got 200:
```python
from responder import API
api = API()
@api.route('/')
class Resource:
async def on_post(self, req, resp):
resp.text = await req.text
api.run()
```
```bash
curl '127.0.0.1:5042/' -v
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 5042 (#0)
> GET /dfsdf HTTP/1.1
> Host: 127.0.0.1:5042
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< server: uvicorn
< date: Tue, 23 Oct 2018 17:38:32 GMT
< content-type: application/json
< transfer-encoding: chunked
<
``` | closed | 2018-10-23T18:43:40Z | 2018-10-24T14:45:49Z | https://github.com/kennethreitz/responder/issues/134 | [] | Pentusha | 1 |
alirezamika/autoscraper | web-scraping | 88 | resolved | resolved | closed | 2023-04-01T21:06:32Z | 2023-04-05T05:09:18Z | https://github.com/alirezamika/autoscraper/issues/88 | [] | freckletonj | 0 |
google-deepmind/graph_nets | tensorflow | 155 | Problem with plot_compare_graphs function | I'm new in the field of Graph and Networks. Recently I came across your code and tired to run the code.
looking at the function like
plot_graphs_tuple(graphs_tuple_tf)
and plot_compare_graphs(output_graphs, labels=[
"Input graph",
"blocks.broadcast_globals_to_nodes",
"blocks.broadcast_globals_to_edges",
"blocks.broadcast_sender_nodes_to_edges",
"blocks.broadcast_receiver_nodes_to_edges"])
There is a function inside these called OrderedMultiDiGraph() inside graph_nx and I tried to search about that particular function unfortunately that function doesnot exists in Networkx but there is function called MultiDiGraph.order() and class called MultiDiGraph(...)
Hence it gives error of "AttributeError: module 'networkx' has no attribute 'OrderedMultiDiGraph'"
am I missing something or is it actually error ?
| open | 2024-01-17T09:25:22Z | 2024-01-17T10:17:53Z | https://github.com/google-deepmind/graph_nets/issues/155 | [] | SamitaAdhiakri | 1 |
ultralytics/ultralytics | pytorch | 19,410 | yolo model.train without yaml | ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
I would like to request or inquire on a YOLO11 feature. I am currently fine tuning YOLO11 on a subset of open-images-v7. Locally, I had the subset of the images in train/images and train/labels (same for validation and test) in a yaml. I am using MLFlow via settings.update({"mlflow": True}), plots =True, save = True, imgsz = 640, patience, lr0, lrf, resume, project, and name (project and name for MLFlow). This was easy with YOLO(".pt").train(data="path/to/yaml", *args); my model trained with no issues.
However, now my images are no longer local. They are in an Azure Blob Container, and this is where they will remain because I will be deploying this model via Azure. Because my images are now in Azure and not local, my yaml file will not work and neither will model.train(*args). I lost all functionality for MLFlow, plotting, saving, image size conversion, etc.
I'm now pulling in the data from Azure Blob and turning my data into a DataLoader using PyTorch, but writing the training loop seemingly removes all of my functionality that was built in. I've seen that there's a way to integrate Azure ML with Ultralytics, but I couldn't find a clear way to maintain my current functionality. Is there any way to maintain this functionality using model.train(*args) instead of writing a custom training loop?
### Use case
I have images in an Azure Blob Container and would like to maintain the same functionality I had with the images local using a yaml.
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-24T19:28:16Z | 2025-03-01T05:28:49Z | https://github.com/ultralytics/ultralytics/issues/19410 | [
"enhancement",
"question",
"devops"
] | scottschmidl | 24 |
scikit-learn-contrib/metric-learn | scikit-learn | 129 | We should avoid only testing hard values | Some of the tests rely on testing hard values, like the ones that failed in #127 due to scikit-learn's iris dataset update. They could probably fail again if for instance we used another initialisation or another optimizer for some algorithms, while the algorithm would still be valid. Therefore I think these tests could still be useful as benchmarking tasks, to ensure we keep making a good score for some basic tasks, but we should still probably rely more on testing toy examples, testing properties of the solution rather than hard values, that would work no matter the initialization or the optimisation procedure. | open | 2018-10-11T12:51:53Z | 2018-10-11T15:59:45Z | https://github.com/scikit-learn-contrib/metric-learn/issues/129 | [] | wdevazelhes | 1 |
tflearn/tflearn | tensorflow | 949 | Inaccurate responses predicted by tflearn | I am creating a chatbot using tflearn. But i am not getting desired responses of my queries. Is it due to training? If yes, then how much training should ideally be given to tflearn? | open | 2017-11-01T10:45:03Z | 2017-11-01T10:45:03Z | https://github.com/tflearn/tflearn/issues/949 | [] | ankushbandil | 0 |
Evil0ctal/Douyin_TikTok_Download_API | api | 324 | ## iOS Shortcut/快捷指令 | ## iOS Shortcut/快捷指令
> 快捷指令只需手动安装一次,往后在运行时会自动连接API-V1进行更新检查。
> The shortcut command only needs to be manually installed once, and will automatically connect to API-V1 for update checking at runtime.
> 如果你愿意分享你魔改的快捷方式(捷径),欢迎在下方留言,我会将你的快捷方式链接收录至此文档内,感谢你的工作(#^.^#)
> If you are willing to share the shortcut of your magic modification, please leave a message below, I will include your shortcut link in this document, thank you for your work (#^.^#)
[V6.0]
Release date: 2022/11/06
中文:
https://www.icloud.com/shortcuts/4465d514869e4ca585074d40328f3e0e
English:
https://www.icloud.com/shortcuts/58e3a2cbac784a6782f1031c6b1dd9f8
Note:
对最新的API-V1进行了适配,必须安装此更新才能正常使用捷径。
Note:
Adapted to the latest API-V1, this update must be installed to use shortcuts normally.
[V5.0]
Release date: 2022/07/18
中文/Chinese:
https://www.icloud.com/shortcuts/331073aca78345cf9ab4f73b6a457f97
英文/English:
https://www.icloud.com/shortcuts/83548306bc0c4f8ea563108f79c73f8d
[V4.0]
Release date: 2022/07/15
中文/Chinese:
https://www.icloud.com/shortcuts/25af5f6d9a9140e1a4e35c771313732f
英文/English:
https://www.icloud.com/shortcuts/0d37a661c1044ce4a428a84c13113c30
[V3.0]
Release date: 2022/04/16
中文/Chinese:
https://www.icloud.com/shortcuts/126820d2783748d1bdec95a223a02639
[V2.0]
Release date: 2022/04/06
中文/Chinese:
https://www.icloud.com/shortcuts/38df6ca6f54840e5af80b98bf52b9c3b
[V1.0]
Release date: 2022/02/06
中文/Chinese:
https://www.icloud.com/shortcuts/e8243369340548efa0d4c1888dd3c170
_Originally posted by @Evil0ctal in https://github.com/Evil0ctal/Douyin_TikTok_Download_API/discussions/104_ | closed | 2024-02-13T07:42:13Z | 2024-03-25T22:32:11Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/324 | [] | starry2240 | 1 |
horovod/horovod | deep-learning | 3,275 | Cannot install horovod[spark] for Tensorflow 2.6 | **Environment:**
1. Framework: TensorFlow
2. Framework version:2.6.2
3. Horovod version: 0.23
4. MPI version:4.1.1
5. CUDA version:N/A
6. NCCL version:N/A
7. Python version: 3.7
8. Spark / PySpark version: 2.4.5
9. Ray version:N/A
10. OS and version: RHEL 8.4
11. GCC version: 9.3.0
12. CMake version: 3.5.0
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? N/A
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? N/A
4. Did you check if you question is answered in the [troubleshooting guide] (https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
```
Installing collected packages: pyparsing, pycparser, pyzmq, pyyaml, pyarrow, psutil, packaging, future, fsspec, diskcache, dill, cloudpickle, cffi, petastorm, horovod, h5py
Attempting uninstall: h5py
Found existing installation: h5py 3.1.0
Uninstalling h5py-3.1.0:
Successfully uninstalled h5py-3.1.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.6.2 requires h5py~=3.1.0, but you have h5py 2.10.0 which is incompatible.
```
**Reproduce Steps:**
1. `conda create -n horovod python=3.7`
2. `conda activate horovod`
3. `conda install pyspark=2.4.5 openmpi-mpicc cmake -c conda-forge`
4. `pip install tensorflow==2.6.2`
5. `HOROVOD_WITH_MPI=1 HOROVOD_WITH_TENSORFLOW=1 pip install horovod[spark]`
| closed | 2021-11-16T01:15:17Z | 2022-03-02T21:40:46Z | https://github.com/horovod/horovod/issues/3275 | [
"bug"
] | LifengWang | 8 |
huggingface/diffusers | pytorch | 10,475 | [SD3]The quality of the images generated by the inference is not as high as on the validation set during fine-tuning? | ### Describe the bug
Why is the quality of the graphs I generate with `StableDiffusion3Pipeline` not as good as the quality of the images in the validation set in the log generated when using dreambooth_lora for fine tuning?
Maybe I need some other plugin or parameter setting to maintain the same image quality as the validation set?
### Reproduction
```
# Here is my inference code:
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained('./diffusers/stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipe.load_lora_weights("./my_path/pytorch_lora_weights.safetensors", adapter_name="test_lora")
img = pipe(
"my prompt...",
generator=torch.manual_seed(1),
num_inference_steps=40,
guidance_scale=6
).images[0].save('/root/my_img.png')
```
### Logs
_No response_
### System Info
Diffuser Version: stable-diffusion-3-medium
CUDA Version: 12.4
GPU: NVIDIA A800 80GB
### Who can help?
_No response_ | closed | 2025-01-06T14:52:57Z | 2025-02-06T12:17:47Z | https://github.com/huggingface/diffusers/issues/10475 | [
"bug",
"stale"
] | ytwo-hub | 8 |
cobrateam/splinter | automation | 439 | visit function can block user's action | At first , we could have a look at the implementation of function visit.
<pre>
def visit(self, url):
self.connect(url)
self.ensure_success_response()
self.driver.get(url)
</pre>
There are three steps in this function,
first try to connect
second ensure_success_response , this steps is used to check response code.
at last , call selenium api to open url
About checking response code , my opinion is sometimes strict check isn't necessary.
example , use IIS to do authentification , we need open the page to test logon , but splinter check the response code and find it's 401, so raised , it's not a good idea.
it's the code of checking response
<pre>
if self.code in self.http_errors:
raise HttpResponseError(self.code, self.reason)
http_errors = (400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411,
412, 413, 414, 415, 416, 417, 500, 501, 502, 503, 504, 505)
</pre>
So , I think ,we can add an option to choose strict check or simple check .Anyway , visit function can't block user's action,
| closed | 2015-10-10T05:16:45Z | 2018-08-27T01:00:24Z | https://github.com/cobrateam/splinter/issues/439 | [
"bug"
] | ralph-nj | 1 |
xlwings/xlwings | automation | 1,842 | avoid wb.fullname complexity if not needed on "Book(filename)" | If one provides just the name of the workbook (not the fullname) to get a reference to an opened workbook, it should be possible to avoid relying on the "not yet 100% robust" sharepoint/local name resolution.
It is a matter of swapping the order of the tests in https://github.com/xlwings/xlwings/blob/main/xlwings/main.py#L823
from
```
if (
wb.fullname.lower() == fullname
or wb.name.lower() == fullname
):
```
to
```
if (
wb.name.lower() == fullname
or wb.fullname.lower() == fullname
):
``` | closed | 2022-02-21T15:13:07Z | 2022-03-29T14:26:00Z | https://github.com/xlwings/xlwings/issues/1842 | [] | sdementen | 7 |
pydantic/FastUI | fastapi | 75 | Chart component | Hi,
I think it would be great if we had a chart component (e.g. think 2D bar charts or scatter plots).
Guess the mechanism of supplying data in the backend could be similar as it is implemented for the `Table` component.
For frontend implementation I think we could leverage something like [Chart.js](https://www.chartjs.org/docs/latest/) to do the charting work itself for us.
Not sure if this use case is common enough such that it would fit into FastUI itself or whether this should rather be considered a custom component and be separate from the package.
Any thoughts? | open | 2023-12-05T19:44:58Z | 2024-07-16T08:04:56Z | https://github.com/pydantic/FastUI/issues/75 | [
"enhancement"
] | Dejiah | 10 |
CTFd/CTFd | flask | 2,702 | Password Change on Login | We should have the ability to mark a user as needing to change their password the next time they log in. | open | 2025-02-13T19:59:30Z | 2025-02-13T19:59:30Z | https://github.com/CTFd/CTFd/issues/2702 | [] | ColdHeat | 0 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 956 | Tacotron 2 implementation by bluefish? | Does anyone have the Tacotron 2 implementation of the code which was posted by bluefish on his repository? It is now deleted. | closed | 2021-12-22T07:59:35Z | 2021-12-28T12:34:21Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/956 | [] | Fzr2k | 0 |
explosion/spaCy | deep-learning | 13,248 | Cannot train Arabic models with a custom tokenizer | This issue was initially about a possible bug in the _training pipeline_, related to the _parser_ (see below). But now I believe that posing preliminary questions is more appropriate:
- is it possible to create a completely _custom tokenizer_, which does not define custom rules and a few methods, but just redefines the main `__call__` method?
- in that case, where can I find documentation on how the tokenizer should use the Vocabulary API to feed the vocabulary while tokenizing?
### Some context information
In the discussion _Arabic language support_, comment _[I'm willing to prototype a spaCy language model for Arabic (SMA)](https://github.com/explosion/spaCy/discussions/7146#discussioncomment-8094879)_, I reported on the choice of a _training set_ and on the unsatisfactory training results obtained using the native spaCy _tokenizer_. Then, I reported on the integration/adaptation of an alternative tokenizer whose output, according to the printout of the _debug data_ command, shows a better alignment with the tokens in the training set (after a minor modification of the training set itself).
With the [subsequent comment](https://github.com/explosion/spaCy/discussions/7146#discussioncomment-8115239), in the same discussion, I reported on
1. an exception emitted by a parser-related module of the spaCy training software, when executing the _train_ command with the same data and configuration as _debug data_;
2. the very bad results (low overall _score_) obtained with a reduced configuration, excluding the parser.
Here below is an excerpt of the _Traceback_ related to the exception (point 1). You can find the full Traceback in the discussion to which I refer.
```(omissis)
⚠ Aborting and saving the final best model. Encountered exception:
KeyError("[E900] Could not run the full pipeline for evaluation. If you
specified frozen components, make sure they were already initialized and
trained. Full pipeline: ['tok2vec', 'tagger', 'morphologizer',
'trainable_lemmatizer', 'parser']")
Traceback (most recent call last):
File "C:\language310\lib\site-packages\spacy\training\loop.py", line 298, in evaluate
scores = nlp.evaluate(dev_corpus(nlp))
File "C:\language310\lib\site-packages\spacy\language.py", line 1459, in evaluate
for eg, doc in zip(examples, docs):
File "C:\language310\lib\site-packages\spacy\language.py", line 1618, in pipe
for doc in docs:
File "C:\language310\lib\site-packages\spacy\util.py", line 1685, in _pipe
yield from proc.pipe(docs, **kwargs)
File "spacy\pipeline\transition_parser.pyx", line 255, in pipe
File "C:\language310\lib\site-packages\spacy\util.py", line 1704, in raise_error
raise e
File "spacy\pipeline\transition_parser.pyx", line 252, in spacy.pipeline.transition_parser.Parser.pipe
File "spacy\pipeline\transition_parser.pyx", line 345, in spacy.pipeline.transition_parser.Parser.set_annotations
File "spacy\pipeline\_parser_internals\nonproj.pyx", line 176, in spacy.pipeline._parser_internals.nonproj.deprojectivize
File "spacy\pipeline\_parser_internals\nonproj.pyx", line 181, in spacy.pipeline._parser_internals.nonproj.deprojectivize
File "spacy\strings.pyx", line 160, in spacy.strings.StringStore.__getitem__
KeyError: "[E018] Can't retrieve string for hash '8206900633647566924'. This usually refers to an issue with the `Vocab` or `StringStore`."
The above exception was the direct cause of the following exception:
(omissis)
```
### My Environment
* Operating System: Windows 11
* Python Version Used: 3.10
* spaCy Version Used: 3.7
| open | 2024-01-18T21:32:32Z | 2024-02-09T22:44:40Z | https://github.com/explosion/spaCy/issues/13248 | [
"lang / ar",
"feat / tokenizer"
] | gtoffoli | 3 |
sinaptik-ai/pandas-ai | data-visualization | 955 | Regarding saving relative path in json reponse | ### System Info
MAC os.
### 🐛 Describe the bug
When i am passing query to user for creating graph it will show local file path response in that i want relative file path response. so please tell me for this what i have to do
smart_df = SmartDataframe(df, config={"llm": llm, "enable_cache": False, "save_logs": False, "save_charts": True })
this line of code i have done this. | closed | 2024-02-23T12:58:25Z | 2024-06-08T16:03:43Z | https://github.com/sinaptik-ai/pandas-ai/issues/955 | [] | shreya386 | 0 |
uriyyo/fastapi-pagination | fastapi | 655 | adding TimeGenerated | Hi
I want to add GeneratedTime after items, pages and etc for a response, can you hint to how to do it? because I try different ways but I can't. | closed | 2023-05-08T09:57:40Z | 2023-09-09T17:16:20Z | https://github.com/uriyyo/fastapi-pagination/issues/655 | [
"question"
] | donmemedo | 6 |
pytest-dev/pytest-xdist | pytest | 200 | broken wheel metadata | due to the conditional in setup.py we have broken wheel metadata | closed | 2017-08-04T19:15:32Z | 2022-08-16T07:56:04Z | https://github.com/pytest-dev/pytest-xdist/issues/200 | [] | RonnyPfannschmidt | 0 |
trevismd/statannotations | seaborn | 101 | how add statistics from in a catplot with different columns ? | Hello,
Thank you for your work. I've created a nice catplot with different columns. But I cannot add statistics with Annotator as you did in your example 2facets.png How did you do ?
Thanks in advance,
Jessica | open | 2023-01-12T13:09:33Z | 2023-01-17T08:35:17Z | https://github.com/trevismd/statannotations/issues/101 | [] | jlebenberg | 2 |
nschloe/tikzplotlib | matplotlib | 206 | Separation between groupplots | Setting the separation between subplots in Python/Matplotlib is not respected by the generated pgfplots code.
Minimal example:
```
from matplotlib import pyplot as plt
from matplotlib2tikz import save as tikz_save
plt.subplot(2,2,1)
plt.plot([0, 10], [0, 10])
plt.setp(plt.gca().get_xticklabels(), visible=False) # make x tick labels invisible
plt.subplot(2,2,2)
plt.plot([0, 10], [0, 10])
plt.setp(plt.gca().get_xticklabels(), visible=False) # make x tick labels invisible
plt.setp(plt.gca().get_yticklabels(), visible=False) # make y tick labels invisible
plt.subplot(2,2,3)
plt.plot([0, 10], [0, 10])
plt.subplot(2,2,4)
plt.plot([0, 10], [0, 10])
plt.setp(plt.gca().get_yticklabels(), visible=False) # make y tick labels invisible
plt.subplots_adjust(hspace=.0) # remove vertical gap between subplots
plt.subplots_adjust(wspace=.0) # remove horizontal gap between subplots
#plt.savefig("matplotlib2tikz_groupplots_space.png", dpi=300)
tikz_save("matplotlib2tikz_groupplots_space.tikz")
plt.show()
```
Matplotlib shows the plot as follows:

The generated TiKZ plot looks like this:

The generated pgfplots code is:
```
% This file was created by matplotlib2tikz v0.6.13.
\begin{tikzpicture}
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\begin{groupplot}[group style={group size=2 by 2}]
\nextgroupplot[
xmin=-0.5, xmax=10.5,
ymin=-0.5, ymax=10.5,
xtick={-2.5,0,2.5,5,7.5,10,12.5},
xticklabels={},
tick align=outside,
tick pos=left,
x grid style={lightgray!92.026143790849673!black},
y grid style={lightgray!92.026143790849673!black}
]
\addplot [semithick, color0, forget plot]
table {%
0 0
10 10
};
\nextgroupplot[
xmin=-0.5, xmax=10.5,
ymin=-0.5, ymax=10.5,
xtick={-2.5,0,2.5,5,7.5,10,12.5},
xticklabels={},
ytick={-2,0,2,4,6,8,10,12},
yticklabels={},
tick align=outside,
tick pos=left,
x grid style={lightgray!92.026143790849673!black},
y grid style={lightgray!92.026143790849673!black}
]
\addplot [semithick, color0, forget plot]
table {%
0 0
10 10
};
\nextgroupplot[
xmin=-0.5, xmax=10.5,
ymin=-0.5, ymax=10.5,
tick align=outside,
tick pos=left,
x grid style={lightgray!92.026143790849673!black},
y grid style={lightgray!92.026143790849673!black}
]
\addplot [semithick, color0, forget plot]
table {%
0 0
10 10
};
\nextgroupplot[
xmin=-0.5, xmax=10.5,
ymin=-0.5, ymax=10.5,
ytick={-2,0,2,4,6,8,10,12},
yticklabels={},
tick align=outside,
tick pos=left,
x grid style={lightgray!92.026143790849673!black},
y grid style={lightgray!92.026143790849673!black}
]
\addplot [semithick, color0, forget plot]
table {%
0 0
10 10
};
\end{groupplot}
\end{tikzpicture}
```
The pgfplots code to respect the vertical separation set via Python `plt.subplots_adjust(hspace=.0)` is to set the group style `vertical sep` to zero, and similarly `plt.subplots_adjust(wspace=.0)` should result in the group style option `horizontal sep=0pt`. Thus the correct line 6 of the generated code would be:
`\begin{groupplot}[group style={group size=2 by 2,vertical sep=0pt,horizontal sep=0pt}]`. | open | 2017-10-10T17:13:55Z | 2020-01-16T11:04:54Z | https://github.com/nschloe/tikzplotlib/issues/206 | [] | andreassch | 2 |
ipython/ipython | jupyter | 13,877 | pvlib.pvsystem.retrieve_sam | I tried to call a PV Module as below code , But it given an erroe
-->
KeyError Traceback (most recent call last)
~\miniconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3799 try:
-> 3800 return self._engine.get_loc(casted_key)
3801 except KeyError as err:
~\miniconda3\lib\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
~\miniconda3\lib\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'LONGi_Green_Energy_Technology_Co___Ltd__LR4_72HPH_450M'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_17908\2790595467.py in <cell line: 23>()
21
22 modules = pvlib.pvsystem.retrieve_sam('CECMod')
---> 23 M = modules['LONGi_Green_Energy_Technology_Co___Ltd__LR4_72HPH_450M']
24
25 print (M)
~\miniconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
3803 if self.columns.nlevels > 1:
3804 return self._getitem_multilevel(key)
-> 3805 indexer = self.columns.get_loc(key)
3806 if is_integer(indexer):
3807 indexer = [indexer]
~\miniconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3800 return self._engine.get_loc(casted_key)
3801 except KeyError as err:
-> 3802 raise KeyError(key) from err
3803 except TypeError:
3804 # If we have a listlike key, _check_indexing_error will raise
KeyError: 'LONGi_Green_Energy_Technology_Co___Ltd__LR4_72HPH_450M
| open | 2022-12-26T18:54:06Z | 2022-12-26T18:54:06Z | https://github.com/ipython/ipython/issues/13877 | [] | mhmidat | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.