repo_name stringlengths 9-75 | topic stringclasses 30 values | issue_number int64 1-203k | title stringlengths 1-976 | body stringlengths 0-254k | state stringclasses 2 values | created_at stringlengths 20-20 | updated_at stringlengths 20-20 | url stringlengths 38-105 | labels sequencelengths 0-9 | user_login stringlengths 1-39 | comments_count int64 0-452 |
---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets | numpy | 6,650 | AttributeError: 'InMemoryTable' object has no attribute '_batches' | ### Describe the bug
```
Traceback (most recent call last):
File "finetune.py", line 103, in <module>
main(args)
File "finetune.py", line 45, in main
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 868, in map
{
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 869, in <dictcomp>
k: dataset.map(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3432, in _map_single
arrow_formatted_shard = shard.with_format("arrow")
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2667, in with_format
dataset = copy.deepcopy(self)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 153, in deepcopy
y = copier(memo)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/table.py", line 176, in __deepcopy__
memo[id(self._batches)] = list(self._batches)
AttributeError: 'InMemoryTable' object has no attribute '_batches'
```
### Steps to reproduce the bug
I'm running an MLOps flow using AzureML.
The error appears when I run the following function in my training script:
```python
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
seq_length),
batched=True,
batch_size=batch_size,
remove_columns=['col1', 'col2'])
```
```python
def tokenize_function(tok, seq_length, example):
# Pad so that each batch has the same sequence length
inp = tok(example['col1'], padding=True, truncation=True)
outp = tok(example['col2'], padding="max_length", max_length=seq_length)
res = {
'input_ids': inp['input_ids'],
'attention_mask': inp['attention_mask'],
'decoder_input_ids': outp['input_ids'],
'labels': outp['input_ids'],
'decoder_attention_mask': outp['attention_mask']
}
return res
```
### Expected behavior
Processing proceeds without errors. I ran this same workflow 2 weeks ago without a problem. I recreated the environment since then but it doesn't appear that datasets versions have changed since Dec. '23.
### Environment info
datasets 2.16.1
transformers 4.35.2
pyarrow 15.0.0
pyarrow-hotfix 0.6
torch 2.0.1
I'm not using the latest transformers version because there was an error due to a conflict with Azure mlflow when I tried the last time. | open | 2024-02-08T17:11:26Z | 2024-02-21T00:34:41Z | https://github.com/huggingface/datasets/issues/6650 | [] | matsuobasho | 3 |
schenkd/nginx-ui | flask | 51 | Is this maintained? | Hey folks,
Is this repo still being maintained, or does anybody know which fork is being maintained?
Kind regards,
Marcel | open | 2022-11-01T21:15:54Z | 2024-05-25T00:14:43Z | https://github.com/schenkd/nginx-ui/issues/51 | [] | Segelzwerg | 1 |
ipython/ipython | jupyter | 14,075 | Add %mamba magic to install packages in current kernel | If you wish, you could define a `%mamba` magic similarly to how `%pip` and `%conda` are defined in #11524
_Originally posted by @jakevdp in https://github.com/ipython/ipython/issues/9517#issuecomment-848985175_
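For reference, a rough sketch of how such a line magic could be registered from user code (hypothetical, not IPython's actual `%pip`/`%conda` implementation; relying on a `mamba` executable on `PATH` and on the subcommand accepting `--prefix` are assumptions):
```python
import shutil
import sys

from IPython import get_ipython
from IPython.core.magic import register_line_magic


@register_line_magic
def mamba(line):
    """Run mamba against the current kernel's environment, e.g. `%mamba install numpy`."""
    exe = shutil.which("mamba")
    if exe is None:
        raise RuntimeError("mamba executable not found on PATH")
    # Assumption: install/remove/update-style subcommands accept --prefix,
    # which ties the operation to the environment the kernel runs in.
    get_ipython().system(f"{exe} {line} --prefix {sys.prefix}")
```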
| closed | 2023-05-11T20:02:00Z | 2023-10-01T17:18:28Z | https://github.com/ipython/ipython/issues/14075 | [] | dshean | 10 |
chatanywhere/GPT_API_free | api | 178 | Using the API with the VS Code CodeGPT extension | **Describe the bug**
How can I use this API with the latest version of the VS Code CodeGPT extension?
I looked through the extension's directory but could not find a hostname field that could be changed to api.chatanywhere.com.cn.
In previous versions of CodeGPT I was able to use it normally.
**Screenshots**

**Tools or Programming Language**
Windows 10
| closed | 2024-01-17T14:39:32Z | 2024-03-25T19:42:21Z | https://github.com/chatanywhere/GPT_API_free/issues/178 | [] | gumbp | 3 |
ivy-llc/ivy | pytorch | 27,865 | Fix Frontend Failing Test: paddle - non_linear_activation_functions.torch.nn.functional.hardswish | closed | 2024-01-07T23:07:22Z | 2024-01-07T23:35:57Z | https://github.com/ivy-llc/ivy/issues/27865 | [
"Sub Task"
] | NripeshN | 0 |
|
miguelgrinberg/Flask-SocketIO | flask | 2,067 | Is Eventlet Still The Best Option? | I'm quoting from documentation:
"[eventlet is the best performant option, with support for long-polling and WebSocket transports.](https://flask-socketio.readthedocs.io/en/latest/intro.html)"
But when I check Eventlet's documetation I see:
"[New usages of eventlet are now heavily discouraged!](https://eventlet.readthedocs.io/en/latest/)"
I'm wondering if I'm starting to use Flask-SocketIO in a new project, should I still go with Eventlet?
If no, what is the recommendation here? | closed | 2024-05-31T18:08:52Z | 2024-05-31T18:38:50Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/2067 | [] | ebram96 | 1 |
graphql-python/graphene-django | graphql | 1,467 | Caching attribute on query resolver | I have a query that calculates one value in each resolver, like this:
```
class ProductType(ObjectType):
    product = graphene.GlobalID(required=True)

    @staticmethod
    def resolve_products(root, info):
        q = calculate()

    def resolve_item(root, info):
        q = calculate()
```
Is there some method to calculate this only once and cache it on the resolver?
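A minimal sketch of one possible workaround (assumptions: graphene-django on Django, where `info.context` is the current `HttpRequest`, and `calculate()` is a placeholder for the expensive call above), caching the value on the request so it is computed at most once per query:
```python
import graphene
from graphene import ObjectType


def calculate():
    # placeholder for the expensive computation from the snippet above
    return 42


def cached_calculation(info):
    # Stash the result on the request object shared by all resolvers of this query.
    cache = getattr(info.context, "_calc_cache", None)
    if cache is None:
        cache = {}
        info.context._calc_cache = cache
    if "q" not in cache:
        cache["q"] = calculate()
    return cache["q"]


class ProductType(ObjectType):
    products = graphene.Int()
    item = graphene.Int()

    def resolve_products(root, info):
        return cached_calculation(info)

    def resolve_item(root, info):
        return cached_calculation(info)
```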
| open | 2023-10-01T15:06:21Z | 2023-10-01T15:06:21Z | https://github.com/graphql-python/graphene-django/issues/1467 | [
"🐛bug"
] | manhtd98 | 0 |
nltk/nltk | nlp | 3,057 | NLTK data path issue coming in GCP / cloud | I have deployed one docker-based Microservice in the GCP cloud.
The basic structure in the docker file is as follows
```
FROM python:3.9
RUN apt-get update && apt-get upgrade -y
RUN pip3.9 install nltk==3.6.6
WORKDIR /home/
ADD <Remote customized NLTK path> .
RUN unzip nltk_data.zip
RUN rm -rf nltk_data.zip
RUN ln -s /home/nltk_data /root/nltk_data
```
Then I run the following code segment in the microservice codebase:
```
import nltk
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("rocks", "n"))
print(lemmatizer.lemmatize("corpora", "n"))
```
But it shows some errors like this
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/nltk/corpus/util.py", line 84, in __load
root = nltk.data.find(f"{self.subdir}/{zip_name}")
File "/usr/local/lib/python3.9/site-packages/nltk/data.py", line 583, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource omw-1.4 not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('omw-1.4')
For more information see: https://www.nltk.org/data.html
Attempted to load corpora/omw-1.4.zip/omw-1.4/
Searched in:
- '/root/nltk_data'
- '/usr/local/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/local/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
**********************************************************************
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1518, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.9/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1516, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1502, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/app/onto_logger.py", line 259, in add
res = func(*args, **kwargs)
File "/app/ontology_api_development.py", line 15726, in healthcheck
ol.out_logger.info("rocks is {}".format(lemmatizer.lemmatize("rocks","n")), extra=ol.outlogger_dict)
File "/usr/local/lib/python3.9/site-packages/nltk/stem/wordnet.py", line 45, in lemmatize
lemmas = wn._morphy(word, pos)
File "/usr/local/lib/python3.9/site-packages/nltk/corpus/util.py", line 121, in __getattr__
self.__load()
File "/usr/local/lib/python3.9/site-packages/nltk/corpus/util.py", line 89, in __load
corpus = self.__reader_cls(root, *self.__args, **self.__kwargs)
File "/usr/local/lib/python3.9/site-packages/nltk/corpus/reader/wordnet.py", line 1176, in __init__
self.provenances = self.omw_prov()
File "/usr/local/lib/python3.9/site-packages/nltk/corpus/reader/wordnet.py", line 1285, in omw_prov
fileids = self._omw_reader.fileids()
File "/usr/local/lib/python3.9/site-packages/nltk/corpus/util.py", line 121, in __getattr__
self.__load()
File "/usr/local/lib/python3.9/site-packages/nltk/corpus/util.py", line 86, in __load
raise e
File "/usr/local/lib/python3.9/site-packages/nltk/corpus/util.py", line 81, in __load
root = nltk.data.find(f"{self.subdir}/{self.__name}")
File "/usr/local/lib/python3.9/site-packages/nltk/data.py", line 583, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource omw-1.4 not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('omw-1.4')
For more information see: https://www.nltk.org/data.html
Attempted to load corpora/omw-1.4
Searched in:
- '/root/nltk_data'
- '/usr/local/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/local/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
**********************************************************************
Transient error StatusCode.UNAVAILABLE encountered while exporting traces, retrying in Nones.
```
Package versions I used in GCP cloud are
```
Python: 3.9.13
OS: Ubuntu non-distroless image
Flask==2.0.3
markupsafe==2.1.1
Flask-Cors==3.0.9
nltk==3.6.6
```
**The same code with the same package versions works perfectly on my local Ubuntu machine without any errors.**
**And the same code with nltk==3.6.5 works perfectly on my local Ubuntu machine and on GCP, also without any errors.**
But I must stay on nltk==3.6.6. So it seems there is some new NLTK data path lookup behavior in version 3.6.6. Please help me resolve this.
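A possible workaround sketch (an assumption based on the error message above, not a confirmed fix): the WordNet loader in nltk 3.6.6 also looks for the `omw-1.4` corpus, so it could be fetched into the same custom data directory that the image already unpacks, for example at image build time:
```python
# run once, e.g. during the Docker build
import nltk

nltk.download("omw-1.4", download_dir="/home/nltk_data")
nltk.data.path.append("/home/nltk_data")  # make sure the custom path is searched
```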
| closed | 2022-09-30T13:33:49Z | 2022-12-16T10:20:28Z | https://github.com/nltk/nltk/issues/3057 | [] | pradeepdev-1995 | 10 |
jina-ai/serve | machine-learning | 5,729 | docs: create new sub-section or relocate retries section | Currently the [retry handling section](https://docs.jina.ai/concepts/client/callbacks/#transient-fault-handling-with-retries) is put under the `Callbacks` sub section which is not correct. | closed | 2023-03-01T08:35:34Z | 2023-06-15T00:19:46Z | https://github.com/jina-ai/serve/issues/5729 | [
"Stale"
] | girishc13 | 1 |
mwaskom/seaborn | pandas | 3,654 | countplot taking long time for Series and not Pandas | In my Jupyter Notebook and in Google Colab running version 13.2 (latest), I noticed that countplot appears to run in a never-ending loop.
```
import pandas as pd
import seaborn as sns
from tensorflow.keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
pd.Series(y_train).value_counts() # Results immediate
sns.countplot(pd.Series(y_train)) # Takes Forever for Series
sns.countplot(pd.Series(y_train, name='no').to_frame(), x='no', hue='no') #Completes fast when in Pandas DataFrame
```
| closed | 2024-03-14T03:37:31Z | 2024-03-15T03:15:42Z | https://github.com/mwaskom/seaborn/issues/3654 | [] | sarath-mec | 1 |
microsoft/nni | data-science | 5,770 | Anyone have some idea of time series forecasting problem using DARTS strategy over the search space of Recurrent neural networks? | **Describe the issue**:
I have gone through 3 different types of neural networks for the forecasting problem. All of them have a similar structure: a recurrent layer and two dense layers. However, when I try to simply change the recurrent layer to a LayerChoice, there is always a problem like
File [~/Documents/GitHub/NAS/nni/nni/nas/experiment/experiment.py:270](https://file+.vscode-resource.vscode-cdn.net/Users/franciszhang/Documents/GitHub/NAS/notebooks/~/Documents/GitHub/NAS/nni/nni/nas/experiment/experiment.py:270), in NasExperiment.start(self, port, debug, run_mode)
...
[1079](https://file+.vscode-resource.vscode-cdn.net/opt/anaconda3/envs/NAS/lib/python3.11/site-packages/torch/nn/modules/rnn.py:1079) f"For unbatched 2-D input, hx should also be 2-D but got {hx.dim()}-D tensor")
[1080](https://file+.vscode-resource.vscode-cdn.net/opt/anaconda3/envs/NAS/lib/python3.11/site-packages/torch/nn/modules/rnn.py:1080) hx = hx.unsqueeze(1)
[1081](https://file+.vscode-resource.vscode-cdn.net/opt/anaconda3/envs/NAS/lib/python3.11/site-packages/torch/nn/modules/rnn.py:1081) else:
RuntimeError: For unbatched 2-D input, hx should also be 2-D but got 3-D tensor.
Where
X_train, X_test, y_train, y_test = train_test_split(X_tensor, Y_tensor, random_state = 0)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
is
torch.Size([9656, 300]) torch.Size([3219, 300]) torch.Size([9656]) torch.Size([3219])
and the Dataloader is set as
train_dataset = TensorDataset(X_train, y_train.unsqueeze(1))
test_dataset = TensorDataset(X_test, y_test.unsqueeze(1))
train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=128, shuffle=False)
I'm not sure what the exact issue is here.
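A minimal sketch of what this error usually points at (an assumption about the setup, not a verified diagnosis): torch recurrent layers treat a 2-D input as unbatched `(seq_len, input_size)`, so feeding `(batch, 300)` windows together with a 3-D hidden state triggers exactly this message; adding an explicit feature dimension makes the input a batched 3-D tensor.
```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=1, hidden_size=32, num_layers=1, batch_first=True)
x = torch.randn(128, 300)   # a (batch, window) batch like the DataLoader yields
x = x.unsqueeze(-1)         # -> (batch=128, seq_len=300, input_size=1)
out, h_n = rnn(x)           # works; h_n has shape (1, 128, 32)
```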
**Environment**:
- NNI version: 3.0rc1
- Training service (local|remote|pai|aml|etc): local
- Client OS: macOS Sonoma 14.4 (23E214)
- Server OS (for remote mode only): N/A
- Python version: 3.11.8
- PyTorch/TensorFlow version: 2.2.1
- Is conda/virtualenv/venv used?: conda has been used
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space: RNN, GRU, and LSTM if the output problem can be solved
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | open | 2024-04-18T05:40:50Z | 2024-04-18T05:40:50Z | https://github.com/microsoft/nni/issues/5770 | [] | franciszh0716 | 0 |
fastapi-users/fastapi-users | asyncio | 550 | Unable to use with both Routers and MongoDB | ## Describe the bug
[Details in this StackOverflow question](https://stackoverflow.com/questions/65930477/how-can-i-use-fastapi-routers-with-fastapi-users-and-mongodb). Essentially, fastapi-users relies on a MongoDB client instance, which could be set up globally in one's `main.py`. However, if I want to split up my app into Router files, each file would need to be able to get this MongoDB Client object (rather than making a new one) to ensure it's all working in the same thread. This would be done using a FastAPI `startup` trigger. Because such Router endpoints rely on fastapi-users for auth, then I need to somehow instantiate that in main.py too, which doesn't yet have access to a DB client object since it won't be made until the app has started up.
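For context, a generic FastAPI pattern sketch (not fastapi-users' API; the URL and the "mydb" name are placeholders): create the Mongo client once on startup, keep it on `app.state`, and hand it to routers through a dependency so every module shares the same client.
```python
import motor.motor_asyncio
from fastapi import FastAPI, Request

app = FastAPI()


@app.on_event("startup")
async def connect_db() -> None:
    # created inside the running event loop, then shared via app.state
    app.state.mongo = motor.motor_asyncio.AsyncIOMotorClient("mongodb://localhost:27017")


def get_db(request: Request):
    return request.app.state.mongo["mydb"]
```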
## To Reproduce
Steps to reproduce the behavior:
1. Make app with Routers
2. Ensure each router relies on fastapi-users
3. Use `startup` trigger to make MongoDB client
4. Observe chaos
## Expected behavior
Allow `FastAPIUsers()` to take a DB-connection-generating function rather than a DB connection itself (i.e. a sort of promise)
## Configuration
- Python version : 3.9
- FastAPI version : 0.63.0
- FastAPI Users version : 5.0.0
| closed | 2021-03-15T15:35:41Z | 2021-03-20T09:10:41Z | https://github.com/fastapi-users/fastapi-users/issues/550 | [
"bug"
] | hamx0r | 1 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 309 | Can training and prediction be done using only the thchs30 dataset? | open | 2022-12-20T10:07:21Z | 2022-12-20T10:07:21Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/309 | [] | Frederick666666 | 0
|
JaidedAI/EasyOCR | deep-learning | 407 | Can we run EasyOCR to just predict numbers with highest accuracy by removing english characters? | First of all, Thanks to the developer team at EasyOCR for their great efforts!
I need a little help here.
I am trying to use EasyOCR to extract numbers from images that contain OCR-A-style digits along with regular digits. If there were a way to remove predictions for English letters, I think my task would be done.
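A short sketch of one approach (assuming the `allowlist` argument available in recent EasyOCR releases, which restricts recognition to the given characters; `card.png` is a placeholder image path):
```python
import easyocr

reader = easyocr.Reader(["en"])
results = reader.readtext("card.png", allowlist="0123456789")  # digits only
```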
Thanks again! | closed | 2021-03-30T21:36:52Z | 2021-05-18T09:50:01Z | https://github.com/JaidedAI/EasyOCR/issues/407 | [] | sgupta65lti | 1 |
collerek/ormar | sqlalchemy | 1,381 | Random error when prefetching models | **Describe the bug**
I am seeing random errors when prefetching a model with ormar 0.20.1
I identified the bug: ```translate_list_to_dict``` uses a dict as a default argument.
**To Reproduce**
```translate_list_to_dict(["aa", "aa__inner", "bb"], default={})```
**Expected behavior**
ret = {"aa": {"inner": {}}, "bb": {}}
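A generic illustration of the pitfall the report points at (plain, simplified Python, not ormar's actual `translate_list_to_dict`): a mutable default argument is created once at definition time and shared by every call that omits it.
```python
def translate(fields, default={}):            # shared dict across calls
    for f in fields:
        default.setdefault(f.split("__")[0], {})
    return default


print(translate(["aa", "aa__inner"]))  # {'aa': {}}
print(translate(["bb"]))               # {'aa': {}, 'bb': {}}  <- leaked state


def translate_fixed(fields, default=None):    # create a fresh dict per call
    default = {} if default is None else default
    for f in fields:
        default.setdefault(f.split("__")[0], {})
    return default
```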
| closed | 2024-07-13T18:29:16Z | 2024-12-04T14:54:24Z | https://github.com/collerek/ormar/issues/1381 | [
"bug"
] | cadlagtrader | 1 |
hzwer/ECCV2022-RIFE | computer-vision | 200 | weird errors | ```
~/opencv/morefps/arXiv2020-RIFE main ?1 python3 inference_video.py --exp=2 --video=/home/france1/Downloads/demo.mp4 INT ✘
Traceback (most recent call last):
File "/home/france1/opencv/morefps/arXiv2020-RIFE/inference_video.py", line 90, in <module>
model.load_model(args.modelDir, -1)
File "/home/france1/opencv/morefps/arXiv2020-RIFE/model/oldmodel/RIFE_HDv2.py", line 163, in load_model
self.flownet.load_state_dict(
File "/home/france1/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for IFNet:
Missing key(s) in state_dict: "block0.convblock.0.0.weight", "block0.convblock.0.0.bias", "block0.convblock.0.1.weight", "block0.convblock.1.0.weight", "block0.convblock.1.0.bias", "block0.convblock.1.1.weight", "block0.convblock.2.0.weight", "block0.convblock.2.0.bias", "block0.convblock.2.1.weight", "block0.convblock.3.0.weight", "block0.convblock.3.0.bias", "block0.convblock.3.1.weight", "block0.convblock.4.0.weight", "block0.convblock.4.0.bias", "block0.convblock.4.1.weight", "block0.convblock.5.0.weight", "block0.convblock.5.0.bias", "block0.convblock.5.1.weight", "block0.conv1.weight", "block0.conv1.bias", "block1.convblock.0.0.weight", "block1.convblock.0.0.bias", "block1.convblock.0.1.weight", "block1.convblock.1.0.weight", "block1.convblock.1.0.bias", "block1.convblock.1.1.weight", "block1.convblock.2.0.weight", "block1.convblock.2.0.bias", "block1.convblock.2.1.weight", "block1.convblock.3.0.weight", "block1.convblock.3.0.bias", "block1.convblock.3.1.weight", "block1.convblock.4.0.weight", "block1.convblock.4.0.bias", "block1.convblock.4.1.weight", "block1.convblock.5.0.weight", "block1.convblock.5.0.bias", "block1.convblock.5.1.weight", "block1.conv1.weight", "block1.conv1.bias", "block2.convblock.0.0.weight", "block2.convblock.0.0.bias", "block2.convblock.0.1.weight", "block2.convblock.1.0.weight", "block2.convblock.1.0.bias", "block2.convblock.1.1.weight", "block2.convblock.2.0.weight", "block2.convblock.2.0.bias", "block2.convblock.2.1.weight", "block2.convblock.3.0.weight", "block2.convblock.3.0.bias", "block2.convblock.3.1.weight", "block2.convblock.4.0.weight", "block2.convblock.4.0.bias", "block2.convblock.4.1.weight", "block2.convblock.5.0.weight", "block2.convblock.5.0.bias", "block2.convblock.5.1.weight", "block2.conv1.weight", "block2.conv1.bias", "block3.conv0.0.0.weight", "block3.conv0.0.0.bias", "block3.conv0.0.1.weight", "block3.conv0.1.0.weight", "block3.conv0.1.0.bias", "block3.conv0.1.1.weight", "block3.convblock.0.0.weight", "block3.convblock.0.0.bias", "block3.convblock.0.1.weight", "block3.convblock.1.0.weight", "block3.convblock.1.0.bias", "block3.convblock.1.1.weight", "block3.convblock.2.0.weight", "block3.convblock.2.0.bias", "block3.convblock.2.1.weight", "block3.convblock.3.0.weight", "block3.convblock.3.0.bias", "block3.convblock.3.1.weight", "block3.convblock.4.0.weight", "block3.convblock.4.0.bias", "block3.convblock.4.1.weight", "block3.convblock.5.0.weight", "block3.convblock.5.0.bias", "block3.convblock.5.1.weight", "block3.conv1.weight", "block3.conv1.bias".
Unexpected key(s) in state_dict: "block_tea.conv0.0.0.weight", "block_tea.conv0.0.0.bias", "block_tea.conv0.0.1.weight", "block_tea.conv0.1.0.weight", "block_tea.conv0.1.0.bias", "block_tea.conv0.1.1.weight", "block_tea.convblock0.0.0.weight", "block_tea.convblock0.0.0.bias", "block_tea.convblock0.0.1.weight", "block_tea.convblock0.1.0.weight", "block_tea.convblock0.1.0.bias", "block_tea.convblock0.1.1.weight", "block_tea.convblock1.0.0.weight", "block_tea.convblock1.0.0.bias", "block_tea.convblock1.0.1.weight", "block_tea.convblock1.1.0.weight", "block_tea.convblock1.1.0.bias", "block_tea.convblock1.1.1.weight", "block_tea.convblock2.0.0.weight", "block_tea.convblock2.0.0.bias", "block_tea.convblock2.0.1.weight", "block_tea.convblock2.1.0.weight", "block_tea.convblock2.1.0.bias", "block_tea.convblock2.1.1.weight", "block_tea.convblock3.0.0.weight", "block_tea.convblock3.0.0.bias", "block_tea.convblock3.0.1.weight", "block_tea.convblock3.1.0.weight", "block_tea.convblock3.1.0.bias", "block_tea.convblock3.1.1.weight", "block_tea.conv1.0.weight", "block_tea.conv1.0.bias", "block_tea.conv1.1.weight", "block_tea.conv1.2.weight", "block_tea.conv1.2.bias", "block_tea.conv2.0.weight", "block_tea.conv2.0.bias", "block_tea.conv2.1.weight", "block_tea.conv2.2.weight", "block_tea.conv2.2.bias", "block0.convblock0.0.0.weight", "block0.convblock0.0.0.bias", "block0.convblock0.0.1.weight", "block0.convblock0.1.0.weight", "block0.convblock0.1.0.bias", "block0.convblock0.1.1.weight", "block0.convblock1.0.0.weight", "block0.convblock1.0.0.bias", "block0.convblock1.0.1.weight", "block0.convblock1.1.0.weight", "block0.convblock1.1.0.bias", "block0.convblock1.1.1.weight", "block0.convblock2.0.0.weight", "block0.convblock2.0.0.bias", "block0.convblock2.0.1.weight", "block0.convblock2.1.0.weight", "block0.convblock2.1.0.bias", "block0.convblock2.1.1.weight", "block0.convblock3.0.0.weight", "block0.convblock3.0.0.bias", "block0.convblock3.0.1.weight", "block0.convblock3.1.0.weight", "block0.convblock3.1.0.bias", "block0.convblock3.1.1.weight", "block0.conv2.0.weight", "block0.conv2.0.bias", "block0.conv2.1.weight", "block0.conv2.2.weight", "block0.conv2.2.bias", "block0.conv1.0.weight", "block0.conv1.0.bias", "block0.conv1.1.weight", "block0.conv1.2.weight", "block0.conv1.2.bias", "block1.convblock0.0.0.weight", "block1.convblock0.0.0.bias", "block1.convblock0.0.1.weight", "block1.convblock0.1.0.weight", "block1.convblock0.1.0.bias", "block1.convblock0.1.1.weight", "block1.convblock1.0.0.weight", "block1.convblock1.0.0.bias", "block1.convblock1.0.1.weight", "block1.convblock1.1.0.weight", "block1.convblock1.1.0.bias", "block1.convblock1.1.1.weight", "block1.convblock2.0.0.weight", "block1.convblock2.0.0.bias", "block1.convblock2.0.1.weight", "block1.convblock2.1.0.weight", "block1.convblock2.1.0.bias", "block1.convblock2.1.1.weight", "block1.convblock3.0.0.weight", "block1.convblock3.0.0.bias", "block1.convblock3.0.1.weight", "block1.convblock3.1.0.weight", "block1.convblock3.1.0.bias", "block1.convblock3.1.1.weight", "block1.conv2.0.weight", "block1.conv2.0.bias", "block1.conv2.1.weight", "block1.conv2.2.weight", "block1.conv2.2.bias", "block1.conv1.0.weight", "block1.conv1.0.bias", "block1.conv1.1.weight", "block1.conv1.2.weight", "block1.conv1.2.bias", "block2.convblock0.0.0.weight", "block2.convblock0.0.0.bias", "block2.convblock0.0.1.weight", "block2.convblock0.1.0.weight", "block2.convblock0.1.0.bias", "block2.convblock0.1.1.weight", "block2.convblock1.0.0.weight", "block2.convblock1.0.0.bias", 
"block2.convblock1.0.1.weight", "block2.convblock1.1.0.weight", "block2.convblock1.1.0.bias", "block2.convblock1.1.1.weight", "block2.convblock2.0.0.weight", "block2.convblock2.0.0.bias", "block2.convblock2.0.1.weight", "block2.convblock2.1.0.weight", "block2.convblock2.1.0.bias", "block2.convblock2.1.1.weight", "block2.convblock3.0.0.weight", "block2.convblock3.0.0.bias", "block2.convblock3.0.1.weight", "block2.convblock3.1.0.weight", "block2.convblock3.1.0.bias", "block2.convblock3.1.1.weight", "block2.conv2.0.weight", "block2.conv2.0.bias", "block2.conv2.1.weight", "block2.conv2.2.weight", "block2.conv2.2.bias", "block2.conv1.0.weight", "block2.conv1.0.bias", "block2.conv1.1.weight", "block2.conv1.2.weight", "block2.conv1.2.bias".
size mismatch for block0.conv0.0.0.weight: copying a param with shape torch.Size([45, 11, 3, 3]) from checkpoint, the shape in current model is torch.Size([192, 6, 3, 3]).
size mismatch for block0.conv0.0.0.bias: copying a param with shape torch.Size([45]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for block0.conv0.0.1.weight: copying a param with shape torch.Size([45]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for block0.conv0.1.0.weight: copying a param with shape torch.Size([90, 45, 3, 3]) from checkpoint, the shape in current model is torch.Size([384, 192, 3, 3]).
size mismatch for block0.conv0.1.0.bias: copying a param with shape torch.Size([90]) from checkpoint, the shape in current model is torch.Size([384]).
size mismatch for block0.conv0.1.1.weight: copying a param with shape torch.Size([90]) from checkpoint, the shape in current model is torch.Size([384]).
size mismatch for block1.conv0.0.0.weight: copying a param with shape torch.Size([45, 11, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 10, 3, 3]).
size mismatch for block1.conv0.0.0.bias: copying a param with shape torch.Size([45]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for block1.conv0.0.1.weight: copying a param with shape torch.Size([45]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for block1.conv0.1.0.weight: copying a param with shape torch.Size([90, 45, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for block1.conv0.1.0.bias: copying a param with shape torch.Size([90]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for block1.conv0.1.1.weight: copying a param with shape torch.Size([90]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for block2.conv0.0.0.weight: copying a param with shape torch.Size([45, 11, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 10, 3, 3]).
size mismatch for block2.conv0.0.0.bias: copying a param with shape torch.Size([45]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for block2.conv0.0.1.weight: copying a param with shape torch.Size([45]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for block2.conv0.1.0.weight: copying a param with shape torch.Size([90, 45, 3, 3]) from checkpoint, the shape in current model is torch.Size([192, 96, 3, 3]).
size mismatch for block2.conv0.1.0.bias: copying a param with shape torch.Size([90]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for block2.conv0.1.1.weight: copying a param with shape torch.Size([90]) from checkpoint, the shape in current model is torch.Size([192]).
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/france1/opencv/morefps/arXiv2020-RIFE/inference_video.py", line 93, in <module>
from train_log.oldmodel.RIFE_HDv3 import Model
ModuleNotFoundError: No module named 'train_log.oldmodel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/france1/opencv/morefps/arXiv2020-RIFE/inference_video.py", line 100, in <module>
model.load_model(args.modelDir, -1)
File "/home/france1/opencv/morefps/arXiv2020-RIFE/model/oldmodel/RIFE_HD.py", line 178, in load_model
self.flownet.load_state_dict(
File "/home/france1/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for IFNet:
Missing key(s) in state_dict: "block0.conv0.0.weight", "block0.conv0.1.weight", "block0.conv0.1.bias", "block0.conv0.1.running_mean", "block0.conv0.1.running_var", "block0.conv0.2.weight", "block0.res0.conv1.0.weight", "block0.res0.conv1.1.weight", "block0.res0.conv1.1.bias", "block0.res0.conv1.1.running_mean", "block0.res0.conv1.1.running_var", "block0.res0.conv1.2.weight", "block0.res0.conv2.0.weight", "block0.res0.conv2.1.weight", "block0.res0.conv2.1.bias", "block0.res0.conv2.1.running_mean", "block0.res0.conv2.1.running_var", "block0.res0.relu1.weight", "block0.res0.relu2.weight", "block0.res0.fc1.weight", "block0.res0.fc2.weight", "block0.res1.conv1.0.weight", "block0.res1.conv1.1.weight", "block0.res1.conv1.1.bias", "block0.res1.conv1.1.running_mean", "block0.res1.conv1.1.running_var", "block0.res1.conv1.2.weight", "block0.res1.conv2.0.weight", "block0.res1.conv2.1.weight", "block0.res1.conv2.1.bias", "block0.res1.conv2.1.running_mean", "block0.res1.conv2.1.running_var", "block0.res1.relu1.weight", "block0.res1.relu2.weight", "block0.res1.fc1.weight", "block0.res1.fc2.weight", "block0.res2.conv1.0.weight", "block0.res2.conv1.1.weight", "block0.res2.conv1.1.bias", "block0.res2.conv1.1.running_mean", "block0.res2.conv1.1.running_var", "block0.res2.conv1.2.weight", "block0.res2.conv2.0.weight", "block0.res2.conv2.1.weight", "block0.res2.conv2.1.bias", "block0.res2.conv2.1.running_mean", "block0.res2.conv2.1.running_var", "block0.res2.relu1.weight", "block0.res2.relu2.weight", "block0.res2.fc1.weight", "block0.res2.fc2.weight", "block0.res3.conv1.0.weight", "block0.res3.conv1.1.weight", "block0.res3.conv1.1.bias", "block0.res3.conv1.1.running_mean", "block0.res3.conv1.1.running_var", "block0.res3.conv1.2.weight", "block0.res3.conv2.0.weight", "block0.res3.conv2.1.weight", "block0.res3.conv2.1.bias", "block0.res3.conv2.1.running_mean", "block0.res3.conv2.1.running_var", "block0.res3.relu1.weight", "block0.res3.relu2.weight", "block0.res3.fc1.weight", "block0.res3.fc2.weight", "block0.res4.conv1.0.weight", "block0.res4.conv1.1.weight", "block0.res4.conv1.1.bias", "block0.res4.conv1.1.running_mean", "block0.res4.conv1.1.running_var", "block0.res4.conv1.2.weight", "block0.res4.conv2.0.weight", "block0.res4.conv2.1.weight", "block0.res4.conv2.1.bias", "block0.res4.conv2.1.running_mean", "block0.res4.conv2.1.running_var", "block0.res4.relu1.weight", "block0.res4.relu2.weight", "block0.res4.fc1.weight", "block0.res4.fc2.weight", "block0.res5.conv1.0.weight", "block0.res5.conv1.1.weight", "block0.res5.conv1.1.bias", "block0.res5.conv1.1.running_mean", "block0.res5.conv1.1.running_var", "block0.res5.conv1.2.weight", "block0.res5.conv2.0.weight", "block0.res5.conv2.1.weight", "block0.res5.conv2.1.bias", "block0.res5.conv2.1.running_mean", "block0.res5.conv2.1.running_var", "block0.res5.relu1.weight", "block0.res5.relu2.weight", "block0.res5.fc1.weight", "block0.res5.fc2.weight", "block0.conv1.weight", "block0.conv1.bias", "block1.conv0.0.weight", "block1.conv0.1.weight", "block1.conv0.1.bias", "block1.conv0.1.running_mean", "block1.conv0.1.running_var", "block1.conv0.2.weight", "block1.res0.conv1.0.weight", "block1.res0.conv1.1.weight", "block1.res0.conv1.1.bias", "block1.res0.conv1.1.running_mean", "block1.res0.conv1.1.running_var", "block1.res0.conv1.2.weight", "block1.res0.conv2.0.weight", "block1.res0.conv2.1.weight", "block1.res0.conv2.1.bias", "block1.res0.conv2.1.running_mean", "block1.res0.conv2.1.running_var", "block1.res0.relu1.weight", "block1.res0.relu2.weight", 
"block1.res0.fc1.weight", "block1.res0.fc2.weight", "block1.res1.conv1.0.weight", "block1.res1.conv1.1.weight", "block1.res1.conv1.1.bias", "block1.res1.conv1.1.running_mean", "block1.res1.conv1.1.running_var", "block1.res1.conv1.2.weight", "block1.res1.conv2.0.weight", "block1.res1.conv2.1.weight", "block1.res1.conv2.1.bias", "block1.res1.conv2.1.running_mean", "block1.res1.conv2.1.running_var", "block1.res1.relu1.weight", "block1.res1.relu2.weight", "block1.res1.fc1.weight", "block1.res1.fc2.weight", "block1.res2.conv1.0.weight", "block1.res2.conv1.1.weight", "block1.res2.conv1.1.bias", "block1.res2.conv1.1.running_mean", "block1.res2.conv1.1.running_var", "block1.res2.conv1.2.weight", "block1.res2.conv2.0.weight", "block1.res2.conv2.1.weight", "block1.res2.conv2.1.bias", "block1.res2.conv2.1.running_mean", "block1.res2.conv2.1.running_var", "block1.res2.relu1.weight", "block1.res2.relu2.weight", "block1.res2.fc1.weight", "block1.res2.fc2.weight", "block1.res3.conv1.0.weight", "block1.res3.conv1.1.weight", "block1.res3.conv1.1.bias", "block1.res3.conv1.1.running_mean", "block1.res3.conv1.1.running_var", "block1.res3.conv1.2.weight", "block1.res3.conv2.0.weight", "block1.res3.conv2.1.weight", "block1.res3.conv2.1.bias", "block1.res3.conv2.1.running_mean", "block1.res3.conv2.1.running_var", "block1.res3.relu1.weight", "block1.res3.relu2.weight", "block1.res3.fc1.weight", "block1.res3.fc2.weight", "block1.res4.conv1.0.weight", "block1.res4.conv1.1.weight", "block1.res4.conv1.1.bias", "block1.res4.conv1.1.running_mean", "block1.res4.conv1.1.running_var", "block1.res4.conv1.2.weight", "block1.res4.conv2.0.weight", "block1.res4.conv2.1.weight", "block1.res4.conv2.1.bias", "block1.res4.conv2.1.running_mean", "block1.res4.conv2.1.running_var", "block1.res4.relu1.weight", "block1.res4.relu2.weight", "block1.res4.fc1.weight", "block1.res4.fc2.weight", "block1.res5.conv1.0.weight", "block1.res5.conv1.1.weight", "block1.res5.conv1.1.bias", "block1.res5.conv1.1.running_mean", "block1.res5.conv1.1.running_var", "block1.res5.conv1.2.weight", "block1.res5.conv2.0.weight", "block1.res5.conv2.1.weight", "block1.res5.conv2.1.bias", "block1.res5.conv2.1.running_mean", "block1.res5.conv2.1.running_var", "block1.res5.relu1.weight", "block1.res5.relu2.weight", "block1.res5.fc1.weight", "block1.res5.fc2.weight", "block1.conv1.weight", "block1.conv1.bias", "block2.conv0.0.weight", "block2.conv0.1.weight", "block2.conv0.1.bias", "block2.conv0.1.running_mean", "block2.conv0.1.running_var", "block2.conv0.2.weight", "block2.res0.conv1.0.weight", "block2.res0.conv1.1.weight", "block2.res0.conv1.1.bias", "block2.res0.conv1.1.running_mean", "block2.res0.conv1.1.running_var", "block2.res0.conv1.2.weight", "block2.res0.conv2.0.weight", "block2.res0.conv2.1.weight", "block2.res0.conv2.1.bias", "block2.res0.conv2.1.running_mean", "block2.res0.conv2.1.running_var", "block2.res0.relu1.weight", "block2.res0.relu2.weight", "block2.res0.fc1.weight", "block2.res0.fc2.weight", "block2.res1.conv1.0.weight", "block2.res1.conv1.1.weight", "block2.res1.conv1.1.bias", "block2.res1.conv1.1.running_mean", "block2.res1.conv1.1.running_var", "block2.res1.conv1.2.weight", "block2.res1.conv2.0.weight", "block2.res1.conv2.1.weight", "block2.res1.conv2.1.bias", "block2.res1.conv2.1.running_mean", "block2.res1.conv2.1.running_var", "block2.res1.relu1.weight", "block2.res1.relu2.weight", "block2.res1.fc1.weight", "block2.res1.fc2.weight", "block2.res2.conv1.0.weight", "block2.res2.conv1.1.weight", "block2.res2.conv1.1.bias", 
"block2.res2.conv1.1.running_mean", "block2.res2.conv1.1.running_var", "block2.res2.conv1.2.weight", "block2.res2.conv2.0.weight", "block2.res2.conv2.1.weight", "block2.res2.conv2.1.bias", "block2.res2.conv2.1.running_mean", "block2.res2.conv2.1.running_var", "block2.res2.relu1.weight", "block2.res2.relu2.weight", "block2.res2.fc1.weight", "block2.res2.fc2.weight", "block2.res3.conv1.0.weight", "block2.res3.conv1.1.weight", "block2.res3.conv1.1.bias", "block2.res3.conv1.1.running_mean", "block2.res3.conv1.1.running_var", "block2.res3.conv1.2.weight", "block2.res3.conv2.0.weight", "block2.res3.conv2.1.weight", "block2.res3.conv2.1.bias", "block2.res3.conv2.1.running_mean", "block2.res3.conv2.1.running_var", "block2.res3.relu1.weight", "block2.res3.relu2.weight", "block2.res3.fc1.weight", "block2.res3.fc2.weight", "block2.res4.conv1.0.weight", "block2.res4.conv1.1.weight", "block2.res4.conv1.1.bias", "block2.res4.conv1.1.running_mean", "block2.res4.conv1.1.running_var", "block2.res4.conv1.2.weight", "block2.res4.conv2.0.weight", "block2.res4.conv2.1.weight", "block2.res4.conv2.1.bias", "block2.res4.conv2.1.running_mean", "block2.res4.conv2.1.running_var", "block2.res4.relu1.weight", "block2.res4.relu2.weight", "block2.res4.fc1.weight", "block2.res4.fc2.weight", "block2.res5.conv1.0.weight", "block2.res5.conv1.1.weight", "block2.res5.conv1.1.bias", "block2.res5.conv1.1.running_mean", "block2.res5.conv1.1.running_var", "block2.res5.conv1.2.weight", "block2.res5.conv2.0.weight", "block2.res5.conv2.1.weight", "block2.res5.conv2.1.bias", "block2.res5.conv2.1.running_mean", "block2.res5.conv2.1.running_var", "block2.res5.relu1.weight", "block2.res5.relu2.weight", "block2.res5.fc1.weight", "block2.res5.fc2.weight", "block2.conv1.weight", "block2.conv1.bias", "block3.conv0.0.weight", "block3.conv0.1.weight", "block3.conv0.1.bias", "block3.conv0.1.running_mean", "block3.conv0.1.running_var", "block3.conv0.2.weight", "block3.res0.conv1.0.weight", "block3.res0.conv1.1.weight", "block3.res0.conv1.1.bias", "block3.res0.conv1.1.running_mean", "block3.res0.conv1.1.running_var", "block3.res0.conv1.2.weight", "block3.res0.conv2.0.weight", "block3.res0.conv2.1.weight", "block3.res0.conv2.1.bias", "block3.res0.conv2.1.running_mean", "block3.res0.conv2.1.running_var", "block3.res0.relu1.weight", "block3.res0.relu2.weight", "block3.res0.fc1.weight", "block3.res0.fc2.weight", "block3.res1.conv1.0.weight", "block3.res1.conv1.1.weight", "block3.res1.conv1.1.bias", "block3.res1.conv1.1.running_mean", "block3.res1.conv1.1.running_var", "block3.res1.conv1.2.weight", "block3.res1.conv2.0.weight", "block3.res1.conv2.1.weight", "block3.res1.conv2.1.bias", "block3.res1.conv2.1.running_mean", "block3.res1.conv2.1.running_var", "block3.res1.relu1.weight", "block3.res1.relu2.weight", "block3.res1.fc1.weight", "block3.res1.fc2.weight", "block3.res2.conv1.0.weight", "block3.res2.conv1.1.weight", "block3.res2.conv1.1.bias", "block3.res2.conv1.1.running_mean", "block3.res2.conv1.1.running_var", "block3.res2.conv1.2.weight", "block3.res2.conv2.0.weight", "block3.res2.conv2.1.weight", "block3.res2.conv2.1.bias", "block3.res2.conv2.1.running_mean", "block3.res2.conv2.1.running_var", "block3.res2.relu1.weight", "block3.res2.relu2.weight", "block3.res2.fc1.weight", "block3.res2.fc2.weight", "block3.res3.conv1.0.weight", "block3.res3.conv1.1.weight", "block3.res3.conv1.1.bias", "block3.res3.conv1.1.running_mean", "block3.res3.conv1.1.running_var", "block3.res3.conv1.2.weight", "block3.res3.conv2.0.weight", 
"block3.res3.conv2.1.weight", "block3.res3.conv2.1.bias", "block3.res3.conv2.1.running_mean", "block3.res3.conv2.1.running_var", "block3.res3.relu1.weight", "block3.res3.relu2.weight", "block3.res3.fc1.weight", "block3.res3.fc2.weight", "block3.res4.conv1.0.weight", "block3.res4.conv1.1.weight", "block3.res4.conv1.1.bias", "block3.res4.conv1.1.running_mean", "block3.res4.conv1.1.running_var", "block3.res4.conv1.2.weight", "block3.res4.conv2.0.weight", "block3.res4.conv2.1.weight", "block3.res4.conv2.1.bias", "block3.res4.conv2.1.running_mean", "block3.res4.conv2.1.running_var", "block3.res4.relu1.weight", "block3.res4.relu2.weight", "block3.res4.fc1.weight", "block3.res4.fc2.weight", "block3.res5.conv1.0.weight", "block3.res5.conv1.1.weight", "block3.res5.conv1.1.bias", "block3.res5.conv1.1.running_mean", "block3.res5.conv1.1.running_var", "block3.res5.conv1.2.weight", "block3.res5.conv2.0.weight", "block3.res5.conv2.1.weight", "block3.res5.conv2.1.bias", "block3.res5.conv2.1.running_mean", "block3.res5.conv2.1.running_var", "block3.res5.relu1.weight", "block3.res5.relu2.weight", "block3.res5.fc1.weight", "block3.res5.fc2.weight", "block3.conv1.weight", "block3.conv1.bias".
Unexpected key(s) in state_dict: "block_tea.conv0.0.0.weight", "block_tea.conv0.0.0.bias", "block_tea.conv0.0.1.weight", "block_tea.conv0.1.0.weight", "block_tea.conv0.1.0.bias", "block_tea.conv0.1.1.weight", "block_tea.convblock0.0.0.weight", "block_tea.convblock0.0.0.bias", "block_tea.convblock0.0.1.weight", "block_tea.convblock0.1.0.weight", "block_tea.convblock0.1.0.bias", "block_tea.convblock0.1.1.weight", "block_tea.convblock1.0.0.weight", "block_tea.convblock1.0.0.bias", "block_tea.convblock1.0.1.weight", "block_tea.convblock1.1.0.weight", "block_tea.convblock1.1.0.bias", "block_tea.convblock1.1.1.weight", "block_tea.convblock2.0.0.weight", "block_tea.convblock2.0.0.bias", "block_tea.convblock2.0.1.weight", "block_tea.convblock2.1.0.weight", "block_tea.convblock2.1.0.bias", "block_tea.convblock2.1.1.weight", "block_tea.convblock3.0.0.weight", "block_tea.convblock3.0.0.bias", "block_tea.convblock3.0.1.weight", "block_tea.convblock3.1.0.weight", "block_tea.convblock3.1.0.bias", "block_tea.convblock3.1.1.weight", "block_tea.conv1.0.weight", "block_tea.conv1.0.bias", "block_tea.conv1.1.weight", "block_tea.conv1.2.weight", "block_tea.conv1.2.bias", "block_tea.conv2.0.weight", "block_tea.conv2.0.bias", "block_tea.conv2.1.weight", "block_tea.conv2.2.weight", "block_tea.conv2.2.bias", "block0.convblock0.0.0.weight", "block0.convblock0.0.0.bias", "block0.convblock0.0.1.weight", "block0.convblock0.1.0.weight", "block0.convblock0.1.0.bias", "block0.convblock0.1.1.weight", "block0.convblock1.0.0.weight", "block0.convblock1.0.0.bias", "block0.convblock1.0.1.weight", "block0.convblock1.1.0.weight", "block0.convblock1.1.0.bias", "block0.convblock1.1.1.weight", "block0.convblock2.0.0.weight", "block0.convblock2.0.0.bias", "block0.convblock2.0.1.weight", "block0.convblock2.1.0.weight", "block0.convblock2.1.0.bias", "block0.convblock2.1.1.weight", "block0.convblock3.0.0.weight", "block0.convblock3.0.0.bias", "block0.convblock3.0.1.weight", "block0.convblock3.1.0.weight", "block0.convblock3.1.0.bias", "block0.convblock3.1.1.weight", "block0.conv2.0.weight", "block0.conv2.0.bias", "block0.conv2.1.weight", "block0.conv2.2.weight", "block0.conv2.2.bias", "block0.conv0.0.0.weight", "block0.conv0.0.0.bias", "block0.conv0.0.1.weight", "block0.conv0.1.0.weight", "block0.conv0.1.0.bias", "block0.conv0.1.1.weight", "block0.conv1.0.weight", "block0.conv1.0.bias", "block0.conv1.1.weight", "block0.conv1.2.weight", "block0.conv1.2.bias", "block1.convblock0.0.0.weight", "block1.convblock0.0.0.bias", "block1.convblock0.0.1.weight", "block1.convblock0.1.0.weight", "block1.convblock0.1.0.bias", "block1.convblock0.1.1.weight", "block1.convblock1.0.0.weight", "block1.convblock1.0.0.bias", "block1.convblock1.0.1.weight", "block1.convblock1.1.0.weight", "block1.convblock1.1.0.bias", "block1.convblock1.1.1.weight", "block1.convblock2.0.0.weight", "block1.convblock2.0.0.bias", "block1.convblock2.0.1.weight", "block1.convblock2.1.0.weight", "block1.convblock2.1.0.bias", "block1.convblock2.1.1.weight", "block1.convblock3.0.0.weight", "block1.convblock3.0.0.bias", "block1.convblock3.0.1.weight", "block1.convblock3.1.0.weight", "block1.convblock3.1.0.bias", "block1.convblock3.1.1.weight", "block1.conv2.0.weight", "block1.conv2.0.bias", "block1.conv2.1.weight", "block1.conv2.2.weight", "block1.conv2.2.bias", "block1.conv0.0.0.weight", "block1.conv0.0.0.bias", "block1.conv0.0.1.weight", "block1.conv0.1.0.weight", "block1.conv0.1.0.bias", "block1.conv0.1.1.weight", "block1.conv1.0.weight", "block1.conv1.0.bias", 
"block1.conv1.1.weight", "block1.conv1.2.weight", "block1.conv1.2.bias", "block2.convblock0.0.0.weight", "block2.convblock0.0.0.bias", "block2.convblock0.0.1.weight", "block2.convblock0.1.0.weight", "block2.convblock0.1.0.bias", "block2.convblock0.1.1.weight", "block2.convblock1.0.0.weight", "block2.convblock1.0.0.bias", "block2.convblock1.0.1.weight", "block2.convblock1.1.0.weight", "block2.convblock1.1.0.bias", "block2.convblock1.1.1.weight", "block2.convblock2.0.0.weight", "block2.convblock2.0.0.bias", "block2.convblock2.0.1.weight", "block2.convblock2.1.0.weight", "block2.convblock2.1.0.bias", "block2.convblock2.1.1.weight", "block2.convblock3.0.0.weight", "block2.convblock3.0.0.bias", "block2.convblock3.0.1.weight", "block2.convblock3.1.0.weight", "block2.convblock3.1.0.bias", "block2.convblock3.1.1.weight", "block2.conv2.0.weight", "block2.conv2.0.bias", "block2.conv2.1.weight", "block2.conv2.2.weight", "block2.conv2.2.bias", "block2.conv0.0.0.weight", "block2.conv0.0.0.bias", "block2.conv0.0.1.weight", "block2.conv0.1.0.weight", "block2.conv0.1.0.bias", "block2.conv0.1.1.weight", "block2.conv1.0.weight", "block2.conv1.0.bias", "block2.conv1.1.weight", "block2.conv1.2.weight", "block2.conv1.2.bias".
```
What's that? I followed the instructions, but... | closed | 2021-09-27T12:22:07Z | 2021-09-29T05:39:49Z | https://github.com/hzwer/ECCV2022-RIFE/issues/200 | [] | arch-user-france1 | 4
axnsan12/drf-yasg | django | 832 | I have the same issue | # Bug Report
Internal Server Error http://127.0.0.1:8000/api/v1/docs/?format=openapi
## Description
Here are my URL patterns:

    urlpatterns_api = [
        path('', ApiHome, name="api home"),
        re_path(r'^docs/$', schema_view.with_ui('swagger', cache_timeout=0), name='schema-swagger-ui'),
        re_path(r'^docs(?P<format>\.json|\.yaml)$', schema_view.without_ui(cache_timeout=0), name='schema-json'),
        re_path(r'^redoc/$', schema_view.with_ui('redoc', cache_timeout=0), name='schema-redoc'),
    ]
| open | 2023-01-24T06:56:30Z | 2025-03-07T12:10:51Z | https://github.com/axnsan12/drf-yasg/issues/832 | [
"triage"
] | calvincaniSA | 0 |
ultralytics/yolov5 | deep-learning | 12,725 | train.py in YOLOv5 no information is displayed, program executes with no error messages, but weights are not saved | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Training
### Bug
I am running the command:
!python train.py --img 256 --epochs 1 --batch-size 16 --data dataset.yml --weights yolov5n.pt
The command is able to execute and finish, but while it executes no information is displayed, and after it finishes no weights are saved under runs/train/exp. There is no error message displayed either. Perhaps there is something wrong with the way I've organized my data?
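A hedged debugging sketch (assuming the command is launched from a notebook, where the `!` shell escape can hide a crashing child's output): run training through `subprocess` and print the captured streams explicitly.
```python
import subprocess

proc = subprocess.run(
    ["python", "train.py", "--img", "256", "--epochs", "1",
     "--batch-size", "16", "--data", "dataset.yml", "--weights", "yolov5n.pt"],
    capture_output=True, text=True,
)
print("return code:", proc.returncode)
print(proc.stdout[-2000:])
print(proc.stderr[-2000:])
```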

### Environment
-YOLO: YOLOv5
-Python 3.11.5
-OS: Windows
### Minimal Reproducible Example
!python train.py --img 256 --epochs 1 --batch-size 16 --data dataset.yml --weights yolov5n.pt
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-02-09T22:50:47Z | 2024-10-20T19:39:25Z | https://github.com/ultralytics/yolov5/issues/12725 | [
"bug",
"Stale"
] | artduurrr | 7 |
pytest-dev/pytest-cov | pytest | 322 | spurious files marked as uncovered _pytest/capture and pytest_cov/embed | when running with `pytest --cov=/src/redacted/ -n auto`
```
[run]
branch = True
concurrency =
multiprocessing
# pytest-twisted runs a twisted reactor in a greenlet
greenlet
```
```
pytest-4.5.0, py-1.8.0, pluggy-0.9.0
plugins: xdist-1.29.0, twisted-1.10, lazy-fixture-0.5.2, forked-1.0.2, cov-2.7.1
```
```
[2019-08-15T09:09:59.851Z] /opt/redacted/python/testingenv/lib/python2.7/site-packages/_pytest/capture.py 469 468 126 0 1% 5-730, 733-844
[2019-08-15T09:09:59.851Z] /opt/redacted/python/testingenv/lib/python2.7/site-packages/pytest_cov/embed.py 44 31 12 3 29% 16-22, 24-36, 52, 56, 69-97, 46->exit, 51->52, 55->56
```
I'm currently struggling to make a minimal reproducible example; if anyone has any advice I'd be grateful.
I fixed it by passing source in the coveragerc:
```
[run]
source = /src/redacted/
branch = True
concurrency =
multiprocessing
# pytest-twisted runs a twisted reactor in a greenlet
greenlet
``` | open | 2019-08-15T14:20:49Z | 2019-08-16T15:38:10Z | https://github.com/pytest-dev/pytest-cov/issues/322 | [] | graingert | 12 |
stanfordnlp/stanza | nlp | 426 | Failed to download model - The connection to nlp.stanford.edu timed out | **Describe the bug**
I'm trying to download a model using ```stanza.download("el")```. It fails with the following error:
```
Downloading https://raw.githubusercontent.com/stanfordnlp/stanza-resources/master/resources_1.0.0.json: 120kB [00:00, 597kB/s]
2020-08-12 17:36:49 INFO: Downloading default packages for language: el (Greek)...
....
....
....
requests.exceptions.ConnectionError: HTTPConnectionPool(host='nlp.stanford.edu', port=80): Max retries exceeded with url: /software/stanza/1.0.0/el/default.zip (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x12f8052e0>: Failed to establish a new connection: [Errno 60] Operation timed out'))
```
**To Reproduce**
Steps to reproduce the behavior:
1. ```stanza.download("el")```
2. Run your script and it will fail with the error above.
Alternatively you can try to load `https://nlp.stanford.edu/software/stanza/1.0.0/en/default.zip`
and the same will happen.
**Expected behavior**
I would expect the model to be downloaded successfully.
**Environment (please complete the following information):**
- OS: MacOS
- Python version: Python 3.8.5 from Anaconda
- Stanza version: 1.0.1
**Additional context**
I'm in Cyprus trying to load this model. After I read that people in China had similar issues, I used a VPN, but the same problem persists.
Is it possible that the resources server is down? I should also mention that in the last few days I could download models just fine so I'm suspecting this is a temp issue with the server.
Thanks | closed | 2020-08-12T14:29:42Z | 2022-01-30T17:39:34Z | https://github.com/stanfordnlp/stanza/issues/426 | [
"bug"
] | giorgosera | 5 |
docarray/docarray | fastapi | 1,661 | chore: draft release note v0.34.0 | # Release Note
This release contains 2 breaking changes, 3 new features, 11 bug fixes, and 2 documentation improvements.
## :bomb: Breaking Changes
### Terminate Python 3.7 support
:warning: :warning: DocArray will now require Python 3.8. We can no longer assure compatibility with Python 3.7.
We decided to drop it for two reasons:
* Several dependencies of DocArray require Python 3.8.
* Python [long-term support for 3.7 is ending](https://endoflife.date/python) this week. This means there will no longer be security updates for Python 3.7, making this a good time for us to change our requirements.
### Changes to `DocVec` Protobuf definition (#1639)
In order to fix a bug in the `DocVec` protobuf serialization described in [#1561](https://github.com/docarray/docarray/issues/1561),
we have changed the `DocVec` .proto definition.
This means that **`DocVec` objects serialized with DocArray v0.33.0 or earlier cannot be deserialized with DocArray
v.0.34.0 or later, and vice versa**.
:warning: :warning: **We strongly recommend** that everyone using Protobuf with `DocVec` upgrade to DocArray v0.34.0 or
later.
## 🆕 Features
### Allow users to check if a Document is already indexed in a DocIndex (#1633)
You can now check if a Document has already been indexed by using the `in` keyword:
```python
from docarray.index import InMemoryExactNNIndex
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
import numpy as np
class MyDoc(BaseDoc):
text: str
embedding: NdArray[128]
docs = DocList[MyDoc](
[MyDoc(text="Example text", embedding=np.random.rand(128))
for _ in range(2000)])
index = InMemoryExactNNIndex[MyDoc](docs)
assert docs[0] in index
assert MyDoc(text='New text', embedding=np.random.rand(128)) not in index
```
### Support subindexes in `InMemoryExactNNIndex` (#1617)
You can now use the [find_subindex](https://docs.docarray.org/user_guide/storing/docindex/#nested-data-with-subindex)
method with the ExactNNSearch DocIndex.
### Flexible tensor types for protobuf deserialization (#1645)
You can deserialize any `DocVec` protobuf message to any tensor type,
by passing the `tensor_type` parameter to `from_protobuf`.
This means that you can choose at deserialization time if you are working with numpy, PyTorch, or TensorFlow tensors.
```python
class MyDoc(BaseDoc):
tensor: TensorFlowTensor
da = DocVec[MyDoc](...) # doesn't matter what tensor_type is here
proto = da.to_protobuf()
da_after = DocVec[MyDoc].from_protobuf(proto, tensor_type=TensorFlowTensor)
assert isinstance(da_after.tensor, TensorFlowTensor)
```
## ⚙ Refactoring
### Add `DBConfig` to `InMemoryExactNNSearch`
`InMemoryExactNNIndex` used to take a single constructor parameter, `index_file_path`, unlike the rest of the document indexes, which accepted their own `DBConfig`. Now `index_file_path` is part of the `DBConfig`, which allows the index to be initialized from it.
This will allow us to extend this config if more parameters are needed.
The parameters of `DBConfig` can be passed at construction time as `**kwargs` making this change compatible with old
usage.
These two initializations are equivalent.
```python
from docarray.index import InMemoryExactNNIndex
db_config = InMemoryExactNNIndex.DBConfig(index_file_path='index.bin')
index = InMemoryExactNNIndex[MyDoc](db_config=db_config)
index = InMemoryExactNNIndex[MyDoc](index_file_path='index.bin')
```
## 🐞 Bug Fixes
### Allow protobuf deserialization of `BaseDoc` with `Union` type (#1655)
Serialization of `BaseDoc` types that have `Union`-typed fields of Python native types is supported.
```python
from docarray import BaseDoc
from typing import Union
class MyDoc(BaseDoc):
union_field: Union[int, str]
docs1 = DocList[MyDoc]([MyDoc(union_field="hello")])
docs2 = DocList[MyDoc].from_dataframe(docs1.to_dataframe())
assert docs1 == docs2
```
When these `Union` types involve other `BaseDoc` types, an exception is thrown.
```python
class CustomDoc(BaseDoc):
ud: Union[TextDoc, ImageDoc] = TextDoc(text='union type')
docs = DocList[CustomDoc]([CustomDoc(ud=TextDoc(text='union type'))])
# raises an Exception
DocList[CustomDoc].from_dataframe(docs.to_dataframe())
```
### Cast limit to integer when passed to `HNSWDocumentIndex` (#1657, #1656)
If you call `find` or `find_batched` on an `HNSWDocumentIndex`, the `limit` parameter will automatically be cast to
`integer`.
### Moved `default_column_config` from `RuntimeConfig` to `DBconfig` (#1648)
`default_column_config` contains specific configuration information about the columns and tables inside the backend's
database. This was previously put inside `RuntimeConfig` which caused an error because this information is required at
initialization time. This information has been moved inside `DBConfig` so you can edit it there.
```python
from docarray.index import HNSWDocumentIndex
import numpy as np
db_config = HNSWDocumentIndex.DBConfig()
db_config.default_column_config.get(np.ndarray).update({'ef': 2500})
index = HNSWDocumentIndex[MyDoc](db_config=db_config)
```
### Fix issue with Protobuf (de)serialization for DocVec (#1639)
This bug caused raw Protobuf objects to be stored as DocVec columns after they were deserialized from Protobuf, making the
data essentially inaccessible. This has now been fixed, and `DocVec` objects are identical before and after (de)serialization.
### Fix order of returned matches when `find` and `filter` combination used in `InMemoryExactNNIndex` (#1642)
Hybrid search (find+filter) for `InMemoryExactNNIndex` was prioritizing low similarities (lower scores) for returned matches. This is fixed by adding an option to sort matches in reverse order based on their scores.
```python
# db is an InMemoryExactNNIndex[MyDoc] built as in the examples above
# prepare a query
q_doc = MyDoc(embedding=np.random.rand(128), text='query')
query = (
db.build_query()
.find(query=q_doc, search_field='embedding')
.filter(filter_query={'text': {'$exists': True}})
.build()
)
results = db.execute_query(query)
# Before: results was sorted from worst to best matches
# Now: It's sorted in the correct order, showing better matches first
```
### Working with external Qdrant collections (#1632)
Using `QdrantDocumentIndex` to connect to a Qdrant DB initialized outside of `docarray` used to raise a `KeyError`.
This has been fixed, and now you can use `QdrantDocumentIndex` to connect to externally initialized collections.
## Other bug fixes
- Update text search to match Weaviate client's new sig (#1654)
- Fix `DocVec` equality (#1641, #1663)
- Fix exception when `summary()` is called for `LegacyDocument`. (#1637)
- Fix `DocList` and `DocVec` coercion. (#1568)
- Fix `update()` on `BaseDoc` with tensor fields (#1628)
## 📗 Documentation Improvements
- Enhance DocVec section (#1658)
- Qdrant in memory usage (#1634)
## 🤟 Contributors
We would like to thank all contributors to this release:
- Johannes Messner (@JohannesMessner)
- Nikolas Pitsillos (@npitsillos)
- Shukri (@hsm207)
- Kacper Łukawski (@kacperlukawski)
- Aman Agarwal (@agaraman0)
- maxwelljin (@maxwelljin)
- samsja (@samsja)
- Saba Sturua (@jupyterjazz)
- Joan Fontanals (@JoanFM)
| closed | 2023-06-19T13:01:46Z | 2023-06-21T08:25:02Z | https://github.com/docarray/docarray/issues/1661 | [] | samsja | 0 |
stanford-oval/storm | nlp | 193 | [BUG] Outputting numerous 403 errors when running Co-STORM | **Describe the bug**
When running the Co-STORM example, I get numerous 403 errors output in the terminal. These errors are then followed by some trafilatura errors and errors complaining about 'The API deployment for this resource does not exist'.
Despite all this, the final report is seemingly output just fine. The only issue is the terminal is impossible to follow as a result of the errors.
This issue is similar to [#133](https://github.com/stanford-oval/storm/issues/133) where I also commented as I experienced similar results in the past with STORM. I have tried multiple networks, and this has not had an impact.
Are the 403 errors a result of these sites not allowing scraping and hence not included in the final report?
**To Reproduce**
Report following things
1. Setup environment according to [run_costorm_gpt.py](https://github.com/stanford-oval/storm/blob/main/examples/costorm_examples/run_costorm_gpt.py#L1-L14)
2. Run it
**Screenshots**
_Error while requesting URL 403_

followed by
_Trafilatura errors and 'An error occurred for text: root, <topic_here>' with 404 code_

**Environment:**
- OS: Ubuntu [WSL2]
- Retriever: Bing (though I tested with You.com and got a similar result)
- LLM Provider: Azure OpenAI
| closed | 2024-10-01T22:06:31Z | 2024-10-28T15:51:53Z | https://github.com/stanford-oval/storm/issues/193 | [] | ColtonBehannon | 1 |
minimaxir/textgenrnn | tensorflow | 121 | a info keras | open | 2019-04-12T22:35:22Z | 2019-04-19T19:54:19Z | https://github.com/minimaxir/textgenrnn/issues/121 | [] | phoenixfire71 | 2 |
|
mherrmann/helium | web-scraping | 107 | Issue when using Helium with Tkinter | Can't use the library at all when combining with tkinter and selenium:
```
Traceback (most recent call last):
File "C:\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\customtkinter\windows\widgets\ctk_button.py", line 553, in _clicked
self._command()
File "c:\Users\danie\Documents\Coding\ADvantage\userinterface.py", line 301, in start_advertising
Facebook_Init(self.text_to_paste, self.path_to_images, self.username, self.password)
File "c:\Users\danie\Documents\Coding\ADvantage\open_facebook.py", line 22, in __init__
self.open_facebook(text, path, username, password)
File "c:\Users\danie\Documents\Coding\ADvantage\open_facebook.py", line 36, in open_facebook
self.driver = webdriver.Chrome(options=option)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 76, in __init__
RemoteWebDriver.__init__(
File "C:\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 157, in __init__
self.start_session(capabilities, browser_profile)
File "C:\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 252, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 319, in execute
response = self.command_executor.execute(driver_command, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 374, in execute
return self._request(command_info[0], url, body=data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 397, in _request
resp = self._conn.request(method, url, body=body, headers=headers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\_request_methods.py", line 118, in request
return self.request_encode_body(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\_request_methods.py", line 217, in request_encode_body
return self.urlopen(method, url, **extra_kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\poolmanager.py", line 422, in urlopen
conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\poolmanager.py", line 303, in connection_from_host
return self.connection_from_context(request_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\poolmanager.py", line 328, in connection_from_context
return self.connection_from_pool_key(pool_key, request_context=request_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\poolmanager.py", line 351, in connection_from_pool_key
pool = self._new_pool(scheme, host, port, request_context=request_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\poolmanager.py", line 265, in _new_pool
return pool_cls(host, port, **request_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\connectionpool.py", line 196, in __init__
timeout = Timeout.from_float(timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\util\timeout.py", line 190, in from_float
return Timeout(read=timeout, connect=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\util\timeout.py", line 119, in __init__
self._connect = self._validate_timeout(connect, "connect")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\util\timeout.py", line 156, in _validate_timeout
raise ValueError(
ValueError: Timeout value connect was <object object at 0x00000229CA994C60>, but it must be an int, float or None.
```
Doing this breaks my Selenium installation; I have to uninstall selenium and reinstall it for Selenium to work again. | open | 2023-05-23T11:47:30Z | 2023-06-08T10:52:08Z | https://github.com/mherrmann/helium/issues/107 | [] | DanielBooysenjr | 1 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 105 | Remove default values for optional fields in dynamically generated resume | ## **Steps to Reproduce**
1. Edit the **`plain_text_resume.yaml`** file
2. Remove the **`exams`** section under **`education_details`**
3. Generate a resume using the AI job applier
## **Current Behavior**
The dynamically generated resume includes default values for courses and GPAs that were not provided in the YAML file. For example:
```
Introduction to Machine Learning → GPA: 3.7/4.0
Data Structures and Algorithms → GPA: 3.5/4.0
Database Management Systems → GPA: 3.6/4.0
Software Engineering Principles → GPA: 3.8/4.0
Business Analytics → GPA: 3.5/4.0
```
## **Expected Behavior**
When optional fields like `exams` are empty or not provided, the AI should not generate any default values. These sections should be omitted entirely from the generated resume.
This appears to come from the LLM: since no classes/grades are provided (`exams=[]`), it makes them up and adds them to the resume. `exams` should be treated as an optional field, along with some others like `Certifications`.
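For illustration, a sketch of the kind of guard that would address this — the helper name and data layout here are hypothetical, not AIHawk's actual code — where coursework is only added to the prompt when `exam` is actually populated in the YAML:
```python
def build_education_prompt(education: dict) -> str:
    # Hypothetical helper that assembles the education section of the resume prompt.
    lines = [
        f"{education['degree']} in {education['field_of_study']}, "
        f"{education['university']} ({education['graduation_year']}), GPA {education['gpa']}"
    ]
    exams = education.get("exam") or []
    if exams:
        # only mention coursework when the YAML actually provides it
        lines.append("Relevant coursework: " + ", ".join(str(e) for e in exams))
    else:
        lines.append("Do NOT invent or list any courses, grades, or GPAs beyond the overall GPA.")
    return "\n".join(lines)
```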
open_ai_calls.json
```
{
"model": "gpt-4o-mini-2024-07-18",
"time": "2024-08-28 07:55:26",
"prompts": {
"prompt_1": "\\nAct as an HR expert and resume writer with a specialization in creating ATS-friendly resumes. Your task is to articulate the educational background for a resume, ensuring it aligns with the provided job description. For each educational entry, ensure you include:\\n\\n1. **Institution Name and Location**: Specify the university or educational institution’s name and location.\\n2. **Degree and Field of Study**: Clearly indicate the degree earned and the field of study.\\n3. **GPA**: Include your GPA if it is strong and relevant.\\n4. **Relevant Coursework**: List key courses with their grades to showcase your academic strengths.\\n\\nEnsure the information is clearly presented and emphasizes academic achievements that align with the job description.\\n\\n- **My information:** \\n [Education(degree='Bachelor Of Science', university='University of Utah', gpa='3.6/4', graduation_year='2017', field_of_study='Business', exam=[])]\\n\\n- **Job Description:** \\n # Analysis of Senior Software Engineer - AI Applications Position\\n\\n## Technical Skills\\n- **Machine Learning and AI**: Expertise in current and emerging AI technologies, with a focus on utilizing Large Language Models (LLMs).\\n- **Programming Languages**: Proficiency in modern languages such as Python, JavaScript, Go, or Rust.\\n- **Microservices Architecture**: Experience in building and understanding microservices, including trade-offs involved.\\n- **Data Handling**: Skills in data extraction, transformation, and loading (ETL) processes; normalizing, cleansing, and validating data.\\n- **Database Management**: Ability to design and implement database schemas, optimize queries, and manage overall database performance.\\n- **API Integration**: Experience in integrating with backend services and APIs, including designing and implementing APIs for data access.\\n- **Model Development**: Knowledge of machine learning frameworks to develop model training pipelines and monitor model performance.\\n\\n## Soft Skills\\n- **Problem-Solving**: Proven ability to navigate ambiguous or undefined problems and think abstractly to derive solutions.\\n- **Communication**: Strong capability to articulate technical challenges and solutions to non-technical stakeholders effectively.\\n- **Collaboration**: Ability to work effectively in teams, collaborating with engineers, product managers, subject matter experts, and designers.\\n- **Project Management**: Skills in self-organization, prioritizing tasks, and managing resources.\\n\\n## Educational Qualifications and Certifications\\n- **Degree**: Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.\\n- **Experience**: Minimum of 8 years of relevant work experience in software engineering, machine learning, or data science.\\n\\n## Professional Experience\\n- **Software Development**: Significant experience in software engineering and development, particularly with modern programming languages.\\n- **AI Applications**: Previous involvement in developing AI applications and tools, specifically around machine learning and natural language processing.\\n- **Data Management**: Proven experience in handling data manipulation, normalization, and validation within software applications.\\n\\n## Role Evolution\\n- **AI Advancements**: As AI technologies, particularly in natural language processing, continue to evolve, the demand for engineers skilled in these areas will grow. 
Familiarity with newer models and frameworks will become increasingly important.\\n- **Interdisciplinary Skills**: Future roles may require a greater focus on collaboration across disciplines, such as legal and financial sectors, thus necessitating stronger domain knowledge beyond pure technical skills.\\n- **Cloud Services and Open Source**: The continued shift towards cloud-based services and open-source technologies will require engineers to stay updated with relevant tools and platforms, adapting their workflows and skills accordingly.\\n- **User-Centric Design**: As products become more user-friendly and centered around client engagement, skills in user experience design may become beneficial for software engineers in this role.\\n\\nThis comprehensive overview outlines the critical competencies required for the Senior Software Engineer - AI Applications role, serving as a guideline for the recruitment process to identify the most qualified candidates.\\n\\n- **Template to Use**\\n```\\n<section id=\\"education\\">\\n <h2>Education</h2>\\n <div class=\\"entry\\">\\n <div class=\\"entry-header\\">\\n <span class=\\"entry-name\\">[University Name]</span>\\n <span class=\\"entry-location\\">[Location] </span>\\n </div>\\n <div class=\\"entry-details\\">\\n <span class=\\"entry-title\\">[Degree] in [Field of Study] | GPA: [Your GPA]/4.0</span>\\n <span class=\\"entry-year\\">[Start Year] – [End Year] </span>\\n </div>\\n <ul class=\\"compact-list\\">\\n <li>[Course Name] → GPA: [Grade]/4.0</li>\\n <li>[Course Name] → GPA: [Grade]/4.0</li>\\n <li>[Course Name] → GPA: [Grade]/4.0</li>\\n <li>[Course Name] → GPA: [Grade]/4.0</li>\\n <li>[Course Name] → GPA: [Grade]/4.0</li>\\n </ul>\\n </div>\\n</section>\\n```\\nThe results should be provided in html format, Provide only the html code for the resume, without any explanations or additional text and also without ```html ```"
},
"replies": "<section id=\\"education\\">\\n <h2>Education</h2>\\n <div class=\\"entry\\">\\n <div class=\\"entry-header\\">\\n <span class=\\"entry-name\\">University of Utah</span>\\n <span class=\\"entry-location\\">Salt Lake City, UT</span>\\n </div>\\n <div class=\\"entry-details\\">\\n <span class=\\"entry-title\\">Bachelor Of Science in Business | GPA: 3.6/4.0</span>\\n <span class=\\"entry-year\\">2013 – 2017</span>\\n </div>\\n <ul class=\\"compact-list\\">\\n <li>Introduction to Machine Learning → GPA: 3.7/4.0</li>\\n <li>Data Structures and Algorithms → GPA: 3.5/4.0</li>\\n <li>Database Management Systems → GPA: 3.6/4.0</li>\\n <li>Software Engineering Principles → GPA: 3.8/4.0</li>\\n <li>Business Analytics → GPA: 3.5/4.0</li>\\n </ul>\\n </div>\\n</section>",
"total_tokens": 1324,
"input_tokens": 1079,
"output_tokens": 245,
"total_cost": 0.00030885
}
``` | closed | 2024-08-28T14:18:25Z | 2024-10-23T23:42:12Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/105 | [] | zackcpetersen | 4 |
huggingface/transformers | pytorch | 36,817 | Add EuroBert Model To Config | ### Model description
I would like to have the EuroBert model added to the config (configuration_auto.py) :)
Especially the 210M version:
https://huggingface.co/EuroBERT
This would probably solve an issue in Flair:
https://github.com/flairNLP/flair/issues/3630
```
File "C:\Users\nick\PycharmProjects\flair\.venv\Lib\site-packages\flair\embeddings\transformer.py", line 1350, in from_params
config_class = CONFIG_MAPPING[model_type]
~~~~~~~~~~~~~~^^^^^^^^^^^^
File "C:\Users\nick\PycharmProjects\flair\.venv\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 794, in __getitem__
raise KeyError(key)
KeyError: 'eurobert'
```
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
@tomaarsen | open | 2025-03-19T09:56:20Z | 2025-03-19T15:27:30Z | https://github.com/huggingface/transformers/issues/36817 | [
"New model"
] | zynos | 1 |
biolab/orange3 | scikit-learn | 6,831 | TypeError: entry_points() got an unexpected keyword argument 'group' | I tried installing Orange on Zorin OS using these commands:
```
sudo apt-get install python3-pyqt5.qtsvg
pip3 install PyQt5 PyQtWebEngine
pip3 install pyqt5==5.15
pip3 install orange3
python3 -m Orange.canvas
```
And I'm getting this error:
```
stefan@glitch:~/dev/orange$ python3 -m Orange.canvas
/home/stefan/.local/lib/python3.8/site-packages/Orange/canvas/__main__.py:332: DeprecationWarning: sipPyTypeDict() is deprecated, the extension module should use sipPyTypeDictRef() instead
class SendUsageStatistics(QThread):
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/stefan/.local/lib/python3.8/site-packages/Orange/canvas/__main__.py", line 467, in <module>
sys.exit(main())
File "/home/stefan/.local/lib/python3.8/site-packages/Orange/canvas/__main__.py", line 463, in main
return OMain().run(argv)
File "/home/stefan/.local/lib/python3.8/site-packages/Orange/canvas/__main__.py", line 353, in run
super().run(argv)
File "/home/stefan/.local/lib/python3.8/site-packages/orangecanvas/main.py", line 199, in run
self.setup_application()
File "/home/stefan/.local/lib/python3.8/site-packages/Orange/canvas/__main__.py", line 416, in setup_application
self._pull_notifs = pull_notifications()
File "/home/stefan/.local/lib/python3.8/site-packages/Orange/canvas/__main__.py", line 194, in pull_notifications
installed_list = [ep.dist for ep in config.addon_entry_points()
File "/home/stefan/.local/lib/python3.8/site-packages/Orange/canvas/config.py", line 285, in addon_entry_points
return Config.addon_entry_points()
File "/home/stefan/.local/lib/python3.8/site-packages/Orange/canvas/config.py", line 156, in addon_entry_points
return Config.widgets_entry_points()
File "/home/stefan/.local/lib/python3.8/site-packages/Orange/canvas/config.py", line 148, in widgets_entry_points
entry_points(group=WIDGETS_ENTRY),
TypeError: entry_points() got an unexpected keyword argument 'group'
```
Any ideas? | closed | 2024-06-13T12:49:01Z | 2024-07-11T17:08:39Z | https://github.com/biolab/orange3/issues/6831 | [
"bug report"
] | stefan-reich | 2 |
django-oscar/django-oscar | django | 3,451 | Category management is broken for django-oscar 2.1.0 | ### Issue Summary
After upgrading django-oscar to 2.1.0, categories no longer seem to work. It is impossible to add a new category or change the existing ones. Child categories are also affected: I couldn't delete them.
To be sure, I created a new environment with a fresh install, and unfortunately the problem occurred there too. It is impossible to create even the first category.
Everything else related to the django-oscar upgrade went without any problems. This is the first time I'm experiencing something serious with Oscar.
Remark: the URL for accessing the dashboard is changed in my case.
Exception comes to this:
```(1093, "You can't specify target table 'catalogue_category' for update in FROM clause")```
Here is the executed query:
```
(b'UPDATE `catalogue_category` SET `ancestors_are_public` = NOT EXISTS(SELECT U' b'0.`id` FROM `catalogue_category` U0 WHERE (U0.`depth` < `catalogue_category`' b'.`depth` AND U0.`is_public` = 0 AND `catalogue_category`.`path` LIKE BINARY ' b"CONCAT(REPLACE(REPLACE(REPLACE(U0.`path`, '\\\\', '\\\\\\\\'), '%', '\\%')," b" '_', '\\_'), '%'))) WHERE `catalogue_category`.`id` = 1")
```
Full traceback:
```python
Traceback (most recent call last):
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute
return self.cursor.execute(query, args)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/MySQLdb/connections.py", line 259, in query
_mysql.connection.query(self, query)
The above exception ((1093, "You can't specify target table 'catalogue_category' for update in FROM clause")) was the direct cause of the following exception:
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python3.6/contextlib.py", line 52, in inner
return func(*args, **kwds)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/contrib/auth/decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/views/generic/base.py", line 71, in view
return self.dispatch(request, *args, **kwargs)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/views/generic/base.py", line 97, in dispatch
return handler(request, *args, **kwargs)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/views/generic/edit.py", line 172, in post
return super().post(request, *args, **kwargs)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/views/generic/edit.py", line 142, in post
return self.form_valid(form)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/views/generic/edit.py", line 125, in form_valid
self.object = form.save()
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/treebeard/forms.py", line 147, in save
self.instance = self._meta.model.add_root(**cl_data)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/treebeard/mp_tree.py", line 625, in add_root
return MP_AddRootHandler(cls, **kwargs).process()
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/treebeard/mp_tree.py", line 345, in process
newobj.save()
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/oscar/apps/catalogue/abstract_models.py", line 209, in save
super().save(*args, **kwargs)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/models/base.py", line 746, in save
force_update=force_update, update_fields=update_fields)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/models/base.py", line 795, in save_base
update_fields=update_fields, raw=raw, using=using,
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/dispatch/dispatcher.py", line 175, in send
for receiver in self._live_receivers(sender)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/dispatch/dispatcher.py", line 175, in <listcomp>
for receiver in self._live_receivers(sender)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/oscar/apps/catalogue/receivers.py", line 37, in post_save_set_ancestors_are_public
instance.set_ancestors_are_public()
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/oscar/apps/catalogue/abstract_models.py", line 220, in set_ancestors_are_public
included_in_non_public_subtree.values("id"), negated=True))
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/models/query.py", line 752, in update
rows = query.get_compiler(self.db).execute_sql(CURSOR)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1500, in execute_sql
cursor = super().execute_sql(result_type)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1152, in execute_sql
cursor.execute(sql, params)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/backends/utils.py", line 100, in execute
return super().execute(sql, params)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute
return self.cursor.execute(query, args)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/develucas/.virtualenv/temp/lib/python3.6/site-packages/MySQLdb/connections.py", line 259, in query
_mysql.connection.query(self, query)
Exception Type: OperationalError at /shkl6943pn6cy6sx/catalogue/categories/create/
Exception Value: (1093, "You can't specify target table 'catalogue_category' for update in FROM clause")
```
### Steps to Reproduce
1. Create new environment
2. Install django-oscar with all dependencies
3. Apply migrations, create superuser and populate countries
3. Go to dashboard
4. Create new category
### Technical details
* Django Version: 3.0.8
* Python Version: 3.6.8
* Oscar Version: 2.1.0 | closed | 2020-07-16T13:50:15Z | 2020-07-17T13:04:40Z | https://github.com/django-oscar/django-oscar/issues/3451 | [] | develucas | 3 |
FactoryBoy/factory_boy | sqlalchemy | 879 | Factory with sqlalchemy_get_or_create calls post_generation allways with create as True | #### Description
A factory with `sqlalchemy_get_or_create` always calls `post_generation` with `create` as `True`.
#### To Reproduce
Running the following example:
```py
from sqlalchemy import Column, Unicode, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, scoped_session, sessionmaker
from sqlalchemy.sql.schema import ForeignKey
engine = create_engine('sqlite://')
session = scoped_session(sessionmaker(bind=engine))
Base = declarative_base()
class Child(Base):
""" A SQLAlchemy simple model class who represents a child """
__tablename__ = 'ChildTable'
name = Column(Unicode(20), primary_key=True)
class Parent(Base):
""" A SQLAlchemy simple model class who represents a parent """
__tablename__ = 'ParentTable'
name = Column(Unicode(20), primary_key=True)
child_name = Column(ForeignKey("ChildTable.name"), nullable=False)
child = relationship(Child)
Base.metadata.create_all(engine)
import factory
class FactoryChild(factory.alchemy.SQLAlchemyModelFactory):
class Meta:
model = Child
sqlalchemy_session = session # the SQLAlchemy session object
sqlalchemy_get_or_create = ('name',)
@factory.post_generation
def print_create(self, create, _, **kwargs):
print(f"Value for 'create' in 'post_generation': {create}")
class FactoryParent(factory.alchemy.SQLAlchemyModelFactory):
class Meta:
model = Parent
sqlalchemy_session = session # the SQLAlchemy session object
sqlalchemy_get_or_create = ('name',)
name = factory.Sequence(lambda n: f"Parent_{n}")
child = factory.SubFactory(FactoryChild)
if __name__ == "__main__":
child_1 = FactoryChild.create(name="John")
parent_1 = FactoryParent.create(child__name="John")
```
##### The issue
This produces the following output:
```bash
....
Value for 'create' in 'post_generation': True
Value for 'create' in 'post_generation': True
```
First we have created a child "John", therefore `post_generation` is called once with `create=True` as expected.
Then we create a parent which should include the already created child "John". To do so, we indicate that `child__name` matches the name of the already existing child. However, when `post_generation` on the child is executed this second time, `create` is still `True`, although no new child is created.
#### Notes
As the name of the feature is `sqlalchemy_get_or_create`, I understand that when `get` is used, `create` is not (as it is an _or_ condition). In that case, wouldn't it be better to run `post_generation` with `create=False`?
| closed | 2021-08-06T10:37:19Z | 2024-09-04T06:36:39Z | https://github.com/FactoryBoy/factory_boy/issues/879 | [] | BorjaEst | 0 |
elesiuta/picosnitch | plotly | 41 | Inaccurate Received Bytes | Hi,
To test out **picosnitch**, I installed it and ran the following in a terminal:
```
wget https://ash-speed.hetzner.com/100MB.bin
```
After the download completes, the TUI shows a lot less than 100MB. Running the following SQL on `snitch.db` gave the following:
```
> SELECT SUM(recv) FROM connections WHERE name = "wget";
+-----------+
| SUM(recv) |
+-----------+
| 1234048 |
+-----------+
1 row in set
Time: 0.007s
```
So it claims that only 1234048 bytes were downloaded while 100MB were actually downloaded.
I re-ran the test after stopping the service, deleting the files in `~/.config/picosnitch`, starting the service and then re-running the command. This time, it gave me the following:
```
> SELECT SUM(recv) FROM connections WHERE name = "wget";
+-----------+
| SUM(recv) |
+-----------+
| 1210048 |
+-----------+
1 row in set
Time: 0.004s
```
So again, it doesn't give the real value. Is there a reason for this? I tested this on Rocky 9 (and testing it on Fedora 40 gave similar results). This is using the latest version (installing it using **pip** for Rocky 9 and using the package manager for Fedora). Also, this is the same behavior for other applications as well, not just `wget`. Thanks
| open | 2024-09-29T12:02:35Z | 2024-09-29T12:02:35Z | https://github.com/elesiuta/picosnitch/issues/41 | [] | ossie-git | 0 |
PokeAPI/pokeapi | api | 245 | Download of Default Front Image slow | Hello everyone,
First of all I would like to thank you for your great API, it is very useful to me.
I'm currently creating an application which helps building Pokemon Teams.
For this I retrieve all Pokemon with their default sprites and save them into an XML file for not needing to contact the API too often.
Unfortunately, retrieving all the pictures is very slow; it took almost 14 minutes to retrieve the first 250 sprites.
Now I wonder whether I'm doing something wrong and whether I could speed things up.
I'm using HttpClient with GetAsync for retrieving the images.
Thanks for your help.
Kind regards,
Florian
| closed | 2016-08-01T20:18:51Z | 2016-08-01T21:48:19Z | https://github.com/PokeAPI/pokeapi/issues/245 | [] | DigitalFlow | 2 |
deeppavlov/DeepPavlov | nlp | 1,430 | Multi class emotion classification for text in russian | Как использовать BERT Classifier для multi class классификаций текста? У меня есть свой датасет, нужно тренировать модель на этом датасете.
Пример Input:
Я сегодня чувствую себя не очень хорошо
Output:
Sadness
Классов должно быть 5 или 6
Знаю что есть rusentiment_bert.json. Это как я понимаю pretrained и здесь только Positive neutral negative speech skip, а мне надо чтобы были эмоций типа (радость, печаль итп)
Мне получается нужно быть изменить конфиг rusentiment_bert.json? Если да – то как и что надо изменить для настройки данной модели?
Прошу помочь c гайденсом как работает весь процесс.
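For reference, a rough sketch of the usual workflow — the config keys and reader class are assumptions based on DeepPavlov's standard classification configs and may differ between versions. The idea is to copy `rusentiment_bert.json`, point its `dataset_reader` at your own CSV files (`train.csv`, `valid.csv`, `test.csv` with `text` and `labels` columns holding your 5–6 emotion classes), and retrain:
```python
from deeppavlov import configs, train_model
from deeppavlov.core.common.file import read_json

config = read_json(configs.classifiers.rusentiment_bert)

# Assumption: the dataset lives in ./my_emotion_dataset as train.csv / valid.csv / test.csv
# with 'text' and 'labels' columns, where 'labels' holds the emotion classes.
config["dataset_reader"]["class_name"] = "basic_classification_reader"
config["dataset_reader"]["data_path"] = "./my_emotion_dataset"

model = train_model(config)  # retrains the classifier on the new label set
print(model(["Я сегодня чувствую себя не очень хорошо"]))
```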
| closed | 2021-04-14T22:16:44Z | 2021-04-19T13:17:03Z | https://github.com/deeppavlov/DeepPavlov/issues/1430 | [
"enhancement"
] | MuhammedTech | 1 |
babysor/MockingBird | pytorch | 551 | Is there an easy-to-use function provided for synthesis? | I'd like to do some custom modifications myself, but I couldn't understand the code. | closed | 2022-05-14T02:38:11Z | 2022-05-14T04:07:12Z | https://github.com/babysor/MockingBird/issues/551 | [] | SunchinSekian | 1 |
hzwer/ECCV2022-RIFE | computer-vision | 282 | How to interpolate a frame at an arbitrary time | I noticed that RIFE is designed to interpolate the frame at t=0.5 and then obtain more frames by applying it iteratively. If I want to interpolate at an arbitrary time t, how should I modify it? | open | 2022-09-29T15:14:29Z | 2022-12-16T08:25:37Z | https://github.com/hzwer/ECCV2022-RIFE/issues/282 | [] | QMME | 4 |
FlareSolverr/FlareSolverr | api | 1,069 | Exception: Error getting browser User-Agent. HTTP Error 404: Not Found | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.13
- Last working FlareSolverr version: 3.3.13
- Operating system: W10
- Are you using Docker: no
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: no
- URL to test this issue: None
```
### Description
Since today, launching FlareSolverr results in a 404 error.
It seems that this error was already fixed in [v3.3.0](https://github.com/FlareSolverr/FlareSolverr/releases/tag/v3.3.0), but I don't know why it suddenly appears again.
### Logged Error Messages
```text
python src\flaresolverr.py
2024-02-16 23:43:52 INFO ReqId 5968 FlareSolverr 3.3.13
2024-02-16 23:43:52 DEBUG ReqId 5968 Debug log enabled
2024-02-16 23:43:52 INFO ReqId 5968 Testing web browser installation...
2024-02-16 23:43:52 INFO ReqId 5968 Platform: Windows-10-10.0.19045-SP0
2024-02-16 23:43:52 INFO ReqId 5968 Chrome / Chromium path: C:\Program Files\Google\Chrome\Application\chrome.exe
2024-02-16 23:43:52 INFO ReqId 5968 Chrome / Chromium major version: 121
2024-02-16 23:43:52 INFO ReqId 5968 Launching web browser...
2024-02-16 23:43:52 DEBUG ReqId 5968 Launching web browser...
Traceback (most recent call last):
File "C:\Users\XXX\FlareSolverr\src\utils.py", line 302, in get_user_agent
driver = get_webdriver()
^^^^^^^^^^^^^^^
File "C:\Users\XXX\FlareSolverr\src\utils.py", line 183, in get_webdriver
driver = uc.Chrome(options=options, browser_executable_path=browser_executable_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\XXX\FlareSolverr\src\undetected_chromedriver\__init__.py", line 259, in __init__
self.patcher.auto()
File "C:\Users\XXX\FlareSolverr\src\undetected_chromedriver\patcher.py", line 188, in auto
self.unzip_package(self.fetch_package())
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\XXX\FlareSolverr\src\undetected_chromedriver\patcher.py", line 297, in fetch_package
return urlretrieve(download_url)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\urllib\request.py", line 241, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\urllib\request.py", line 216, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\urllib\request.py", line 525, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\urllib\request.py", line 634, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\urllib\request.py", line 563, in error
return self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\urllib\request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\urllib\request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\XXX\FlareSolverr\src\flaresolverr.py", line 105, in <module>
flaresolverr_service.test_browser_installation()
File "C:\Users\XXX\FlareSolverr\src\flaresolverr_service.py", line 72, in test_browser_installation
user_agent = utils.get_user_agent()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\XXX\FlareSolverr\src\utils.py", line 308, in get_user_agent
raise Exception("Error getting browser User-Agent. " + str(e))
Exception: Error getting browser User-Agent. HTTP Error 404: Not Found
```
### Screenshots
_No response_ | closed | 2024-02-16T22:47:05Z | 2024-02-17T07:53:48Z | https://github.com/FlareSolverr/FlareSolverr/issues/1069 | [] | pokemaster974 | 4 |
miguelgrinberg/Flask-Migrate | flask | 227 | flask_migrate.upgrade(directory=..) breaks logging | On app start, we do migrations before the actual app gets running.
Procedure:
```
* create_app
* register extensions (DB, migrate, cache, ...)
* ping dependencies
* migrate
<<< HERE IS THE PROBLEM >>>
* start app
```
Note: do not worry about why the migration is done on every app start or about concurrency during migration.
Problem:
Before the `flask_migrate.upgrade(directory=..)` call, all loggers of the app work fine; after the call, none of the loggers can log anything. I have already checked all attributes of the logger and its handlers and they are unchanged.
When the migration call is disabled, logging works; with the migration, there is no logging at all after the call.
E.g. `_logger.info('released DB lock')` at the bottom of the snippet no longer logs anything.
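One thing worth checking (an assumption, not confirmed from the traces): Alembic's `env.py`, as generated by Flask-Migrate, calls `logging.config.fileConfig()`, which by default disables all existing loggers. Passing `disable_existing_loggers=False` there is a common way to keep the app's loggers alive:
```python
# migrations/env.py (generated by Flask-Migrate / Alembic)
from logging.config import fileConfig

from alembic import context

config = context.config

# the default template calls fileConfig(config.config_file_name); the extra flag
# stops it from silencing loggers that were configured before the upgrade ran
fileConfig(config.config_file_name, disable_existing_loggers=False)
```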
Snippet
```
_logger.info('DB lock acquired')
try:
with _app.app_context():
flask_migrate.upgrade(directory=_migrate.directory)
except Exception as e:
_logger.error('DB upgrade failed %s :: %s' % (type(e), e))
raise e
finally:
print('-release...')
_lock.release()
print('-releaseD!!!...')
_logger.info('released DB lock')
```
Console
```
[2018-09-13 12:49:38,121] INFO: === perform DB upgrade
[2018-09-13 12:49:38,121] INFO: > migrations: /home/roman/repos/myrepo/server/pyserver/migrations
<logging.Logger object at 0x7f95bb734f98>
10
[<logging.StreamHandler object at 0x7f95bb748c88>, <logstash.handler_tcp.TCPLogstashHandler object at 0x7f95bb748320>]
[2018-09-13 12:49:38,121] INFO: DB lock acquired
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
-release...
-releaseD!!!...
<logging.Logger object at 0x7f95bb734f98>
10
[<logging.StreamHandler object at 0x7f95bb748c88>, <logstash.handler_tcp.TCPLogstashHandler object at 0x7f95bb748320>]
* Tip: There are .env files present. Do "pip install python-dotenv" to use them.
* Serving Flask app "microservice_datacollection" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
```
Any help is welcome!
thx, r
| closed | 2018-09-13T10:59:39Z | 2021-10-19T06:04:17Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/227 | [
"question"
] | roman-telepathy-ai | 11 |
xuebinqin/U-2-Net | computer-vision | 340 | How to get the evaluation metrics value? | Hello, I'm doing research on salient object detection. However, I'm new to the field and deep learning in general.
I trained U2-Net, but I don't know how to plot the training curve or obtain the values for the evaluation metrics (PR curve, F-measure, MAE, S-measure, etc.). Could you provide the code to plot the training curve and to compute the evaluation metrics, please?
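For reference, a minimal sketch of MAE and the (max) F-measure with the usual β² = 0.3, assuming `pred` and `gt` are NumPy arrays normalized to [0, 1] — this is not the repository's official evaluation code:
```python
import numpy as np


def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    # mean absolute error between the predicted saliency map and the ground truth mask
    return float(np.mean(np.abs(pred - gt)))


def max_f_measure(pred: np.ndarray, gt: np.ndarray, beta2: float = 0.3) -> float:
    # sweep 256 thresholds, compute precision/recall, and return the best F-beta score
    gt_bin = gt > 0.5
    scores = []
    for t in np.linspace(0, 1, 256):
        pred_bin = pred >= t
        tp = np.logical_and(pred_bin, gt_bin).sum()
        precision = tp / (pred_bin.sum() + 1e-8)
        recall = tp / (gt_bin.sum() + 1e-8)
        scores.append((1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8))
    return float(max(scores))
```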
Thank you | open | 2022-11-20T14:29:09Z | 2023-08-01T13:01:23Z | https://github.com/xuebinqin/U-2-Net/issues/340 | [] | AbrahamMulat | 3 |
ageitgey/face_recognition | python | 946 | how to activate a function only once? | Hello,
I tried this code and it's awesome.
I added this code:
```python
if name == 'Bill Gates' and not printed:
    printed = True
    print("Hello Bill!")
if name != 'Bill Gates':
    printed = False
```
`printed` is `False` by default.
When Bill is detected, I want to say "Hello Bill" just once (or run some functions just once, like saving the date and time when he is detected). When Bill appears in the detection again later, do the same.
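For illustration, one common pattern is to keep a set of already-greeted names and only act when a name is newly detected (the function below is a sketch meant to be called once per frame with the recognized names):
```python
greeted = set()


def greet_new_faces(face_names):
    """Call once per frame with the list of names recognized in that frame."""
    global greeted
    for name in face_names:
        if name not in greeted:
            greeted.add(name)
            print(f"Hello {name}!")  # or save the date/time here; runs only once per appearance
    # forget names that left the frame so they are greeted again next time they show up
    greeted &= set(face_names)
```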
| open | 2019-10-07T20:07:47Z | 2023-03-04T20:17:20Z | https://github.com/ageitgey/face_recognition/issues/946 | [] | vmaksimovic | 2 |
vitalik/django-ninja | django | 1,145 | [BUG] Handler functions with same path but different http method can't use different path variable names | Given two handler functions:
This works fine:
```
@router.get(path='/{category_slug}', response=CategorySchema)
def get_category(request, category_slug: str): ...


@router.delete(path='/{category_slug}', response={204: None})
def deactivate_category(request, category_slug: str): ...
```
But this doesn't and results in a 405 error when calling the delete handler:
```
@router.get(path='/{category_slug}', response=CategorySchema)
def get_category(request, category_slug: str): ...


@router.delete(path='/{slug}', response={204: None})
def deactivate_category(request, category_slug: str): ...
```
The only difference is that in the second example, I renamed the path variable to `slug` instead of `category_slug` for the delete handler. | open | 2024-05-01T21:12:47Z | 2024-05-02T22:59:19Z | https://github.com/vitalik/django-ninja/issues/1145 | [] | Marclev78 | 0 |
huggingface/transformers | tensorflow | 36,252 | `padding_side` is of type `bool` when it should be `Literal['right', 'left']` | ### System Info
main branch
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L2816
### Expected behavior
Should be `Literal['right', 'left']` as mentioned in the docs.
https://huggingface.co/docs/transformers/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.__call__.padding_side
| closed | 2025-02-18T10:12:48Z | 2025-03-03T00:55:44Z | https://github.com/huggingface/transformers/issues/36252 | [
"bug"
] | winstxnhdw | 2 |
joeyespo/grip | flask | 333 | CSS styling missing | Hi I'm having trouble converting Markdown files to html/pdf. I installed grip on Ubuntu 20.04 by running:
```
sudo apt install grip
```
and then, when I try to display a Markdown file, like the README of this repo:
```
grip grip.md
```
I get this result:

The console output is the following:
```
* Serving Flask app "grip.app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://localhost:6419/ (Press CTRL+C to quit)
127.0.0.1 - - [20/Jan/2021 12:46:54] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [20/Jan/2021 12:46:54] "GET /__/grip/static/favicon.ico HTTP/1.1" 200 -
```
Thanks in advance :)
| open | 2021-01-20T11:55:19Z | 2021-11-02T15:31:34Z | https://github.com/joeyespo/grip/issues/333 | [] | Victorlouisdg | 7 |
harry0703/MoneyPrinterTurbo | automation | 236 | Error when merging videos |

| closed | 2024-04-11T12:24:52Z | 2024-04-14T10:50:06Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/236 | [] | Vinci0007 | 4 |
robinhood/faust | asyncio | 649 | autodiscover does not actually log errors on discovery | ## Checklist
- [x] I have included information about relevant versions
- [x] I have verified that the issue persists when using the `master` branch of Faust.
## Steps to reproduce
```python
# proj/app.py
import faust
app = faust.App(
'proj',
version=1,
autodiscover=['proj.users'],
origin='proj' # imported name for this project (import proj -> "proj")
)
```
```python
# proj/users.py
raise ImportError()
```
## Expected behavior
A "WARNING" level log message along the lines of `Autodiscovery importing module %r raised error: %r`, as indicated in https://github.com/robinhood/faust/blob/master/faust/app/base.py#L724
## Actual behavior
Silence/no errors - because the logger hasn't been set up yet.
If I set up a logger before calling `app.main()`, then I can see the WARNING messages, but if I rely on faust to set them up, these messages are swallowed
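A minimal sketch of that workaround — setting up a root handler before handing control to Faust so the autodiscovery warning has somewhere to go (this describes the workaround, not a fix inside Faust):
```python
# proj/__main__.py
import logging

from proj.app import app

# make sure a handler exists before app.main() configures Faust's own logging,
# so "Autodiscovery importing module ... raised error" warnings are not swallowed
logging.basicConfig(level=logging.WARNING)

if __name__ == '__main__':
    app.main()
```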
# Versions
* Python version: 3.8.5
* Faust version: 1.10.4
* Operating system: macOS 10.15.6
* Kafka version: N/A
* RocksDB version (if applicable): N/A | open | 2020-09-12T14:38:32Z | 2022-08-21T11:55:19Z | https://github.com/robinhood/faust/issues/649 | [] | jmaroeder | 3 |
tensorlayer/TensorLayer | tensorflow | 584 | Improve `tl.models` | We now only support MobileNetV1, SqueezeNet and VGG16. However, we need to support all of these: https://github.com/tensorlayer/pretrained-models#cnn-for-imagenet | closed | 2018-05-16T12:02:58Z | 2019-05-13T15:22:08Z | https://github.com/tensorlayer/TensorLayer/issues/584 | [
"help_wanted",
"feature_request"
] | DEKHTIARJonathan | 1 |
jowilf/starlette-admin | sqlalchemy | 352 | Enhancement: Batch Actions on Searched and Selected records. Keep the selection in memory while paginating. | **Is your feature request related to a problem? Please describe.**
Let's say that I have a table with 10K records and pagination limit is set to 50 records per page for the view.
What do I do to apply a batch action to 1K records? I would have to paginate 20 times to accomplish this. That's OK, but what if that action requires some input (a `form` attribute in the decorator)? I'd still need to paginate 20 times and enter the required fields 20 times...
**Describe the solution you'd like**
Cache the IDs of the selected records while paginating in a view and perform the batch action for all IDs in the cache. | open | 2023-10-23T01:39:34Z | 2024-01-29T23:18:01Z | https://github.com/jowilf/starlette-admin/issues/352 | [
"enhancement"
] | hasansezertasan | 0 |
schemathesis/schemathesis | graphql | 2,312 | [BUG] `negative_data_rejection` false positive for URL parameters | ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
I really like that you added negative data rejection, it's a really valuable addition to this library.
Unfortunately, it fails for URL parameters when it shouldn't.
E.g. if you have a Path like `/projects/?slug={slug}` or `/project/{slug}`, with `slug` being a string, it will generate negative examples like `null` or `false`, which is perfectly sensible for JSON payloads, but not for URLs, since everything there is a string. And for a slug of a project, `"false"` could be an acceptable project slug for a project called `"False"`.
So an API accepting `false` in this case is valid, and should not count as failed negative data rejection.
Another, related but slightly different bug is with this API spec:
```
/namespaces:
get:
summary: Get all namespaces
parameters:
- in: query
description: Result's page number starting from 1
name: page
required: false
schema:
type: integer
minimum: 1
default: 1
- in: query
description: The number of results per page
name: per_page
required: false
schema:
type: integer
minimum: 1
maximum: 100
default: 20
responses:
"200":
description: List of namespaces
```
which fails with
```
E schemathesis.exceptions.CheckFailed:
E
E 1. Accepted negative data
E
E Negative data was not rejected as expected by the API
E
E [200] OK:
E
E `[{"id":"01J1SA7WBQA366YZHNA33Q8P9G","name":"Admin Doe","slug":"admin.doe","created_by":"admin","namespace_kind":"user"},{"id":"01J1SAQ4BG7915N27WKVC5N1AJ","name":"\udbff\udfb8\u00b8Q\u00cb\u00ab\u00eb","slug":"pz","created_by":"admin","namespace_kind":"group"}]`
E
E Reproduce with:
E
E curl -X GET -H 'host: mockserver:1234' -H 'authorization: Bearer {"is_admin": true, "id": "admin", "name": "Admin Doe", "first_name": "Admin", "last_name": "Doe", "email": "admin.doe@gmail.com", "full_name": "Admin Doe"}' 'http://localhost/api/data/namespaces?=null&page=1'
E
E Falsifying example: async_run(
E sanic_client=<sanic_testing.testing.SanicASGITestClient object at 0x73c6ad790260>,
E admin_headers={'Authorization': 'Bearer {"is_admin": true, "id": "admin", "name": "Admin Doe", "first_name": "Admin", "last_name": "Doe", "email": "admin.doe@gmail.com", "full_name": "Admin Doe"}'},
E case=Case(query={'': 'null', 'page': 1}),
E )
```
which is a bit weird test data and I think most servers will just ignore an empty key in the URL, so not really a failure.
The actual tests that we run can be found at https://github.com/SwissDataScienceCenter/renku-data-services/blob/11029e0fcc0f8e0298448eacd67d1314fff37ec2/test/bases/renku_data_services/data_api/test_schemathesis.py
### To Reproduce
🚨 **Mandatory** 🚨: Steps to reproduce the behavior:
Apispec like
```
---
openapi: 3.0.2
paths:
"/project/{slug}":
get:
parameters:
- in: path
name: slug
required: true
schema:
type: string
responses:
"200":
content:
text/plain:
schema:
type: string
default:
content:
text/string:
schema:
type: string
```
with an API that accepts string as slug.
This will fail with something like:
```
schemathesis.exceptions.CheckFailed:
E
E 1. Accepted negative data
E
E Negative data was not rejected as expected by the API
E
E [200] OK:
E
E `...`
E
E Reproduce with:
E
E curl -X DELETE -H 'host: mockserver:1234' -H 'authorization: ***"is_admin": true, "id": "admin", "name": "Admin Doe", "first_name": "Admin", "last_name": "Doe", "email": "admin.doe@gmail.com", "full_name": "Admin Doe"}' 'http://localhost/project/false'
E
E Falsifying example: async_run(
E sanic_client=<sanic_testing.testing.SanicASGITestClient object at 0x7f6fa5123f80>,
E admin_headers={'Authorization': '***"is_admin": true, "id": "admin", "name": "Admin Doe", "first_name": "Admin", "last_name": "Doe", "email": "admin.doe@gmail.com", "full_name": "Admin Doe"}'},
E case=Case(query={'slug': 'false'}),
E )
```
### Expected behavior
URL parameters should have a reduced set of negative examples for strings, like the empty string, but not things like `false`, `true`, `null`.
### Environment
```
- OS: Linux
- Python version: 3.12
- Schemathesis version: 3.31.0
- Spec version: Open API 3.0.2
```
| closed | 2024-07-02T09:34:38Z | 2024-07-16T11:15:11Z | https://github.com/schemathesis/schemathesis/issues/2312 | [
"Priority: High",
"Type: Bug",
"Core: Data Generation"
] | Panaetius | 6 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 739 | [BUG]: Installation: error: Preparing metadata (pyproject.toml) ... error | ### Describe the bug
error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [6 lines of output] Cargo, the Rust package manager, is not installed or is not on PATH. This package requires Rust and Cargo to compile extensions. Install it through the system's package manager or via https://rustup.rs/ Checking for Rust toolchain.... [end of output] note: This error originates from a subprocess, and is likely not a problem with pip.
### Steps to reproduce
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Branch
None
### Branch name
_No response_
### Python version
_No response_
### LLM Used
_No response_
### Model used
_No response_
### Additional context
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip. | closed | 2024-11-04T07:24:48Z | 2024-11-04T13:25:23Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/739 | [
"bug"
] | agispas | 0 |
flasgger/flasgger | api | 369 | api definitions appear twice when basePath is not '/' | if you define basePath: '/base_api/' in the template file
and then a few APIs like
```
@swag_from(template_file='swg_def.yaml')
def api_1(*args, **kw):
pass
```
you get both "/api_1" and "/base_api/api_1" in the api_docs descriptions. Only the first one is valid. | open | 2020-03-08T14:21:42Z | 2020-03-08T14:21:42Z | https://github.com/flasgger/flasgger/issues/369 | [] | rejoc | 0 |
deepfakes/faceswap | deep-learning | 580 | python faceswap.py convert ,all the image were No alignment found for 887051092_0.jpg, skipping | 
| closed | 2019-01-08T05:33:19Z | 2022-01-17T07:30:20Z | https://github.com/deepfakes/faceswap/issues/580 | [] | horizonheart | 4 |
sqlalchemy/alembic | sqlalchemy | 1,544 | Support for auto-generated SQL migrations in online mode | **Describe the use case**
Not everyone wants to use an ORM to add and edit database migrations. For enterprise-grade production databases, reading raw SQL might even be more useful for database administrators. Alembic does not currently support that, other than the [offline mode](https://alembic.sqlalchemy.org/en/latest/offline.html).
**Databases / Backends / Drivers targeted**
To start with, PostgreSQL
**Example Use**
After configuration, I foresee something like the following steps.
1. A new revision should be created `alembic revision --autogenerate --sql -m "Added account table"`, which will create one more SQL file containing the code for upgrading and downgrading the database. Similar to the regular migration files, this will also hold metadata.
2. Then, using the metadata, we can simply run `alembic upgrade head`, which will take the files into account.
| closed | 2024-09-25T12:46:41Z | 2024-09-25T13:32:52Z | https://github.com/sqlalchemy/alembic/issues/1544 | [
"use case"
] | basvandriel | 0 |
scikit-hep/awkward | numpy | 3,414 | Revisit `.raw()/._raw()` for `VirtualArray`s | In https://github.com/scikit-hep/awkward/pull/3364, we make `.raw()/._raw()` calls materialize. Some more thinking can be put into whether this should be the case. Perhaps we need a materializing and a non materializing `raw` for different use cases? | closed | 2025-03-07T14:34:12Z | 2025-03-18T14:39:20Z | https://github.com/scikit-hep/awkward/issues/3414 | [] | ikrommyd | 1 |
proplot-dev/proplot | matplotlib | 397 | GeoAxes features request |
### Description
Hello, I am a regular user of proplot, with which I can draw beautiful geoscience plots. CartesianAxes provides powerful control over axes major and minor ticks. But for GeoAxes, it seems that proplot only provides control of the lat and lon grid lines. If I want to show lat and lon ticks for NCL-style geographic plots, I can only call the matplotlib method ax.tick_params(). As shown below, the code becomes redundant and inconsistent with the proplot style. If GeoAxes.format could control the lat and lon ticks, I think it would be a nice feature.
### Steps to reproduce
```python
# your code here
# we should be able to copy-paste this into python and exactly reproduce your bug
import numpy as np  # missing import: np.arange is used below
import proplot as pplt
import cartopy.crs as ccrs
import matplotlib.ticker as ticker
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
fig = pplt.figure(refwidth=5)
axs = fig.subplots(nrows=1, ncols=1, proj="cyl", proj_kw={"lon_0": 200}, right="3em")
axs.format(
suptitle="Figure with NCL sytle by matplotlib",
suptitlesize=12,
suptitlepad=10,
land=True,
landcolor="k",
coast=True,
coastlinewidth=1,
reso="lo",
latlim=(-90, 90),
lonlim=(0, 360),
grid=False,
)
axs.set_xticks(np.arange(0, 381, 60), crs=ccrs.PlateCarree())
axs.set_yticks(np.arange(-90, 91, 30), crs=ccrs.PlateCarree())
lon_formatter = LongitudeFormatter(zero_direction_label=False)
lat_formatter = LatitudeFormatter()
axs.xaxis.set_major_formatter(lon_formatter)
axs.yaxis.set_major_formatter(lat_formatter)
axs.xaxis.set_tick_params(labelsize=10, pad=5)
axs.yaxis.set_tick_params(labelsize=10, pad=5)
axs.xaxis.set_minor_locator(ticker.IndexLocator(base=30, offset=10))
axs.yaxis.set_minor_locator(ticker.MultipleLocator(15))
axs.tick_params(
which="major",
direction="out",
length=10,
width=1.5,
colors="k",
top="on",
bottom="on",
left="on",
right="on",
)
axs.tick_params(
which="minor",
direction="out",
length=5,
width=0.8,
top="on",
bottom="on",
left="on",
right="on",
)
axs.grid(False)
```
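For reference, something along these lines is roughly what I would hope `GeoAxes.format` could accept one day. This is only a hypothetical sketch: the keyword names are a suggestion, not an existing API, and it reuses the `axs` object from the script above.
```python
# hypothetical API sketch, for illustration only (keyword names are suggestions)
axs.format(
    labels=True,                      # draw lon/lat tick labels on the map edges
    lonlocator=60, latlocator=30,     # major tick spacing in degrees
    lonminorlocator=30, latminorlocator=15,
    ticklen=10, tickminorlen=5, tickdir="out",
)
```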
**Expected behavior**:

### Proplot version
matplotlib.__version__ = 3.2.2
proplot.version = 0.9.5 | closed | 2022-11-11T04:11:17Z | 2023-03-29T00:25:57Z | https://github.com/proplot-dev/proplot/issues/397 | [
"duplicate",
"feature"
] | Dearsomeone | 1 |
STVIR/pysot | computer-vision | 102 | How to achieve multiple targets tracking | Is there a way to achieve simultaneous tracking of multiple targets? Thanks | closed | 2019-07-11T08:09:41Z | 2020-04-23T14:34:48Z | https://github.com/STVIR/pysot/issues/102 | [
"wontfix"
] | pangr | 7 |
mirumee/ariadne | api | 11 | make_executable_schema resolvers arg should accept dict of dicts or list of dicts of dicts | The `make_executable_schema` utility should optionally take list of dicts of dicts (AKA "resolvers map"), this would allow larger projects to easily split and compose resolvers as needed:
```python
from ariadne import make_executable_schema
from products.graphql import resolvers as products_resolvers
from users.graphql import resolvers as users_resolvers
typedefs = "..."
resolvers = [products_resolvers, users_resolvers]
schema = make_executable_schema(typedefs, resolvers)
```
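For clarity, here is a rough sketch of the merge semantics I have in mind (purely illustrative, not a proposed implementation):
```python
def merge_resolvers(resolvers_maps):
    """Merge a list of {type_name: {field_name: resolver}} dicts into one dict."""
    merged = {}
    for resolvers in resolvers_maps:
        for type_name, fields in resolvers.items():
            merged.setdefault(type_name, {}).update(fields)
    return merged

# i.e. make_executable_schema(typedefs, [products_resolvers, users_resolvers]) would behave
# like make_executable_schema(typedefs, merge_resolvers([products_resolvers, users_resolvers]))
```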
This task will likely require #13 to be done first, so we are 100% certain that all resolver mappings are dicts. | closed | 2018-08-02T16:26:14Z | 2018-10-01T09:26:40Z | https://github.com/mirumee/ariadne/issues/11 | [
"help wanted",
"roadmap"
] | rafalp | 0 |
scikit-learn-contrib/metric-learn | scikit-learn | 285 | Question: Re: Feature selection | Hi,
Thank you for making this fantastic library available!
I have been using it to reduce the dimensionality of my dataset prior to machine learning applications. While deciding how many features I must keep, I tried to calculate a variable importance or explained-variance measure, but it wasn't easy: coefficients or eigenvalues are not readily returned by the functions.
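For what it's worth, the workaround I have been experimenting with is eigendecomposing the learned Mahalanobis matrix and treating the normalized eigenvalues as an explained-variance-like measure. This is only a rough sketch, and it assumes the estimator is a Mahalanobis learner exposing `get_mahalanobis_matrix()`, so it would not cover every algorithm:
```python
import numpy as np

def metric_explained_variance(metric_learner):
    """Normalized eigenvalues of the learned Mahalanobis matrix, largest first."""
    M = metric_learner.get_mahalanobis_matrix()  # (d, d) symmetric PSD matrix
    eigvals = np.linalg.eigvalsh(M)[::-1]        # eigvalsh returns ascending order, so reverse
    return eigvals / eigvals.sum()
```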
What do you think is the most convenient way of doing this, in a way that could be applied to all the algorithms here?
Thanks,
Tayfun | closed | 2020-04-09T16:36:29Z | 2020-04-22T12:05:03Z | https://github.com/scikit-learn-contrib/metric-learn/issues/285 | [] | ttumkaya | 2 |
tpvasconcelos/ridgeplot | plotly | 279 | More comprehensive docs for all coloring options | Refer to #226 for more context.
Would also be nice to add a [glossary](https://sublime-and-sphinx-guide.readthedocs.io/en/latest/glossary.html) section 📚 | open | 2024-11-18T15:47:46Z | 2024-11-24T22:53:54Z | https://github.com/tpvasconcelos/ridgeplot/issues/279 | [
"DOCS",
"good first issue",
"COLOR"
] | tpvasconcelos | 0 |
tox-dev/tox | automation | 2,836 | py35, usedevelop setup hangs | ## Issue
py35 setup with usedevelop fails with "incorrect request to backend", hangs.
## Environment
Provide at least:
- OS: Linux
- `pip list` of the host Python where `tox` is installed:
```console
$ pip list
Package Version
------------- -------
cachetools 5.2.0
chardet 5.1.0
colorama 0.4.6
distlib 0.3.6
filelock 3.9.0
packaging 22.0
pip 22.3.1
platformdirs 2.6.2
pluggy 1.0.0
pyproject_api 1.4.0
setuptools 65.5.0
tox 4.2.6
virtualenv 20.17.1
```
## Output of running tox
Provide the output of `tox -rvv`:
```console
# see below
```
## Minimal example
If possible, provide a minimal reproducer for the issue:
```console
$ mkdir /tmp/new-empty-dir
$ cd /tmp/new-empty-dir
$ tox run -rvv --develop -e py35
[...]
.pkg-cpython35: 455 W install_requires> python -I -m pip install 'setuptools>=40.8.0' wheel [tox/tox_env/api.py:427]
DEPRECATION: Python 3.5 reached the end of its life on September 13th, 2020. Please upgrade your Python as Python 3.5 is no longer maintained. pip 21.0 will drop support for Python 3.5 in January 2021. pip 21.0 will remove support for this functionality.
Requirement already satisfied: setuptools>=40.8.0 in ./.tox/.pkg-cpython35/lib/python3.5/site-packages (50.3.2)
Requirement already satisfied: wheel in ./.tox/.pkg-cpython35/lib/python3.5/site-packages (0.37.1)
.pkg-cpython35: 1358 I exit 0 (0.90 seconds) /tmp/new-empty-dir> python -I -m pip install 'setuptools>=40.8.0' wheel pid=1750750 [tox/execute/api.py:275]
.pkg-cpython35: 1359 W _optional_hooks> python [...]/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__ [tox/tox_env/api.py:427]
Backend: incorrect request to backend: bytearray(b'{"cmd": "_optional_hooks", "kwargs": {}, "result": "/tmp/pep517__optional_hooks-rag91x47.json"}')
# hangs here
```
| closed | 2023-01-08T07:50:55Z | 2023-01-09T20:21:51Z | https://github.com/tox-dev/tox/issues/2836 | [] | scop | 5 |
cleanlab/cleanlab | data-science | 529 | Docs: short (one-line) name for tutorials in sidebar | Names of tutorials in sidebar currently take up too much vertical space (see picture). Rather than displaying full title of each tutorial, would be better to only display an abbreviated one-line name in the sidebar. Eg. "Audio Classification with SpeechBrain and Cleanlab" could have short-name "Audio Classification" in the sidebar.

| closed | 2022-11-08T04:58:16Z | 2022-12-02T19:58:53Z | https://github.com/cleanlab/cleanlab/issues/529 | [
"enhancement",
"good first issue"
] | jwmueller | 0 |
automagica/automagica | automation | 39 | PressHotKey() doesn't work with some combinations including shift | Hello,
I'm trying to resize some images in a PPT presentation, using `ctrl-shift-up` or `ctrl-shift-left`, `shift-up` or `shift-left` combinations. However, it seems that it ignores `shift`, since it moves images up or left, without resizing.
In Visual Studio Code, when simply running `PressHotkey('ctrl', 'shift', 'left')` with the caret placed on a line, it should select the word to the left of the caret; however, it simply moves to the left of the first word without selecting it. This confirms that the shift key is ignored.
On the other hand, `ctrl-shift-s` (which I use to save to a new file) works flawlessly. Maybe there is a problem with using `shift` and arrow keys together?
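In case it helps narrow things down, the diagnostic I plan to try next is sending explicit key-down/key-up events with pyautogui (a separate library, so this is only a debugging sketch, not a fix for Automagica itself):
```python
import pyautogui

# Hold ctrl+shift, tap the arrow key, then release in reverse order
pyautogui.keyDown('ctrl')
pyautogui.keyDown('shift')
pyautogui.press('left')
pyautogui.keyUp('shift')
pyautogui.keyUp('ctrl')
```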
I am running:
- Python 3.7.1
- Windows 10 Pro
- Automagica 0.3.1
- Microsoft PowerPoint 2016
- Visual Studio Code 1.30.1 | closed | 2018-12-19T09:09:51Z | 2018-12-19T09:37:52Z | https://github.com/automagica/automagica/issues/39 | [] | dedeswim | 1 |
aimhubio/aim | tensorflow | 2,907 | No such file or directory: '~/.aim_profile' | ## 🐛 Bug
When running a slurm job, Aim tries to access the file '~/.aim_profile' without having the necessary permission, and it fails with a "no such file" error.
The exception is thrown on the following: https://github.com/aimhubio/aim/blob/c7f6bbc28500433e29cc958f92ad53cf6b900f60/src/python/aim/_ext/tracking/__init__.py#L12
### To reproduce
Run any slurm job.
### Expected behavior
To skip the logic if the permission for the file is not granted.
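Something along these lines would be enough, I think (just a sketch of the idea; the exact exception handling is of course up to the maintainers):
```python
import os

def read_profile(path=os.path.expanduser('~/.aim_profile')):
    try:
        with open(path) as f:
            return f.read()
    except OSError:  # covers PermissionError and FileNotFoundError
        return None  # skip the tracking logic when the profile file isn't accessible
```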
### Environment
- Aim Version: 3.17.3
- Python version: 3.9.16 | open | 2023-07-11T13:39:21Z | 2023-07-19T12:23:47Z | https://github.com/aimhubio/aim/issues/2907 | [
"type / bug",
"help wanted",
"phase / ready-to-go"
] | tamohannes | 0 |
MaartenGr/BERTopic | nlp | 1,818 | DTM Data Extraction | How do I cull out the results of the DTM operations, i.e. the FREQUENCY of each topic at different times? | open | 2024-02-18T16:35:06Z | 2024-02-20T13:07:39Z | https://github.com/MaartenGr/BERTopic/issues/1818 | [] | starsream | 4 |
plotly/dash | data-science | 2,225 | Dropdown maxHeight being overridden by something in DataTable [BUG] | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.6.1 pyhd8ed1ab_0 conda-forge
dash-bootstrap-components 1.0.3 pyhd8ed1ab_0 conda-forge
dash-daq 0.5.0 pyh9f0ad1d_1 conda-forge
```
- if frontend related, tell us your Browser, Version and OS
- OS: macOS 12.4
- Browser: Chrome 105
**Describe the bug**
I can change the maxHeight of the dcc.Dropdown component, but as soon as a DataTable loads (even with no styling) the background of the dropdown shrinks back to 200px and any text below that is left, with a transparent background. If I return a boring Div instead of a DataTable, the dropdown stays styled correctly. Seems to be css-related, because the "incorrect styling" sticks around until I close/reopen the tab, then it's good again until a DataTable loads.
**Screenshots**
If applicable, add screenshots or screen recording to help explain your problem.

Dropdown creation:
```python
dropdown = dcc.Dropdown(options=options,
                        placeholder='Sport',
                        id="sport-dropdown",
                        searchable=False,
                        maxHeight=400,
                        )
```
DataTable creation
```
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/solar.csv')
return dash_table.DataTable(df.to_dict('records'), [{"name": i, "id": i} for i in df.columns])
```
| closed | 2022-09-12T17:34:32Z | 2024-06-21T15:04:58Z | https://github.com/plotly/dash/issues/2225 | [
"dash-data-table"
] | lukeallpress | 3 |
trevismd/statannotations | seaborn | 110 | Only displaying tests that are significant | The tests that are displayed are defined by what's fed into the argument `pairs` of `Annotator` but we don't know prior to running `apply_and_annotate` what tests are significant. I don't think there's an easy way to eliminate from the list fed to `pairs` the `pairs` that are not significant. | closed | 2023-02-13T10:44:27Z | 2023-10-01T18:12:34Z | https://github.com/trevismd/statannotations/issues/110 | [] | JasonMendoza2008 | 6 |
zihangdai/xlnet | tensorflow | 214 | Multigpus memory leak during pretraining | These are my gpus,
```
2019-08-16 23:34:38.383605: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: Tesla V100-DGXS-32GB major: 7 minor: 0 memoryClockRate(GHz): 1.53
pciBusID: 0000:08:00.0
2019-08-16 23:34:38.385992: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 1 with properties:
name: Tesla V100-DGXS-32GB major: 7 minor: 0 memoryClockRate(GHz): 1.53
pciBusID: 0000:0e:00.0
2019-08-16 23:34:38.388424: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 2 with properties:
name: Tesla V100-DGXS-32GB major: 7 minor: 0 memoryClockRate(GHz): 1.53
pciBusID: 0000:0f:00.0
```
This is my training command,
```
python3 train_gpu.py --corpus_info_path=save-location2/corpus_info.json --record_info_dir=save-location2/tfrecords --train_batch_size=30 --seq_len=512 --reuse_len=256 --mem_len=384 --perm_size=256 --n_layer=12 --d_model=512 --d_embed=512 --n_head=16 --d_head=64 --d_inner=2048 --untie_r=True --mask_alpha=6 --mask_beta=1 --num_predict=85 --model_dir=output-model --uncased=False --num_core_per_host=3 --train_steps=300000 --iterations=10 --learning_rate=1e-4 --save_steps=1000
```
The training session is fine, but my instance's RAM keeps increasing; luckily my OS auto-killed the programs using a lot of RAM, or else my instance would have hung really badly. Tensorflow version 1.14.
This is my RAM size,
```
GiB Mem : 251.825 total
```
And last time I checked, it had used up to 30% and kept increasing! Can anyone help?
| closed | 2019-08-16T15:39:15Z | 2020-09-28T15:28:26Z | https://github.com/zihangdai/xlnet/issues/214 | [] | huseinzol05 | 4 |
keras-team/autokeras | tensorflow | 1,655 | Issues using autokeras | When I try to follow [this example](https://blogs.rstudio.com/ai/posts/2019-04-16-autokeras/) and fit an image classification model, I keep receiving the following error. I have not been able to find a resolution and I would be grateful for your insight.
Error in py_call_impl(callable, dots$args, dots$keywords) :
AttributeError: 'Graph' object has no attribute 'hypermodel'
Detailed traceback:
File "C:\Users\avren\AppData\Local\r-miniconda\envs\r-reticulate\lib\site-packages\autokeras\tasks\image.py", line 89, in __init__
super().__init__(
File "C:\Users\avren\AppData\Local\r-miniconda\envs\r-reticulate\lib\site-packages\autokeras\tasks\image.py", line 35, in __init__
super().__init__(inputs=input_module.ImageInput(), outputs=outputs, **kwargs)
File "C:\Users\avren\AppData\Local\r-miniconda\envs\r-reticulate\lib\site-packages\autokeras\auto_model.py", line 142, in __init__
self.tuner = tuner(
File "C:\Users\avren\AppData\Local\r-miniconda\envs\r-reticulate\lib\site-packages\autokeras\tuners\task_specific.py", line 157, in __init__
super().__init__(initial_hps=IMAGE_CLASSIFIER, **kwargs)
File "C:\Users\avren\AppData\Local\r-miniconda\envs\r-reticulate\lib\site-packages\autokeras\tuners\greedy.py", line 230, in __init__
super().__init__(oracle=oracle, hyperm | open | 2021-12-17T08:58:09Z | 2021-12-18T08:17:49Z | https://github.com/keras-team/autokeras/issues/1655 | [] | avrenli2 | 1 |
awesto/django-shop | django | 508 | Button labels in add to cart widget | We have three buttons there at the moment:
* Show Cart
* Continue Shopping
* Cancel
Instead of "Show Cart", I suggest something like "Go to checkout".
"Cancel" is misleading. It just takes the user back to the product page. I'd expect it to cancel the add-to-cart operation, but that is not what it does. Do we need this button at all?
"Continue Shopping" redirects to the product's category. It seems less surprising to me to return the user to the product page instead. | open | 2017-03-17T13:22:40Z | 2021-09-21T05:49:38Z | https://github.com/awesto/django-shop/issues/508 | [
"easy-picking"
] | rfleschenberg | 1 |
amidaware/tacticalrmm | django | 1,203 | Migrate to PostgreSQL for MeshCentral | **Is your feature request related to a problem? Please describe.**
I would like to use PostgreSQL for all my applications, even MeshCentral.
**Describe the solution you'd like**
TRMM currently uses MongoDB for the MeshCentral database. It would be nice if TRMM supported/used PostgreSQL for MeshCentral.
**Describe alternatives you've considered**
As stated in a [Discord conversation](https://discord.com/channels/736478043522072608/815123191458299934/995074788366221342):
> ya, systemd will take care of restarting all the services if they crash (which happens more than you think but you never notice) but if we switch mesh to postgres then you will definitely start noticing and have to constantly restart meshcentral manually
MeshCentral does not handle database errors gracefully ([issue #3645](https://github.com/Ylianst/MeshCentral/issues/3645)). If PostgreSQL dies abruptly, `systemd` will restart the `postgresql.service` but `meshcentral.service` will not reconnect.
**Additional context**
Add any other context or screenshots about the feature request here.
STR:
1. Get the postgres PID: `ps -ef | grep postgres` You want the one with the full path to postgres and with PPID of 1. In this case, it's 233.
```bash
$ ps -ef | grep postgres
postgres 233 1 0 Jul05 ? 00:02:02 /usr/lib/postgresql/13/bin/postgres -D /var/lib/postgresql/13/main -c config_file=/etc/postgresql/13/main/postgresql.conf
postgres 242 233 0 Jul05 ? 00:00:20 postgres: 13/main: checkpointer
postgres 243 233 0 Jul05 ? 00:00:05 postgres: 13/main: background writer
postgres 244 233 0 Jul05 ? 00:01:24 postgres: 13/main: walwriter
postgres 245 233 0 Jul05 ? 00:00:13 postgres: 13/main: autovacuum launcher
postgres 246 233 0 Jul05 ? 00:01:08 postgres: 13/main: stats collector
postgres 247 233 0 Jul05 ? 00:00:00 postgres: 13/main: logical replication launcher
postgres 316 233 0 Jul05 ? 00:00:02 postgres: 13/main: hrvnqxbj meshcentral 127.0.0.1(41714) idle
postgres 25171 233 0 11:57 ? 00:00:00 postgres: 13/main: efftqfln tacticalrmm 127.0.0.1(36172) idle
```
2. Cause PostgreSQL to core dump with SIGSEGV: `kill -s SIGSEGV 233`
3. `systemd` shows Postgres as still running with a different PID than the one killed. In this case, it's 274 while we killed PID 233 above.
```bash
$ systemctl status --full postgresql.service
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Tue 2022-07-05 07:58:15 EDT; 3 days ago
Process: 274 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 274 (code=exited, status=0/SUCCESS)
Jul 05 07:58:15 ns-v18-tactical systemd[1]: Starting PostgreSQL RDBMS...
Jul 05 07:58:15 ns-v18-tactical systemd[1]: Finished PostgreSQL RDBMS.
```
4. `psql` does not connect.
```bash
$ psql
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: Connection refused
Is the server running locally and accepting connections on that socket?
```
5. MeshCentral does not connect.
```bash
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: Starting meshcentral syslog.
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: Starting meshcentral-json JSON syslog.
Jul 08 17:41:15 ns-v18-tactical meshcentral-auth[31142]: MeshCentral v1.0.22 Server Start
Jul 08 17:41:15 ns-v18-tactical meshcentral-auth[31142]: MeshCentral v1.0.22 Server Start
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: Starting meshcentral-auth auth syslog.
Jul 08 17:41:15 ns-v18-tactical meshcentral-auth[31142]: MeshCentral v1.0.22 Server Start
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: ERR: node:internal/process/promises:279
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: triggerUncaughtException(err, true /* fromPromise */);
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: ^
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: Error: connect ECONNREFUSED 127.0.0.1:5432
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16) {
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: errno: -111,
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: code: 'ECONNREFUSED',
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: syscall: 'connect',
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: address: '127.0.0.1',
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: port: 5432
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: }
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: Error: Command failed: /usr/bin/node /meshcentral/node_modules/meshcentral --launch 29803
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: node:internal/process/promises:279
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: triggerUncaughtException(err, true /* fromPromise */);
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: ^
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: Error: connect ECONNREFUSED 127.0.0.1:5432
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16) {
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: errno: -111,
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: code: 'ECONNREFUSED',
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: syscall: 'connect',
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: address: '127.0.0.1',
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: port: 5432
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: }
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: at ChildProcess.exithandler (node:child_process:399:12)
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: at ChildProcess.emit (node:events:538:35)
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: at maybeClose (node:internal/child_process:1092:16)
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: at Process.ChildProcess._handle.onexit (node:internal/child_process:302:5) {
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: killed: false,
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: code: 1,
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: signal: null,
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: cmd: '/usr/bin/node /meshcentral/node_modules/meshcentral --launch 29803'
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: }
Jul 08 17:41:15 ns-v18-tactical meshcentral[29803]: ERROR: MeshCentral failed with critical error, check mesherrors.txt. Restarting in 5 seconds...
Jul 08 17:41:17 ns-v18-tactical nats-api[258]: time="2022-07-08T17:41:17-04:00" level=error msg="dial tcp 127.0.0.1:5432: connect: connection refused"
Jul 08 17:41:18 ns-v18-tactical nats-api[258]: time="2022-07-08T17:41:18-04:00" level=error msg="dial tcp 127.0.0.1:5432: connect: connection refused"
```
## Note 1
It should be noted the PostgreSQL systemd service file uses `/bin/true` for `ExecStart` and `ExecReload`. This could be why systemd does not recognize PostgreSQL as being killed.
`/lib/systemd/system/postgresql.service`
```text
# systemd service for managing all PostgreSQL clusters on the system. This
# service is actually a systemd target, but we are using a service since
# targets cannot be reloaded.
[Unit]
Description=PostgreSQL RDBMS
[Service]
Type=oneshot
ExecStart=/bin/true
ExecReload=/bin/true
RemainAfterExit=on
[Install]
WantedBy=multi-user.target
```
Restarting the `postgresql.service` manually fixes the `systemd` issue and allows MeshCentral to reconnect.
## Note 2
The problem in [issue 3645](https://github.com/Ylianst/MeshCentral/issues/3645) _might_ be fixed. After killing PostgreSQL, restarting the `postgres.service` allowed MeshCentral to reconnect to the database. Issue 3645 was that MeshCentral did **not** retry the connection. MeshCentral version 1.0.22 **did** retry the connection. | closed | 2022-07-08T21:55:05Z | 2023-07-05T18:44:56Z | https://github.com/amidaware/tacticalrmm/issues/1203 | [
"enhancement"
] | NiceGuyIT | 7 |
piskvorky/gensim | nlp | 3,211 | NameError: name 'Similarity' is not defined | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
What are you trying to achieve? What is the expected result? What are you seeing instead?
1) I'm trying to use Similarity following the official documentation
2) but it shows the NameError: name 'Similarity' is not defined
3) I used the latest version of gensim, 4.0.1
<img width="1114" alt="WeChat6b985efad4404452e7652c9a65ae08cd" src="https://user-images.githubusercontent.com/54015474/128584699-c3a34e18-491b-47cd-b0af-8b7210b0a95a.png">
#### Steps/code/corpus to reproduce
```python
from gensim.test.utils import common_corpus, common_dictionary, get_tmpfile
index_tmpfile = get_tmpfile("index")
query = [(1, 2), (6, 1), (7, 2)]
index = Similarity(index_tmpfile, common_corpus, num_features=len(common_dictionary))  # build the index
similarities = index[query]
```
Include full tracebacks, logs and datasets if necessary. Please keep the examples minimal ("minimal reproducible example").
If your problem is with a specific Gensim model (word2vec, lsimodel, doc2vec, fasttext, ldamodel etc), include the following:
```python
print(my_model.lifecycle_events)
```
#### Versions
Please provide the output of:
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-17-027d9974da56> in <module>
4 query = [(1, 2), (6, 1), (7, 2)]
5
----> 6 index = Similarity(index_tmpfile, common_corpus, num_features=len(common_dictionary)) # build the index
7 similarities = index[query]
NameError: name 'Similarity' is not defined
```
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import struct; print("Bits", 8 * struct.calcsize("P"))
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
macOS-10.16-x86_64-i386-64bit
Python 3.8.8 (default, Apr 13 2021, 12:59:45)
[Clang 10.0.0 ]
Bits 64
NumPy 1.21.1
SciPy 1.7.0
gensim 4.0.1
FAST_VERSION 1
| closed | 2021-08-07T02:11:04Z | 2021-08-07T20:08:35Z | https://github.com/piskvorky/gensim/issues/3211 | [] | keyuchen21 | 3 |
ivy-llc/ivy | pytorch | 28,333 | Fix Frontend Failing Test: jax - math.tensorflow.math.is_strictly_increasing | To-do List: https://github.com/unifyai/ivy/issues/27496 | closed | 2024-02-19T17:26:39Z | 2024-02-20T09:26:01Z | https://github.com/ivy-llc/ivy/issues/28333 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
OpenVisualCloud/CDN-Transcode-Sample | dash | 104 | [Bug]GUI Tool Unexpected quit | The GUI tool will unexpectedly quit when it is used.
| closed | 2019-11-26T02:08:23Z | 2019-11-26T02:24:17Z | https://github.com/OpenVisualCloud/CDN-Transcode-Sample/issues/104 | [] | chuan12x | 0 |
ranaroussi/yfinance | pandas | 1,945 | Pip install fails 3.13.0b1 | I tried the Python Beta Windows 64 release 3.13.0b1 and Yfinance failed to install.
It was the only Pip out of 15 or so that failed. I dont know if this is Yfinance issue or not.
I went back to 3.12.xxx and pip started working again. | closed | 2024-05-20T11:58:49Z | 2025-02-16T18:40:15Z | https://github.com/ranaroussi/yfinance/issues/1945 | [] | hardworkindog | 0 |
rthalley/dnspython | asyncio | 271 | Migration to pycryptodome | `pycryptodome` is a replacement for the seemingly-abandoned `pycrypto`, which hasn't seen a git update since 2014. Gentoo is in the process of migrating all packages depending on `pycrypto` to `pycryptodome` (see [Gentoo bug #611568](https://bugs.gentoo.org/show_bug.cgi?id=611568)). It seems `dnspython` needs a human touch to do this however, since some of the APIs it uses that were available in `pycrypto` are now deprecated in `pycryptodome` (see [Gentoo bug #611590](https://bugs.gentoo.org/show_bug.cgi?id=611590)). | closed | 2017-07-21T01:50:08Z | 2018-10-11T01:49:14Z | https://github.com/rthalley/dnspython/issues/271 | [] | maxcrees | 14 |
noirbizarre/flask-restplus | api | 196 | Swagger parameters in path vs. method | Placing the @api.param() decorator after a route definition generates a Swagger definition that works perfectly, but the "parameters" JSON is placed in the "paths" section. I need to make sure that the "parameters" actually shows up under each of the method definitions, as Amazon API Gateway needs this. Strictly speaking it is not required by the Swagger specification, but control over where the definition goes is important.
Moving the @api.param() so it decorates the method function does not work as it will cause duplicate Swagger parameter definitions.
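A simplified sketch of the two placements I mean (illustrative only, not my real code):
```python
from flask import Flask
from flask_restplus import Api, Resource

app = Flask(__name__)
api = Api(app)

@api.route('/items/<int:item_id>')
@api.param('item_id', 'The item identifier')   # this lands under "paths" in the generated Swagger
class Item(Resource):
    # Moving @api.param down here instead is what produces the duplicate parameter definitions
    def get(self, item_id):
        return {'id': item_id}
```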
| closed | 2016-08-29T15:12:32Z | 2020-01-02T21:13:18Z | https://github.com/noirbizarre/flask-restplus/issues/196 | [] | dmulter | 2 |
FlareSolverr/FlareSolverr | api | 267 | FlareSolverr aborts during build | I'm attempting to install FlareSolver to work with Jackett and it aborts before it stats building. I'm assuming from the below it's a permissions issue. Any suggestion on how to correct it? Thanks
Manjaro KDE 21.2rc1 Arch based
Latest Jacket-Develop version
~~~
-> Found flaresolverr.sysusers
-> Found flaresolverr.tmpfiles
==> Validating source files with sha512sums...
flaresolverr-v2.0.2-linux-x64.zip ... Passed
flaresolverr.sysusers ... Passed
flaresolverr.tmpfiles ... Passed
==> Removing existing $srcdir/ directory...
==> Extracting sources...
-> Extracting flaresolverr-v2.0.2-linux-x64.zip with bsdtar
==> Entering fakeroot environment...
==> Starting package()...
chown: invalid user: ‘flaresolverr:flaresolverr’
==> ERROR: A failure occurred in package().
Aborting...
~~~ | closed | 2021-12-23T13:50:10Z | 2022-01-23T19:49:16Z | https://github.com/FlareSolverr/FlareSolverr/issues/267 | [
"fix available"
] | CummingCowGirl | 2 |
wkentaro/labelme | computer-vision | 786 | [Copy annotation] sometimes it is difficult to annotate same objects on different images, it will be really helpful if we can get a feature that can copy annotations from one image to another, if already exists, please explain. | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2020-10-08T12:33:56Z | 2022-06-25T04:54:15Z | https://github.com/wkentaro/labelme/issues/786 | [] | vishalmandley | 0 |
voxel51/fiftyone | computer-vision | 4,762 | [BUG] Unable to get `Frames` info after serializing/de-serializing the video samples | ### Describe the problem
I am using the Apache Beam framework to process my dataset. This requires serializing samples before creating an iterator for the Beam pipeline (as is suggested [here](https://github.com/voxel51/fiftyone/blob/develop/fiftyone/utils/beam.py#L118-L120)). My samples' media type is video and I need to extract annotation info from each sample's `frames`. But after I serialize the samples and de-serialize them back, `sample.frames` becomes empty. Here is the sample code I used to test this issue:
```
serialized_sample = original_sample.to_mongo_dict()
deserialized_sample = fo.Sample.from_dict(serialized_sample)
old_frames = original_sample.frames
new_frames = deserialized_sample.frames
print(f" Frame counts before serialization: {sum(1 for _, _ in old_frames.items())}")
print(f" Frame counts after serialization: {sum(1 for _, _ in new_frames.items())}")
```
The output of the sample code is
```
Frame counts before serialization: 100
Frame counts after serialization: 0
```
I can confirm that all the rest of the samples' fields are deserialized correctly; only the `frames` field is totally lost. I'm not sure if this is a bug or if I didn't use the serialization correctly. Any insight would be appreciated.
### System information
- **OS Platform and Distribution** (e.g., Linux Ubuntu 22.04):
- **Python version** (`python --version`): 3.10.8
- **FiftyOne version** (`fiftyone --version`): 0.25.0
- **FiftyOne installed from** (pip or source): pip
### Other info/logs
Include any logs or source code that would be helpful to diagnose the problem.
If including tracebacks, please include the full traceback. Large logs and
files should be attached. Please do not use screenshots for sharing text. Code
snippets should be used instead when providing tracebacks, logs, etc.
### Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another
member of your organization be willing to contribute a fix for this bug to the
FiftyOne codebase?
- [ ] Yes. I can contribute a fix for this bug independently
- [x] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community
- [ ] No. I cannot contribute a bug fix at this time
| closed | 2024-09-02T18:29:44Z | 2024-09-03T18:13:45Z | https://github.com/voxel51/fiftyone/issues/4762 | [
"bug"
] | CompilerBian | 1 |
ultralytics/yolov5 | machine-learning | 13,315 | RuntimeError: The size of tensor a (6) must match the size of tensor b (7) at non-singleton dimension 2 | ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I installed CUDA 12.6, cuDNN, and YOLOv5 to train a model for traffic light detection. However, I'm encountering an error while running tests on a subset of my dataset. Due to this error, I'm unable to resolve the issue. Please, I need your help to find a solution.
![Uploading 스크린샷, 2024-09-15 23-57-43.png…]()
### Additional
_No response_ | open | 2024-09-15T06:01:57Z | 2024-11-09T07:02:27Z | https://github.com/ultralytics/yolov5/issues/13315 | [
"question"
] | YounghoJo01 | 2 |
JaidedAI/EasyOCR | pytorch | 630 | readtext doesn't work with latest opencv version | Windows 10 64bit
easyocr 1.4.1
opencv-python 4.5.5.62
## Problem
When using reader.readtext I was getting an error message from opencv
```
\lib\site-packages\easyocr\craft_utils.py", line 31, in getDetBoxes_core
nLabels, labels, stats, centroids = cv2.connectedComponentsWithStats(text_score_comb.astype(np.uint8), connectivity=4)
cv2.error: Unknown C++ exception from OpenCV code
```
## Solution
I spent most of my day trying to fix whatever was wrong with connectedComponentsWithStats. The issue didn't go away no matter what I did, until I switched from opencv-python version 4.5.5.62 to version 4.5.4.60.
I know this is an opencv problem, so you can't do much about it, but you should probably pin the opencv version until they fix it.
LAION-AI/Open-Assistant | python | 3,021 | Summarize Chat as Chat title | Currently the chat name is just the initial prompt, but we should do something similar to ChatGPT and ask the model to summarize the conversation and have that as the chat title automatically.
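A rough sketch of the flow I'm picturing (`summarize` below is a stand-in for whatever small-model inference call we end up using, not an existing function):
```python
def chat_title(messages, summarize, max_len=60):
    """Ask a (small) model for a short chat title; `summarize` is a placeholder for the real call."""
    prompt = "Summarize this conversation as a title of at most six words:\n" + "\n".join(
        f"{m['role']}: {m['text']}" for m in messages[:4]  # the first few turns are usually enough
    )
    return summarize(prompt).strip().strip('"')[:max_len]
```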
Maybe only use one of the smallest models so inference is extremely cheap and smaller models perform pretty well at summarization. | closed | 2023-05-03T05:18:45Z | 2023-05-03T05:28:11Z | https://github.com/LAION-AI/Open-Assistant/issues/3021 | [] | z11h | 1 |
axnsan12/drf-yasg | rest-api | 285 | flex is not maintained anymore | https://github.com/pipermerriam/flex/commit/b94c2d57d0ee85eb00a9b4d341fd7fbf6b01c1de
It should probably get removed from the README, `extras_require[validation]` etc? | closed | 2019-01-08T09:26:10Z | 2019-02-12T21:46:46Z | https://github.com/axnsan12/drf-yasg/issues/285 | [] | blueyed | 4 |
hack4impact/flask-base | sqlalchemy | 99 | same setUp function code for testing should be extracted | https://codeclimate.com/github/hack4impact/flask-base/tests/test_basics.py
<img width="437" alt="screen shot 2017-01-28 at 4 27 14 pm" src="https://cloud.githubusercontent.com/assets/4250521/22399960/a89ea1dc-e576-11e6-8449-d7fcaf901df1.png">
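Roughly what I have in mind is a shared base test case that the individual test modules inherit from (a sketch only, assuming the usual `create_app('testing')` and `db` objects the tests already use):
```python
import unittest
from app import create_app, db

class BaseTestCase(unittest.TestCase):
    """Shared setUp/tearDown so individual test modules don't repeat the same code."""

    def setUp(self):
        self.app = create_app('testing')
        self.app_context = self.app.app_context()
        self.app_context.push()
        db.create_all()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
        self.app_context.pop()
```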
| closed | 2017-01-28T21:26:19Z | 2017-02-28T02:35:24Z | https://github.com/hack4impact/flask-base/issues/99 | [] | yoninachmany | 0 |
flasgger/flasgger | rest-api | 634 | persistAuthorization implemented? | Is this setting wired up/passed through to whatever swagger expects? I tried below in three different places as I wasn't sure which was in play. None of them worked individually. I have them in there concurrently just to show the levels where I tried.
```
app.config['SWAGGER'] = {
'title': 'test API',
'uiversion': 3,
'ui_params': {
'apisSorter': 'alpha',
'operationsSorter': 'alpha',
'tagsSorter': 'alpha',
'persistAuthorization': True
},
'swagger_ui_config': {
'persistAuthorization': True
},
'version': '1.0',
'description': 'Serves API requests for the liongpt system.',
'persistAuthorization': True, #Keeps the auth information between refreshes
'securityDefinitions': {
'BearerAuth': {
"type": "apiKey",
"scheme": "bearer",
"name": "Authorization",
"in": "header",
},
},
'security': [{'BearerAuth': []}], # Default security scheme
}
``` | open | 2025-03-12T14:43:41Z | 2025-03-12T14:43:41Z | https://github.com/flasgger/flasgger/issues/634 | [] | agibson-fl | 0 |
joke2k/django-environ | django | 382 | Detect mismatched default data type | The following does not raise an error:
```py
SECRET_KEY = env.bool('SECRET_KEY', 'some random key')
```
`SECRET_KEY` will be the default value of type `str`.
When someone tries to override the default, it will break. | open | 2022-04-19T00:20:03Z | 2022-04-19T00:41:47Z | https://github.com/joke2k/django-environ/issues/382 | [] | jayvdb | 1 |
scrapy/scrapy | web-scraping | 5,923 | ValueError: Cannot use xpath on a Selector of type 'json' | <!--
Thanks for taking an interest in Scrapy!
If you have a question that starts with "How to...", please see the Scrapy Community page: https://scrapy.org/community/.
The GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.
Keep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md
The following is a suggested template to structure your issue, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#reporting-bugs
-->
### Description
.local/lib/python3.8/site-packages/parsel/selector.py", line 621, in xpath
Scrapy version 2.8.0
[Description of the issue]
### Steps to Reproduce
1. The URL request response is JSON.
2. Sometimes we used to get an HTML response.
3. We have written XPath expressions for the html_response, but now I'm getting the ValueError: Cannot use xpath on a Selector of type 'json'
4. Earlier we were able to get the response; now it is throwing an exception
**Expected behavior:** The error should not occur
**Actual behavior:** json is removed from the self.type under selector, e.g.:
```python
if self.type not in ("html", "xml", "text", "json"):
    raise ValueError(
        f"Cannot use xpath on a Selector of type {self.type!r}"
    )
if self.type in ("html", "xml"):
    try:
        xpathev = self.root.xpath
    except AttributeError:
        return typing.cast(
            SelectorList[_SelectorType], self.selectorlist_cls([])
        )
else:
    try:
        xpathev = self._get_root(self._text or "", type="html").xpath
    except AttributeError:
        return typing.cast(
            SelectorList[_SelectorType], self.selectorlist_cls([])
        )
```
**Reproduces how often:** 100%
### Versions
Please paste here the output of executing `scrapy version --verbose` in the command line.
### Additional context
Any additional information, configuration, data or output from commands that might be necessary to reproduce or understand the issue. Please try not to include screenshots of code or the command line, paste the contents as text instead. You can use [GitHub Flavored Markdown](https://help.github.com/en/articles/creating-and-highlighting-code-blocks) to make the text look better.
| closed | 2023-05-05T12:13:51Z | 2025-02-03T16:03:41Z | https://github.com/scrapy/scrapy/issues/5923 | [] | damodharheadrun | 8 |
mherrmann/helium | web-scraping | 27 | Is it possible to get the pixel color at coordinates (x;y)? | Hi,
I need to know the color of a given pixel within a canvas with Helium.
How to do it?
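The workaround I'm considering in the meantime is taking a screenshot of the canvas element through the underlying Selenium element and reading the pixel with Pillow. A sketch (pixel coordinates are relative to the canvas, and I'm assuming Helium's `S(...).web_element` accessor):
```python
import io
from PIL import Image
from helium import S

def pixel_color(canvas_selector, x, y):
    element = S(canvas_selector).web_element           # underlying Selenium WebElement
    png = element.screenshot_as_png                    # PNG bytes of just that element
    image = Image.open(io.BytesIO(png)).convert("RGB")
    return image.getpixel((x, y))                      # (r, g, b) tuple
```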
Thank you in advance!
Best regards, | closed | 2020-05-13T10:13:30Z | 2020-05-13T11:12:11Z | https://github.com/mherrmann/helium/issues/27 | [] | Jars-of-jam-Scheduler | 2 |
dropbox/PyHive | sqlalchemy | 439 | precision and scale not supported for DECIMAL type | The [current implementation of `DECIMAL` (/`NUMERIC`)](https://github.com/dropbox/PyHive/blob/master/pyhive/sqlalchemy_hive.py#L176-L178) does not support the precision and scale attribute which is supported [since hive 0.13](https://cwiki.apache.org/confluence/display/hive/languagemanual+types#LanguageManualTypes-NumericTypes).
possible implementation solution:
``` python
class HiveTypeCompiler(compiler.GenericTypeCompiler):
# [..]
def visit_NUMERIC(self, type_):
precision = getattr(type_, "precision", None)
if precision is None:
return "DECIMAL"
else:
scale = getattr(type_, "scale", None)
if scale is None:
return "DECIMAL(%(precision)s)" % {"precision": precision}
else:
return "DECIMAL(%(precision)s, %(scale)s)" % {"precision": precision, "scale": scale}
```
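With that change, a column declared from SQLAlchemy would render with its precision and scale, e.g. (illustrative):
```python
import sqlalchemy as sa

table = sa.Table("prices", sa.MetaData(), sa.Column("amount", sa.Numeric(10, 2)))
# CREATE TABLE would then emit `amount DECIMAL(10, 2)` instead of plain `DECIMAL`
```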
| open | 2022-12-01T09:53:24Z | 2022-12-01T09:53:24Z | https://github.com/dropbox/PyHive/issues/439 | [] | leo-schick | 0 |
ansible/awx | django | 15,361 | Command run_callback_receiver Gives error: MetricsServer failed to start for service 'callback_receiver | OSError: [Errno 98] Address already in use | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
While running the run_callback_receiver command, I get an error mentioning that the MetricsServer cannot be started because its address is already in use.
awx-manage run_callback_receiver
2024-07-11 17:52:46,928 ERROR [-] awx.main.analytics **MetricsServer failed to start for service 'callback_receiver.**
Traceback (most recent call last):
File "/usr/bin/awx-manage", line 8, in <module>
sys.exit(manage())
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/__init__.py", line 175, in manage
execute_from_command_line(sys.argv)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
utility.execute()
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/core/management/__init__.py", line 436, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/core/management/base.py", line 412, in run_from_argv
self.execute(*args, **cmd_options)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/core/management/base.py", line 458, in execute
output = self.handle(*args, **options)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/management/commands/run_callback_receiver.py", line 30, in handle
CallbackReceiverMetricsServer().start()
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/analytics/subsystem_metrics.py", line 39, in start
prometheus_client.start_http_server(self.port(), addr='localhost', registry=self._registry)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/prometheus_client/exposition.py", line 171, in start_wsgi_server
httpd = make_server(addr, port, app, TmpServer, handler_class=_SilentHandler)
File "/usr/lib64/python3.9/wsgiref/simple_server.py", line 154, in make_server
server = server_class((host, port), handler_class)
File "/usr/lib64/python3.9/socketserver.py", line 452, in __init__
self.server_bind()
File "/usr/lib64/python3.9/wsgiref/simple_server.py", line 50, in server_bind
HTTPServer.server_bind(self)
File "/usr/lib64/python3.9/http/server.py", line 137, in server_bind
socketserver.TCPServer.server_bind(self)
File "/usr/lib64/python3.9/socketserver.py", line 466, in server_bind
self.socket.bind(self.server_address)
**OSError: [Errno 98] Address already in use**
### AWX version
23.8.1
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [X] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
In AWX task pod, run awx-manage run_callback_receiver
### Expected results
The callback receiver to be restarted
### Actual results
Receive the error
MetricsServer failed to start for service 'callback_receiver
OSError: [Errno 98] Address already in use
### Additional information
_No response_ | closed | 2024-07-15T07:41:05Z | 2024-07-24T17:24:38Z | https://github.com/ansible/awx/issues/15361 | [
"type:bug",
"component:api",
"needs_triage",
"community"
] | debeste123 | 2 |
pywinauto/pywinauto | automation | 520 | How to click on Objects that are not listed in Control Identifier? | Hi there, I am new with Pywinauto. I am trying to do some things with my Application called iCUE.
I have listed all objects in this program by using print_control_identifier(). The result:
```
Control Identifiers:
Dialog - 'iCUE' (L0, T0, R1280, B680)
['Dialog', 'iCUEDialog', 'iCUE', 'Dialog0', 'Dialog1']
child_window(title="iCUE", control_type="Window")
|
| Dialog - '' (L0, T0, R1280, B680)
| ['Dialog2', '', '0', '1', '00', '01']
| |
| | Edit - '' (L26, T156, R276, B184)
| | ['2', 'Edit', 'Edit0', 'Edit1']
| | |
| | | Static - 'Type profile name' (L32, T158, R270, B182)
| | | ['Type profile nameStatic', 'Static', 'Type profile name', 'Static0', 'Static1']
| | | child_window(title="Type profile name", control_type="Text")
| |
| | Button - '' (L0, T0, R0, B0)
| | ['3', 'Button', 'Button0', 'Button1']
| |
| | Button - '' (L252, T162, R268, B177)
| | ['4', 'Button2']
| |
| | ScrollBar - '' (L0, T0, R0, B0)
| | ['5', 'ScrollBar', 'ScrollBar0', 'ScrollBar1']
| |
| | ScrollBar - '' (L0, T0, R0, B0)
| | ['5', 'ScrollBar', 'ScrollBar0', 'ScrollBar1']
| |
| | Edit - '' (L0, T0, R0, B0)
| | ['7', 'Edit2']
| |
| | Edit - '' (L0, T0, R0, B0)
| | ['7', 'Edit2']
| |
| | Edit - '' (L0, T0, R0, B0)
| | ['7', 'Edit2']
| |
| | StatusBar - '' (L0, T0, R0, B0)
| | ['StatusBar', '10', 'StatusBar0', 'StatusBar1']
| |
| | Button - '' (L244, T50, R260, B65)
| | ['11', 'Button3']
.....
......
```
The problem is that some objects, such as SETTINGS, HOME, DASHBOARD, INSTANT LIGHTING and "Default" in my screenshot, are not included in these Control Identifiers.
My question is: how do I control (click on) these objects? Thank you!
My screenshot: https://i.imgur.com/VQUxHBL.png
Inspect screenshot, mouse position is on SETTINGS text: https://i.imgur.com/BtUOxV8.png
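In the meantime, two fallbacks I plan to try are (a) connecting with the "uia" backend, which usually exposes more elements, and (b) clicking by window-relative coordinates taken from the screenshot. A rough sketch (the window title and coordinates are just examples):
```python
from pywinauto import Application

app = Application(backend="uia").connect(title_re=".*iCUE.*")
window = app.window(title_re=".*iCUE.*")
window.print_control_identifiers()      # the UIA tree often shows elements like SETTINGS/HOME
window.click_input(coords=(200, 40))    # last resort: click at coordinates relative to the window
```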
| closed | 2018-07-12T08:39:54Z | 2018-07-22T20:51:40Z | https://github.com/pywinauto/pywinauto/issues/520 | [
"question"
] | pvase666 | 1 |
mckinsey/vizro | pydantic | 383 | CSS of AG Grid `floatingFilter` option buggy | ### Description
Floating filter not looking ideal. This sort of buggy behaviour may arise with a few other AG Grid options that we have not explicitly tested. In an ideal world we would build our own custom theme that takes care of all potential options automatically.

### Expected behavior
Input field not cutting over the continuous line
### Which package?
vizro
### Package version
0.1.13
### Python version
3
### OS
Mac
### How to Reproduce
```python
import vizro.models as vm
import vizro.plotly.express as px
from vizro import Vizro
from vizro.tables import dash_ag_grid
df = px.data.gapminder(datetimes=True)
page = vm.Page(
title="Enhanced AG Grid",
components=[
vm.AgGrid(
title="Dash AG Grid",
figure=dash_ag_grid(
data_frame=df,
columnDefs=[
{"field" : "country", 'floatingFilter': True},
{"field" : "continent"},
{"field" : "year"},
{"field" : "lifeExp", "cellDataType": "numeric"},
{"field" : "pop", "cellDataType": "numeric"},
{"field" : "gdpPercap", "cellDataType": "euro"},]
)
),
],
controls=[vm.Filter(column="continent")],
)
dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run(port = 8051)
```
### Output
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-03-25T09:53:40Z | 2024-03-26T12:51:20Z | https://github.com/mckinsey/vizro/issues/383 | [
"Bug Report :bug:"
] | maxschulz-COL | 1 |
Sanster/IOPaint | pytorch | 9 | Error when editing images above 2k | It seems to error out on larger images when editing the original size.
3840x2880
3648x5472
2299x3065
I've attached the error outputs:
[err2.txt](https://github.com/Sanster/lama-cleaner/files/7885971/err2.txt)
[err1.txt](https://github.com/Sanster/lama-cleaner/files/7885972/err1.txt)
I'm on CUDA 11.1 (I've tried 11.3/11.5 too) and have also tried torch 1.9-1.10, with the same errors occurring on my
RTX 3080 GPU. I'm using the Docker setup but with the torch replaced with the cuda 11.1+ version. Selecting 2k or 1080p usually results in it working properly (I believe sometimes 2k will throw an error, but then will work) | closed | 2022-01-18T03:34:18Z | 2022-03-23T02:18:39Z | https://github.com/Sanster/IOPaint/issues/9 | [] | amooose | 8 |
modelscope/modelscope | nlp | 528 | @grindr | **Describe the feature**
Features description
**Motivation**
A clear and concise description of the motivation of the feature. Ex1. It is inconvenient when [....]. Ex2. There is a recent paper [....], which is very helpful for [....].
**Related resources**
If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful.
**Additional context**
Add any other context or screenshots about the feature request here. If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated. | closed | 2023-09-07T19:09:41Z | 2023-09-14T06:07:54Z | https://github.com/modelscope/modelscope/issues/528 | [] | Jucasan22 | 0 |
seleniumbase/SeleniumBase | web-scraping | 2,801 | UC Mode no longer working for Cloudflare turnstiles? | Is this code still supposed to be working?
```
from seleniumbase import SB
with SB(uc=True, test=True) as sb:
url = "https://gitlab.com/users/sign_in"
sb.driver.uc_open_with_reconnect(url, 3)
if not sb.is_text_visible("Username", '[for="user_login"]'):
sb.driver.uc_open_with_reconnect(url, 4)
sb.assert_text("Username", '[for="user_login"]', timeout=3)
sb.assert_element('label[for="user_login"]')
sb.highlight('button:contains("Sign in")')
sb.highlight('h1:contains("GitLab.com")')
sb.post_message("SeleniumBase wasn't detected", duration=4)
``` | closed | 2024-05-24T01:29:34Z | 2024-05-24T02:29:46Z | https://github.com/seleniumbase/SeleniumBase/issues/2801 | [
"question",
"UC Mode / CDP Mode"
] | davidjivan | 3 |
twopirllc/pandas-ta | pandas | 671 | ADX has a different value from TradingView | **Which version are you running? The lastest version is on Github. Pip is for major releases.**
**0.3.14b0**
**Do you have _TA Lib_ also installed in your environment?**
**Yes, I have.**
**Describe the bug**
Some indicator provide some value not equal as TradingView
I know that this issue is not that new but I found the some of the solution
but still not sure if it's all. The cause of it is float number that is equal to each other.
**To Reproduce**
Example from adx.py
```python
pos = ((up > dn) & (up > 0)) * up
neg = ((dn > up) & (dn > 0)) * dn
```
turn to
```python
import numpy
pos = ((up > dn) & (up > 0) & (~numpy.isclose(up,dn))) * up
neg = ((dn > up) & (dn > 0) & (~numpy.isclose(up,dn))) * dn
```
Then things get fixed.
**Expected behavior**
Just wanted to share this in case you can update it in the next release.
I didn't check all of the indicators, but EMA also seems to have this problem; I haven't looked into its details yet.
If you've already seen it, feel free to close this issue yourself. I'm still not quite used to GitHub; so far I mostly read and take, but I'm working on becoming someone who can share too.
| open | 2023-03-28T05:54:52Z | 2025-03-08T17:08:18Z | https://github.com/twopirllc/pandas-ta/issues/671 | [
"duplicate"
] | NAYjY | 4 |
vaexio/vaex | data-science | 1,359 | [FEATURE-REQUEST] | Not sure if this feature request has already been submitted. Is an outer join feature for dataframes being worked on? If yes, when would it be available? | closed | 2021-05-19T17:51:40Z | 2021-05-19T20:28:08Z | https://github.com/vaexio/vaex/issues/1359 | [] | upenbendre | 1
satwikkansal/wtfpython | python | 295 | Add translation for POLISH | Expected time to finish: ~8 weeks. I'll start working on it from 11.06.2022.
Hey,
It would be amazing to translate your project into Polish. Very helpful stuff, and there will be a lot for me to learn during that work 🧑🏭
As I understood: `fork -> 3.0 -> translating -> inform when done`, right? | closed | 2022-06-10T19:33:51Z | 2025-02-24T08:35:59Z | https://github.com/satwikkansal/wtfpython/issues/295 | [] | achoruzy | 6 |
ultralytics/ultralytics | computer-vision | 19,644 | The True Negative(TN) value in the confusion matrix is absent. | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
First off, let me clarify my question; imagine 3 scenarios:
**scenario 1:**
My dataset is presented below:
- nc= ["aircraft carrier", "destroyer", "war ship"]
- train: 228 images
- valid: 62 images
"there is **no background** and all the images are labeled"
The following is my normalized confusion matrix pertaining to scenario 1:

--------------------------------------------------------------------------------------------------------------------------------------------------
**scenario 2:**
My dataset is presented below:
- nc= ["aircraft carrier", "destroyer", "war ship"]
- train: **228 images** and **30 backgrounds**
- valid: **62 images** and **15 backgrounds**
The following is my normalized confusion matrix pertaining to scenario 2:

--------------------------------------------------------------------------------------------------------------------------------------------------
**scenario 3:**
My dataset is presented below:
- nc= ["aircraft carrier", "destroyer", "war ship", "background"]
- train: 258 images (30 images is labeled as background)
- valid: 77 images (15 images is labeled as background)
**I created a new class as background and labeled the entire image (just background not other images) as background, with the corresponding text file value being in the format 3 1 1 1 1.**
The following is my normalized confusion matrix pertaining to scenario 3:

--------------------------------------------------------------------------------------------------------------------------------------------------
And this is my question:
under what circumstances can I observe a value in the TN cell (where the true label is background and the predicted label is also background) without including background as a separate class?
"I think it's just a placeholder that never gets a number (because background is not a class), and if I create a separate class for the background, I can observe two backgrounds: the first being just the placeholder and the second being the actual background class. Is that true?"

### Additional
_No response_ | open | 2025-03-11T16:03:44Z | 2025-03-11T16:16:06Z | https://github.com/ultralytics/ultralytics/issues/19644 | [
"question",
"detect"
] | shahin-ss51 | 2 |
matterport/Mask_RCNN | tensorflow | 2,468 | masks of my data | HI all,
I hope you are all coping well with these difficult times.
I have my data and their masks to feed Mask R-CNN.
I tried the nucleus code; its masks are separated into individual PNG files, but mine are not: they are all together in one PNG file per image, and they are all the same class.
I really need help modifying their code to suit my needs.
Please help!
```python
def load_mask(self, image_id):
    """Generate instance masks for an image.
    Returns:
        masks: A bool array of shape [height, width, instance count] with
            one mask per instance.
        class_ids: a 1D array of class IDs of the instance masks.
    """
    info = self.image_info[image_id]
    # Get mask directory from image path
    mask_dir = os.path.join(os.path.dirname(os.path.dirname(info['path'])), "masks")
    # Read mask files from .png image
    mask = []
    for f in next(os.walk(mask_dir))[2]:
        if f.endswith(".png"):
            m = skimage.io.imread(os.path.join(mask_dir, f), as_gray=True).astype(np.bool)
            mask.append(m)  # collect each instance mask (this line appears to have been lost when pasting)
    mask = np.stack(mask, axis=-1)
    # Return mask, and array of class IDs of each instance. Since we have
    # one class ID, we return an array of ones
    return mask, np.ones([mask.shape[-1]], dtype=np.int32)
```
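In case it clarifies what I'm after, this is the kind of splitting I think I need: turning my single combined mask PNG into per-instance boolean masks. A rough sketch (it separates instances by connected components; if instances were encoded with distinct pixel values instead, `np.unique` on the raw mask would be the way to go):
```python
import numpy as np
import skimage.io
import skimage.measure

def split_combined_mask(mask_path):
    combined = skimage.io.imread(mask_path)
    labeled = skimage.measure.label(combined > 0)            # connected components, background = 0
    instance_ids = np.unique(labeled)[1:]                    # drop the background label
    masks = np.stack([labeled == i for i in instance_ids], axis=-1)
    class_ids = np.ones([masks.shape[-1]], dtype=np.int32)   # everything is the same single class
    return masks, class_ids
```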
Thank you
| open | 2021-01-22T20:12:31Z | 2021-01-22T20:12:31Z | https://github.com/matterport/Mask_RCNN/issues/2468 | [] | Hadeelmas | 0 |
aleju/imgaug | machine-learning | 203 | Avoid 1-hot encoding in SegmentationMapOnImage | `SegmentationMapOnImage` converts masks to a 1-hot representation (see https://github.com/aleju/imgaug/blob/69ac72ef4f2b9d5de62c7813dcf3427ca4a604b5/imgaug/imgaug.py#L4825).
However, this has two drawbacks:
* It requires knowing the number of classes (or at least an upper bound)
* The representation is wasteful, especially when N is large
I guess the conversion is performed to make drawing easier. I would suggest to stick to the original representation and to only convert when the user is drawing. (Commonly drawing samples is expensive anyway and avoided in training loops) | open | 2018-11-07T00:55:35Z | 2019-09-28T10:02:01Z | https://github.com/aleju/imgaug/issues/203 | [
"enhancement"
] | martinruenz | 4 |
modAL-python/modAL | scikit-learn | 87 | Conda package | I am creating a conda package for modal: https://github.com/conda-forge/staged-recipes/pull/11734
For the next version, it's a good practice to include the LICENSE file in the pypi archive. | closed | 2020-05-27T18:09:57Z | 2020-05-31T21:54:31Z | https://github.com/modAL-python/modAL/issues/87 | [] | hadim | 1 |
kymatio/kymatio | numpy | 608 | Serious-ish sounding warnings in TF '2.2.0-rc1' | Am running the 1D classification example from [here](https://www.kymat.io/gallery_1d/classif_keras.html#sphx-glr-gallery-1d-classif-keras-py) in my colab notebook and got the following warnings:
```
WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x7feae3685048> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Unable to identify source code of lambda function <function <lambda> at 0x7feae3685048>. It was defined on this line: backend.fft = FFT(lambda x: tf.signal.fft(x, name='fft1d'),
lambda x: tf.signal.ifft(x, name='ifft1d'),
lambda x: tf.math.real(tf.signal.ifft(x, name='irfft1d')),
lambda x: None)
, which must contain a single lambda with matching signature. To avoid ambiguity, define each lambda in a separate expression.
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function <lambda> at 0x7feae3685048> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Unable to identify source code of lambda function <function <lambda> at 0x7feae3685048>. It was defined on this line: backend.fft = FFT(lambda x: tf.signal.fft(x, name='fft1d'),
lambda x: tf.signal.ifft(x, name='ifft1d'),
lambda x: tf.math.real(tf.signal.ifft(x, name='irfft1d')),
lambda x: None)
, which must contain a single lambda with matching signature. To avoid ambiguity, define each lambda in a separate expression.
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
```
Any thoughts on using the decorator as indicated above? | closed | 2020-03-28T19:51:58Z | 2020-07-07T21:58:36Z | https://github.com/kymatio/kymatio/issues/608 | [] | vinayprabhu | 2 |