repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
thtrieu/darkflow | tensorflow | 371 | Building net problem when load ckpt for training | Hi, I have a question. When I load a checkpoint (ckpt) for training, the Source column shows "init" instead of "load".
I want every entry in this column to be "load". How can I fix this?
Thanks a lot!!

| open | 2017-08-10T03:50:39Z | 2017-10-19T08:12:29Z | https://github.com/thtrieu/darkflow/issues/371 | [] | Tingwei-Jen | 5 |
mirumee/ariadne | api | 54 | API descriptions | GraphQL supports item descriptions, but currently Ariadne provides no way to set those, and neither does the `GraphQL-Core` version we are using.
Ideally, we should provide two ways to set item descriptions:
- if a resolver has a docstring, we should use it
- add a `description=` kwarg to `make_executable_schema` & friends that would take a dict of dicts and override item descriptions based on it. We could read a special key (e.g. `__description`) to get the description for a type. This approach should also support modularization. | closed | 2018-10-31T17:48:25Z | 2019-03-25T17:41:37Z | https://github.com/mirumee/ariadne/issues/54 | [
"help wanted",
"roadmap",
"docs"
] | rafalp | 3 |
keras-team/keras | python | 20,979 | keras.src.models.functional.Functional.__init__() got multiple values for keyword argument 'inputs' | Probably a typo.
```python
import numpy as np
import keras
from keras import layers

@keras.saving.register_keras_serializable()
class DummyModel(keras.Model):
def __init__(
self,
*,
input_shape=(28, 28, 1),
filters=[16, 32],
activation='relu',
**kwargs,
):
input_spec = keras.layers.Input(shape=input_shape)
x = input_spec
x = layers.Conv2D(filters[0], 3, activation=activation)(x)
x = layers.Conv2D(filters[1], 3, activation=activation)(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(filters[1], 3, activation=activation)(x)
x = layers.Conv2D(filters[0], 3, activation=activation)(x)
x = layers.GlobalMaxPooling2D()(x)
super().__init__(inputs=input_spec, outputs=x, **kwargs)
self.filters = filters
self.activation = activation
def get_config(self):
config = super().get_config()
config.update(
{
"input_shape": self.input_shape[1:],
"filters": self.filters,
"activation": self.activation,
}
)
return config
a = np.ones((1, 28, 28, 1), dtype=np.float32); print(a.shape)
model = DummyModel()
output = model(a)
print(output.shape)
# printed output:
# (1, 28, 28, 1)
# (1, 16)
model.save('new.keras') # ok
loaded_model = keras.saving.load_model("new.keras")
```
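A possible workaround sketch (my assumption, not an official fix): during deserialization, `functional_from_config` calls the constructor with `inputs=`/`outputs=` again, so stripping the functional-graph keys before re-dispatching to the custom `__init__` avoids the collision. For example, an override added to `DummyModel`:
```python
# Sketch of a from_config override for DummyModel (an assumption, not the
# official fix): drop the functional-graph keys that functional_from_config
# would otherwise feed back into the custom __init__.
@classmethod
def from_config(cls, config):
    for key in ("layers", "input_layers", "output_layers", "inputs", "outputs"):
        config.pop(key, None)
    return cls(**config)  # rebuilds the graph via the custom __init__
```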
```bash
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/keras/src/saving/serialization_lib.py in deserialize_keras_object(config, custom_objects, safe_mode, **kwargs)
710 try:
--> 711 instance = cls.from_config(inner_config)
712 except TypeError as e:
/usr/local/lib/python3.10/dist-packages/keras/src/models/model.py in from_config(cls, config, custom_objects)
491
--> 492 return functional_from_config(
493 cls, config, custom_objects=custom_objects
/usr/local/lib/python3.10/dist-packages/keras/src/models/functional.py in functional_from_config(cls, config, custom_objects)
555 output_tensors = map_tensors(config["output_layers"])
--> 556 return cls(
557 inputs=input_tensors,
<ipython-input-17-242e47f47517> in __init__(self, input_shape, filters, activation, **kwargs)
33 x = layers.GlobalMaxPooling2D()(x)
---> 34 super().__init__(inputs=input_spec, outputs=x, **kwargs)
35
TypeError: keras.src.models.functional.Functional.__init__() got multiple values for keyword argument 'inputs'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-20-9e5a9738bac0> in <cell line: 1>()
----> 1 loaded_model = keras.saving.load_model("new.keras")
/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_api.py in load_model(filepath, custom_objects, compile, safe_mode)
174
175 if is_keras_zip:
--> 176 return saving_lib.load_model(
177 filepath,
178 custom_objects=custom_objects,
/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py in load_model(filepath, custom_objects, compile, safe_mode)
153 # Construct the model from the configuration file in the archive.
154 with ObjectSharingScope():
--> 155 model = deserialize_keras_object(
156 config_dict, custom_objects, safe_mode=safe_mode
157 )
/usr/local/lib/python3.10/dist-packages/keras/src/saving/serialization_lib.py in deserialize_keras_object(config, custom_objects, safe_mode, **kwargs)
711 instance = cls.from_config(inner_config)
712 except TypeError as e:
--> 713 raise TypeError(
714 f"{cls} could not be deserialized properly. Please"
715 " ensure that components that are Python object"
TypeError: <class '__main__.DummyModel'> could not be deserialized properly. Please ensure that components that are Python object instances (layers, models, etc.) returned by `get_config()` are explicitly deserialized in the model's `from_config()` method.
config={'module': None, 'class_name': 'DummyModel', 'config': {'name': 'dummy_model_3', 'trainable': True, 'layers': [{'module': 'keras.layers', 'class_name': 'InputLayer', 'config': {'batch_shape': [None, 28, 28, 1], 'dtype': 'float32', 'sparse': False, 'name': 'input_layer_2'}, 'registered_name': None, 'name': 'input_layer_2', 'inbound_nodes': []}, {'module': 'keras.layers', 'class_name': 'Conv2D', 'config': {'name': 'conv2d_24', 'trainable': True, 'dtype': 'float32', 'filters': 16, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'groups': 1, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 28, 28, 1]}, 'name': 'conv2d_24', 'inbound_nodes': [{'args': [{'class_name': '__keras_tensor__', 'config': {'shape': [None, 28, 28, 1], 'dtype': 'float32', 'keras_history': ['input_layer_2', 0, 0]}}], 'kwargs': {}}]}, {'module': 'keras.layers', 'class_name': 'Conv2D', 'config': {'name': 'conv2d_25', 'trainable': True, 'dtype': 'float32', 'filters': 32, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'groups': 1, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 26, 26, 16]}, 'name': 'conv2d_25', 'inbound_nodes': [{'args': [{'class_name': '__keras_tensor__', 'config': {'shape': [None, 26, 26, 16], 'dtype': 'float32', 'keras_history': ['conv2d_24', 0, 0]}}], 'kwargs': {}}]}, {'module': 'keras.layers', 'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_6', 'trainable': True, 'dtype': 'float32', 'pool_size': [3, 3], 'padding': 'valid', 'strides': [3, 3], 'data_format': 'channels_last'}, 'registered_name': None, 'build_config': {'input_shape': [None, 24, 24, 32]}, 'name': 'max_pooling2d_6', 'inbound_nodes': [{'args': [{'class_name': '__keras_tensor__', 'config': {'shape': [None, 24, 24, 32], 'dtype': 'float32', 'keras_history': ['conv2d_25', 0, 0]}}], 'kwargs': {}}]}, {'module': 'keras.layers', 'class_name': 'Conv2D', 'config': {'name': 'conv2d_26', 'trainable': True, 'dtype': 'float32', 'filters': 32, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'groups': 1, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 
'build_config': {'input_shape': [None, 8, 8, 32]}, 'name': 'conv2d_26', 'inbound_nodes': [{'args': [{'class_name': '__keras_tensor__', 'config': {'shape': [None, 8, 8, 32], 'dtype': 'float32', 'keras_history': ['max_pooling2d_6', 0, 0]}}], 'kwargs': {}}]}, {'module': 'keras.layers', 'class_name': 'Conv2D', 'config': {'name': 'conv2d_27', 'trainable': True, 'dtype': 'float32', 'filters': 16, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'groups': 1, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 6, 6, 32]}, 'name': 'conv2d_27', 'inbound_nodes': [{'args': [{'class_name': '__keras_tensor__', 'config': {'shape': [None, 6, 6, 32], 'dtype': 'float32', 'keras_history': ['conv2d_26', 0, 0]}}], 'kwargs': {}}]}, {'module': 'keras.layers', 'class_name': 'GlobalMaxPooling2D', 'config': {'name': 'global_max_pooling2d_6', 'trainable': True, 'dtype': 'float32', 'data_format': 'channels_last', 'keepdims': False}, 'registered_name': None, 'build_config': {'input_shape': [None, 4, 4, 16]}, 'name': 'global_max_pooling2d_6', 'inbound_nodes': [{'args': [{'class_name': '__keras_tensor__', 'config': {'shape': [None, 4, 4, 16], 'dtype': 'float32', 'keras_history': ['conv2d_27', 0, 0]}}], 'kwargs': {}}]}], 'input_layers': [['input_layer_2', 0, 0]], 'output_layers': [['global_max_pooling2d_6', 0, 0]], 'input_shape': [28, 28, 1], 'filters': [16, 32], 'activation': 'relu'}, 'registered_name': 'Custom>DummyModel', 'build_config': {'input_shape': None}}.
Exception encountered: keras.src.models.functional.Functional.__init__() got multiple values for keyword argument 'inputs'
``` | closed | 2025-03-02T18:46:41Z | 2025-03-06T02:11:16Z | https://github.com/keras-team/keras/issues/20979 | [
"type:Bug"
] | innat | 5 |
xlwings/xlwings | automation | 1,875 | Remote interpreter: support "include" parameter | The opposite of the current "exclude" parameter. | closed | 2022-03-23T13:31:12Z | 2022-03-28T07:48:50Z | https://github.com/xlwings/xlwings/issues/1875 | [
"enhancement",
"PRO"
] | fzumstein | 0 |
microsoft/nni | pytorch | 4,796 | How to dynamically skip over empty layers when performing model speedup after pruning? | **Describe the issue**:
When pruning a model at various pruning percentages (10%-95%) using the L1Norm Pruner, I get a `nni.compression.pytorch.speedup.error_code.EmptyLayerError: Pruning a Layer to empty is not legal` error. I was wondering if I can dynamically skip over such layers in these cases? Based on the documentation, I can't determine if a layer will be empty after pruning and before model speedup.
I couldn't find it in the documentation, but is there a way to tell whether a layer is empty after pruning and before speedup, so that I can exclude it from the speedup and avoid the EmptyLayerError? Any help would be greatly appreciated, thanks!
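For what it's worth, a sketch of one way to detect this (treating the nni 2.x API shapes here as assumptions): inspect the masks returned by the pruner before calling `ModelSpeedup`, and flag any layer whose weight mask sums to zero.
```python
# Sketch (assumes pruner.compress() returns (model, masks), with masks being
# a dict of {layer_name: {'weight': tensor, ...}} as in nni 2.x pruners).
_, masks = pruner.compress()
empty_layers = [
    name for name, mask in masks.items()
    if 'weight' in mask and mask['weight'].sum().item() == 0
]
print('Layers pruned to empty (candidates to exclude):', empty_layers)
```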
**Environment**:
- NNI version: nni==2.7
- Python version: Python 3.8.10
- PyTorch/TensorFlow version: torch==1.10.2+cu113
**How to reproduce it?**:
Prune a model to the point where it gets very small, or start with a small model and continue to prune. | closed | 2022-04-23T20:11:09Z | 2022-11-16T07:21:15Z | https://github.com/microsoft/nni/issues/4796 | [] | pmmitche | 4 |
RobertCraigie/prisma-client-py | pydantic | 229 | Remove deprecated order argument in count() | ## Problem
This argument has been deprecated (#146); it should be completely removed.
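For context, a before/after sketch of the call site (the model name and query here are hypothetical):
```python
# Hypothetical usage of the kwarg deprecated by #146:
total = await client.user.count(order={'id': 'desc'})

# After removal, only the remaining filter arguments are accepted:
total = await client.user.count(where={'name': {'contains': 'a'}})
```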
| closed | 2022-01-18T18:27:58Z | 2022-02-01T12:08:15Z | https://github.com/RobertCraigie/prisma-client-py/issues/229 | [
"kind/improvement"
] | RobertCraigie | 0 |
PaddlePaddle/ERNIE | nlp | 232 | where is ERNIE 2.0 ? | The paper released today mentioned that the code and pretrained models have already been open-sourced.
| closed | 2019-07-30T09:27:58Z | 2019-08-19T03:09:51Z | https://github.com/PaddlePaddle/ERNIE/issues/232 | [] | Jiakui | 1 |
Lightning-AI/pytorch-lightning | deep-learning | 20,190 | shortcuts for logging weights and biases norms | ### Description & Motivation
Knowing the norm of weights was necessary to debug float16 training for me.
### Pitch
```python
from lightning.pytorch.utilities import grad_norm
norms = grad_norm(self.layer, norm_type=2)
```
Something like this for weights would be convenient (see the sketch below).
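A minimal sketch of what such a helper could look like, mirroring `grad_norm` but over parameter values (the helper name and keys are hypothetical, not an existing Lightning API):
```python
import torch

def weight_norm(module: torch.nn.Module, norm_type: float = 2.0) -> dict:
    # Hypothetical counterpart to grad_norm: per-parameter norms of the
    # weights themselves, suitable for self.log_dict(...) in a LightningModule.
    norms = {
        f"weight_{norm_type}_norm/{name}": p.detach().norm(norm_type)
        for name, p in module.named_parameters()
    }
    if norms:
        total = torch.stack(list(norms.values())).norm(norm_type)
        norms[f"weight_{norm_type}_norm_total"] = total
    return norms
```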
### Alternatives
_No response_
### Additional context
_No response_
cc @borda | open | 2024-08-11T23:04:44Z | 2024-08-11T23:05:06Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20190 | [
"feature",
"needs triage"
] | heth27 | 0 |
vitalik/django-ninja | rest-api | 741 | Implementing User Authentication | ### Discussed in https://github.com/vitalik/django-ninja/discussions/740
> Originally posted by **KrystianMaccs**, April 14, 2023:
>
> Hi Vitaliy, I just got started with django-ninja this week and so far it's been good and I am getting the hang of it. However, I have a little trouble implementing user authentication and authorization. How does it work? I have a User model and schema already. What do I do from here? Meanwhile, this is my repo: https://github.com/KrystianMaccs/cinema.git | closed | 2023-04-14T01:19:20Z | 2023-04-20T14:38:32Z | https://github.com/vitalik/django-ninja/issues/741 | [] | sauron136 | 10 |
deepinsight/insightface | pytorch | 2,350 | AgeDB-30 | Could someone provide a copy of the AgeDB-30 dataset? I emailed the dataset's authors but received no reply. Thanks a lot, everyone! | open | 2023-06-26T03:13:46Z | 2023-06-26T03:14:15Z | https://github.com/deepinsight/insightface/issues/2350 | [] | zhangfenfang12138 | 1 |
harry0703/MoneyPrinterTurbo | automation | 405 | About using local materials | Hi,
I have a few questions about using local materials:
1. When using local materials, is a multimodal LLM required? If not, do I need to add text labels manually? Or does manual labeling give better results, e.g. better relevance?
2. I see that the size of local materials (the total size of all materials, perhaps?) is limited to 400MB. I'm curious why it is capped at 400MB, and whether the limit can be lifted. I plan to use it to process some local film and TV clips with a total size of several GB, or even tens of GB. | closed | 2024-06-10T03:14:54Z | 2024-06-11T03:39:16Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/405 | [] | Mingzefei | 1 |
pbugnion/gmaps | jupyter | 47 | No module named 'loader' at pip install |
```text
C:\Users\XXX\AppData\Local\Continuum\Anaconda3>pip install gmaps
Collecting gmaps
  Downloading gmaps-0.1.6.tar.gz (98kB)
    100% |################################| 102kB 1.5MB/s
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 20, in <module>
      File "C:\Users\XXX\AppData\Local\Temp\pip-build-m3s6zj_3\gmaps\setup.py", line 7, in <module>
        import gmaps
      File "C:\Users\XXX\AppData\Local\Temp\pip-build-m3s6zj_3\gmaps\gmaps\__init__.py", line 2, in <module>
        from loader import init
    ImportError: No module named 'loader'
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\XXX\AppData\Local\Temp\pip-build-m3s6zj_3\gmaps
```
| closed | 2015-12-23T13:39:29Z | 2015-12-24T17:07:37Z | https://github.com/pbugnion/gmaps/issues/47 | [] | chanansh | 3 |
mars-project/mars | scikit-learn | 3,020 | [BUG] Mars import took too long |
**Describe the bug**
When importing mars for the first time, it takes about 4~5 seconds, which is quite time-consuming for users:

**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version: 3.7.9
2. The version of Mars you use: master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
`import mars` should take less than 1 second, just like pandas:

**Additional context**
Add any other context about the problem here.
| closed | 2022-05-11T08:02:46Z | 2022-05-13T07:53:37Z | https://github.com/mars-project/mars/issues/3020 | [] | chaokunyang | 2 |
ultralytics/ultralytics | pytorch | 19,309 | new metrics for best.pt | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I'm curious about the metric that produces best.pt. As far as I can tell, the criterion for best.pt is computed from [p, r, mAP50, mAP50-95] with the weights [0, 0, 0.1, 0.9] (in metrics.py). Is there a way to select best.pt based on the accuracy of class predictions, i.e. a precision/recall score over classes? The precision and recall I mentioned above also seem to be scores for the bounding boxes. For example, during validation I want to select as best.pt the model that best matches the classes of the bounding boxes, even though it is a detection task. Also, is there a metric that penalizes falsely detecting an object in a background image where there is none?
thanks.
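To make the weighting concrete, here is a sketch of a re-weighted fitness in the spirit of the `[p, r, mAP50, mAP50-95]` formula mentioned above (a hypothetical local edit to mirror in metrics.py, not a supported option):
```python
import numpy as np

def fitness(mean_results):
    # mean_results = [precision, recall, mAP@0.5, mAP@0.5:0.95];
    # the default weights are [0.0, 0.0, 0.1, 0.9]. Shifting weight onto
    # precision/recall would make best.pt favor class-prediction quality.
    w = np.array([0.4, 0.4, 0.1, 0.1])  # hypothetical re-weighting
    return float((np.array(mean_results) * w).sum())
```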
### Additional
_No response_ | open | 2025-02-19T09:24:16Z | 2025-02-20T00:10:37Z | https://github.com/ultralytics/ultralytics/issues/19309 | [
"question",
"detect"
] | yeonhyochoi | 4 |
matplotlib/matplotlib | data-visualization | 28,960 | [Bug]: High CPU utilization of the macosx backend | ### Bug summary
After showing interactive figure, the CPU utilization of python process went to 100%.
### Code for reproduction
```Python
#######################################################
# Case 1: 100% cpu
import matplotlib.pyplot as plt
fig = plt.figure()
plt.plot(range(5))
plt.show()
# after closing the window
import pandas # starting 100% CPU utilization
#######################################################
# Case 2: 100% cpu
import matplotlib.pyplot as plt
import pandas as pd
# No CPU utilization at the moment
fig = plt.figure()
df = pd.DataFrame(range(5))
plt.plot(df[0]) # with this, CPU utilization is 100%.
plt.show() # the same
# after closing the window, still 100%
#######################################################
# Case 3: no issue.
import matplotlib.pyplot as plt
fig = plt.figure()
plt.plot(range(5))
plt.show() # strangely, this has no problem.
```
### Actual outcome
no problem except it consumes 100% cpu. The figure is still responsive.
### Expected outcome
matplotlib backend should not consume 100% cpu.
### Additional information
- pandas version 2.2.3 (pip)
- Certain operation (closing interactive figures) causes 100% cpu with 'macosx' backend.
- Closing the figure, or calling `plt.close()` does not help. (backend: macosx)
- No problem with `qt5agg` backend.
### Operating system
Mac (Intel) Ventura 13.6.6
### Matplotlib Version
3.9.2
### Matplotlib Backend
macosx
### Python version
Python 3.12.3
### Jupyter version
_No response_
### Installation
pip | closed | 2024-10-09T21:31:04Z | 2024-10-30T20:05:33Z | https://github.com/matplotlib/matplotlib/issues/28960 | [
"status: confirmed bug",
"GUI: MacOSX"
] | cinsk | 7 |
AntonOsika/gpt-engineer | python | 928 | KeyError in apply_edits breaking improve mode | I am running improve mode, creating c# and xaml. GPT Engineer is attempting to make updates to a xaml user control (here renamed to be "myExistingUserControl.xaml") and running into an issue where the filepath is invalid.
```
These edits will ensure that the code changes are in the correct format and can be found in the code.
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\asdf\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\Scripts\gpte.exe\__main__.py", line 7, in <module>
sys.exit(app())
^^^^^
File "C:\Users\asdf\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\gpt_engineer\applications\cli\main.py", line 194, in main
files_dict = agent.improve(files_dict, prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asdf\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\gpt_engineer\applications\cli\cli_agent.py", line 131, in improve
files_dict = self.improve_fn(
^^^^^^^^^^^^^^^^
File "C:\Users\asdf\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\gpt_engineer\core\default\steps.py", line 182, in improve
overwrite_code_with_edits(chat, files_dict)
File "C:\Users\asdf\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\gpt_engineer\core\chat_to_files.py", line 97, in overwrite_code_with_edits
apply_edits(edits, files_dict)
File "C:\Users\asdf\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\gpt_engineer\core\chat_to_files.py", line 185, in apply_edits
occurrences_cnt = files_dict[filename].count(edit.before)
~~~~~~~~~~^^^^^^^^^^
KeyError: 'some/dir/myExistingUserControl.xaml'
``` | closed | 2023-12-22T17:53:59Z | 2024-01-05T12:56:50Z | https://github.com/AntonOsika/gpt-engineer/issues/928 | [
"bug",
"triage"
] | baldmanwithbeard | 13 |
man-group/arctic | pandas | 126 | Append doesn't seem to work | Hi -
Firstly, good work on putting out Arctic - it's awesome! I have the script below, which unzips some FX tick data and tries to write it to an Arctic DB. The files are broken down per month and go back a few years, so I have used append() to add each file. However, it looks like only the data from the last file is being persisted to the DB, and the earlier data is deleted when the new data is added. It might be a bug in how I'm trying to append the data - I would appreciate your input.
Cheers,
Eric
``` Loader
from arctic import Arctic
from datetime import datetime as dt
import pandas as pd
import os, zipfile
import logging
logger = logging.getLogger()
fhandler = logging.FileHandler(filename='fxdb.log', mode='a')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.setLevel(logging.DEBUG)
class fxdb():
def __init__(self, mongo_host = 'localhost'):
self.store = Arctic(mongo_host)
self.store.initialize_library('fx')
self.library = self.store['fx']
self.folder = '../data'
self.lst = os.listdir(self.folder)
self.lst.sort()
def csv_to_pd(self, filename):
df = pd.read_csv(filename, names=['sym', 'datetime', 'bid', 'ask'], header=None, index_col=1, parse_dates=True)
return df
def read(self, sym):
df = self.library.read(sym)
return df
def write(self, sym, df, metadata):
self.library.write(sym, df, metadata)
def append(self, sym, df, metadata):
self.library.write(sym, df, metadata)
def csv_to_db(self, filename, metadata):
sym, year, month = filename.translate(None, '../data/').translate(None, '.csv').split('-')
logging.info("Loading csv for "+sym+" year "+year+" month "+month)
df = self.csv_to_pd(filename)
logging.info("Converted to dataframe "+sym+" year "+year+" month "+month)
self.append(sym, df, metadata)
logging.info("Loaded to the db "+sym+" year "+year+" month "+month)
def unzip(self, filename):
with zipfile.ZipFile(filename, "r") as z:
z.extractall(self.folder)
logging.info("Unzipped "+filename)
def zip_to_db(self, filename):
try:
self.unzip(filename)
except:
logging.error("Error unzipping "+filename)
else:
self.csv_to_db(filename.replace('zip','csv'), {'source': 'PepperStone'})
os.remove(filename.replace('zip','csv'))
logging.info("Removed "+filename.replace('zip','csv'))
folder = '../data'
fx = fxdb()
lst = os.listdir(folder)
lst.sort()
for i in [folder + '/' + s for s in lst if "zip" in s]:
fx.zip_to_db(i)
```
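One thing worth flagging in the Loader above (my reading, not a confirmed diagnosis): the `append()` wrapper delegates to `self.library.write(...)`, and in Arctic's VersionStore `write` creates a new version that replaces the previous contents, which would produce exactly the last-file-wins behavior described. A corrected wrapper, assuming `VersionStore.append(symbol, data, metadata=...)`:
```python
# Sketch: delegate to arctic's append so each monthly file extends the
# stored item instead of replacing it (signature assumed from VersionStore).
def append(self, sym, df, metadata):
    self.library.append(sym, df, metadata=metadata)
```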
``` Log snippet
2016-04-10 13:31:18,940 - root - INFO - Unzipped ../data/AUDUSD-2015-08.zip
2016-04-10 13:31:18,940 - root - INFO - Loading csv for AUDUSD year 2015 month 08
2016-04-10 13:41:23,142 - root - INFO - Converted to dataframe AUDUSD year 2015 month 08
2016-04-10 13:41:24,894 - arctic.store.version_store - DEBUG - Finished writing versions for AUDUSD
2016-04-10 13:41:24,894 - root - INFO - Loaded to the db AUDUSD year 2015 month 08
2016-04-10 13:41:24,920 - root - INFO - Removed ../data/AUDUSD-2015-08.csv
2016-04-10 13:41:26,470 - root - INFO - Unzipped ../data/AUDUSD-2015-09.zip
2016-04-10 13:41:26,470 - root - INFO - Loading csv for AUDUSD year 2015 month 09
2016-04-10 13:48:19,681 - root - INFO - Converted to dataframe AUDUSD year 2015 month 09
2016-04-10 13:48:21,124 - arctic.store.version_store - DEBUG - Finished writing versions for AUDUSD
2016-04-10 13:48:21,124 - root - INFO - Loaded to the db AUDUSD year 2015 month 09
2016-04-10 13:48:21,142 - root - INFO - Removed ../data/AUDUSD-2015-09.csv
2016-04-10 13:48:22,352 - root - INFO - Unzipped ../data/AUDUSD-2015-10.zip
2016-04-10 13:48:22,352 - root - INFO - Loading csv for AUDUSD year 2015 month 10
2016-04-10 13:54:45,325 - root - INFO - Converted to dataframe AUDUSD year 2015 month 10
2016-04-10 13:54:46,691 - arctic.store.version_store - DEBUG - Finished writing versions for AUDUSD
2016-04-10 13:54:46,802 - root - INFO - Loaded to the db AUDUSD year 2015 month 10
2016-04-10 13:54:46,819 - root - INFO - Removed ../data/AUDUSD-2015-10.csv
2016-04-10 13:54:48,130 - root - INFO - Unzipped ../data/AUDUSD-2015-11.zip
2016-04-10 13:54:48,130 - root - INFO - Loading csv for AUDUSD year 2015 month 11
2016-04-10 14:01:44,774 - root - INFO - Converted to dataframe AUDUSD year 2015 month 11
2016-04-10 14:01:46,562 - arctic.store.version_store - DEBUG - Finished writing versions for AUDUSD
2016-04-10 14:01:46,563 - root - INFO - Loaded to the db AUDUSD year 2015 month 11
2016-04-10 14:01:46,581 - root - INFO - Removed ../data/AUDUSD-2015-11.csv
2016-04-10 14:01:47,692 - root - INFO - Unzipped ../data/AUDUSD-2015-12.zip
2016-04-10 14:01:47,692 - root - INFO - Loading csv for AUDUSD year 2015 month 12
2016-04-10 14:07:30,749 - root - INFO - Converted to dataframe AUDUSD year 2015 month 12
2016-04-10 14:07:31,832 - arctic.store.version_store - DEBUG - Finished writing versions for AUDUSD
2016-04-10 14:07:31,832 - root - INFO - Loaded to the db AUDUSD year 2015 month 12
2016-04-10 14:07:31,848 - root - INFO - Removed ../data/AUDUSD-2015-12.csv
2016-04-10 14:07:33,713 - root - INFO - Unzipped ../data/AUDUSD-2016-01.zip
2016-04-10 14:07:33,713 - root - INFO - Loading csv for AUDUSD year 2016 month 01
2016-04-10 14:16:54,552 - root - INFO - Converted to dataframe AUDUSD year 2016 month 01
2016-04-10 14:16:56,177 - arctic.store.version_store - DEBUG - Finished writing versions for AUDUSD
2016-04-10 14:16:56,177 - root - INFO - Loaded to the db AUDUSD year 2016 month 01
2016-04-10 14:16:56,201 - root - INFO - Removed ../data/AUDUSD-2016-01.csv
```
``` Checks
>>> import fxlib as fx
>>> fxdb = fx.fxdb()
>>> fxdb.library.list_symbols()
[u'AUDJPY', u'AUDNZD', u'AUDUSD', u'CADJPY']
>>> fxdb.read('AUDUSD').data.head(1)
sym bid ask
datetime
2016-01-04 00:00:00.108 AUD/USD 0.72845 0.72849
>>> fxdb.read('AUDUSD').data.tail(1)
sym bid ask
datetime
2016-01-29 20:59:59.762 AUD/USD 0.70791 0.70797
>>> fxdb.read('AUDUSD').data.count()
sym 3391202
bid 3391202
ask 3391202
dtype: int64
>>> list(fxdb.library.list_versions('AUDUSD'))
[{'deleted': False, 'date': datetime.datetime(2016, 4, 10, 14, 16, 54, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 81, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 14, 7, 30, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 80, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 14, 1, 44, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 79, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 13, 54, 45, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 78, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 13, 48, 19, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 77, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 13, 41, 23, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 76, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 13, 31, 14, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 75, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 13, 21, 42, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 74, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 13, 15, 26, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 73, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 13, 8, 57, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 72, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 13, 0, 14, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 71, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 12, 52, 10, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 70, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 12, 46, 10, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 69, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 12, 39, 17, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 68, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 12, 37, 36, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 67, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 12, 31, 48, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 66, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 12, 23, 50, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 65, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 12, 17, 38, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 64, 'snapshots': []}, {'deleted': False, 'date': datetime.datetime(2016, 4, 10, 12, 13, 4, tzinfo=tzfile(u'/usr/share/zoneinfo/Europe/London')), 'symbol': u'AUDUSD', 'version': 63, 'snapshots': []}]
>>> fxdb.library.get_info('AUDUSD')
{'rows': 3391202, 'segment_count': 51, 'dtype': [('datetime', '<M8[ns]'), ('sym', 'S7'), ('bid', '<f8'), ('ask', '<f8')], 'handler': 'PandasDataFrameStore', 'col_names': {u'index': [u'datetime'], u'columns': [u'sym', u'bid', u'ask']}, 'type': u'pandasdf', 'size': 105127262}
```
| closed | 2016-04-10T14:25:44Z | 2016-04-11T16:51:51Z | https://github.com/man-group/arctic/issues/126 | [] | ericjohn | 5 |
dynaconf/dynaconf | django | 221 | dynaconf.contrib.flask_dynaconf.DynaconfConfig to flask.config.Config | Hello, is there a way to convert a dynaconf.contrib.flask_dynaconf.DynaconfConfig object into a flask.config.Config one?
Otherwise, is there a way to convert dynaconf.contrib.flask_dynaconf.DynaconfConfig into a dict?
I have been struggling trying to pass a dynaconf.contrib.flask_dynaconf.DynaconfConfig to a Flask Cache constructor as a config. With flask.config.Config it works but with the dynaconf class it doesn't :-/.
cache = Cache().init_app(app, app.config)
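A hedged workaround sketch, assuming the dynaconf-backed config still behaves like a mapping (`keys()`/`__getitem__`):
```python
from flask_caching import Cache  # assumed caching extension

# Copy the dynaconf-backed mapping into a plain dict before handing it over.
plain_config = {key: app.config[key] for key in app.config.keys()}
cache = Cache()
cache.init_app(app, config=plain_config)
```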
| closed | 2019-09-04T19:09:02Z | 2019-09-05T14:44:20Z | https://github.com/dynaconf/dynaconf/issues/221 | [
"question"
] | tul1 | 6 |
yzhao062/pyod | data-science | 549 | Quasi-Monte Carlo Discrepancy always predicts an outlier | I've found that the QMCD model will always predict at least one outlier due to the normalization of its decision scores.
This results in the model not performing at all if there are no outliers in the dataset.
Is this intentional? If so, why was it implemented like this? | open | 2024-03-27T12:48:37Z | 2024-03-27T20:18:03Z | https://github.com/yzhao062/pyod/issues/549 | [] | Hellsice | 1 |
hootnot/oanda-api-v20 | rest-api | 184 | Expiry time on Stop order not correct | Hello,
I am trying to add an expiry time to a stop order by taking the current time and adding 5 minutes. However, the OANDA trading platform shows the expiry as tomorrow, at a completely different timestamp.
I am using the following code:
```
Cancel_Time = datetime.now() + timedelta(minutes=5)
mktOrder_DAX_Long = StopOrderRequest(instrument="DE30_EUR", units=1, price=highest_high, gtdTime=str(Cancel_Time), timeInForce="GTD")
``` | closed | 2021-08-10T16:07:27Z | 2021-08-15T19:07:23Z | https://github.com/hootnot/oanda-api-v20/issues/184 | [] | sword134 | 1 |
huggingface/pytorch-image-models | pytorch | 1,050 | ModuleNotFoundError: No module named 'timm.models.xcit | i got `ModuleNotFoundError: No module named 'timm.models.xcit'` . i couldn't found xcit in timm
| closed | 2021-12-18T13:17:53Z | 2021-12-18T19:57:04Z | https://github.com/huggingface/pytorch-image-models/issues/1050 | [
"bug"
] | SamMohel | 3 |
browser-use/browser-use | python | 664 | ERROR [backoff] Giving up send_request(...) after 4 tries | ### Bug Description
Cannot run using ollama.
### Reproduction Steps
Operation steps:
Step 1:
``` cmd
uv venv --python 3.11
.venv\Scripts\activate
uv pip install browser-use
playwright install
```
Step 2:
Write main.py
``` python
from langchain_ollama import ChatOllama
from browser_use import Agent
from pydantic import SecretStr
# Initialize the model
llm=ChatOllama(model="qwen:7b", num_ctx=32000)
# Create agent with the model
agent = Agent(
task="open Google and search for 'hello world'",
llm=llm
)
```
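Note that the snippet above constructs the agent but never runs it; with browser-use's async API the run step would look roughly like this (a sketch):
```python
import asyncio

# Agent.run() is a coroutine in browser-use, so it needs an event loop.
asyncio.run(agent.run())
```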
Step 3:
``` cmd
python main.py
```
The output result is shown as follows:

### Code Sample
```python
Same as the step code.
```
### Version
0.1.36
### LLM Model
Local Model (Specify model in description)
### Operating System
win 11
### Relevant Log Output
```shell
``` | open | 2025-02-11T10:32:36Z | 2025-03-11T01:19:40Z | https://github.com/browser-use/browser-use/issues/664 | [
"bug"
] | zy1024 | 1 |
FlareSolverr/FlareSolverr | api | 891 | 500 Internal Server Error | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [x] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.4
- Last working FlareSolverr version: 3.3.3
- Operating system: Win11 22H2 22631.1830
- Are you using Docker: no
- FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36
- Are you using a VPN: yes
- Are you using a Proxy: yes
- Are you using Captcha Solver: no
- If using captcha solver, which one:
- URL to test this issue: https://soutubot.moe/
```
### Description
About a week ago, version 3.3.3 suddenly stopped passing the verification at https://soutubot.moe/. After downloading version 3.3.4 it still fails to pass the verification, and reports the following error.
```
2023-09-07 11:33:28 INFO Incoming request => POST /v1 body: {'cmd': 'request.get', 'url': 'https://soutubot.moe/', 'maxTimeout': 60000}
2023-09-07 11:33:31 INFO Challenge detected. Selector found: #challenge-spinner
2023-09-07 11:34:29 ERROR Error: Error solving the challenge. Timeout after 60.0 seconds.
2023-09-07 11:34:29 INFO Response in 60.973 s
2023-09-07 11:34:29 INFO 127.0.0.1 POST http://localhost:8191/v1 500 Internal Server Error
```
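For anyone reproducing this, the request shape from the log line above can be replayed with a short script (a sketch; parameters copied from the log):
```python
import requests

resp = requests.post(
    "http://localhost:8191/v1",
    json={"cmd": "request.get", "url": "https://soutubot.moe/", "maxTimeout": 60000},
    timeout=120,
)
print(resp.status_code, resp.json().get("message"))
```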
### Logged Error Messages
```text
2023-09-07 16:58:38 INFO ReqId 2236 Serving on http://0.0.0.0:8191
2023-09-07 16:58:46 INFO ReqId 21264 Incoming request => POST /v1 body: {'cmd': 'request.get', 'url': 'https://soutubot.moe/', 'maxTimeout': 60000}
2023-09-07 16:58:46 DEBUG ReqId 21264 Launching web browser...
2023-09-07 16:58:47 DEBUG ReqId 21264 Started executable: `C:\Users\Name\appdata\roaming\undetected_chromedriver\chromedriver.exe` in a child process with pid: 5136
2023-09-07 16:58:47 DEBUG ReqId 21264 New instance of webdriver has been created to perform the request
2023-09-07 16:58:47 DEBUG ReqId 5372 Navigating to... https://soutubot.moe/
2023-09-07 16:58:53 INFO ReqId 5372 Challenge detected. Selector found: #challenge-spinner
2023-09-07 16:58:53 DEBUG ReqId 5372 Waiting for title (attempt 1): Just a moment...
2023-09-07 16:58:53 DEBUG ReqId 5372 Waiting for title (attempt 1): DDoS-Guard
2023-09-07 16:58:53 DEBUG ReqId 5372 Waiting for selector (attempt 1): #cf-challenge-running
2023-09-07 16:58:53 DEBUG ReqId 5372 Waiting for selector (attempt 1): .ray_id
2023-09-07 16:58:53 DEBUG ReqId 5372 Waiting for selector (attempt 1): .attack-box
2023-09-07 16:58:53 DEBUG ReqId 5372 Waiting for selector (attempt 1): #cf-please-wait
2023-09-07 16:58:53 DEBUG ReqId 5372 Waiting for selector (attempt 1): #challenge-spinner
2023-09-07 16:58:54 DEBUG ReqId 5372 Timeout waiting for selector
2023-09-07 16:58:54 DEBUG ReqId 5372 Try to find the Cloudflare verify checkbox...
2023-09-07 16:58:54 DEBUG ReqId 5372 Cloudflare verify checkbox not found on the page.
2023-09-07 16:58:54 DEBUG ReqId 5372 Try to find the Cloudflare 'Verify you are human' button...
2023-09-07 16:58:54 DEBUG ReqId 5372 The Cloudflare 'Verify you are human' button not found on the page.
2023-09-07 16:58:56 DEBUG ReqId 5372 Waiting for title (attempt 2): Just a moment...
2023-09-07 16:58:56 DEBUG ReqId 5372 Waiting for title (attempt 2): DDoS-Guard
2023-09-07 16:58:56 DEBUG ReqId 5372 Waiting for selector (attempt 2): #cf-challenge-running
2023-09-07 16:58:56 DEBUG ReqId 5372 Waiting for selector (attempt 2): .ray_id
2023-09-07 16:58:56 DEBUG ReqId 5372 Waiting for selector (attempt 2): .attack-box
2023-09-07 16:58:56 DEBUG ReqId 5372 Waiting for selector (attempt 2): #cf-please-wait
2023-09-07 16:58:56 DEBUG ReqId 5372 Waiting for selector (attempt 2): #challenge-spinner
2023-09-07 16:58:57 DEBUG ReqId 5372 Timeout waiting for selector
2023-09-07 16:58:57 DEBUG ReqId 5372 Try to find the Cloudflare verify checkbox...
2023-09-07 16:58:57 DEBUG ReqId 5372 Cloudflare verify checkbox not found on the page.
2023-09-07 16:58:57 DEBUG ReqId 5372 Try to find the Cloudflare 'Verify you are human' button...
2023-09-07 16:58:57 DEBUG ReqId 5372 The Cloudflare 'Verify you are human' button not found on the page.
2023-09-07 16:59:00 DEBUG ReqId 5372 Waiting for title (attempt 3): Just a moment...
2023-09-07 16:59:00 DEBUG ReqId 5372 Waiting for title (attempt 3): DDoS-Guard
2023-09-07 16:59:00 DEBUG ReqId 5372 Waiting for selector (attempt 3): #cf-challenge-running
2023-09-07 16:59:00 DEBUG ReqId 5372 Waiting for selector (attempt 3): .ray_id
2023-09-07 16:59:00 DEBUG ReqId 5372 Waiting for selector (attempt 3): .attack-box
2023-09-07 16:59:00 DEBUG ReqId 5372 Waiting for selector (attempt 3): #cf-please-wait
2023-09-07 16:59:00 DEBUG ReqId 5372 Waiting for selector (attempt 3): #challenge-spinner
2023-09-07 16:59:01 DEBUG ReqId 5372 Timeout waiting for selector
2023-09-07 16:59:01 DEBUG ReqId 5372 Try to find the Cloudflare verify checkbox...
2023-09-07 16:59:01 DEBUG ReqId 5372 Cloudflare verify checkbox not found on the page.
2023-09-07 16:59:01 DEBUG ReqId 5372 Try to find the Cloudflare 'Verify you are human' button...
2023-09-07 16:59:01 DEBUG ReqId 5372 The Cloudflare 'Verify you are human' button not found on the page.
2023-09-07 16:59:03 DEBUG ReqId 5372 Waiting for title (attempt 4): Just a moment...
2023-09-07 16:59:03 DEBUG ReqId 5372 Waiting for title (attempt 4): DDoS-Guard
2023-09-07 16:59:03 DEBUG ReqId 5372 Waiting for selector (attempt 4): #cf-challenge-running
2023-09-07 16:59:03 DEBUG ReqId 5372 Waiting for selector (attempt 4): .ray_id
2023-09-07 16:59:03 DEBUG ReqId 5372 Waiting for selector (attempt 4): .attack-box
2023-09-07 16:59:03 DEBUG ReqId 5372 Waiting for selector (attempt 4): #cf-please-wait
2023-09-07 16:59:03 DEBUG ReqId 5372 Waiting for selector (attempt 4): #challenge-spinner
2023-09-07 16:59:04 DEBUG ReqId 5372 Timeout waiting for selector
2023-09-07 16:59:04 DEBUG ReqId 5372 Try to find the Cloudflare verify checkbox...
2023-09-07 16:59:04 DEBUG ReqId 5372 Cloudflare verify checkbox not found on the page.
2023-09-07 16:59:04 DEBUG ReqId 5372 Try to find the Cloudflare 'Verify you are human' button...
2023-09-07 16:59:04 DEBUG ReqId 5372 The Cloudflare 'Verify you are human' button not found on the page.
2023-09-07 16:59:06 DEBUG ReqId 5372 Waiting for title (attempt 5): Just a moment...
2023-09-07 16:59:06 DEBUG ReqId 5372 Waiting for title (attempt 5): DDoS-Guard
2023-09-07 16:59:06 DEBUG ReqId 5372 Waiting for selector (attempt 5): #cf-challenge-running
2023-09-07 16:59:06 DEBUG ReqId 5372 Waiting for selector (attempt 5): .ray_id
2023-09-07 16:59:06 DEBUG ReqId 5372 Waiting for selector (attempt 5): .attack-box
2023-09-07 16:59:06 DEBUG ReqId 5372 Waiting for selector (attempt 5): #cf-please-wait
2023-09-07 16:59:06 DEBUG ReqId 5372 Waiting for selector (attempt 5): #challenge-spinner
2023-09-07 16:59:07 DEBUG ReqId 5372 Timeout waiting for selector
2023-09-07 16:59:07 DEBUG ReqId 5372 Try to find the Cloudflare verify checkbox...
2023-09-07 16:59:07 DEBUG ReqId 5372 Cloudflare verify checkbox not found on the page.
2023-09-07 16:59:07 DEBUG ReqId 5372 Try to find the Cloudflare 'Verify you are human' button...
2023-09-07 16:59:07 DEBUG ReqId 5372 The Cloudflare 'Verify you are human' button not found on the page.
2023-09-07 16:59:09 DEBUG ReqId 5372 Waiting for title (attempt 6): Just a moment...
2023-09-07 16:59:09 DEBUG ReqId 5372 Waiting for title (attempt 6): DDoS-Guard
2023-09-07 16:59:09 DEBUG ReqId 5372 Waiting for selector (attempt 6): #cf-challenge-running
2023-09-07 16:59:09 DEBUG ReqId 5372 Waiting for selector (attempt 6): .ray_id
2023-09-07 16:59:09 DEBUG ReqId 5372 Waiting for selector (attempt 6): .attack-box
2023-09-07 16:59:09 DEBUG ReqId 5372 Waiting for selector (attempt 6): #cf-please-wait
2023-09-07 16:59:09 DEBUG ReqId 5372 Waiting for selector (attempt 6): #challenge-spinner
2023-09-07 16:59:10 DEBUG ReqId 5372 Timeout waiting for selector
2023-09-07 16:59:10 DEBUG ReqId 5372 Try to find the Cloudflare verify checkbox...
2023-09-07 16:59:10 DEBUG ReqId 5372 Cloudflare verify checkbox not found on the page.
2023-09-07 16:59:10 DEBUG ReqId 5372 Try to find the Cloudflare 'Verify you are human' button...
2023-09-07 16:59:10 DEBUG ReqId 5372 The Cloudflare 'Verify you are human' button not found on the page.
2023-09-07 16:59:12 DEBUG ReqId 5372 Waiting for title (attempt 7): Just a moment...
2023-09-07 16:59:12 DEBUG ReqId 5372 Waiting for title (attempt 7): DDoS-Guard
2023-09-07 16:59:12 DEBUG ReqId 5372 Waiting for selector (attempt 7): #cf-challenge-running
2023-09-07 16:59:12 DEBUG ReqId 5372 Waiting for selector (attempt 7): .ray_id
2023-09-07 16:59:12 DEBUG ReqId 5372 Waiting for selector (attempt 7): .attack-box
2023-09-07 16:59:12 DEBUG ReqId 5372 Waiting for selector (attempt 7): #cf-please-wait
2023-09-07 16:59:12 DEBUG ReqId 5372 Waiting for selector (attempt 7): #challenge-spinner
2023-09-07 16:59:13 DEBUG ReqId 5372 Timeout waiting for selector
2023-09-07 16:59:13 DEBUG ReqId 5372 Try to find the Cloudflare verify checkbox...
2023-09-07 16:59:13 DEBUG ReqId 5372 Cloudflare verify checkbox not found on the page.
2023-09-07 16:59:13 DEBUG ReqId 5372 Try to find the Cloudflare 'Verify you are human' button...
2023-09-07 16:59:13 DEBUG ReqId 5372 The Cloudflare 'Verify you are human' button not found on the page.
2023-09-07 16:59:15 DEBUG ReqId 5372 Waiting for title (attempt 8): Just a moment...
2023-09-07 16:59:15 DEBUG ReqId 5372 Waiting for title (attempt 8): DDoS-Guard
2023-09-07 16:59:15 DEBUG ReqId 5372 Waiting for selector (attempt 8): #cf-challenge-running
2023-09-07 16:59:15 DEBUG ReqId 5372 Waiting for selector (attempt 8): .ray_id
2023-09-07 16:59:15 DEBUG ReqId 5372 Waiting for selector (attempt 8): .attack-box
2023-09-07 16:59:15 DEBUG ReqId 5372 Waiting for selector (attempt 8): #cf-please-wait
2023-09-07 16:59:15 DEBUG ReqId 5372 Waiting for selector (attempt 8): #challenge-spinner
2023-09-07 16:59:16 DEBUG ReqId 5372 Timeout waiting for selector
2023-09-07 16:59:16 DEBUG ReqId 5372 Try to find the Cloudflare verify checkbox...
2023-09-07 16:59:16 DEBUG ReqId 5372 Cloudflare verify checkbox not found on the page.
2023-09-07 16:59:16 DEBUG ReqId 5372 Try to find the Cloudflare 'Verify you are human' button...
2023-09-07 16:59:16 DEBUG ReqId 5372 The Cloudflare 'Verify you are human' button not found on the page.
......
2023-09-07 16:59:48 DEBUG ReqId 21264 A used instance of webdriver has been destroyed
2023-09-07 16:59:48 ERROR ReqId 21264 Error: Error solving the challenge. Timeout after 60.0 seconds.
2023-09-07 16:59:48 DEBUG ReqId 21264 Response => POST /v1 body: {'status': 'error', 'message': 'Error: Error solving the challenge. Timeout after 60.0 seconds.', 'startTimestamp': 1694077126775, 'endTimestamp': 1694077188290, 'version': '3.3.4'}
2023-09-07 16:59:48 INFO ReqId 21264 Response in 61.515 s
2023-09-07 16:59:48 INFO ReqId 21264 127.0.0.1 POST http://localhost:8191/v1 500 Internal Server Error
```
### Screenshots
_No response_ | closed | 2023-09-07T09:13:33Z | 2023-09-13T09:30:27Z | https://github.com/FlareSolverr/FlareSolverr/issues/891 | [
"help wanted"
] | luoxuebinfei | 15 |
arogozhnikov/einops | tensorflow | 88 | do einops' operations account for contiguous memory layout? | Upon heavy reshaping and dimension manipulations, it is necessary from time to time to call .contiguous() on the resulting tensors to straighten out the memory layout. Does einops account for this automatically? I dont see no call to contiguous() anywhere in the examples | open | 2020-11-16T12:53:19Z | 2021-02-23T08:15:00Z | https://github.com/arogozhnikov/einops/issues/88 | [
"question"
] | CDitzel | 5 |
quokkaproject/quokka | flask | 258 | quokka.utils.paas broken in Python 3 | This file uses **execute** to activate a venv; find a solution for Python 3.
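Assuming the call in question is the Python 2 `execfile`-style venv activation, the usual Python 3 replacement looks like this (a sketch, not necessarily the project's final fix):
```python
# Python 3 removed execfile(); the virtualenv activate_this.py idiom becomes:
activate_this = "/path/to/venv/bin/activate_this.py"  # hypothetical path
with open(activate_this) as f:
    exec(f.read(), {"__file__": activate_this})
```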
| closed | 2015-07-15T12:36:25Z | 2015-07-16T02:56:11Z | https://github.com/quokkaproject/quokka/issues/258 | [
"bug",
"EASY"
] | rochacbruno | 1 |
3b1b/manim | python | 1,823 | - »manimgl example_scenes.py -lo« command does not work. - | ### Describe the error
I run the following command at the end of installation of manim, and then there is the following error.
### Code and Error
**manimgl**:
manimgl example_scenes.py -lo
**Error**:
ManimGL v1.3.0
[01:13:22] INFO Using the default configuration file, which you can modify in config.py:259
`c:\windows\system32\manimgl\manimlib\default_config.yml`
INFO If you want to create a local configuration file, you can create a file named config.py:260
`custom_config.yml`, or run `manimgl --config`
WARNING You may be using Windows platform and have not specified the path of config.py:226
`temporary_storage`, which may cause OSError. So it is recommended to specify the
`temporary_storage` in the config file (.yml)
Traceback (most recent call last):
File "C:\Windows\System32\ManimGL\mgl\Scripts\manimgl-script.py", line 33, in <module>
sys.exit(load_entry_point('manimgl', 'console_scripts', 'manimgl')())
File "c:\windows\system32\manimgl\manimlib\__main__.py", line 21, in main
config = manimlib.config.get_configuration(args)
File "c:\windows\system32\manimgl\manimlib\config.py", line 294, in get_configuration
module = get_module(args.file)
File "c:\windows\system32\manimgl\manimlib\config.py", line 178, in get_module
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 879, in exec_module
File "<frozen importlib._bootstrap_external>", line 1016, in get_code
File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Windows\\System32\\ManimGL\\examples_scenes.py'
### Environment
**Microsoft Windows 11 Home Single Language**
**ManimGL 1.6.1**: master <!-- make sure you are using the latest version of master branch -->
**Python 3.10.0**
| open | 2022-05-27T02:43:51Z | 2022-05-28T04:18:29Z | https://github.com/3b1b/manim/issues/1823 | [] | Siegfried-Gottlich-Wotansson | 5 |
autogluon/autogluon | computer-vision | 3,813 | DDP issue | **Bug Report Checklist**
```python
import pyarrow.parquet as pq
from autogluon.multimodal import MultiModalPredictor
import os

train_data = pq.read_table('features_with_label.parquet').to_pandas()
metric = 'f1'
time_limit = 180
predictor = MultiModalPredictor(label='label', eval_metric=metric)
predictor.fit(train_data, time_limit=time_limit)
```
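For reference, this Lightning error is commonly mitigated by keeping the training call under a main guard, so that the worker processes spawned for DDP do not re-run CUDA-touching code on import (a sketch, not a confirmed fix for this report):
```python
# Sketch: guard the entry point so spawned workers importing this module
# do not re-execute the training code.
if __name__ == "__main__":
    predictor = MultiModalPredictor(label='label', eval_metric=metric)
    predictor.fit(train_data, time_limit=time_limit)
```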
**Describe the bug**
I am trying to use MultiModalPredictor to perform classification on combination of text and tabular data. I am running my code on "ml.p3.8xlarge" instance with kernel "conda_pytorch_py310". I am getting below eror
“Lightning can’t create new processes if CUDA is already initialized. Did you manually call `torch.cuda.*` functions, have moved the model to the device, or allocated memory on the GPU any other way? Please remove any such calls, or change the selected strategy. You will have to restart the Python kernel.”
**Screenshots / Logs**
[error_logs.txt](https://github.com/autogluon/autogluon/files/13666797/error_logs.txt)
```python
python version = Python 3.10.13
Lightning version = '2.0.9.post0'
autogluon = 2.21
```
| closed | 2023-12-14T00:01:48Z | 2024-06-27T10:36:23Z | https://github.com/autogluon/autogluon/issues/3813 | [
"bug: unconfirmed",
"Needs Triage",
"module: multimodal"
] | vinayakkarande | 3 |
miguelgrinberg/Flask-SocketIO | flask | 1,536 | incoming request | **Hello,
I am using the following code to process incoming requests. The random-number test between the server and client works, but when I add a request.get lookup the host shows nothing.**
**server code:**
```
from flask_socketio import SocketIO, emit
from flask import Flask, render_template, url_for, copy_current_request_context, request
from random import random
from time import sleep
from threading import Thread, Event
app = Flask(__name__)
app.config['DEBUG'] = True
#turn the flask app into a socketio app
socketio = SocketIO(app, async_mode=None, logger=True, engineio_logger=True)
#random number Generator Thread
thread = Thread()
thread_stop_event = Event()
def randomNumberGenerator():
# """
# Generate a random number every 1 second and emit to a socketio instance (broadcast)
# Ideally to be run in a separate thread?
# """
#infinite loop of magical random numbers
print("Making random numbers")
while not thread_stop_event.isSet():
# number = round(random()*10, 3)
number = request.values.get('test')
print(number)
socketio.emit('newnumber', {'number': number}, namespace='/test')
socketio.sleep(5)
@app.route('/')
def index():
#only by sending this page first will the client be connected to the socketio instance
return render_template('index.html')
@socketio.on('connect', namespace='/test')
def test_connect():
# need visibility of the global thread object
global thread
print('Client connected')
#Start the random number generator thread only if the thread has not been started before.
if not thread.isAlive():
print("Starting Thread")
thread = socketio.start_background_task(randomNumberGenerator)
@socketio.on('disconnect', namespace='/test')
def test_disconnect():
print('Client disconnected')
if __name__ == '__main__':
socketio.run(app)
```
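One likely culprit worth noting (my reading, not a confirmed diagnosis): `request.values.get('test')` runs inside the background thread, where Flask's request context does not exist. A sketch of capturing the value inside the connect handler, where the context is available, and passing it to the task (this assumes `randomNumberGenerator` is changed to accept the value as a parameter):
```python
# Sketch: read request data in the handler (request context exists here)
# and hand the value to the background task instead of reading it there.
@socketio.on('connect', namespace='/test')
def test_connect():
    global thread
    test_value = request.args.get('test')
    print('Client connected, test =', test_value)
    if not thread.is_alive():
        thread = socketio.start_background_task(randomNumberGenerator, test_value)
```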
**client code:**
```
$(document).ready(function(){
//connect to the socket server.
var socket = io.connect('http://' + document.domain + ':' + location.port + '/test');
var numbers_received = [];
//receive details from server
socket.on('newnumber', function(msg) {
console.log("Received number" + msg.number);
//maintain a list of ten numbers
// if (numbers_received.length >= 10){
// numbers_received.shift()
// }
numbers_received.push(msg.number);
numbers_string = '';
for (var i = 0; i < numbers_received.length; i++){
numbers_string = numbers_string + '<p>' + numbers_received[i].toString() + '</p>';
}
$('#log').html(numbers_string);
});
});
```
I am new to Socket.IO, so I do not know how to process GET request data. | closed | 2021-04-29T09:58:31Z | 2021-06-27T19:38:22Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1536 | [
"question"
] | Elappnano | 10 |
pydantic/logfire | pydantic | 206 | `logfire whoami` should respect the `LOGFIRE_TOKEN` env var. | and I guess `pyproject.toml` and anywhere else we look for a token, e.g. it should have the same semantics in terms of finding a project as
```bash
python -c 'import logfire; logfire.info("hello world")'
``` | closed | 2024-05-22T21:48:17Z | 2024-06-11T09:46:16Z | https://github.com/pydantic/logfire/issues/206 | [
"good first issue",
"Feature Request"
] | samuelcolvin | 1 |
ets-labs/python-dependency-injector | asyncio | 358 | Configuration raises AttributeError when provider is called | Hi, I just ran into this issue with the `Configuration` provider. After scratching my head for a bit, I managed to find a workaround, but I was wondering whether this is actually a bug or just something I am doing wrong. Any help would be appreciated!
**Steps to reproduce**
`containers.py`
```python
from dependency_injector import providers, containers
class MyService(object):
def __init__(self, **kwargs):
self.key = kwargs.pop('key')
def trigger(self):
pass
class MyDevice(object):
def __init__(self, **kwargs):
# doesn't raise an error because it's an instance of
# dependency_injector.providers.Singleton
self.service = kwargs.pop('service')
def do_something(self):
# raises "AttributeError: 'NoneType' object has no attribute 'get'"
self.service().trigger()
class ServiceContainer(containers.DeclarativeContainer):
config = providers.Configuration()
myservice = providers.Singleton(MyService, config=config.myservice)
class Container(containers.DeclarativeContainer):
config = providers.Configuration()
services = providers.Container(ServiceContainer, config=config.services)
mydevice = providers.Factory(MyDevice)
```
If I run `app.py`
```python
import sys
from containers import Container
container = Container()
container.config.from_yaml('config.yaml')
container.init_resources()
container.wire(modules=[sys.modules[__name__]])
mydevice = container.mydevice(service=container.services.myservice)
mydevice.do_something()
```
with `config.yaml`
```yaml
foo:
bar: 42
```
it raises the following error
> File "/home/stefano/personal/test-error/containers.py", line 15, in do_something
>     self.service().trigger()
> File "src/dependency_injector/providers.pyx", line 168, in dependency_injector.providers.Provider.__call__
> File "src/dependency_injector/providers.pyx", line 2245, in dependency_injector.providers.Singleton._provide
> File "src/dependency_injector/providers.pxd", line 550, in dependency_injector.providers.__factory_call
> File "src/dependency_injector/providers.pxd", line 536, in dependency_injector.providers.__callable_call
> File "src/dependency_injector/providers.pxd", line 495, in dependency_injector.providers.__call
> File "src/dependency_injector/providers.pxd", line 387, in dependency_injector.providers.__provide_keyword_args
> File "src/dependency_injector/providers.pxd", line 310, in dependency_injector.providers.__get_value
> File "src/dependency_injector/providers.pyx", line 168, in dependency_injector.providers.Provider.__call__
> File "src/dependency_injector/providers.pyx", line 1232, in dependency_injector.providers.ConfigurationOption._provide
> File "src/dependency_injector/providers.pyx", line 1467, in dependency_injector.providers.Configuration.get
> **AttributeError: 'NoneType' object has no attribute 'get'**
**Workaround**
To avoid the issue, I have to pass the whole `config` to `ServiceContainer`
```python
class ServiceContainer(containers.DeclarativeContainer):
    config = providers.Configuration()
    myservice = providers.Singleton(MyService, config=config.services.myservice)


class Container(containers.DeclarativeContainer):
    config = providers.Configuration()
    services = providers.Container(ServiceContainer, config=config)
    mydevice = providers.Factory(MyDevice)
```
Running the application now, raises the following (as expected)
> File "/home/stefano/personal/test-error/containers.py", line 18, in do_something
>     self.service().trigger()
> File "src/dependency_injector/providers.pyx", line 168, in dependency_injector.providers.Provider.__call__
> File "src/dependency_injector/providers.pyx", line 2245, in dependency_injector.providers.Singleton._provide
> File "src/dependency_injector/providers.pxd", line 550, in dependency_injector.providers.__factory_call
> File "src/dependency_injector/providers.pxd", line 536, in dependency_injector.providers.__callable_call
> File "src/dependency_injector/providers.pxd", line 526, in dependency_injector.providers.__call
> File "/home/stefano/personal/test-error/containers.py", line 5, in __init__
>     **self.key = kwargs.pop('key')
> KeyError: 'key'**
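For reference, the `config.services.myservice` option chain in the original snippet expects the yaml to define a matching nested section, roughly like this (a sketch with placeholder names), which the `foo: bar: 42` file never provides, so the option resolves to `None` in the first traceback:
```yaml
services:
  myservice:
    some_option: some-value
```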
| closed | 2021-01-14T20:18:29Z | 2021-01-28T15:06:31Z | https://github.com/ets-labs/python-dependency-injector/issues/358 | [
"bug"
] | StefanoFrazzetto | 21 |
jstrieb/github-stats | asyncio | 18 | Inaccurate statistics | Hello 👋.
I followed all the steps stated in the README file correctly: I added the correct permissions, clicked the links provided in both steps 2 and 3, and named the secret correctly.
However, these are the generated images:

and

[My fork](https://github.com/JoseDeFreitas/github-stats)
They do not match my actual stats whatsoever.
As I wrote, I've done all the steps correctly. I tried reloading the page and re-running the workflow, but the result was the same. I don't know if it's an issue with the API or if I actually did something wrong...
Thank you in advance. | closed | 2021-02-10T00:23:45Z | 2021-02-10T01:45:24Z | https://github.com/jstrieb/github-stats/issues/18 | [] | JoseDeFreitas | 3 |
NVlabs/neuralangelo | computer-vision | 29 | neuralangelo docker run issue - WSL2 + Ubuntu 20.04 LTS | After doing this:
https://github.com/NVlabs/neuralangelo/issues/10
I'm trying with WSL2 + Ubuntu 20.04 LTS + docker.
The log is below.
```shell
(neuralangelo) root@altava-farer:~/neuralangelo# nvidia-smi
Thu Aug 17 10:18:03 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.226.00 Driver Version: 536.67 CUDA Version: 12.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:01:00.0 On | Off |
| 0% 36C P8 32W / 450W | 2974MiB / 24564MiB | 6% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 20 G /Xwayland N/A |
| 0 33 G /Xwayland N/A |
+-----------------------------------------------------------------------------+
(neuralangelo) root@altava-farer:~/neuralangelo#
(neuralangelo) root@altava-farer:~/neuralangelo# docker run --gpus all -it docker.io/chenhsuanlin/neuralangelo:23.04-py3
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/de790850947733812be2cb67e6dd791f79c546dfa8d87cd115ac2d82e2f352eb/merged/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1: file exists: unknown.
ERRO[0000] error waiting for container: context canceled
(neuralangelo) root@altava-farer:~/neuralangelo#
```
But it works without "--gpus all".
```shell
(neuralangelo) root@altava-farer:~/neuralangelo# docker run -it docker.io/chenhsuanlin/neuralangelo:23.04-py3
=============
== PyTorch ==
=============
NVIDIA Release 23.04 (build 58180998)
PyTorch Version 2.1.0a0+fe05266
Container image Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2014-2023 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015 Google Inc.
Copyright (c) 2015 Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
Use the NVIDIA Container Toolkit to start this container with GPU support; see
https://docs.nvidia.com/datacenter/cloud-native/ .
NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
insufficient for PyTorch. NVIDIA recommends the use of the following flags:
docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...
root@bbc348e95135:/workspace#
```
Then I ran torchrun as below.
```shell
root@bbc348e95135:/workspace/neuralangelo# ll
total 92
drwxr-xr-x 9 root root 4096 Aug 17 01:23 ./
drwxrwxrwx 1 root root 4096 Aug 17 01:21 ../
drwxr-xr-x 8 root root 4096 Aug 17 01:21 .git/
-rw-r--r-- 1 root root 3497 Aug 17 01:21 .gitignore
-rw-r--r-- 1 root root 104 Aug 17 01:21 .gitmodules
-rw-r--r-- 1 root root 143 Aug 17 01:21 .pre-commit-config.yaml
-rw-r--r-- 1 root root 5246 Aug 17 01:21 DATA_PROCESSING.md
-rw-r--r-- 1 root root 4454 Aug 17 01:21 LICENSE.md
-rw-r--r-- 1 root root 4158 Aug 17 01:21 README.md
drwxr-xr-x 2 root root 4096 Aug 17 01:21 assets/
drwxr-xr-x 2 root root 4096 Aug 17 01:21 docker/
drwxr-xr-x 6 root root 4096 Aug 17 01:21 imaginaire/
-rw-r--r-- 1 root root 378 Aug 17 01:21 neuralangelo.yaml
drwxr-xr-x 4 root root 4096 Aug 17 01:21 projects/
-rw-r--r-- 1 root root 368 Aug 17 01:21 requirements.txt
drwxr-xr-x 3 root root 4096 Aug 17 01:21 third_party/
-rwxr-xr-x 1 root root 584 Aug 16 02:38 toy_example.yaml*
drwxr-xr-x 4 root root 4096 Aug 16 02:38 toy_example_skip24/
-rw-r--r-- 1 root root 4130 Aug 17 01:21 train.py
root@bbc348e95135:/workspace/neuralangelo#
root@bbc348e95135:/workspace/neuralangelo#
root@bbc348e95135:/workspace/neuralangelo# EXPERIMENT=toy_example
root@bbc348e95135:/workspace/neuralangelo# GROUP=example_group
E=examproot@bbc348e95135:/workspace/neuralangelo# NAME=example_name
root@bbc348e95135:/workspace/neuralangelo#
root@bbc348e95135:/workspace/neuralangelo# CONFIG=./toy_example.yaml
root@bbc348e95135:/workspace/neuralangelo# GPUS=1
root@bbc348e95135:/workspace/neuralangelo#
root@bbc348e95135:/workspace/neuralangelo# torchrun --nproc_per_node=${GPUS} train.py \
> --logdir=logs/${GROUP}/${NAME} \
> --config=${CONFIG} \
> --show_pbar
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/pynvml/nvml.py", line 1478, in _LoadNvmlLibrary
nvmlLib = CDLL("libnvidia-ml.so.1")
File "/usr/lib/python3.8/ctypes/__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1: file too short
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 20, in <module>
from imaginaire.utils.gpu_affinity import set_affinity
File "/workspace/neuralangelo/imaginaire/utils/gpu_affinity.py", line 22, in <module>
pynvml.nvmlInit()
File "/usr/local/lib/python3.8/dist-packages/pynvml/nvml.py", line 1450, in nvmlInit
nvmlInitWithFlags(0)
File "/usr/local/lib/python3.8/dist-packages/pynvml/nvml.py", line 1433, in nvmlInitWithFlags
_LoadNvmlLibrary()
File "/usr/local/lib/python3.8/dist-packages/pynvml/nvml.py", line 1480, in _LoadNvmlLibrary
_nvmlCheckReturn(NVML_ERROR_LIBRARY_NOT_FOUND)
File "/usr/local/lib/python3.8/dist-packages/pynvml/nvml.py", line 765, in _nvmlCheckReturn
raise NVMLError(ret)
pynvml.nvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 414) of binary: /usr/bin/python
Traceback (most recent call last):
File "/usr/local/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==2.1.0a0+fe05266', 'console_scripts', 'torchrun')())
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-08-17_01:26:31
host : bbc348e95135
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 414)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
root@bbc348e95135:/workspace/neuralangelo#
```
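Both failures point at the same file: with `--gpus all`, the NVIDIA hook dies on `.../usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1: file exists`, and inside the CPU-only container the same library fails to load with `file too short`, i.e. it looks like an empty stub is shadowing the real WSL driver library. A quick check I would run (just a diagnostic sketch, paths taken from the tracebacks above):
```shell
ls -l /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
ls -l /usr/lib/wsl/lib/libnvidia-ml.so.1   # where WSL2 normally exposes the Windows driver libs
```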
Is there anything missing? | closed | 2023-08-17T01:34:51Z | 2023-08-26T04:49:54Z | https://github.com/NVlabs/neuralangelo/issues/29 | [] | altava-sgp | 16 |
matplotlib/mplfinance | matplotlib | 615 | Error in module of Python 3.11.3, Pmw, BLT. | I am using Python 3.11.3 and building a stock market package in Python. I need menus in that package, and to create them I want to use the Pmw package with Tkinter.
To try my first steps, I used the code snippet at: https://www.slac.stanford.edu/grp/cd/soft/pmw/blt/python/html/HelloBLT.html
When I ran this script, I got this error:
```
Traceback (most recent call last):
  File "D:\PYTHON_3.11.3\Lib\site-packages\Pmw\Pmw_2_1_1\lib\PmwBlt.py", line 103, in __del__
    self.tk.call(_vectorCommand, 'destroy', self._name)
_tkinter.TclError: invalid command name "::blt::vector"
```
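The `invalid command name "::blt::vector"` message suggests the Tcl BLT extension itself is not loaded; Pmw's BLT widgets need it on top of Pmw. A quick way to confirm (a sketch; `Pmw.Blt.haveblt` is the availability check that ships with Pmw):
```python
import tkinter
import Pmw

root = tkinter.Tk()
Pmw.initialise(root)
# False means Tk cannot load the BLT extension, so the BLT widgets will fail
print(Pmw.Blt.haveblt(root))
```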
Any help please
| closed | 2023-05-01T17:30:09Z | 2023-05-30T20:49:37Z | https://github.com/matplotlib/mplfinance/issues/615 | [
"question"
] | Blessvskp | 6 |
deepfakes/faceswap | deep-learning | 1,290 | Missing alignments faces | Hello. First of all, I have searched the forum, but there is no answer to my problem.
When I extract the frames of any video, the alignments file is not created and there are no error messages. I tried several videos with the same result. I also completely uninstalled faceswap and the conda environment, then reinstalled, but the problem is still the same.
I have created an extract.log file hoping that someone can help me.
```
12/28/2022 14:02:50 MainProcess MainThread logger log_setup INFO Log level set to: INFO
12/28/2022 14:02:52 MainProcess MainThread plugin_loader _import INFO Loading Detect from S3Fd plugin...
12/28/2022 14:02:52 MainProcess MainThread plugin_loader _import INFO Loading Align from Fan plugin...
12/28/2022 14:02:52 MainProcess MainThread plugin_loader _import INFO Loading Mask from Components plugin...
12/28/2022 14:02:52 MainProcess MainThread plugin_loader _import INFO Loading Mask from Extended plugin...
12/28/2022 14:02:52 MainProcess MainThread plugin_loader _import INFO Loading Mask from Bisenet_Fp plugin...
12/28/2022 14:02:52 MainProcess MainThread pipeline _set_plugin_batchsize INFO Reset batch sizes due to available VRAM: Detect: 1, Align: 1, Mask: 1
12/28/2022 14:02:52 MainProcess MainThread extract process INFO Starting, this may take a while...
12/28/2022 14:02:52 MainProcess MainThread extract __init__ INFO Output Directory: /home/cedric/faceswap/workspace/A
12/28/2022 14:02:52 MainProcess MainThread _base initialize INFO Initializing S3FD (Detect)...
12/28/2022 14:02:53 MainProcess MainThread _base initialize INFO Initialized S3FD (Detect) with batchsize of 1
12/28/2022 14:02:53 MainProcess MainThread _base initialize INFO Initializing FAN (Align)...
12/28/2022 14:03:01 MainProcess MainThread _base initialize INFO Initialized FAN (Align) with batchsize of 1
12/28/2022 14:03:01 MainProcess MainThread _base initialize INFO Initializing Components (Mask)...
12/28/2022 14:03:01 MainProcess MainThread _base initialize INFO Initialized Components (Mask) with batchsize of 1
12/28/2022 14:03:01 MainProcess MainThread _base initialize INFO Initializing Extended (Mask)...
12/28/2022 14:03:01 MainProcess MainThread _base initialize INFO Initialized Extended (Mask) with batchsize of 1
12/28/2022 14:03:01 MainProcess MainThread _base initialize INFO Initializing BiSeNet - Face Parsing (Mask)...
12/28/2022 14:03:03 MainProcess MainThread _base initialize INFO Initialized BiSeNet - Face Parsing (Mask) with batchsize of 1
```
And the saved project:
```json
{
"convert": {
"Input Dir": "",
"Output Dir": "",
"Alignments": "",
"Reference Video": "",
"Model Dir": "",
"Color Adjustment": "avg-color",
"Mask Type": "extended",
"Writer": "opencv",
"Output Scale": 100,
"Frame Ranges": "",
"Input Aligned Dir": "",
"Nfilter": "",
"Filter": "",
"Ref Threshold": 0.4,
"Jobs": 0,
"Trainer": "",
"On The Fly": false,
"Keep Unchanged": false,
"Swap Model": false,
"Singleprocess": false,
"Exclude Gpus": "",
"Configfile": "",
"Loglevel": "INFO",
"Logfile": ""
},
"extract": {
"Input Dir": "/home/cedric/faceswap/workspace/video/Test.mp4",
"Output Dir": "/home/cedric/faceswap/workspace/A",
"Alignments": "",
"Batch Mode": false,
"Detector": "s3fd",
"Aligner": "fan",
"Masker": "bisenet-fp",
"Normalization": "hist",
"Re Feed": 9,
"Re Align": false,
"Rotate Images": "",
"Identity": false,
"Min Size": 20,
"Nfilter": "",
"Filter": "",
"Ref Threshold": 0.6,
"Size": 512,
"Extract Every N": 1,
"Save Interval": 0,
"Debug Landmarks": false,
"Singleprocess": false,
"Skip Existing": false,
"Skip Existing Faces": false,
"Skip Saving Faces": false,
"Exclude Gpus": "",
"Configfile": "",
"Loglevel": "INFO",
"Logfile": "/home/cedric/faceswap/workspace/extract.log"
},
"train": {
"Input A": "",
"Input B": "",
"Model Dir": "",
"Load Weights": "",
"Trainer": "original",
"Summary": false,
"Freeze Weights": false,
"Batch Size": 16,
"Iterations": 1000000,
"Distributed": false,
"Distribution Strategy": "default",
"Save Interval": 250,
"Snapshot Interval": 25000,
"Timelapse Input A": "",
"Timelapse Input B": "",
"Timelapse Output": "",
"Preview": false,
"Write Image": false,
"No Logs": false,
"Warp To Landmarks": false,
"No Flip": false,
"No Augment Color": false,
"No Warp": false,
"Exclude Gpus": "",
"Configfile": "",
"Loglevel": "INFO",
"Logfile": ""
},
"alignments": {
"Job": "",
"Output": "console",
"Alignments File": "",
"Faces Folder": "",
"Frames Folder": "",
"Extract Every N": 1,
"Size": 512,
"Min Size": 0,
"Exclude Gpus": "",
"Configfile": "",
"Loglevel": "INFO",
"Logfile": ""
},
"effmpeg": {
"Action": "extract",
"Input": "input",
"Output": "",
"Reference Video": "",
"Fps": "-1.0",
"Extract Filetype": ".png",
"Start": "00:00:00",
"End": "00:00:00",
"Duration": "00:00:00",
"Mux Audio": false,
"Transpose": "",
"Degrees": "",
"Scale": "1920x1080",
"Quiet": false,
"Verbose": false,
"Exclude Gpus": "",
"Configfile": "",
"Loglevel": "INFO",
"Logfile": ""
},
"manual": {
"Alignments": "",
"Frames": "",
"Thumb Regen": false,
"Single Process": false,
"Exclude Gpus": "",
"Configfile": "",
"Loglevel": "INFO",
"Logfile": ""
},
"mask": {
"Alignments": "",
"Input": "",
"Input Type": "frames",
"Masker": "extended",
"Processing": "missing",
"Output Folder": "",
"Blur Kernel": 3,
"Threshold": 4,
"Output Type": "combined",
"Full Frame": false,
"Exclude Gpus": "",
"Configfile": "",
"Loglevel": "INFO",
"Logfile": ""
},
"model": {
"Model Dir": "",
"Job": "",
"Format": "h5",
"Swap Model": false,
"Exclude Gpus": "",
"Configfile": "",
"Loglevel": "INFO",
"Logfile": ""
},
"preview": {
"Input Dir": "",
"Alignments": "",
"Model Dir": "",
"Swap Model": false,
"Exclude Gpus": "",
"Configfile": "",
"Loglevel": "INFO",
"Logfile": ""
},
"sort": {
"Input": "",
"Output": "",
"Batch Mode": false,
"Sort By": "face",
"Group By": "none",
"Keep": false,
"Threshold": -1.0,
"Final Process": "",
"Bins": 5,
"Log Changes": false,
"Log File": "sort_log.json",
"Exclude Gpus": "",
"Configfile": "",
"Loglevel": "INFO",
"Logfile": ""
},
"tab_name": "extract"
}
``` | closed | 2022-12-28T12:55:39Z | 2023-01-29T03:26:30Z | https://github.com/deepfakes/faceswap/issues/1290 | [] | gravitydeepper | 2 |
yezyilomo/django-restql | graphql | 185 | Add a way to define a serializer with a self referencing nested field | The way I recommend:
```py
from django_restql.fields import NestedField
from django_restql.mixins import DynamicFieldsMixin
from django_restql.serializers import NestedModelSerializer


class UserSerializer(DynamicFieldsMixin, NestedModelSerializer):
follows = NestedField(
'self',
many=True,
return_pk=True,
create_ops=[],
update_ops=['add', 'remove'],
required=False
)
``` | closed | 2020-08-04T21:01:19Z | 2021-09-18T06:14:34Z | https://github.com/yezyilomo/django-restql/issues/185 | [
"enhancement"
] | yezyilomo | 8 |
ipyflow/ipyflow | jupyter | 10 | support context managers (i.e. `with` clause) | closed | 2020-04-30T15:11:47Z | 2020-05-07T04:26:25Z | https://github.com/ipyflow/ipyflow/issues/10 | [] | smacke | 0 |
|
ploomber/ploomber | jupyter | 613 | Better error message when incomplete pipeline.yaml | A new user may try something like this:
```yaml
tasks:
- source: something
product: out
```
The error message isn't helpful. We should show them how to create a script/function task.
```
Error: Failed to determine task class for source 'something': Invalid dotted path 'something'. Value must be a dot separated string, with at least two parts: [module_name].[function_name].
```
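For context, a valid entry needs either a path to a script/notebook or a `module.function` dotted path; the file and function names below are just illustrative:
```yaml
tasks:
  # script task: source is a path to a .py/.ipynb file
  - source: scripts/clean.py
    product:
      nb: output/clean.ipynb
  # function task: source is a dotted path to a Python function
  - source: tasks.load
    product: output/raw.csv
```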
| closed | 2022-02-22T16:51:12Z | 2022-02-27T23:49:16Z | https://github.com/ploomber/ploomber/issues/613 | [] | edublancas | 0 |
newpanjing/simpleui | django | 319 | Simpletags not defined | **Bug description**
Invalid template library specified. ImportError raised when trying to load 'simpleui.templatetags.simpletags': No module named 'simpleui.templatetags'
This happens with the new version of simpleUI (2020.9.26) and not with the older version (2020.7).
**Repeat step**
1. pip install django-simpleui
2. Go to localhost:8000/admin
**Environment**
_Operating System_: Windows
_Python Version_: 3.7
_Django Version_: 2.2
_SimpleUI Version_: 2020.9.26
Sorry I don't speak Chinese, but I will try to help as much as I can :) | closed | 2020-11-18T09:05:29Z | 2020-12-22T04:25:57Z | https://github.com/newpanjing/simpleui/issues/319 | [
"bug"
] | leogout | 2 |
python-gino/gino | asyncio | 698 | Gino ORM query not working using Geoalchemy2 functions | * GINO version: 1.0.0
* Python version: 3.8.2
* asyncpg version: 0.20.1
* aiocontextvars version:
* PostgreSQL version: 12.2
### Description
The ORM query written in Sanic with GeoAlchemy2 functions is not working.
### What I Did
I have this table in a PostgreSQL database with the PostGIS extension installed and enabled.
```
Table "public.crime_data"
Column | Type | Collation | Nullable | Default
-------------|-----------------------------|-----------|----------|----------------------------------------
id | integer | | not null | nextval('crime_data_id_seq'::regclass)
state | character varying | | |
district | character varying | | |
location | character varying | | |
sub_type_id | integer | | |
date_time | timestamp without time zone | | |
latitude | double precision | | |
longitude | double precision | | |
geom_point | geography(Point,4326) | | |
Indexes:
"crime_data_pkey" PRIMARY KEY, btree (id)
"idx_crime_data_geom_point" gist (geom_point)
Foreign-key constraints:
"crime_data_sub_type_id_fkey" FOREIGN KEY (sub_type_id) REFERENCES sub_type(id)
```
I am using `Sanic` web framework and along with it `Gino ORM` since it's asynchronous.
I am able to write and run raw SQL queries in the command line and also using `Gino`. I just want to know if it's possible to convert a certain query to ORM syntax.
This is the raw query that is _working_. This code snippet is inside an async view function and this is returning the expected result.
```python
data_points = await db.status(db.text('''
SELECT
location,
sub_type_id,
latitude,
longitude,
date_time
FROM
crime_data
WHERE
ST_Distance(
geom_point,
ST_SetSRID(ST_MakePoint(:lng, :lat), 4326)
) <= 5 * 1609.34;
'''), {
'lat': lat,
'lng': lng,
})
```
This is my attempt to convert it to an ORM query, which _**isn't** working_.
```python
data_points = await CrimeData.query.where(
geo_func.ST_Distance(
'geom_point',
geo_func.ST_SetSRID(
geo_func.ST_MakePoint(lng, lat),
4326
)
) <= (5 * 1609.34)
).gino.all()
```
While trying to run this query and return the response as `text`, I'm getting this error.
```
⚠️ 500 — Internal Server Error
parse error - invalid geometry HINT: "ge" <-- parse error at position 2 within geometry
Traceback of __main__ (most recent call last):
InternalServerError: parse error - invalid geometry HINT: "ge" <-- parse error at position 2 within geometry
File /home/disciple/Documents/Code/MyProject-All/MyProject-Sanic/venv/lib/python3.8/site-packages/sanic/app.py, line 973, in handle_request
response = await response
File /home/disciple/Documents/Code/MyProject-All/MyProject-Sanic/backend/services/crime_plot.py, line 30, in test
data_points = await CrimeData.query.where(
File /home/disciple/Documents/Code/MyProject-All/MyProject-Sanic/venv/lib/python3.8/site-packages/gino/api.py, line 127, in all
return await self._query.bind.all(self._query, *multiparams, **params)
File /home/disciple/Documents/Code/MyProject-All/MyProject-Sanic/venv/lib/python3.8/site-packages/gino/engine.py, line 740, in all
return await conn.all(clause, *multiparams, **params)
File /home/disciple/Documents/Code/MyProject-All/MyProject-Sanic/venv/lib/python3.8/site-packages/gino/engine.py, line 316, in all
return await result.execute()
File /home/disciple/Documents/Code/MyProject-All/MyProject-Sanic/venv/lib/python3.8/site-packages/gino/dialects/base.py, line 214, in execute
rows = await cursor.async_execute(
File /home/disciple/Documents/Code/MyProject-All/MyProject-Sanic/venv/lib/python3.8/site-packages/gino/dialects/asyncpg.py, line 184, in async_execute
result, stmt = await getattr(conn, "_do_execute")(query, executor, timeout)
File /home/disciple/Documents/Code/MyProject-All/MyProject-Sanic/venv/lib/python3.8/site-packages/asyncpg/connection.py, line 1433, in _do_execute
result = await executor(stmt, None)
File asyncpg/protocol/protocol.pyx, line 196, in bind_execute
InternalServerError: parse error - invalid geometry HINT: "ge" <-- parse error at position 2 within geometry while handling path /crime-plot/test1
```
I understand the ORM query is a `SELECT *` and that is fine as long as I actually get results. I don't understand what I'm doing wrong. I'm getting the work done but I just want to make sure that it's possible with the ORM too.
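One observation: the hint (`"ge" <-- parse error at position 2`) suggests the literal string `'geom_point'` is being sent as a bound parameter and parsed as geometry text instead of referencing the column. Using the column attribute would probably look like this (an untested sketch):
```python
data_points = await CrimeData.query.where(
    geo_func.ST_Distance(
        CrimeData.geom_point,  # column attribute, not the string 'geom_point'
        geo_func.ST_SetSRID(geo_func.ST_MakePoint(lng, lat), 4326)
    ) <= (5 * 1609.34)
).gino.all()
```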
| closed | 2020-06-07T18:39:34Z | 2020-06-09T02:28:39Z | https://github.com/python-gino/gino/issues/698 | [
"question"
] | KoustavCode | 3 |
taverntesting/tavern | pytest | 362 | [Feature request]: slurp openapi spec and produce coverage report | I think a really neat idea would be to take the OpenAPI (aka Swagger) spec and produce a "coverage report", i.e., show how many of the endpoints in the API were successfully tested/hit by Tavern. What do you think of this idea? | open | 2019-05-24T13:03:12Z | 2019-05-30T15:33:22Z | https://github.com/taverntesting/tavern/issues/362 | [
"Type: Enhancement"
] | tommyjcarpenter | 1 |
PokeAPI/pokeapi | api | 433 | Publish as a JS package | As a tool author, I'd like to rely on your data without directly querying your endpoint (it might be because I don't want to rely on third-party websites, or because I don't want to eat your bandwidth, or because I want finer control over the API layout).
What would you think of publishing an npm package containing the CSV files bundled as an SQLite database after each update to the `csv` directory? This could be automated fairly easily using GitHub Actions (or an Azure job in the worst case, if we can't find out how to give Actions access to this repository).
Generating the sqlite database is as simple as:
```bash
DATA_DIR=v2/csv
cd "${DATA_DIR}"
(
echo .mode csv
for TABLE in *.csv; do
echo .import "${TABLE}" "$(basename "${TABLE}" .csv)"
done
) | sqlite3 pokemon.db
```
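Consumers could then query the bundle directly, e.g. (the `pokemon` table name simply mirrors `pokemon.csv`):
```bash
sqlite3 pokemon.db "SELECT identifier FROM pokemon LIMIT 3;"
```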
The end result is ~30MB, but we can decrease it by providing a few different partial databases (for example one that would only contain the pokedex data). | closed | 2019-06-07T11:48:33Z | 2020-08-19T10:05:49Z | https://github.com/PokeAPI/pokeapi/issues/433 | [] | arcanis | 7 |
vimalloc/flask-jwt-extended | flask | 490 | Tests fail | Hi,
I'm working on the Debian packaging of flask-jwt-extended 4.4.3, and during the build I get these test errors:
```
__________________________________________________ test_add_context_processor ______________________________________________________

app = <Flask 'tests.test_add_context_processor'>

    def test_add_context_processor(app):
        jwt_manager = JWTManager(app, add_context_processor=True)

        @jwt_manager.user_lookup_loader
        def user_lookup_callback(_jwt_header, _jwt_data):
            return "test_user"

        test_client = app.test_client()
        with app.test_request_context():
            access_token = create_access_token("username")

        access_headers = {"Authorization": "Bearer {}".format(access_token)}
        response = test_client.get("/context_current_user", headers=access_headers)
>       assert response.text == "test_user"
E       AttributeError: 'WrapperTestResponse' object has no attribute 'text'

tests/test_add_context_processor.py:37: AttributeError
____________________________________________________ test_no_add_context_processor ____________________________________________________

app = <Flask 'tests.test_add_context_processor'>

    def test_no_add_context_processor(app):
        jwt_manager = JWTManager(app)

        @jwt_manager.user_lookup_loader
        def user_lookup_callback(_jwt_header, _jwt_data):
            return "test_user"

        test_client = app.test_client()
        with app.test_request_context():
            access_token = create_access_token("username")

        access_headers = {"Authorization": "Bearer {}".format(access_token)}
        response = test_client.get("/context_current_user", headers=access_headers)
>       assert response.text == ""
E       AttributeError: 'WrapperTestResponse' object has no attribute 'text'

tests/test_add_context_processor.py:54: AttributeError
``` | closed | 2022-08-01T18:27:09Z | 2022-08-01T20:45:27Z | https://github.com/vimalloc/flask-jwt-extended/issues/490 | [] | eamanu | 2 |
nonebot/nonebot2 | fastapi | 3,074 | Plugin: nonebot-plugin-leetcodeapi-khasa | ### PyPI project name
nonebot-plugin-leetcodeAPI-KHASA
### Plugin import package name
nonebot_plugin_leetcodeAPI_KHASA
### Tags
[{"label":"leetcode","color":"#ea5252"}]
### Plugin configuration
_No response_ | closed | 2024-10-27T09:12:51Z | 2024-11-04T13:10:28Z | https://github.com/nonebot/nonebot2/issues/3074 | [
"Plugin"
] | KhasAlushird | 5 |
pydantic/pydantic-core | pydantic | 1,468 | Work with Python coroutines in Rust? | I am wondering if there is any way to deal with Python coroutines in `pydantic_core`. I found [the async-await section of the PyO3 docs](https://pyo3.rs/v0.22.2/async-await), but the feature seems not to be enabled for `pydantic_core`. Are there any other workarounds that are equivalent to `async def` and `await` in Python?
### Context
I suspect the [`return_validator` logic in `pydantic._validate_call`](https://github.com/pydantic/pydantic/blob/c7497c56a71504a9ddd4c374dd5479f408484043/pydantic/_internal/_validate_call.py#L70C1-L93C54) is actually a duplicate of [the similar logic in `call.rs`](https://github.com/pydantic/pydantic-core/blob/f389728432949ecceddecb1f59bb503b0998e9aa/src/validators/call.rs#L95-L102). I tried just removing the Python part, and everything worked fine except for async functions, which currently work because of pydantic/pydantic#7046. The approach taken was to wrap the awaitable in an async function that awaits it:
```py
async def return_val_wrapper(aw: Awaitable[Any]) -> None:
return validator.validate_python(await aw)
self.__return_pydantic_validator__ = return_val_wrapper
```
Now that I want to remove the `return_validator` logic in Python and keep the Rust side, I will have to move this wrapper into `call.rs`, which is the reason I am opening this issue.
| open | 2024-09-26T13:22:18Z | 2024-09-30T12:39:47Z | https://github.com/pydantic/pydantic-core/issues/1468 | [] | kc0506 | 5 |
OFA-Sys/Chinese-CLIP | nlp | 259 | Does this support part-of-speech tagging? | As the title says. | closed | 2024-02-27T12:26:50Z | 2024-03-01T03:44:18Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/259 | [] | hu394854434 | 1 |
lepture/authlib | flask | 353 | What is in version 0.15.4 | A new version of Authlib has been released on PyPI, but it's nowhere to be found in the GitHub tags or the changelog. Did anything change, or was that version released due to some error?
https://pypi.org/project/Authlib/
Thanks! | closed | 2021-06-08T14:53:01Z | 2021-07-17T03:09:03Z | https://github.com/lepture/authlib/issues/353 | [
"question"
] | kawa-marcin | 6 |
KaiyangZhou/deep-person-reid | computer-vision | 381 | load_pretrained_weights(model, weight_path) Warning | closed | 2020-10-20T09:06:27Z | 2020-10-30T03:58:50Z | https://github.com/KaiyangZhou/deep-person-reid/issues/381 | [] | nanHeK | 0 |
|
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 490 | How do I train a model with my own data? Where can I find the instruction? | How do I train a model with my own data? Where can I find the instructions on how to do it?
Need help | closed | 2020-08-13T12:58:53Z | 2020-08-25T23:29:24Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/490 | [] | justinjohn0306 | 4 |
aminalaee/sqladmin | sqlalchemy | 21 | Offset query issue while using MSSQL | **Error while querying the model:**
`sqlalchemy.exc.CompileError: MSSQL requires an order_by when using an OFFSET or a non-simple LIMIT clause`
**Why?**
MSSQL requires an order_by when using an offset.
Setup:
- fastapi
- MSSQL Server 2019
- pyodbc and SQLAlchemy
**Original code:**
https://github.com/aminalaee/sqladmin/blob/2205de1706ed6dd8c429a34664238f95d8f0a2ad/sqladmin/models.py#L261
**My change:**
https://github.com/bigg01/sqladmin/blob/main/sqladmin/models.py#L262
```python
# sqlalchemy.exc.CompileError: MSSQL requires an order_by when using an OFFSET or a non-simple LIMIT clause
query = select(cls.model).order_by("id").limit(page_size).offset((page - 1) * page_size)
```
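A more general variant might order by the model's primary key instead of the hard-coded `"id"` column (a sketch; how the model class is exposed depends on sqladmin internals):
```python
from sqlalchemy import inspect, select

pk_column = inspect(cls.model).primary_key[0]
query = select(cls.model).order_by(pk_column).limit(page_size).offset((page - 1) * page_size)
```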
I think in general it would make sense to add an option for ordering.
What do you think?
Regards
| closed | 2022-01-18T20:24:59Z | 2022-01-19T17:19:39Z | https://github.com/aminalaee/sqladmin/issues/21 | [
"enhancement"
] | bigg01 | 1 |
eamigo86/graphene-django-extras | graphql | 117 | Permissions with graphene-django-extras | Hi everyone,
I would like to implement an easy permissions system. With the original `graphene-django`, it was quite straightforward: it was sufficient to define a method like this for each field on an object:
```python
def resolve_field(self):
if not has_permission():
raise PermissionError("Access Denied!")
return self.field
```
Here it is a bit more difficult, since `DjangoObjectListField` just bypasses these methods. The docs say that they are not needed, but even if they are present, they are simply ignored.
Do you have any advice on how to implement permissions here? Either how to force `DjangoObjectListField` not to ignore the `resolve_field` method, or a suggestion for a completely different approach.
Thanks! | open | 2019-08-08T14:20:18Z | 2020-06-26T05:55:00Z | https://github.com/eamigo86/graphene-django-extras/issues/117 | [] | karlosss | 5 |
python-gino/gino | asyncio | 56 | Error when creating a record in Sanic | * GINO version: 0.5.0
* Python version: 3.6.2
* Operating System: Ubuntu 14.04
### Description
Error when creating a record in Sanic after upgrading to version 0.5.0.
### What I Did
```python
@bp.post('/users')
async def add_new_user(request):
new_obj = request.json #dict
u = await User.create(**new_obj)
return ajax_maint_ok(u.id)
```
Call it like this:
```bash
DATA='{"nickname":"n1"}'
curl \
http://dserver:9901/demo/users \
-X POST \
-H "Content-Type: application/json" \
-H "Accept: text/html,application/json" \
-d ${DATA}
```
| closed | 2017-09-05T09:07:48Z | 2017-09-06T02:10:59Z | https://github.com/python-gino/gino/issues/56 | [
"question",
"wontfix"
] | jonahfang | 10 |
pydantic/pydantic | pydantic | 10,508 | Inconsistent schema generation resulting from `Any` in generic types | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
There's some inconsistency around schema generation for types that explicitly vs implicitly have a type of `Any`.
```python
from typing import Any

from pydantic import TypeAdapter
implicit = TypeAdapter(list).core_schema
explicit = TypeAdapter(list[Any]).core_schema
assert implicit == {'type': 'list', 'items_schema': {'type': 'any'}}
assert explicit == {'type': 'list'}
```
This also happens for `dict`, and I suspect other generic types:
```python
from typing import Any

from pydantic import TypeAdapter
implicit = TypeAdapter(dict).core_schema
explicit = TypeAdapter(dict[Any, Any]).core_schema
assert implicit == {'type': 'dict', 'keys_schema': {'type': 'any'}, 'values_schema': {'type': 'any'}}
assert explicit == {'type': 'dict', 'strict': False}
```
In my particular case I'm implementing [custom type adapters](https://github.com/pydantic/pydantic/issues/8279) via [`walk_core_schema`](https://github.com/pydantic/pydantic/issues/8279#issuecomment-2135935095). This approach relies on Pydantic handling `Any` consistently since I'd like to replace all instances of `{"type": "any"}` in the schema with some dynamic serialization logic. Under the current behavior, the implicit and explicit spellings produce different schemas, so a replacement keyed on `{"type": "any"}` misses the explicitly parameterized cases.
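For illustration, a schema walker then has to treat a missing subschema as an implicit `any` to cover both spellings (a sketch based on the schemas shown above):
```python
def is_any(schema: dict | None) -> bool:
    # an absent items/keys/values schema behaves like {'type': 'any'}
    return schema is None or schema.get('type') == 'any'
```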
### Example Code
```Python
See above.
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.7.3
pydantic-core version: 2.18.4
pydantic-core build: profile=release pgo=true
install path: /Users/ryanmorshead/miniconda3/envs/abraxas-env/lib/python3.11/site-packages/pydantic
python version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:34:54) [Clang 16.0.6 ]
platform: macOS-14.2.1-arm64-arm-64bit
related packages: typing_extensions-4.12.1 pyright-1.1.379
commit: unknown
```
| closed | 2024-09-27T19:43:52Z | 2024-09-27T19:44:16Z | https://github.com/pydantic/pydantic/issues/10508 | [
"bug V2",
"pending"
] | rmorshea | 1 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,637 | [Bug]: AttributeError: 'NoneType' object has no attribute 'lowvram' -- Clean install on Mac | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
On a clean install, selecting a downloaded model or the preloaded v1-5 model results in an AttributeError.
Terminal:
```
e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053
Loading weights [e1441589a6] from /Users/[obfuscated]/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt
Creating model from config: /Users/[obfuscated]/stable-diffusion-webui/configs/v1-inference.yaml
changing setting sd_model_checkpoint to v1-5-pruned.ckpt: AttributeError
Traceback (most recent call last):
File "/Users/[obfuscated]/stable-diffusion-webui/modules/options.py", line 165, in set
option.onchange()
File "/Users/[obfuscated]/stable-diffusion-webui/modules/call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "/Users/[obfuscated]/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'
```
### Steps to reproduce the problem
Upon clean install and webui launch, attempt to select the v1-5 pruned ckpt file.
### What should have happened?
A model should be able to be selected, and generation should be able to proceed.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
```json
{
"Platform": "macOS-12.1-arm64-arm-64bit",
"Python": "3.10.14",
"Version": "v1.9.3",
"Commit": "1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0",
"Script path": "/Users/[obfuscated]/stable-diffusion-webui",
"Data path": "/Users/[obfuscated]/stable-diffusion-webui",
"Extensions dir": "/Users/[obfuscated]/stable-diffusion-webui/extensions",
"Checksum": "d56275202269240dd6f316f3de94fd6195326487d0a53de5de030e8cc3084cb7",
"Commandline": [
"launch.py",
"--skip-torch-cuda-test",
"--upcast-sampling",
"--no-half-vae",
"--use-cpu",
"interrogate"
],
"Torch env info": {
"torch_version": "2.1.0",
"is_debug_build": "False",
"cuda_compiled_version": null,
"gcc_version": null,
"clang_version": "13.1.6 (clang-1316.0.21.2.5)",
"cmake_version": "version 3.29.2",
"os": "macOS 12.1 (arm64)",
"libc_version": "N/A",
"python_version": "3.10.14 (main, Mar 20 2024, 03:57:45) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)",
"python_platform": "macOS-12.1-arm64-arm-64bit",
"is_cuda_available": "False",
"cuda_runtime_version": null,
"cuda_module_loading": "N/A",
"nvidia_driver_version": null,
"nvidia_gpu_models": null,
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.26.2",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.1.0",
"torchdiffeq==0.2.3",
"torchmetrics==1.3.2",
"torchsde==0.2.6",
"torchvision==0.16.0"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": "Apple M1 Pro"
},
"Exceptions": [
{
"exception": "Torch not compiled with CUDA enabled",
"traceback": [
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py, line 620, get_sd_model",
"load_model()"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py, line 770, load_model",
"with devices.autocast(), torch.no_grad():"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py, line 218, autocast",
"if has_xpu() or has_mps() or cuda_no_autocast():"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py, line 28, cuda_no_autocast",
"device_id = get_cuda_device_id()"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py, line 40, get_cuda_device_id",
") or torch.cuda.current_device()"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py, line 769, current_device",
"_lazy_init()"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py, line 289, _lazy_init",
"raise AssertionError(\"Torch not compiled with CUDA enabled\")"
]
]
},
{
"exception": "'NoneType' object has no attribute 'lowvram'",
"traceback": [
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/options.py, line 165, set",
"option.onchange()"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/call_queue.py, line 13, f",
"res = func(*args, **kwargs)"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/initialize_util.py, line 181, <lambda>",
"shared.opts.onchange(\"sd_model_checkpoint\", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py, line 860, reload_model_weights",
"sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py, line 793, reuse_model_from_already_loaded",
"send_model_to_cpu(sd_model)"
],
[
"/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py, line 662, send_model_to_cpu",
"if m.lowvram:"
]
]
}
],
"CPU": {
"model": "arm",
"count logical": 10,
"count physical": 10
},
"RAM": {
"total": "16GB",
"used": "5GB",
"free": "62MB",
"active": "3GB",
"inactive": "3GB"
},
"Extensions": [],
"Inactive extensions": [],
"Environment": {
"COMMANDLINE_ARGS": "--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate",
"GIT": "git",
"GRADIO_ANALYTICS_ENABLED": "False",
"TORCH_COMMAND": "pip install torch==2.1.0 torchvision==0.16.0"
},
"Config": {
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"sd_model_checkpoint": "v1-5-pruned.ckpt [e1441589a6]",
"sd_checkpoint_hash": "e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053"
},
"Startup": {
"total": 68.00557136535645,
"records": {
"initial startup": 0.0009272098541259766,
"prepare environment/checks": 4.220008850097656e-05,
"prepare environment/git version info": 0.018723011016845703,
"prepare environment/install torch": 13.662177085876465,
"prepare environment/torch GPU test": 6.175041198730469e-05,
"prepare environment/install clip": 3.7877581119537354,
"prepare environment/install open_clip": 4.085432052612305,
"prepare environment/clone repositores": 7.612929821014404,
"prepare environment/install requirements": 29.78075909614563,
"prepare environment/run extensions installers": 0.004931211471557617,
"prepare environment": 58.95587396621704,
"launcher": 0.022570133209228516,
"import torch": 4.146008729934692,
"import gradio": 0.765498161315918,
"setup paths": 1.2596769332885742,
"import ldm": 0.013821840286254883,
"import sgm": 5.245208740234375e-06,
"initialize shared": 0.3825209140777588,
"other imports": 1.01145601272583,
"opts onchange": 0.00033593177795410156,
"setup SD model": 6.604194641113281e-05,
"setup codeformer": 0.003963947296142578,
"setup gfpgan": 0.010995149612426758,
"set samplers": 3.886222839355469e-05,
"list extensions": 0.0009171962738037109,
"restore config state file": 8.821487426757812e-06,
"list SD models": 0.008134841918945312,
"list localizations": 0.00017118453979492188,
"load scripts/custom_code.py": 0.002298116683959961,
"load scripts/img2imgalt.py": 0.0015780925750732422,
"load scripts/loopback.py": 0.0011126995086669922,
"load scripts/outpainting_mk_2.py": 0.002089977264404297,
"load scripts/poor_mans_outpainting.py": 0.0015411376953125,
"load scripts/postprocessing_codeformer.py": 0.0005950927734375,
"load scripts/postprocessing_gfpgan.py": 0.0011141300201416016,
"load scripts/postprocessing_upscale.py": 0.0018849372863769531,
"load scripts/prompt_matrix.py": 0.001984834671020508,
"load scripts/prompts_from_file.py": 0.0018491744995117188,
"load scripts/sd_upscale.py": 0.0013020038604736328,
"load scripts/xyz_grid.py": 0.008707761764526367,
"load scripts/ldsr_model.py": 0.3379373550415039,
"load scripts/lora_script.py": 0.1310436725616455,
"load scripts/scunet_model.py": 0.016871929168701172,
"load scripts/swinir_model.py": 0.02359914779663086,
"load scripts/hotkey_config.py": 0.000881195068359375,
"load scripts/extra_options_section.py": 0.0009827613830566406,
"load scripts/hypertile_script.py": 0.04871392250061035,
"load scripts/hypertile_xyz.py": 0.0001862049102783203,
"load scripts/postprocessing_autosized_crop.py": 0.0010879039764404297,
"load scripts/postprocessing_caption.py": 0.0004470348358154297,
"load scripts/postprocessing_create_flipped_copies.py": 0.00043702125549316406,
"load scripts/postprocessing_focal_crop.py": 0.0026140213012695312,
"load scripts/postprocessing_split_oversized.py": 0.0008080005645751953,
"load scripts/soft_inpainting.py": 0.0022139549255371094,
"load scripts/comments.py": 0.01715993881225586,
"load scripts/refiner.py": 0.002248048782348633,
"load scripts/sampler.py": 0.0008349418640136719,
"load scripts/seed.py": 0.0009102821350097656,
"load scripts": 0.6150598526000977,
"load upscalers": 0.0033631324768066406,
"refresh VAE": 0.0006058216094970703,
"refresh textual inversion templates": 0.0002219676971435547,
"scripts list_optimizers": 0.0002779960632324219,
"scripts list_unets": 1.3113021850585938e-05,
"reload hypernetworks": 0.00030112266540527344,
"initialize extra networks": 0.006253719329833984,
"scripts before_ui_callback": 0.0002532005310058594,
"create ui": 0.23605775833129883,
"gradio launch": 0.5480811595916748,
"add APIs": 0.016994953155517578,
"app_started_callback/lora_script.py": 0.0005769729614257812,
"app_started_callback": 0.000576019287109375
}
},
"Packages": [
"accelerate==0.21.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohttp==3.9.5",
"aiosignal==1.3.1",
"altair==5.3.0",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"async-timeout==4.0.3",
"attrs==23.2.0",
"blendmodes==2022",
"certifi==2024.2.2",
"charset-normalizer==3.3.2",
"clean-fid==0.1.35",
"click==8.1.7",
"clip==1.0",
"contourpy==1.2.1",
"cycler==0.12.1",
"deprecation==2.1.0",
"diskcache==5.6.3",
"einops==0.4.1",
"exceptiongroup==1.2.1",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.3.2",
"filelock==3.13.4",
"filterpy==1.4.5",
"fonttools==4.51.0",
"frozenlist==1.4.1",
"fsspec==2024.3.1",
"ftfy==6.2.0",
"gitdb==4.0.11",
"gitpython==3.1.32",
"gradio-client==0.5.0",
"gradio==3.41.2",
"h11==0.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.22.2",
"idna==3.7",
"imageio==2.34.1",
"importlib-resources==6.4.0",
"inflection==0.5.1",
"jinja2==3.1.3",
"jsonmerge==1.8.0",
"jsonschema-specifications==2023.12.1",
"jsonschema==4.21.1",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy-loader==0.4",
"lightning-utilities==0.11.2",
"llvmlite==0.42.0",
"markupsafe==2.1.5",
"matplotlib==3.8.4",
"mpmath==1.3.0",
"multidict==6.0.5",
"networkx==3.3",
"numba==0.59.1",
"numpy==1.26.2",
"omegaconf==2.2.3",
"open-clip-torch==2.20.0",
"opencv-python==4.9.0.80",
"orjson==3.10.1",
"packaging==24.0",
"pandas==2.2.2",
"piexif==1.1.3",
"pillow-avif-plugin==1.4.3",
"pillow==9.5.0",
"pip==24.0",
"protobuf==3.20.0",
"psutil==5.9.5",
"pydantic==1.10.15",
"pydub==0.25.1",
"pyparsing==3.1.2",
"python-dateutil==2.9.0.post0",
"python-multipart==0.0.9",
"pytorch-lightning==1.9.4",
"pytz==2024.1",
"pywavelets==1.6.0",
"pyyaml==6.0.1",
"referencing==0.35.0",
"regex==2024.4.16",
"requests==2.31.0",
"resize-right==0.0.2",
"rpds-py==0.18.0",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scipy==1.13.0",
"semantic-version==2.10.0",
"sentencepiece==0.2.0",
"setuptools==69.2.0",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.1",
"spandrel==0.1.6",
"starlette==0.26.1",
"sympy==1.12",
"tifffile==2024.4.24",
"timm==0.9.16",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"toolz==0.12.1",
"torch==2.1.0",
"torchdiffeq==0.2.3",
"torchmetrics==1.3.2",
"torchsde==0.2.6",
"torchvision==0.16.0",
"tqdm==4.66.2",
"trampoline==0.1.2",
"transformers==4.30.2",
"typing-extensions==4.11.0",
"tzdata==2024.1",
"urllib3==2.2.1",
"uvicorn==0.29.0",
"wcwidth==0.2.13",
"websockets==11.0.3",
"yarl==1.9.4"
]
}
```
### Console logs
```Shell
Last login: Fri Apr 26 12:46:05 on ttys002
[obfuscated]@binhyboy-M1-Pro ~ % cd stable-diffusion-webui/
[obfuscated]@binhyboy-M1-Pro stable-diffusion-webui % ./webui.sh
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on [obfuscated] user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Python 3.10.14 (main, Mar 20 2024, 03:57:45) [Clang 14.0.0 (clang-1400.0.29.202)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Installing torch and torchvision
Collecting torch==2.1.0
Using cached torch-2.1.0-cp310-none-macosx_11_0_arm64.whl.metadata (24 kB)
Collecting torchvision==0.16.0
Using cached torchvision-0.16.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (6.6 kB)
Collecting filelock (from torch==2.1.0)
Using cached filelock-3.13.4-py3-none-any.whl.metadata (2.8 kB)
Collecting typing-extensions (from torch==2.1.0)
Using cached typing_extensions-4.11.0-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch==2.1.0)
Using cached sympy-1.12-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch==2.1.0)
Using cached networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
Collecting jinja2 (from torch==2.1.0)
Using cached Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB)
Collecting fsspec (from torch==2.1.0)
Using cached fsspec-2024.3.1-py3-none-any.whl.metadata (6.8 kB)
Collecting numpy (from torchvision==0.16.0)
Using cached numpy-1.26.4-cp310-cp310-macosx_11_0_arm64.whl.metadata (61 kB)
Collecting requests (from torchvision==0.16.0)
Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision==0.16.0)
Using cached pillow-10.3.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (9.2 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.1.0)
Using cached MarkupSafe-2.1.5-cp310-cp310-macosx_10_9_universal2.whl.metadata (3.0 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision==0.16.0)
Using cached charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests->torchvision==0.16.0)
Using cached idna-3.7-py3-none-any.whl.metadata (9.9 kB)
Collecting urllib3<3,>=1.21.1 (from requests->torchvision==0.16.0)
Using cached urllib3-2.2.1-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision==0.16.0)
Using cached certifi-2024.2.2-py3-none-any.whl.metadata (2.2 kB)
Collecting mpmath>=0.19 (from sympy->torch==2.1.0)
Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
Using cached torch-2.1.0-cp310-none-macosx_11_0_arm64.whl (59.5 MB)
Using cached torchvision-0.16.0-cp310-cp310-macosx_11_0_arm64.whl (1.6 MB)
Using cached pillow-10.3.0-cp310-cp310-macosx_11_0_arm64.whl (3.4 MB)
Using cached filelock-3.13.4-py3-none-any.whl (11 kB)
Using cached fsspec-2024.3.1-py3-none-any.whl (171 kB)
Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB)
Using cached networkx-3.3-py3-none-any.whl (1.7 MB)
Using cached numpy-1.26.4-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Using cached typing_extensions-4.11.0-py3-none-any.whl (34 kB)
Using cached certifi-2024.2.2-py3-none-any.whl (163 kB)
Using cached charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl (120 kB)
Using cached idna-3.7-py3-none-any.whl (66 kB)
Using cached MarkupSafe-2.1.5-cp310-cp310-macosx_10_9_universal2.whl (18 kB)
Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached urllib3-2.2.1-py3-none-any.whl (121 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-2.1.5 certifi-2024.2.2 charset-normalizer-3.3.2 filelock-3.13.4 fsspec-2024.3.1 idna-3.7 jinja2-3.1.3 mpmath-1.3.0 networkx-3.3 numpy-1.26.4 pillow-10.3.0 requests-2.31.0 sympy-1.12 torch-2.1.0 torchvision-0.16.0 typing-extensions-4.11.0 urllib3-2.2.1
Installing clip
Installing open_clip
Cloning assets into /Users/[obfuscated]/stable-diffusion-webui/repositories/stable-diffusion-webui-assets...
Cloning into '/Users/[obfuscated]/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 20 (delta 0), reused 20 (delta 0), pack-reused 0
Receiving objects: 100% (20/20), 132.70 KiB | 1.35 MiB/s, done.
Cloning Stable Diffusion into /Users/[obfuscated]/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning into '/Users/[obfuscated]/stable-diffusion-webui/repositories/stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (571/571), done.
remote: Compressing objects: 100% (306/306), done.
remote: Total 580 (delta 278), reused 446 (delta 247), pack-reused 9
Receiving objects: 100% (580/580), 73.44 MiB | 42.75 MiB/s, done.
Resolving deltas: 100% (278/278), done.
Cloning Stable Diffusion XL into /Users/[obfuscated]/stable-diffusion-webui/repositories/generative-models...
Cloning into '/Users/[obfuscated]/stable-diffusion-webui/repositories/generative-models'...
remote: Enumerating objects: 941, done.
remote: Total 941 (delta 0), reused 0 (delta 0), pack-reused 941
Receiving objects: 100% (941/941), 43.85 MiB | 35.95 MiB/s, done.
Resolving deltas: 100% (489/489), done.
Cloning K-diffusion into /Users/[obfuscated]/stable-diffusion-webui/repositories/k-diffusion...
Cloning into '/Users/[obfuscated]/stable-diffusion-webui/repositories/k-diffusion'...
remote: Enumerating objects: 1340, done.
remote: Counting objects: 100% (1340/1340), done.
remote: Compressing objects: 100% (433/433), done.
remote: Total 1340 (delta 940), reused 1259 (delta 900), pack-reused 0
Receiving objects: 100% (1340/1340), 238.52 KiB | 1.77 MiB/s, done.
Resolving deltas: 100% (940/940), done.
Cloning BLIP into /Users/[obfuscated]/stable-diffusion-webui/repositories/BLIP...
Cloning into '/Users/[obfuscated]/stable-diffusion-webui/repositories/BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
Receiving objects: 100% (277/277), 7.03 MiB | 18.28 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
==============================================================================
You are running torch 2.1.0.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
Calculating sha256 for /Users/[obfuscated]/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt: Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 68.0s (prepare environment: 59.0s, import torch: 4.1s, import gradio: 0.8s, setup paths: 1.3s, initialize shared: 0.4s, other imports: 1.0s, load scripts: 0.6s, create ui: 0.2s, gradio launch: 0.5s).
e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053
Loading weights [e1441589a6] from /Users/[obfuscated]/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt
Creating model from config: /Users/[obfuscated]/stable-diffusion-webui/configs/v1-inference.yaml
changing setting sd_model_checkpoint to v1-5-pruned.ckpt: AttributeError
Traceback (most recent call last):
File "/Users/[obfuscated]/stable-diffusion-webui/modules/options.py", line 165, in set
option.onchange()
File "/Users/[obfuscated]/stable-diffusion-webui/modules/call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "/Users/[obfuscated]/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: AssertionError
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/Users/[obfuscated]/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "/Users/[obfuscated]/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
load_model()
File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 770, in load_model
with devices.autocast(), torch.no_grad():
File "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py", line 218, in autocast
if has_xpu() or has_mps() or cuda_no_autocast():
File "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
device_id = get_cuda_device_id()
File "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
) or torch.cuda.current_device()
File "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 769, in current_device
_lazy_init()
File "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Stable diffusion model failed to load
Exception in thread Thread-2 (load_model):
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/Users/[obfuscated]/stable-diffusion-webui/modules/initialize.py", line 154, in load_model
devices.first_time_calculation()
File "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py", line 267, in first_time_calculation
linear(x)
File "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/[obfuscated]/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 503, in network_Linear_forward
return originals.Linear_forward(self, input)
File "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
```
### Additional information
MacBook Pro (16-inch, 2021) with Apple M1 Pro, 16GB on macOS Monterey 12.1 | open | 2024-04-26T20:02:57Z | 2024-05-06T12:48:44Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15637 | [
"bug-report"
] | ghost | 2 |
wyfo/apischema | graphql | 325 | Please can you push the 0.17.2 release to PyPI? | The version bump commit is there, but no git tag or release on PyPI... | closed | 2022-01-14T20:52:09Z | 2022-01-15T19:57:58Z | https://github.com/wyfo/apischema/issues/325 | [] | thomascobb | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 1,005 | Is there any previous version that works on Mojave? | Can't upgrade the OS due to program constraints. Is there any previous version that has worked on Mojave? Or has anyone figured out a workaround? Thank you | open | 2023-12-05T06:44:12Z | 2023-12-05T06:44:12Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1005 | [] | flugenhiemen | 0 |
amdegroot/ssd.pytorch | computer-vision | 526 | TypeError: not enough arguments for format string | Hi. I get the following TypeError when running the training code (shown below). For reference, I am training a custom object detector with 11 classes: I already set num_classes = 12 in the config file, replaced the VOC_CLASSES list in the voc0712 file with my custom class labels, and changed some file paths in the VOCDetection class. Can you give me some advice if you know the reason behind this error and its solution? Thank you very much.
```
Loading base network...
Initializing weights...
Loading the dataset...
13500
/home/koh/ssd.pytorch/ssd.py:34: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
self.priors = Variable(self.priorbox.forward(), volatile=True)
<ipython-input-4-c184efe0fbfe>:13: UserWarning: nn.init.xavier_uniform is now deprecated in favor of nn.init.xavier_uniform_.
init.xavier_uniform(param)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-2da0ffaf5447> in <module>
----> 1 train()
<ipython-input-3-8b7eddbf10e3> in train()
74
75 # load train data
---> 76 images, targets = next(batch_iterator)
77
78 images = Variable(images.cuda())
~/anaconda3/envs/mydl/lib/python3.8/site-packages/torch/utils/data/dataloader.py in __next__(self)
361
362 def __next__(self):
--> 363 data = self._next_data()
364 self._num_yielded += 1
365 if self._dataset_kind == _DatasetKind.Iterable and \
~/anaconda3/envs/mydl/lib/python3.8/site-packages/torch/utils/data/dataloader.py in _next_data(self)
987 else:
988 del self._task_info[idx]
--> 989 return self._process_data(data)
990
991 def _try_put_index(self):
~/anaconda3/envs/mydl/lib/python3.8/site-packages/torch/utils/data/dataloader.py in _process_data(self, data)
1012 self._try_put_index()
1013 if isinstance(data, ExceptionWrapper):
-> 1014 data.reraise()
1015 return data
1016
~/anaconda3/envs/mydl/lib/python3.8/site-packages/torch/_utils.py in reraise(self)
393 # (https://bugs.python.org/issue2651), so we work around it.
394 msg = KeyErrorMessage(msg)
--> 395 raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/koh/anaconda3/envs/mydl/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
data = fetcher.fetch(index)
File "/home/koh/anaconda3/envs/mydl/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/koh/anaconda3/envs/mydl/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/koh/ssd.pytorch/data/voc0712.py", line 129, in __getitem__
im, gt, h, w = self.pull_item(index)
File "/home/koh/ssd.pytorch/data/voc0712.py", line 139, in pull_item
target = ET.parse(self._annopath % img_id).getroot()
TypeError: not enough arguments for format string
```
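For anyone reading the trace: the failing line is `self._annopath % img_id`, so a likely cause (my assumption, not confirmed) is that the ids became plain strings while the stock `_annopath` template still has two `%s` placeholders:

```python
# Minimal illustration of the failure mode; the template mirrors the
# stock VOCDetection setup and is an assumption about this setup.
annopath = "%s/Annotations/%s.xml"

img_id = ("VOC2007", "000001")   # stock ids are (root, name) tuples -> works
print(annopath % img_id)

img_id = "000001"                # a plain string leaves one %s unfilled
print(annopath % img_id)         # TypeError: not enough arguments for format string
```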
| closed | 2020-11-09T12:43:42Z | 2020-11-10T07:39:30Z | https://github.com/amdegroot/ssd.pytorch/issues/526 | [] | junkoh88 | 0 |
521xueweihan/HelloGitHub | python | 1,906 | Project recommendation | city-roads: generate hand-drawn-style city maps online | ## Project Recommendation
- Project URL: https://github.com/anvaka/city-roads
- Category: JS
- Planned future updates: unclear
- Project description: city-roads fetches [OpenStreetMap](https://www.openstreetmap.org/) data through the [Overpass API](http://overpass-turbo.eu/) and renders hand-drawn-style city maps
- Why recommended: the generated hand-drawn-style maps make great wallpapers
- Screenshots:


| closed | 2021-09-27T04:51:30Z | 2021-10-28T02:05:53Z | https://github.com/521xueweihan/HelloGitHub/issues/1906 | [
"已发布",
"JavaScript 项目"
] | SekiBetu | 0 |
ultralytics/ultralytics | deep-learning | 18,766 | How to determine which way the dataset rotation box is defined? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
How to determine which way the dataset rotation box is defined?
Definitions of the rotated box
Due to differences in the range used for theta, three definitions of the rotated box have emerged in rotated object detection:
- D_{oc'} (OpenCV definition): angle ∈ (0°, 90°], theta ∈ (0, pi/2]. The angle between the rectangle's width and the positive x semi-axis is a positive acute angle. This definition comes from OpenCV's `cv2.minAreaRect`, which returns an angle in the range (0°, 90°].
- D_{le135} (long-edge definition, 135°): angle ∈ [-45°, 135°), theta ∈ [-pi/4, 3pi/4), with width > height.
- D_{le90} (long-edge definition, 90°): angle ∈ [-90°, 90°), theta ∈ [-pi/2, pi/2), with width > height.
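One practical way to tell which convention a dataset follows is to inspect the range of the annotated angles. A rough heuristic (my sketch; the (cx, cy, w, h, theta) column layout is an assumption, so adapt it to your label format):

```python
import numpy as np

def guess_theta_convention(labels: np.ndarray) -> str:
    """labels: rows of (cx, cy, w, h, theta) with theta in radians."""
    w, h, theta = labels[:, 2], labels[:, 3], labels[:, 4]
    lo, hi = theta.min(), theta.max()
    if lo > 0 and hi <= np.pi / 2 and not np.all(w >= h):
        return "D_oc' (OpenCV): theta in (0, pi/2], no width/height ordering"
    if np.all(w >= h):  # both long-edge definitions enforce width > height
        in_le90 = lo >= -np.pi / 2 and hi < np.pi / 2
        in_le135 = lo >= -np.pi / 4 and hi < 3 * np.pi / 4
        if in_le90 and in_le135:
            return "ambiguous: all angles fall in the overlap [-pi/4, pi/2)"
        if in_le90:
            return "D_le90: theta in [-pi/2, pi/2)"
        if in_le135:
            return "D_le135: theta in [-pi/4, 3*pi/4)"
    return "inconclusive - inspect a few annotations by hand"
```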
### Additional
_No response_ | open | 2025-01-20T03:35:06Z | 2025-01-21T04:37:37Z | https://github.com/ultralytics/ultralytics/issues/18766 | [
"question",
"OBB"
] | yangershuai627 | 7 |
albumentations-team/albumentations | deep-learning | 2,302 | [SpeedUp] ThinPlateSpline | Benchmarks show that `kornia` has a faster `ThinPlateSpline` implementation => we need to learn from it and fix ours. | closed | 2025-01-24T16:02:46Z | 2025-01-25T23:37:46Z | https://github.com/albumentations-team/albumentations/issues/2302 | [
"Speed Improvements"
] | ternaus | 1 |
huggingface/datasets | nlp | 7,092 | load_dataset with multiple jsonlines files interprets datastructure too early | ### Describe the bug
likely related to #6460
using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data.
### Steps to reproduce the bug
real world example:
data is available in this [PR-branch](https://github.com/Vipitis/shadertoys-dataset/pull/3/commits/cb1e7157814f74acb09d5dc2f1be3c0a868a9933). Because my files are chunked by months, some months contain all empty data for some columns, just by chance - these are `[]`. Otherwise it's all the same structure.
```python
from datasets import load_dataset
ds = load_dataset("json", data_dir="./data/annotated/api")
```
you get a long error trace, where in the middle it says something like
```text
TypeError: Couldn't cast array of type struct<id: int64, src: string, ctype: string, channel: int64, sampler: struct<filter: string, wrap: string, vflip: string, srgb: string, internal: string>, published: int64> to null
```
toy example: (on request)
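A minimal reproduction along these lines (a sketch; the file names and row contents are assumptions inferred from the description above):

```python
import json
import os
from datasets import load_dataset

os.makedirs("data", exist_ok=True)
# In the first chunk, column "b" is always empty, so its type is
# inferred as null before the second chunk is ever considered.
chunks = {
    "2024-01.jsonl": [{"a": 1, "b": []}, {"a": 2, "b": []}],
    "2024-02.jsonl": [{"a": 3, "b": [{"id": 0, "src": "x"}]}],
}
for name, rows in chunks.items():
    with open(os.path.join("data", name), "w") as f:
        f.writelines(json.dumps(row) + "\n" for row in rows)

ds = load_dataset("json", data_dir="data")  # raises the cast error above
```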
### Expected behavior
Some suggestions:
1. give a better error message to the user
2. consider all files before deciding on a data structure for a given column.
3. if you encounter a new structure that can't be cast to null, replace the null hypothesis instead (maybe something for pyarrow)
As a workaround, I have lazily implemented the following (essentially suggestion 2):
```python
import os
import jsonlines
import datasets
api_files = os.listdir("./data/annotated/api")
api_files = [f"./data/annotated/api/{f}" for f in api_files]
api_file_contents = []
for f in api_files:
    with jsonlines.open(f) as reader:
        for obj in reader:
            api_file_contents.append(obj)
ds = datasets.Dataset.from_list(api_file_contents)
```
This works fine for my use case, but is potentially slower and less memory-efficient for really large datasets (where this is unlikely to happen in the first place).
### Environment info
- `datasets` version: 2.20.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.4
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.10.0 | open | 2024-08-06T17:42:55Z | 2024-08-08T16:35:01Z | https://github.com/huggingface/datasets/issues/7092 | [] | Vipitis | 5 |
joeyespo/grip | flask | 137 | Anchor Tags not properly being rendered. | ```markdown
#### <a name="setup_PHPStorm"></a>
#### 7. Setup PHPStorm
```
becomes
```html
<h4>
<a id="user-content--6" class="anchor" href="#-6" aria-hidden="true"><span class="octicon octicon-link"></span></a><a name="user-content-setup_PHPStorm"></a>
</h4>
<h4>
<a id="user-content-7-setup-phpstorm" class="anchor" href="#7-setup-phpstorm" aria-hidden="true"><span class="octicon octicon-link"></span></a>7. Setup PHPStorm</h4>
```
which breaks the clickable menu at the top that should jump to #setup_PHPStorm.
| closed | 2015-07-10T23:29:39Z | 2019-04-19T19:29:36Z | https://github.com/joeyespo/grip/issues/137 | [
"not-a-bug"
] | dreamingbinary | 14 |
ContextLab/hypertools | data-visualization | 119 | return PCA factor loadings | perhaps using a sklearn-style API? | closed | 2017-05-24T17:06:44Z | 2017-10-22T01:17:13Z | https://github.com/ContextLab/hypertools/issues/119 | [] | jeremymanning | 1 |
deepinsight/insightface | pytorch | 2,103 | Question about creating Arcface using tensorflow2.x | Hi,
I'm trying to implement ArcFace using TensorFlow 2.x.
I have a problem with the custom layer and with the whole ArcFace model.
Please help me if you can.
This is my implementation of the ArcFace layer, based on the algorithm from your article:
```python
class MyArcFaceLayer(tf.keras.layers.Layer):
    """ArcMarginPenaltyLogists"""
    def __init__(self, num_classes, kernel_regularizer, margin=0.5, logist_scale=64., **kwargs):
        super(MyArcFaceLayer, self).__init__(**kwargs)
        self.num_classes = num_classes
        self.margin = margin
        self.logist_scale = logist_scale
        self.kernel_regularizer = kernel_regularizer

    def build(self, input_shape):
        self.w = self.add_weight(name="arcface_weights", initializer='glorot_uniform', shape=[512, self.num_classes], trainable=True, regularizer=self.kernel_regularizer)
        self.pi = tf.constant(pi)

    def call(self, embds, labels):
        normed_embds = tf.nn.l2_normalize(embds, axis=1, name='normed_embd')
        normed_w = tf.nn.l2_normalize(self.w, axis=0, name='normed_weights')
        fc7 = tf.matmul(normed_embds, normed_w, name='fc7')
        theta = tf.math.acos(fc7)
        marginal_target_logit = tf.math.maximum(tf.math.cos(theta + self.margin), tf.math.cos(self.pi - self.margin))
        original_target_logit = tf.math.cos(theta)
        print("original_target_logit = {}".format(original_target_logit.shape))
        fc7 = fc7 + labels * (marginal_target_logit - original_target_logit)
        fc7 = fc7 * self.logist_scale
        return fc7
```
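One thing worth noting about the comparison with the TF1 version below: the acos-based path and the cos/sin identity compute the same margin mathematically (quick NumPy check below, my addition), but acos has an unstable gradient near ±1, which is presumably why the TF1 code avoids it.

```python
import numpy as np

theta = np.linspace(0.01, np.pi - 0.6, 5)
m = 0.5
direct = np.cos(theta + m)                                        # acos-based route
identity = np.cos(theta) * np.cos(m) - np.sin(theta) * np.sin(m)  # TF1 route
print(np.allclose(direct, identity))  # True
```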
Do you see a difference or a problem in that?
I found an implementation for TensorFlow 1 that you suggested in your README, but it differs from the algorithm in your paper.
Here is the implementation from https://github.com/auroua/InsightFace_TF/blob/master/losses/face_losses.py:
```python
def arcface_loss(embedding, labels, out_num, w_init=None, s=64., m=0.5):
    '''
    :param embedding: the input embedding vectors
    :param labels: the input labels, the shape should be eg: (batch_size, 1)
    :param s: scalar value default is 64
    :param out_num: output class num
    :param m: the margin value, default is 0.5
    :return: the final cacualted output, this output is send into the tf.nn.softmax directly
    '''
    cos_m = math.cos(m)
    sin_m = math.sin(m)
    mm = sin_m * m  # issue 1
    threshold = math.cos(math.pi - m)
    with tf.variable_scope('arcface_loss'):
        # inputs and weights norm
        embedding_norm = tf.norm(embedding, axis=1, keep_dims=True)
        embedding = tf.div(embedding, embedding_norm, name='norm_embedding')
        weights = tf.get_variable(name='embedding_weights', shape=(embedding.get_shape().as_list()[-1], out_num),
                                  initializer=w_init, dtype=tf.float32)
        weights_norm = tf.norm(weights, axis=0, keep_dims=True)
        weights = tf.div(weights, weights_norm, name='norm_weights')
        # cos(theta+m)
        cos_t = tf.matmul(embedding, weights, name='cos_t')
        cos_t2 = tf.square(cos_t, name='cos_2')
        sin_t2 = tf.subtract(1., cos_t2, name='sin_2')
        sin_t = tf.sqrt(sin_t2, name='sin_t')
        cos_mt = s * tf.subtract(tf.multiply(cos_t, cos_m), tf.multiply(sin_t, sin_m), name='cos_mt')
        # this condition controls the theta+m should in range [0, pi]
        # 0<=theta+m<=pi
        # -m<=theta<=pi-m
        cond_v = cos_t - threshold
        cond = tf.cast(tf.nn.relu(cond_v, name='if_else'), dtype=tf.bool)
        keep_val = s*(cos_t - mm)
        cos_mt_temp = tf.where(cond, cos_mt, keep_val)
        mask = tf.one_hot(labels, depth=out_num, name='one_hot_mask')
        # mask = tf.squeeze(mask, 1)
        inv_mask = tf.subtract(1., mask, name='inverse_mask')
        s_cos_t = tf.multiply(s, cos_t, name='scalar_cos_t')
        output = tf.add(tf.multiply(s_cos_t, inv_mask), tf.multiply(cos_mt_temp, mask), name='arcface_loss_output')
    return output
```
This is my implementation for my whole ArcFace Model
```python
def create_face_verification_model_v2(input_shape=(112, 112), num_class=8732, weight_decay=0.0001):
    rgb_input_shape = input_shape + (3, )
    input_layer = Input(rgb_input_shape)
    global_initializer = 'glorot_uniform'
    global_regularizer = l2(weight_decay)
    global_bias = False
    backbone = tf.keras.applications.ResNet101V2(input_shape=rgb_input_shape, weights=None, include_top=False)
    backbone_output = backbone(input_layer)
    x = BatchNormalization(gamma_regularizer=global_regularizer, beta_regularizer=global_regularizer)(backbone_output)
    x = Dropout(0.5)(x)
    # x = GlobalAveragePooling2D()(x)
    x = Flatten()(x)
    x = Dense(512, kernel_initializer=global_initializer, kernel_regularizer=global_regularizer, bias_regularizer=global_regularizer, use_bias=True)(x)
    x = BatchNormalization(gamma_regularizer=global_regularizer, beta_regularizer=global_regularizer)(x)
    embed_model = tf.keras.models.Model(input_layer, x)
    embed_model.summary()

    # NECESSARY?
    # x = BatchNormalization()(x)

    # OPTION 1 (ARCFACE)
    label_inputs = Input((num_class, ))
    x, original_target_logit = ArcFaceLayer(num_class, kernel_regularizer=global_regularizer)(x, label_inputs)
    arcface_model = tf.keras.models.Model([input_layer, label_inputs], [x, original_target_logit])

    # OPTION 2 (SOFTMAX)
    # x = Dense(num_class, kernel_initializer=global_initializer, kernel_regularizer=global_regularizer, bias_regularizer=global_regularizer, use_bias=True)(x)
    # arcface_model = tf.keras.models.Model([input_layer, label_inputs], [x, x])

    arcface_model.summary()
    for var in arcface_model.trainable_variables:
        print(var.name)
    return embed_model, arcface_model
```
In your article you said that you trained your model with weight_decay=0.0005. Which layers did you mean?
Thank you | open | 2022-09-14T11:03:27Z | 2022-09-14T11:05:30Z | https://github.com/deepinsight/insightface/issues/2103 | [] | RezaAkhoondzade | 1 |
MycroftAI/mycroft-core | nlp | 2,883 | Tools for debugging intent parsing issues in mycroft | There are currently a number of issues open against Adapt that reference mycroft-core concepts (skills, vocab files, etc). It's extremely difficult to diagnose (or even reproduce) these issues outside of the context of a fully spun-up mycroft instance, including all skills installed, all vocab registered, reload/restart state, and interaction with padatious.
In order to better help users of Adapt (and mycroft!), we need to build some better logging (and potentially tooling).
As a first pass, there should be an intent-parsing debug mode that logs the following (a minimal sketch follows the list):
- All tagged entities from the utterance
- All context state
- All possible parse results
- All valid parse results (and the intents they matched to)
- All potential intents (in the order they would be generated from IntentDeterminationEngines)
- Which intent was selected (if any) and its source parser (Adapt vs Padatious).
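A minimal sketch of what that debug mode could log, using Adapt's public `determine_intent` API (everything beyond the candidate loop is an assumption on my part):

```python
import logging
from adapt.engine import IntentDeterminationEngine

log = logging.getLogger("intent-debug")

def debug_parse(engine: IntentDeterminationEngine, utterance: str) -> None:
    # Log every candidate intent the engine produces, not just the winner.
    for i, intent in enumerate(engine.determine_intent(utterance, num_results=10)):
        log.info("candidate %d: type=%s confidence=%.3f",
                 i, intent.get("intent_type"), intent.get("confidence", 0.0))
```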
A longer-term goal might be to build a state-dump tool that can be shared for easier debugging across developers. There are potentially some data privacy concerns with this tool, and I don't immediately want to unpack that can of worms.
Assuming there are no objections to the existence of this tool, there are a few Adapt issues that I'll mark as blocked by this.
| closed | 2021-04-17T19:29:23Z | 2024-09-08T08:29:21Z | https://github.com/MycroftAI/mycroft-core/issues/2883 | [
"enhancement"
] | clusterfudge | 9 |
milesmcc/shynet | django | 242 | Automatically update geoip database | Currently, the GeoIP database is only updated when there is a new release, but outdated GeoIP data leads to mistakes in identifying countries, especially when there is no new release for a long time.
Maybe we can provide an environment variable so that the user's MaxMind `license_key` is used to update the GeoIP database automatically on a regular basis? Users who do not care about country accuracy could keep using the database bundled in the Docker image by setting the environment variable to an empty string.
If this is difficult to implement, you can also schedule a cron job on the host which uses `docker exec` to update it. This is what I do now. This additionally requires installing `curl` in the container. | open | 2022-11-07T13:07:52Z | 2022-11-07T16:51:29Z | https://github.com/milesmcc/shynet/issues/242 | [] | cmj2002 | 2 |
Skyvern-AI/skyvern | automation | 1,921 | Create option for Azure GPT4-Turbo - Error creating workflow run from prompt | I got the same error after fixes #1854 and #1846.
 | closed | 2025-03-11T17:20:52Z | 2025-03-12T07:27:44Z | https://github.com/Skyvern-AI/skyvern/issues/1921 | [] | devtony10 | 4 |
dask/dask | numpy | 11,534 | `divisions` for dataframe is ignored. | **Describe the issue**:
The following snippet tries to partition the dataframe according to a groupby result, and assert that the division parameter is respected. Dask ignores the division parameter in `repartition` and packs all data into a single partition, as shown by the `assert` statement.
The objective is to partition an arbitrary dataframe according to a specific division.
**Minimal Complete Verifiable Example**:
```python
import math
import dask
import numpy as np
import pandas as pd
from dask import array as da
from dask import dataframe as dd
from distributed import Client, LocalCluster, wait
from sklearn.datasets import make_classification


def get_client_workers(client: Client) -> list[str]:
    workers = client.scheduler_info()["workers"]
    return list(workers.keys())


def make_ltr(
    client: Client, n_samples: int, n_features: int, n_rel: int
) -> dd.DataFrame:
    workers = get_client_workers(client)
    n_samples_per_worker = math.floor(n_samples / len(workers))
    last = 0
    MAX_Q = 4

    def make(n: int, seed: int) -> pd.DataFrame:
        rng = np.random.default_rng(seed)
        X, y = make_classification(n, n_features, n_informative=n_features, n_redundant=0, n_classes=n_rel)
        qid = rng.integers(size=(n,), low=0, high=MAX_Q)
        df = pd.DataFrame(X, columns=[f"f{i}" for i in range(n_features)])
        df["qid"] = qid
        df["y"] = y
        return df

    futures = []
    for k in range(0, n_samples, n_samples_per_worker):
        fut = client.submit(make, n=n_samples_per_worker, seed=last)
        futures.append(fut)
        last += n_samples_per_worker

    meta = make(1, 0)
    df = dd.from_delayed(futures, meta=meta)
    assert isinstance(df, dd.DataFrame)
    return df


def distribute_groups(client: Client, df_train: dd.DataFrame) -> dd.DataFrame:
    df_train = df_train.sort_values(by="qid")
    cnt = df_train.groupby("qid").qid.count()
    div = da.cumsum(cnt.to_dask_array(lengths=True)).compute()
    div = np.concatenate([np.zeros(shape=(1,), dtype=div.dtype), div])
    df_train = df_train.set_index("qid").persist()
    df_train = dd.repartition(df_train, divisions=list(div), force=True).persist()

    def pm(part):
        print("part.shape:", part.shape)
        assert part.shape[0] != 0
        return part

    df_train = df_train.map_partitions(pm)
    wait([df_train])
    return df_train


if __name__ == "__main__":
    with LocalCluster() as cluster:
        with Client(cluster) as client:
            # Generate synthetic data for demo
            df = make_ltr(client, n_samples=int(2**18), n_features=32, n_rel=5)
            # Repartition the data
            df = distribute_groups(client, df)
            df.compute()
```
**Anything else we need to know?**:
I tried passing `divisions` in the `set_index` call; the result is the same.
**Environment**:
- Dask version: dask, version 2024.9.0
- Python version: Python 3.12.0
- Operating System: Ubuntu 24.04
- Install method (conda, pip, source): conda
| closed | 2024-11-19T08:06:24Z | 2024-11-19T22:08:36Z | https://github.com/dask/dask/issues/11534 | [
"needs triage"
] | trivialfis | 5 |
pytest-dev/pytest-cov | pytest | 395 | Running tests in parallel leads to "coverage.misc.CoverageException: Couldn't read data from" | I am using python 3.7.1, pytest 4.6.3 and pytest-cov 2.8.1 and my .coveragerc file contains parallel set to true.
With the above configuration, when I run tests in parallel it consistently fails with "coverage.misc.CoverageException: Couldn't read data from xyz data file". I have seen many people report this issue, and I tried multiple solutions, but nothing helped.
No issues are observed when I run these tests serially.
Please, someone help me. | open | 2020-03-11T05:20:50Z | 2020-03-11T12:28:32Z | https://github.com/pytest-dev/pytest-cov/issues/395 | [] | revunayar | 1 |
pallets-eco/flask-sqlalchemy | flask | 851 | .paginate() seems to override .options(load_only('col_1', 'col_2')) and include all columns | ### Expected Behavior
Hi, I'm trying to use paginated results of a query and only selected a handful of columns in the table, however, it seems to always return all columns in the table, not just what I specify.
```python
from sqlalchemy.orm import load_only

stuff = Table.query.filter_by(col_1='this').options(load_only('col_1', 'col_2')).paginate(1, 10, False)
```
`stuff.items` should return rows with only two columns loaded, col_1 and col_2.
### Actual Behavior
stuff.items is returning all columns in the table.
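One way to check what was actually loaded (a diagnostic of mine, not from the original report): SQLAlchemy's inspection API lists attributes that are still deferred, and note that accessing a deferred column on an instance triggers an extra query to load it, which can make it look as if everything was selected.

```python
from sqlalchemy import inspect

state = inspect(stuff.items[0])
print(state.unloaded)  # deferred (not-yet-loaded) attribute names
```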
### Environment
* Python version: 3.8.2
* Flask-SQLAlchemy version: 2.4.3
* SQLAlchemy version: 1.3.17
| closed | 2020-07-13T18:23:53Z | 2020-12-05T19:58:22Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/851 | [] | christopherpickering | 5 |
gevent/gevent | asyncio | 1,977 | Does it support python-zeep library? | As the title says: does it support the python-zeep library? | closed | 2023-07-26T20:25:45Z | 2023-07-26T21:22:56Z | https://github.com/gevent/gevent/issues/1977 | [
"Type: Question"
] | AzikDeveloper | 3 |
ydataai/ydata-profiling | jupyter | 706 | Is it possible to convert a script with pandas_profiling to executable using pyinstaller? | **Missing functionality**
I have been writing a very simple tkinter application that reads a CSV and runs pandas-profiling. I couldn't convert my application to a Windows executable using PyInstaller. Is it possible for you to share a working .spec file? I have already tried to write mine without success. My intention is to share pandas_profiling without necessarily asking people to install Python.
The spec that I tried is below. I include extra packages like zmq because the error I got seemed related. But frankly I am not sure if I am on the right track.
Thanks,
```python
# main spec
block_cipher = None

import sys ; sys.setrecursionlimit(sys.getrecursionlimit() * 5)
import zmq

a = Analysis(['main.py'],
             pathex=['/venv/Lib/site-packages', 'C:\\Working\\git\\rds_ui', '/venv/Lib/site-packages/zmq'],
             binaries=[],
             datas=[],
             #hiddenimports=[zmq.backend,zmq.backend.cython, zmq.backend.cffi, zmq.error, zmq.sugar, zmq.utils],
             hookspath=[],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
          cipher=block_cipher)
a.datas += Tree('./venv/Lib/site-packages/pandas_profiling', prefix='pandas_profiling')
a.datas += Tree('./venv/Lib/site-packages/pandas_profiling/report/presentation/flavours/html/templates/', '.')
a.datas += Tree('./venv/Lib/site-packages/pandas_profiling/visualisation/', '.')
a.datas += Tree('./venv/Lib/site-packages/zmq', '.')
exe = EXE(pyz,
          a.scripts,
          a.binaries,
          a.zipfiles,
          a.datas,
          [],
          name='main',
          debug=False,
          bootloader_ignore_signals=False,
          strip=False,
          upx=True,
          upx_exclude=[],
          runtime_tmpdir=None,
          console=True )
```
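One thing that might simplify the manual `Tree(...)` calls, assuming a reasonably recent PyInstaller: its hook utilities can collect a package's data files for you.

```python
from PyInstaller.utils.hooks import collect_data_files

# Collect pandas_profiling's templates and assets instead of hand-listing
# the site-packages paths above.
datas = collect_data_files('pandas_profiling')
```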
| open | 2021-02-19T08:29:29Z | 2022-12-20T14:50:42Z | https://github.com/ydataai/ydata-profiling/issues/706 | [
"feature request 💬",
"help wanted 🙋"
] | ahmetbaglan | 4 |
xuebinqin/U-2-Net | computer-vision | 290 | Slow to load on iOS device | I tried to convert the model to a ML model using this article: https://rockyshikoku.medium.com/u2net-to-coreml-machine-learning-segmentation-on-iphone-eac0c721d67b
The problem is that the model loads very slowly on an iOS device with this 176MB model (29 seconds).
Using quantize_weights with 1 bit, the model gets down to 5.6MB, but it's still very slow to load on iOS (26 seconds).
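For context, the weight-quantization call in question looks roughly like this (a sketch; file names are assumed, and higher nbits values are the usual size/accuracy trade-off):

```python
import coremltools as ct
from coremltools.models.neural_network import quantization_utils

model = ct.models.MLModel("u2net.mlmodel")
model_q = quantization_utils.quantize_weights(model, nbits=1)
model_q.save("u2net_quantized.mlmodel")
```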
If I try to use the already converted model u2netp.mlmodel it loads in less than 1 second.
Is there an issue with the conversion? | open | 2022-02-22T16:48:40Z | 2022-05-18T10:29:23Z | https://github.com/xuebinqin/U-2-Net/issues/290 | [] | DanielZanchi | 1 |
voila-dashboards/voila | jupyter | 1,422 | Allow users to disable fix_notebook to check/resolve kernel validity |
### Problem
[fix_notebook](https://github.com/voila-dashboards/voila/blob/main/voila/notebook_renderer.py#L324) can be very heavy-handed for kernel resolution. If a user wanted to customize the kernel-matching logic, they would have to fork Voilà to achieve any customization in this regard. While I agree it's good to have some guard rails, users should be able to disable this logic in favor of handling it within their own kernel manager.
### Proposed Solution
Add a toggle that can disable this logic
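A hypothetical shape for such a toggle, using traitlets as Voilà does elsewhere (the trait name and host class are assumptions, not Voilà's actual API):

```python
from traitlets import Bool
from traitlets.config import Configurable

class NotebookRenderer(Configurable):
    validate_kernel = Bool(
        True,
        help="If False, skip the fix_notebook kernelspec validation entirely.",
    ).tag(config=True)
```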
| closed | 2023-11-29T21:49:57Z | 2023-11-30T22:16:58Z | https://github.com/voila-dashboards/voila/issues/1422 | [
"enhancement"
] | ClaytonAstrom | 0 |
graphdeco-inria/gaussian-splatting | computer-vision | 975 | gaussian render is always giving two output images no matter what my input images are | What could possibly be the issue?

The same set of images worked beautifully when someone else ran them, but for me it just outputs 2 distorted images. | open | 2024-09-06T12:13:26Z | 2025-01-07T08:54:16Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/975 | [] | malamutes | 3 |
huggingface/diffusers | pytorch | 10,414 | [<languageCode>] Translating docs to Chinese |
Hi!
Let's bring the documentation to all the Chinese-speaking community 🌐.
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about Diffusers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `zh` inside the [source folder](https://github.com/huggingface/diffusers/tree/main/docs/source).
* Register your translation in `zh/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63).
Thank you so much for your help! 🤗
| closed | 2024-12-31T06:45:21Z | 2024-12-31T06:49:52Z | https://github.com/huggingface/diffusers/issues/10414 | [] | S20180576 | 0 |
PokemonGoF/PokemonGo-Bot | automation | 5,996 | API Updated to 0.59.1. Wait for PR | Dear All,
PGoAPI is now using the 0.59.1 API.
I would advise everyone to wait for a new PR with the changes.
If you have unfortunately already updated and the bot keeps telling you to update (despite the fact that you already have), and you come here looking for a solution, here is how to get back up and running:
1. Edit run.sh: at Line 25, change pgoapi==1.1.6 to pgoapi==1.2.0
2. Edit pokecli.py: at Line 69, change pgoapi==1.1.6 to pgoapi==1.2.0 | closed | 2017-04-05T05:47:55Z | 2017-04-08T10:35:02Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5996 | [] | MerlionRock | 1 |
numba/numba | numpy | 9,706 | TypingError when argument is None | ## Reporting a bug
- [x] I have tried using the latest released version of Numba (most recent is visible in the release notes (https://numba.readthedocs.io/en/stable/release-notes-overview.html)).
- [x] I have included a self contained code sample to reproduce the problem, i.e. it's possible to run as 'python bug.py'.
```python
import numpy as np
from numba import jit

@jit
def f(x, y=None):
    if y is not None:
        return np.add(x, y)
    else:
        y = x
        return np.add(x, y)

f(np.zeros(3), None)
```
It seems y is treated as NoneType in the ELSE branch:

The following code will work:
```python
import numpy as np
from numba import jit

@jit
def f(x, y=None):
    if y is not None:
        return np.add(x, y)
    else:
        # rename to _y
        _y = x
        return np.add(x, _y)

f(np.zeros(3), None)
# array([0., 0., 0.])
```
version 0.60.0 | closed | 2024-08-16T05:27:37Z | 2024-10-01T18:41:58Z | https://github.com/numba/numba/issues/9706 | [
"SSA",
"bug - typing"
] | auderson | 2 |
pytest-dev/pytest-cov | pytest | 533 | Final case -> exit reported as uncovered branch on exhaustive match statement | Consider the following code:
```python
from enum import Enum

class MyEnum(Enum):
    A = 1
    B = 2
    C = 3

def print_value(x: MyEnum) -> None:
    match x:
        case MyEnum.A:
            print("A")
        case MyEnum.B:
            print("B")
        case MyEnum.C:
            print("C")
```
And the following unit test:
```python
def test() -> None:
    print_value(MyEnum.A)
    print_value(MyEnum.B)
    print_value(MyEnum.C)
```
This should have 100% line coverage and 100% branch coverage, since the match statement is exhaustive. However pytest-cov reports missing branch coverage from `case MyEnum.C` to the `exit` of the function. Such a branch is impossible, so should not be counted as uncovered.
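A common workaround (my note, not part of the original report) is to make the match syntactically exhaustive, so the impossible fall-through branch no longer exists in the control-flow graph:

```python
from typing import assert_never  # Python 3.11+; typing_extensions on older versions

def print_value(x: MyEnum) -> None:
    match x:
        case MyEnum.A:
            print("A")
        case MyEnum.B:
            print("B")
        case MyEnum.C:
            print("C")
        case _:
            assert_never(x)  # unreachable if the enum is fully covered
```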
pytest version = 7.1.1
pytest-cov version = 3.0.0
python version = 3.10.4 | open | 2022-04-25T11:31:03Z | 2023-12-09T06:56:59Z | https://github.com/pytest-dev/pytest-cov/issues/533 | [] | sirrus233 | 8 |
microsoft/unilm | nlp | 820 | Why not use the cls_token instead of average pooling in BeiT? | As claimed in section 2.2 of BeiT: "Moreover, we prepend a special token [S] to the input sequence."
But at the finetuning stage: "Specifically, we use average pooling to aggregate the representations, and feed the global to a softmax classifier." The implementation of BeiT also shows that it uses average pooling to aggregate the final outputs for image classification instead of using the hidden output corresponding to the cls_token, so I have some questions:
1. The "special token [S]" is indeed the "cls_token" according to the code and the paper, but it becomes meaningless due to the average pooling.
2. "cls_token" vs. average pooling: which is better? Or are both OK thanks to the powerful transformer architecture?
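For concreteness, the two pooling choices being compared (a minimal sketch; `hidden` is assumed to be the transformer output of shape (batch, 1 + num_patches, dim), with the [S]/cls token at index 0):

```python
import torch

def pool(hidden: torch.Tensor, mode: str) -> torch.Tensor:
    if mode == "cls":
        return hidden[:, 0]             # hidden state of the [S]/cls token
    return hidden[:, 1:].mean(dim=1)    # average over the patch tokens
```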
alirezamika/autoscraper | automation | 70 | ssl.SSLCertVerificationError: |
I followed all instruction and run the sample program using the AutoScraper
as shown below
from autoscraper import AutoScraper
url = 'https://stackoverflow.com/questions/2081586/web-scraping-with-python'
# We can add one or multiple candidates here.
# You can also put urls here to retrieve urls.
wanted_list = ["What are metaclasses in Python?"]
scraper = AutoScraper()
result = scraper.build(url, wanted_list )
print(result)
But I get the follwoing error
============ RESTART: D:/PythonCode-1/Web Scraping/AutoSraper 001.py ===========
Traceback (most recent call last):
File "C:\Python39\lib\site-packages\urllib3\connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "C:\Python39\lib\site-packages\urllib3\connectionpool.py", line 382, in _make_request
self._validate_conn(conn)
File "C:\Python39\lib\site-packages\urllib3\connectionpool.py", line 1010, in _validate_conn
conn.connect()
File "C:\Python39\lib\site-packages\urllib3\connection.py", line 416, in connect
self.sock = ssl_wrap_socket(
File "C:\Python39\lib\site-packages\urllib3\util\ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "C:\Python39\lib\site-packages\urllib3\util\ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Python39\lib\ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "C:\Python39\lib\ssl.py", line 1040, in _create
self.do_handshake()
File "C:\Python39\lib\ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python39\lib\site-packages\requests\adapters.py", line 439, in send
resp = conn.urlopen(
File "C:\Python39\lib\site-packages\urllib3\connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "C:\Python39\lib\site-packages\urllib3\util\retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='stackoverflow.com', port=443): Max retries exceeded with url: /questions/2081586/web-scraping-with-python (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:/PythonCode-1/Web Scraping/AutoSraper 001.py", line 11, in <module>
result = scraper.build(url, wanted_list )
File "C:\Python39\lib\site-packages\autoscraper\auto_scraper.py", line 227, in build
soup = self._get_soup(url=url, html=html, request_args=request_args)
File "C:\Python39\lib\site-packages\autoscraper\auto_scraper.py", line 119, in _get_soup
html = cls._fetch_html(url, request_args)
File "C:\Python39\lib\site-packages\autoscraper\auto_scraper.py", line 105, in _fetch_html
res = requests.get(url, headers=headers, **request_args)
File "C:\Python39\lib\site-packages\requests\api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "C:\Python39\lib\site-packages\requests\api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Python39\lib\site-packages\requests\sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python39\lib\site-packages\requests\sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "C:\Python39\lib\site-packages\requests\adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='stackoverflow.com', port=443): Max retries exceeded with url: /questions/2081586/web-scraping-with-python (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1129)')))
```
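For what it's worth, this kind of "certificate has expired" failure in late 2021 is commonly caused by an outdated local CA bundle, so upgrading `certifi` is the clean fix. As a debugging aid only, the traceback above shows that `request_args` is forwarded to `requests.get`, so verification behaviour can be adjusted (a sketch; disabling verification is insecure):

```python
# Debugging sketch only: request_args is passed through to requests.get
# (see the traceback above). Prefer `pip install -U certifi` as the real fix.
result = scraper.build(url, wanted_list, request_args={"verify": False})
```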
| closed | 2021-12-29T14:18:51Z | 2022-07-17T20:38:38Z | https://github.com/alirezamika/autoscraper/issues/70 | [] | rosarion | 1 |
giotto-ai/giotto-tda | scikit-learn | 77 | TerminatedWorkerError when calling transform on VietorisRipsPersistence |
#### Description
When calling transform on VietorisRipsPersistence I sometimes get the following error:
```
TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker. The exit codes of the workers are {SIGABRT(-6)}
```
#### Steps/Code to Reproduce
The error is surprisingly hard to reproduce as it appears to depend on how much RAM is available at runtime. The best I can provide at this stage is the following snippet:
```python
homologyDimensions = (0, 1)
persistenceDiagram = hl.VietorisRipsPersistence(metric='euclidean', max_edge_length=10,
homology_dimensions=homologyDimensions,
n_jobs=-1)
persistenceDiagram.fit(doc_matrix)
Diagrams = persistenceDiagram.transform(doc_matrix[:n_docs])
```
where `doc_matrix` has shape `(1902, 778, 300)` and takes 1775707200 bytes in memory.
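For scale: 1902 × 778 × 300 float32 values × 4 bytes = 1,775,707,200 bytes ≈ 1.78 GB, and with `n_jobs=-1` joblib may hand each worker its own copy of a point cloud on top of the Vietoris-Rips computation itself, so peak memory can plausibly multiply with the worker count (my arithmetic, added for context).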
#### Expected Results
I would expect that when `n_jobs=-1`, `VietorisRipsPersistence` would simply try to access the available cores / memory and not throw an error.
#### Actual Results
```
---------------------------------------------------------------------------
TerminatedWorkerError Traceback (most recent call last)
<ipython-input-40-af8c35fe8d70> in <module>
7 persistenceDiagram.fit(doc_matrix[:n_docs])
8
----> 9 Diagrams = persistenceDiagram.transform(doc_matrix[:n_docs])
~/git/gw_nlp/env/lib/python3.7/site-packages/giotto/homology/point_clouds.py in transform(self, X, y)
194
195 Xt = Parallel(n_jobs=self.n_jobs)(delayed(self._ripser_diagram)(X[i])
--> 196 for i in range(n_samples))
197
198 max_n_points = {dim: max(1, np.max([Xt[i][dim].shape[0]
~/git/gw_nlp/env/lib/python3.7/site-packages/joblib/parallel.py in __call__(self, iterable)
1014
1015 with self._backend.retrieval_context():
-> 1016 self.retrieve()
1017 # Make sure that we get a last message telling us we are done
1018 elapsed_time = time.time() - self._start_time
~/git/gw_nlp/env/lib/python3.7/site-packages/joblib/parallel.py in retrieve(self)
906 try:
907 if getattr(self._backend, 'supports_timeout', False):
--> 908 self._output.extend(job.get(timeout=self.timeout))
909 else:
910 self._output.extend(job.get())
~/git/gw_nlp/env/lib/python3.7/site-packages/joblib/_parallel_backends.py in wrap_future_result(future, timeout)
552 AsyncResults.get from multiprocessing."""
553 try:
--> 554 return future.result(timeout=timeout)
555 except LokyTimeoutError:
556 raise TimeoutError()
/usr/local/anaconda3/lib/python3.7/concurrent/futures/_base.py in result(self, timeout)
430 raise CancelledError()
431 elif self._state == FINISHED:
--> 432 return self.__get_result()
433 else:
434 raise TimeoutError()
/usr/local/anaconda3/lib/python3.7/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker. The exit codes of the workers are {SIGABRT(-6)
```
#### Versions
Darwin-19.0.0-x86_64-i386-64bit
Python 3.7.3 (default, Mar 27 2019, 16:54:48)
[Clang 4.0.1 (tags/RELEASE_401/final)]
NumPy 1.17.3
SciPy 1.3.1
joblib 0.14.0
Scikit-Learn 0.21.3
giotto-Learn 0.1.1
| closed | 2019-11-01T10:40:51Z | 2020-08-23T15:40:16Z | https://github.com/giotto-ai/giotto-tda/issues/77 | [] | lewtun | 0 |
kizniche/Mycodo | automation | 788 | Input I2C address not selectable when list of addresses specified in Input Module | v8.5.8
When a list of I2C addresses is specified in the Input Module, only the first is available. This may be related to the option 'i2c_address_editable': False.
See: https://github.com/kizniche/Mycodo/blob/868a836e96ff793f7e46058c785fe9d9f47fd3dd/mycodo/inputs/ads1x15.py#L61
Ref: https://kylegabriel.com/forum/general-discussion/ads1x15-ads1115-module-cant-set-address-other-than-0x48-in-mycodo-but-hardware-supports-multiple-addresses | closed | 2020-07-15T15:27:52Z | 2020-07-23T01:36:26Z | https://github.com/kizniche/Mycodo/issues/788 | [
"bug"
] | kizniche | 0 |
onnx/onnx | pytorch | 6,484 | Objective of test: "Verify ONNX with ONNX Runtime PyPI package"? | # Ask a Question
### Question
What exactly are we trying to test here? When should/could we upgrade onnxruntime and the other two variables?
Maybe we can capture the rule/idea in a comment? Could we use Python 3.12 now? Comparing the different OSes, I would assume that at least release_linux_aarch64 should also use onnxruntime==1.17.3?
```
release_linux_aarch64.yml
      - name: Verify ONNX with ONNX Runtime PyPI package
        if: matrix.python-version != 'cp312-cp312'
        run: |
          docker run --rm -v ${{ github.workspace }}:/ws:rw --workdir=/ws \
            ${{ env.img }} \
            bash -exc '\
              source .env/bin/activate && \
              python -m pip uninstall -y protobuf numpy && python -m pip install -q -r requirements-release.txt && \
              python -m pip install -q onnxruntime==1.16.3 && \
              export ORT_MAX_IR_SUPPORTED_VERSION=9 \
              export ORT_MAX_ML_OPSET_SUPPORTED_VERSION=3 \
              export ORT_MAX_ONNX_OPSET_SUPPORTED_VERSION=20 \
              pytest && \
              deactivate'

release_linux_x86_64.yml
      - name: Verify ONNX with ONNX Runtime PyPI package
        if: matrix.python-version != '3.12'
        run: |
          python -m pip uninstall -y protobuf numpy && python -m pip install -q -r requirements-release.txt
          python -m pip install -q onnxruntime==1.17.3
          export ORT_MAX_IR_SUPPORTED_VERSION=9
          export ORT_MAX_ML_OPSET_SUPPORTED_VERSION=3
          export ORT_MAX_ONNX_OPSET_SUPPORTED_VERSION=20
          pytest

release_win.yml
      - name: Verify ONNX with ONNX Runtime PyPI package
        if: matrix.python-version != '3.12'
        run: |
          cd onnx
          python -m pip uninstall -y protobuf numpy
          python -m pip install -q -r requirements-release.txt
          python -m pip install -q onnxruntime==1.17.3
          $Env:ORT_MAX_IR_SUPPORTED_VERSION=9
          $Env:ORT_MAX_ML_OPSET_SUPPORTED_VERSION=3
          $Env:ORT_MAX_ONNX_OPSET_SUPPORTED_VERSION=20
          pytest

release_mac.yml
      - name: Verify ONNX with ONNX Runtime PyPI package
        if: matrix.python-version != '3.12'
        run: |
          arch -${{ matrix.target-architecture }} python -m pip uninstall -y protobuf numpy
          arch -${{ matrix.target-architecture }} python -m pip install -q -r requirements-release.txt
          arch -${{ matrix.target-architecture }} python -m pip install -q onnxruntime==1.17.3
          export ORT_MAX_IR_SUPPORTED_VERSION=9
          export ORT_MAX_ML_OPSET_SUPPORTED_VERSION=3
          export ORT_MAX_ONNX_OPSET_SUPPORTED_VERSION=20
          arch -${{ matrix.target-architecture }} pytest
``` | closed | 2024-10-22T19:56:10Z | 2024-11-21T20:01:54Z | https://github.com/onnx/onnx/issues/6484 | [
"question"
] | andife | 2 |
PaddlePaddle/PaddleHub | nlp | 1,377 | hub serving GPU memory leak | Welcome, and thank you for reporting a PaddleHub usage issue and for your contribution to PaddleHub! When posting your question, please also provide the following information:
- Version / environment information
- Build method: the image is built and started as follows; several Docker containers are deployed on a single GPU machine.
- At peak load, GPU memory fills up completely and is never released. It looks a bit like this: with 11G available, start two containers that each hold 5G but both want 6G; they compete, each ends up with 5.5G, and everything deadlocks. Both then wait for memory allocation, stop handling requests, print no logs, never crash, and give no hint at all.
- Client-side symptom: client requests time out.

```
FROM registry.baidubce.com/paddlepaddle/paddle:2.0.0-gpu-cuda10.1-cudnn7
# PaddleOCR base on Python3.7
#
RUN mkdir /PaddleOCR
ADD ./PaddleOCR /PaddleOCR
RUN pip3.7 install --upgrade pip -i https://mirror.baidu.com/pypi/simple
RUN pip3.7 install paddlehub --upgrade -i https://mirror.baidu.com/pypi/simple
WORKDIR /PaddleOCR
RUN pip3.7 install -r requirements.txt -i https://mirror.baidu.com/pypi/simple
EXPOSE 8868
# CMD ["/bin/bash","-c","hub install deploy/hubserving/ocr_system/ && hub serving start -m ocr_system"]
CMD ["/bin/bash","-c","hub install deploy/hubserving/ocr_system/ && hub serving start -c /PaddleOCR/deploy/hubserving/ocr_system/config.json"]
```
```
λ 203f70ee6fe0 /PaddleOCR cat /PaddleOCR/deploy/hubserving/ocr_system/config.json
{
    "modules_info": {
        "ocr_system": {
            "init_args": {
                "version": "1.0.0",
                "use_gpu": true
            },
            "predict_args": {
            }
        }
    },
    "port": 8868,
    "use_multiprocess": false,
    "workers": 10
}
```
| open | 2021-04-21T09:54:54Z | 2021-04-30T09:56:00Z | https://github.com/PaddlePaddle/PaddleHub/issues/1377 | [
"serving"
] | xealml | 1 |
wagtail/wagtail | django | 12,658 | Default URL param value for Gravatar URL have been deprecated (`mm` -> `mp`) | ### Issue Summary
We currently pass in `mm` to the `d` (default) param, this is used to determine what avatar will show if there's no matching avatar. However, the latest documentation advises that this should be `mp` (mystery person) instead.
https://github.com/wagtail/wagtail/blob/c2676af857a41440e05e03038d85a540dcca3ce2/wagtail/users/utils.py#L28-L29
https://github.com/wagtail/wagtail/blob/c2676af857a41440e05e03038d85a540dcca3ce2/wagtail/users/utils.py#L45
https://docs.gravatar.com/api/avatars/images/#default-image
### Describe the solution you'd like
Update the param value from `mm` to `mp` and ensure any unit tests are updated.
This way, if the support for this legacy value gets dropped, it will not be a breaking change for Wagtail users.
### Describe alternatives you've considered
It might be nice to have a better approach to this by allowing the param to be passed into the function / overridden somehow. Best to discuss that in a different issue though - see https://github.com/wagtail/wagtail/issues/12659
### Additional context
Two PRs have attempted this (and other changes), see the feedback and the PRs for reference.
- #11077
- #11800
### Working on this
- Anyone can contribute to this, be sure you understand how to reproduce the avatar scenario.
- It might be good to tackle this small change before tackling the other related issues.
- View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| closed | 2024-12-04T10:33:50Z | 2024-12-06T03:46:14Z | https://github.com/wagtail/wagtail/issues/12658 | [
"type:Enhancement",
"good first issue",
"component:User Management",
"Compatibility"
] | lb- | 4 |
MagicStack/asyncpg | asyncio | 935 | Cancelled query doesn't properly close transaction |
* **asyncpg version**:
* **PostgreSQL version**:
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**:
* **Python version**:
* **Platform**:
* **Do you use pgbouncer?**:
* **Did you install asyncpg with pip?**:
* **If you built asyncpg locally, which version of Cython did you use?**:
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**:
| closed | 2022-07-08T07:53:33Z | 2022-07-08T08:35:09Z | https://github.com/MagicStack/asyncpg/issues/935 | [] | arnaudsjs | 0 |
InstaPy/InstaPy | automation | 6,174 | session.follow_user_followers - this function not working | closed | 2021-05-07T15:01:23Z | 2021-05-07T15:56:40Z | https://github.com/InstaPy/InstaPy/issues/6174 | [] | saradindu-bairagi | 0 |
|
tensorflow/tensor2tensor | machine-learning | 1,030 | Question: How can I add new tf.Variables? | ### Description
I want to add a new tf.Variable object in `tensor2tensor/models/transformer.py`
and use it like this:
```python
class Transformer(t2t_model.T2TModel):
    def __init__(self):
        self.W = tf.Variable(np.random.randn(), name='W')
        ...

    def body(self, features):
        ...
        encoder_output = self.W * encoder_output  # scale the encoder output by W
        ...
```
But I have error below:
```
# Error logs:
...
tensorflow.python.framework.errors_impl.NotFoundError: Key W not found in checkpoint
...
```
How can I add new tf.Variables?
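One direction that might help (a sketch on my side, not a confirmed answer): create the variable with `tf.get_variable` inside `body()`, so it lives under the model's variable scope and is written to and read from the checkpoint with everything else.

```python
def body(self, features):
    # Created inside body(), the variable is placed under the model's
    # variable scope and therefore included in the checkpoint.
    w = tf.get_variable("W", shape=[], initializer=tf.random_normal_initializer())
    ...
```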
| open | 2018-08-30T05:45:47Z | 2018-08-30T05:45:47Z | https://github.com/tensorflow/tensor2tensor/issues/1030 | [] | siida36 | 0 |
horovod/horovod | tensorflow | 3,933 | Missing `-iface` argument in mpirun command generated by Horovod runner | **Environment:**
1. Framework: TensorFlow, PyTorch
2. Framework version: 2.12.0, 2.0.1
3. Horovod version: 0.28.0
4. MPI version: MPICH (HYDRA) 4.1.1
5. CUDA version: 11.8
6. NCCL version: 2.16.5
7. Python version: 3.11.3
8. Spark / PySpark version: N/A
9. Ray version: N/A
10. OS and version: Ubuntu 22.04
11. GCC version: 11.3.0
12. CMake version: 3.26.3
**Bug report:**
**Issue Description:**
The current implementation of the Horovod runner translates the command
```
horovodrun -n 3 --network-interface enp94s0 -H server2:3
```
into an mpirun command. However, it seems that the generated mpirun command
```
mpirun -l -np 3 -ppn 3 -hosts server2 -bind-to none -map-by slot -genv NCCL_SOCKET_IFNAME=enp94s0
```
is missing the `-iface enp94s0` argument. This omission can cause errors in setups with multiple servers.
**Steps to Reproduce:**
- Install MPICH and Horovod.
- Run the command `horovodrun -n 3 --network-interface enp94s0 -H server2:3 echo hello`.
**Error Message:**
```
[proxy:0:0@server1] HYDU_sock_connect (lib/utils/sock.c:110): unable to get host address for server1
[proxy:0:0@server1] main (proxy/pmip.c:105): unable to connect to server server2 at port 39647 (check for firewalls!)
[mpiexec@server2] ui_cmd_cb (mpiexec/pmiserv_pmci.c:51): Launch proxy failed.
[mpiexec@server2] HYDT_dmxu_poll_wait_for_event (lib/tools/demux/demux_poll.c:76): callback returned error status
[mpiexec@server2] HYD_pmci_wait_for_completion (mpiexec/pmiserv_pmci.c:181): error waiting for event
[mpiexec@server2] main (mpiexec/mpiexec.c:247): process manager error waiting for completion
```
**Expected Fix:**
The mpirun command generated by the Horovod runner should include the `-iface` argument to ensure proper network interface binding and avoid the mentioned error.
**References:**
Possible culprit code snippet:
https://github.com/horovod/horovod/blob/b93a87a6c79233d85113b8a42f5bd513d6c0de91/horovod/runner/mpi_run.py#L169-L170
**Workaround:**
To work around this issue, I can manually add the `--mpi-args="-iface enp94s0"` flag to the horovodrun command. This flag allows me to pass custom arguments directly to the underlying mpirun command. I am not sure if this is the intended way to work with MPICH.
| closed | 2023-05-29T17:36:27Z | 2023-06-26T06:46:57Z | https://github.com/horovod/horovod/issues/3933 | [
"bug",
"contribution welcome"
] | alumik | 4 |
PokeAPI/pokeapi | api | 272 | Not compatible with DRF 3.5 | requirements.txt calls for djangorestframework>=3.1.0; the current version is 3.5.1.
Error:
"Creating a ModelSerializer without either the 'fields' attribute or the 'exclude' attribute has been deprecated since 3.3.0, and is now disallowed. Add an explicit fields = '__all__' to the PokemonMoveSerializer serializer."
I will submit a pull request adding the fields attribute to PokemonMoveSerializer, using the fields from here: https://pokeapi.co/docsv2/#moves
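A hypothetical sketch of the shape of the fix (the actual field names will come from the v2 docs linked above):
```python
from rest_framework import serializers

# PokemonMoveSerializer for the existing PokemonMove model in this repo;
# the field list below is illustrative -- the PR will enumerate the documented fields
class PokemonMoveSerializer(serializers.ModelSerializer):
    class Meta:
        model = PokemonMove
        fields = ('pokemon', 'move', 'version_group', 'level')
```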
| closed | 2016-10-25T14:56:40Z | 2017-06-12T12:51:54Z | https://github.com/PokeAPI/pokeapi/issues/272 | [] | dhcrain | 2 |
jupyter-book/jupyter-book | jupyter | 1,427 | file open not working in Google Colab | ### Describe the enhancement you'd like
While opening data files for a Jupyter Book,
`f = open("filename", "r")`
works in Binder, but it does not in Google Colab. Is there a way to fix it?
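For example, a cell like the following runs on Binder but raises `FileNotFoundError` on Colab (the filename is illustrative):
```python
# works on Binder, where the book's repository is checked out into the working directory;
# fails on Colab, whose runtime starts without the book's data files
f = open("data/sample.csv", "r")
print(f.readline())
f.close()
```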
### Does this solve a specific problem?
_No response_
### What alternatives exist?
_No response_
### Additional context
_No response_ | open | 2021-08-17T13:44:06Z | 2021-08-17T13:44:08Z | https://github.com/jupyter-book/jupyter-book/issues/1427 | [
"enhancement"
] | bronwojtek | 1 |
vanna-ai/vanna | data-visualization | 169 | Root functions will be deprecated | In order to have a consistent experience regardless of the configuration that you choose, all the root functions will be deprecated:
https://github.com/vanna-ai/vanna/blob/main/src/vanna/__init__.py
The docstrings for each function should be removed. The functions themselves should not be removed -- instead, they should raise an exception with an example of how to transition old code to the new method of:
```python
vn = VannaDefault(model=vanna_model_name, api_key=api_key)
```
The root functions are legacy from a time before we had multiple configuration options, which are enabled by the `VannaBase` abstract base class. The sample notebooks have already been switched. Now what remains are code samples from third parties, which is why these root functions should raise exceptions so that people can transition more easily.
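A minimal sketch of what such an exception-raising stub could look like (hypothetical shape, not final wording):
```python
# hypothetical deprecation stub for a root function such as ask()
def ask(*args, **kwargs):
    raise Exception(
        "The root ask() function is deprecated. Instead, instantiate a Vanna object:\n"
        "    vn = VannaDefault(model=vanna_model_name, api_key=api_key)\n"
        "    vn.ask(...)"
    )
```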
For now, the remaining function that will stay in the base class will be `get_api_key` for the sake of being able to run the Chinook demo. | closed | 2024-01-22T00:01:48Z | 2024-03-14T02:50:48Z | https://github.com/vanna-ai/vanna/issues/169 | [] | zainhoda | 2 |
gradio-app/gradio | deep-learning | 9,978 | gr.Sketchpad() cannot be cleared twice or more times in the event of a button click | ### Describe the bug
gr.Image() **can be cleared two or more times** in a button-click event:
```python
input_image = gr.Image()

def clear_image():
    return None

submit_btn.click(fn=clear_image, outputs=input_image)
```
gr.Sketchpad() **cannot be cleared two or more times** in a button-click event:
```python
input_sketchpad = gr.Sketchpad()

def clear_sketchpad():
    return None

submit_btn.click(fn=clear_sketchpad, outputs=input_sketchpad)
```
Then I used JS code to try to work around it, but it doesn't work either:
```python
js = """
<script>
function clearCanvas(sketchpadId) {
    console.log('clearCanvas called with id:', sketchpadId);
    var canvasContainer = document.querySelector(`#${sketchpadId}`);
    if (canvasContainer) {
        var canvas = canvasContainer.querySelector('.svelte-1h72pol canvas');
        if (canvas) {
            var ctx = canvas.getContext('2d');
            ctx.clearRect(0, 0, canvas.width, canvas.height);
        } else {
            console.log('Canvas element not found');
        }
    } else {
        console.log('Canvas container not found');
    }
}
window.clearCanvas = clearCanvas;
</script>
"""
gr.HTML(js)
button.click(js="(sketchpad_id) => { window.clearCanvas(sketchpad_id); return []; }")
```
And I **found the same problem** with **ClearButton**: input_image can be cleared two or more times, but input_sketchpad cannot:
```python
input_image = gr.Image()
input_sketchpad = gr.Sketchpad()
clear_btn = gr.ClearButton([input_image, input_sketchpad])
```
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr

with gr.Blocks() as demo:
    input_sketchpad = gr.Sketchpad()
    submit_btn = gr.Button("Submit")
    def clear_sketchpad():
        return None
    submit_btn.click(fn=clear_sketchpad, outputs=input_sketchpad)
demo.launch()
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio 5.5.0
```
### Severity
Blocking usage of gradio | closed | 2024-11-18T02:01:51Z | 2025-01-24T01:26:44Z | https://github.com/gradio-app/gradio/issues/9978 | [
"bug",
"🖼️ ImageEditor"
] | youngxz | 0 |
clovaai/donut | nlp | 323 | Hi | open | 2024-12-03T20:19:47Z | 2024-12-03T20:19:47Z | https://github.com/clovaai/donut/issues/323 | [] | Harjgrewa1 | 0 |
|
ading2210/poe-api | graphql | 94 | I have been banned for two accounts. Has Poe changed the protocol? | - | closed | 2023-06-02T18:03:38Z | 2023-06-02T22:52:42Z | https://github.com/ading2210/poe-api/issues/94 | [
"duplicate"
] | Seikaijyu | 2 |
huggingface/transformers | tensorflow | 36,106 | Requesting support in Pipeline using Florence-2 models and tasks | ### Feature request
Hi!
Currently, microsoft/Florence-2-large-ft and related models cannot be loaded with HF pipeline("image-to-text"), as their configs are not recognised by AutoModelForVision2Seq.
When attempting to load it, Transformers raises:
“Unrecognised configuration class Florence2Config for this kind of AutoModel: AutoModelForVision2Seq.”
Florence-2 also requires trust_remote_code=True to be passed to the functions.
The current standard method is to load Florence-2 with AutoModelForCausalLM and AutoProcessor, but this adds another flow if you are already using pipeline. LoRA support also works well, so having these models in the pipeline would make them an amazing addition for their capable tasks.
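For reference, a sketch of the current loading flow (following the model card; the task prompt and input image are illustrative):
```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.jpg")  # illustrative input
inputs = processor(text="<CAPTION>", images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
)
text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
print(processor.post_process_generation(text, task="<CAPTION>", image_size=(image.width, image.height)))
```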
Thanks!
Model:
https://huggingface.co/microsoft/Florence-2-large
### Motivation
Adding support for pipeline with these models would give it another great set of options with tasks while lowering the barrier for entry, as the pipeline is a great feature that simplifies the writing and reusability of code for people. (Like me!)
Thanks again for all the amazing work.
### Your contribution
I can test any proposed updates. | open | 2025-02-10T02:52:47Z | 2025-02-20T16:47:12Z | https://github.com/huggingface/transformers/issues/36106 | [
"Feature request"
] | mediocreatmybest | 8 |
strawberry-graphql/strawberry | asyncio | 3,544 | Slow performance for queries that return many items | Hello, I'm trying to improve the performance of a graphql query that looks like:
```graphql
query GetSampleEidMapQuery($project: String!) {
project(name: $project) {
samples {
assays {
id
externalIds
meta
}
}
}
}
```
This returns a result that has around 3000 samples, and each sample has between 1 and 4 assays. So less than 10,000 objects in total. The query takes between 3 and 5 seconds and returns around 600kB of json. So it's not a small amount of data but also not exactly huge. I initially thought this might be slow SQL queries but it turns out around 85% of the query time is in strawberry processing the results. Here's the pyinstrument profiling that shows this [pyinstrument.html.zip](https://github.com/user-attachments/files/15923872/pyinstrument.html.zip)
Is there anything that can be done to reduce the time that it takes for strawberry to handle results? I've tried both the `ParserCache` and `ValidationCache` as well as disabling validation entirely but unfortunately that made very little difference.
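For reference, this is roughly how the caches were enabled (a sketch; our actual schema setup lives in the file linked below):
```python
import strawberry
from strawberry.extensions import ParserCache, ValidationCache

schema = strawberry.Schema(
    query=Query,  # our Query type
    extensions=[ParserCache(maxsize=128), ValidationCache(maxsize=128)],
)
```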
Not sure if it helps but this is our graphql schema: https://github.com/populationgenomics/metamist/blob/dev/api/graphql/schema.py
Thank you! | closed | 2024-06-21T07:02:26Z | 2025-03-20T15:56:46Z | https://github.com/strawberry-graphql/strawberry/issues/3544 | [] | dancoates | 7 |
open-mmlab/mmdetection | pytorch | 12,011 | how does this category information keep up to date during the interface multi-task training run? | Hello developers, I have run into a problem and very much hope you can suggest a solution. I am training different detection tasks through the Python interface: the first task starts smoothly, but the second task always fails with the error `ValueError: need at least one array to concatenate`.
I looked into the causes myself; it probably comes down to these two things:
1. The `classes` and `palette` entries in `METAINFO` in `mmdet\datasets\coco.py` are not updated in time.
2. `coco_classes()` in `mmdet\evaluation\functional\class_names.py` does not return updated class information.
So I would like to ask: how can this category information be kept up to date during a multi-task training run through the interface? What I tried before didn't seem to work.
Here's how I tried to patch `mmdet\datasets\coco.py`; the file `Objectdataset_config.yaml` changes the category and palette information every time the task changes:
```python
import yaml

with open('./Configs/Objectdataset_config.yaml', 'r', encoding='utf-8') as f:
    Object_config = yaml.safe_load(f)
classes = tuple(Object_config['classes'])
palette = Object_config['palette']
METAINFO = {
    'classes': classes,
    'palette': palette,
}
```
Here's how I tried to patch `coco_classes()` in `mmdet\evaluation\functional\class_names.py` (the yaml file both snippets read is sketched after this block):
```python
import yaml

def coco_classes() -> list:
    """Class names of COCO."""
    with open('./Configs/Objectdataset_config.yaml', 'r', encoding='utf-8') as f:
        Object_config = yaml.safe_load(f)
    classes = list(Object_config['classes'])
    return classes
```
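For reference, the yaml file both snippets read is shaped like this (class names and colors are illustrative):
```yaml
classes:
  - cat
  - dog
palette:
  - [220, 20, 60]
  - [119, 11, 32]
```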
| closed | 2024-10-23T01:56:04Z | 2024-10-31T07:52:58Z | https://github.com/open-mmlab/mmdetection/issues/12011 | [] | 1wang11lijian1 | 3 |
jina-ai/serve | fastapi | 5,316 | docs: jina Flow the hard way | The idea would be to create a how-to / tutorial named
`jina Flow the hard way` (name inspired by [kubernetes the hard way](https://github.com/kelseyhightower/kubernetes-the-hard-way))
The idea would be to show that you can start an Executor by yourself with the CLI, start the gateway, and pass the topology graph and the connection list as CLI parameters, and the "Flow" will live.
This will give an in-depth understanding to whoever wants to deep dive into Jina concepts. It will also help users understand how we deploy on k8s. | closed | 2022-10-26T12:31:22Z | 2023-03-13T00:23:35Z | https://github.com/jina-ai/serve/issues/5316 | [
"Stale"
] | samsja | 5 |
PaddlePaddle/PaddleHub | nlp | 1,669 | paddlehub prediction error: C++ Traceback (most recent call last): 0 paddle::framework::SignalHandle(char const*, int) 1 paddle::platform::GetCurrentTraceBackString[abi:cxx11]() | Welcome to report PaddleHub usage issues, and thank you very much for your contribution to PaddleHub!
When filing your issue, please also provide the following information:
- Version and environment information
1) PaddleHub and PaddlePaddle versions: please provide your PaddleHub and PaddlePaddle version numbers, e.g. PaddleHub 1.4.1, PaddlePaddle 1.6.2
2) System environment: please describe the OS type, e.g. Linux/Windows/MacOS, and the Python version
- Reproduction info: if it is an error, please give the reproduction environment and steps
Ubuntu 18, T4 GPU, CUDA 10.2, cuDNN 7.6, Python 3.6.9
Installed with `python -m pip install paddlepaddle-gpu -i https://mirror.baidu.com/pypi/simple`; the installation reported success.
paddlehub 2.1.1
paddlenlp 2.1.1
paddlepaddle-gpu 2.1.3
Using PaddleHub for prediction:
```python
import cv2
import paddlehub as hub

object_detector = hub.Module(name="yolov3_darknet53_coco2017")
frame_start = cv2.imread('real_see.jpg')
results = object_detector.object_detection(images=[frame_start], use_gpu=True)
print(results)
```
The error message is as follows:
```
[2021-10-27 19:13:15,827] [ WARNING] - The _initialize method in HubModule will soon be deprecated, you can use the __init__() to handle the initialization of the object
W1027 19:13:15.827836 43261 analysis_predictor.cc:1183] Deprecated. Please use CreatePredictor instead.

C++ Traceback (most recent call last):
0   paddle::framework::SignalHandle(char const*, int)
1   paddle::platform::GetCurrentTraceBackString[abi:cxx11]()

Error Message Summary:
FatalError: Segmentation fault is detected by the operating system.
[TimeInfo: *** Aborted at 1635333198 (unix time) try "date -d @1635333198" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x0) received by PID 43261 (TID 0x7f6ab0072740) from PID 0 ***]
```
| open | 2021-10-27T11:15:56Z | 2021-10-28T01:49:57Z | https://github.com/PaddlePaddle/PaddleHub/issues/1669 | [
"installation"
] | xiaomujiang | 1 |