repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452)
---|---|---|---|---|---|---|---|---|---|---|---
widgetti/solara | jupyter | 965 | Issue with custom_exceptions and astropy | The following demonstrates the issue:
```bash
python -m venv clean
source clean/bin/activate
pip install solara[pytest] astropy
echo 'import astropy' > test.py
pytest test.py
```
the output is:
```
______________________________________________________________________________ ERROR collecting test.py ______________________________________________________________________________
test.py:1: in <module>
import astropy
clean/lib/python3.11/site-packages/astropy/__init__.py:176: in <module>
log = _init_log()
clean/lib/python3.11/site-packages/astropy/logger.py:122: in _init_log
log._set_defaults()
clean/lib/python3.11/site-packages/astropy/logger.py:499: in _set_defaults
if self.exception_logging_enabled():
clean/lib/python3.11/site-packages/astropy/logger.py:321: in exception_logging_enabled
return _AstLogIPYExc in get_ipython().custom_exceptions
E AttributeError: 'NoneType' object has no attribute 'custom_exceptions'
============================================================================== short test summary info ===============================================================================
ERROR test.py - AttributeError: 'NoneType' object has no attribute 'custom_exceptions'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
================================================================================== 1 error in 0.14s ==================================================================================
```
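A heavily hedged workaround sketch (untested; it assumes the only problem is that `get_ipython()` returns `None` at astropy import time) would be to create a real IPython shell in a `conftest.py` before astropy is imported:
```python
# conftest.py -- hypothetical workaround, not an official fix
from IPython.core.interactiveshell import InteractiveShell

# creating the singleton makes get_ipython() return a shell that has .custom_exceptions
InteractiveShell.instance()
```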
This doesn't happen if we don't install solara into the test environment. | closed | 2025-01-10T12:41:37Z | 2025-01-29T13:53:15Z | https://github.com/widgetti/solara/issues/965 | [] | astrofrog | 2 |
alteryx/featuretools | data-science | 1,865 | Update to support pandas 1.4.0 | The max pandas version has been restricted due to test failures that were introduced with version 1.4.0 of pandas. We should update to support this new version of pandas. Note, this will also require dropping support for Python 3.7, as pandas 1.4.0 no longer supports Python 3.7. | closed | 2022-01-24T15:20:17Z | 2022-02-09T15:05:11Z | https://github.com/alteryx/featuretools/issues/1865 | [] | thehomebrewnerd | 2 |
ipython/ipython | jupyter | 14,372 | Run magic on module fails with debug flag (`%run -d -m my_module`) |
Decided to port some code to run as a module with a `__main__.py` file, but now when I try to debug it using `%run -d -m my_module`, I get the following error:
```
End of file
End of file
End of file
End of file
End of file
End of file
End of file
End of file
End of file
End of file
End of file
UsageError:
I failed to find a valid line to set a breakpoint
after trying up to line: 11.
Please set a valid breakpoint manually with the -b option.
```
I changed my `__main__.py` to just be a single print statement and still got the error. I set the breakpoint manually using `%run -d -b my_module/__main__.py:1 -m my_module` and then the code ran but didn't ever break. In general it just seems like `-d` and `-m` don't play well together. | open | 2024-03-21T12:18:57Z | 2024-10-24T20:28:04Z | https://github.com/ipython/ipython/issues/14372 | [] | carschandler | 2 |
davidsandberg/facenet | computer-vision | 1,164 | TypeError: Cannot create initializer for non-floating point type. | Trying to train with the CASIA-WebFace dataset with the following command:
`python src/train_tripletloss.py --logs_base_dir ~/logs/facenet/ --models_base_dir ./Models/new/ --data_dir ./Dataset/processed --image_size 160 --lfw_dir ./lfw --optimizer RMSPROP --learning_rate 0.01 --weight_decay 1e-4 --max_nrof_epochs 500 --pretrained_model ./Models/facenet/20180402-114759.pb`
Getting the following error:
```
Traceback (most recent call last):
File "src/train_tripletloss.py", line 486, in <module>
main(parse_arguments(sys.argv[1:]))
File "src/train_tripletloss.py", line 134, in main
weight_decay=args.weight_decay)
File "D:\facenet-master\src\models\inception_resnet_v1.py", line 149, in inference
dropout_keep_prob=keep_probability, bottleneck_layer_size=bottleneck_layer_size, reuse=reuse)
File "D:\facenet-master\src\models\inception_resnet_v1.py", line 180, in inception_resnet_v1
scope='Conv2d_1a_3x3')
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 1159, in convolution2d
conv_dims=2)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 1057, in convolution
outputs = layer.apply(inputs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 1700, in apply
return self.__call__(inputs, *args, **kwargs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\layers\base.py", line 548, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 824, in __call__
self._maybe_build(inputs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 2146, in _maybe_build
self.build(input_shapes)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\keras\layers\convolutional.py", line 165, in build
dtype=self.dtype)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\layers\base.py", line 461, in add_weight
**kwargs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 529, in add_weight
aggregation=aggregation)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\tracking\base.py", line 712, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 1500, in get_variable
aggregation=aggregation)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 1243, in get_variable
aggregation=aggregation)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 550, in get_variable
return custom_getter(**custom_getter_kwargs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 1761, in layer_variable_getter
return _model_variable_getter(getter, *args, **kwargs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 1752, in _model_variable_getter
aggregation=aggregation)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\framework\python\ops\variables.py", line 351, in model_variable
aggregation=aggregation)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\framework\python\ops\variables.py", line 281, in variable
aggregation=aggregation)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 519, in _true_getter
aggregation=aggregation)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 933, in _get_single_variable
aggregation=aggregation)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variables.py", line 258, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variables.py", line 219, in _variable_v1_call
shape=shape)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variables.py", line 197, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 2519, in default_variable_creator
shape=shape)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variables.py", line 262, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variables.py", line 1688, in __init__
shape=shape)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variables.py", line 1818, in _init_from_args
initial_value(), name="initial_value", dtype=dtype)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 905, in <lambda>
partition_info=partition_info)
File "C:\Users\MSI\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\contrib\layers\python\layers\initializers.py", line 120, in _initializer
raise TypeError('Cannot create initializer for non-floating point type.')
TypeError: Cannot create initializer for non-floating point type.
``` | closed | 2020-07-21T00:21:10Z | 2023-11-03T10:31:47Z | https://github.com/davidsandberg/facenet/issues/1164 | [] | Asif1405 | 3 |
ckan/ckan | api | 8,560 | Patch releases December 2024 | This is an issue to track progress on the patch releases scheduled for **11th December 2024** (2.10.6 and 2.11.1)
[Full docs](http://docs.ckan.org/en/latest/contributing/release-process.html)
### Preparing
* [x] [Backports](https://github.com/ckan/ckan/labels/Backport%20dev-v2.11)
* [x] Security issues
* [x] Translations
* [x] Rebuild Frontend
* [x] Changelog
### Release day
* [x] Change version and tag
* [x] PyPI
* [x] Create GitHub release
* [x] Update docs (RTD)
* [x] Build Docker images
* [x] Build and upload deb packages
* [x] Cherry-pick i18n and changelog changes to master
* [x] Announce
| closed | 2024-12-02T14:29:03Z | 2024-12-11T12:54:43Z | https://github.com/ckan/ckan/issues/8560 | [
"Releases"
] | amercader | 1 |
proplot-dev/proplot | data-visualization | 267 | Error in some geographic plots | ### Description
Some basic geographic plots cannot be plotted with proplot 0.7.
Originally, I found that some values are not covered by the automatic colorbar levels in geographic plots. Some large values are not colored unless `vmin` and `vmax` are explicitly defined. I couldn't reproduce this error in a simple example but I guess it's related to this issue as we can see from the error message.
### Steps to reproduce
```python
import xarray as xr
import proplot as pplt
ds = xr.tutorial.open_dataset('air_temperature').load()
fig, ax = pplt.subplots(proj='cyl')
#fig, ax = pplt.subplots(proj='eqearth') # This works
ax.format(coast=True)
ax.contourf(ds.isel(time=0)['air'])
```
**Actual behavior**: [What actually happened]
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-92d29737d5eb> in <module>
6 fig, ax = pplt.subplots(proj='cyl')
7 ax.format(coast=True)
----> 8 ax.contourf(ds.isel(time=0)['air'])
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/proplot/ui.py in _iterator(*args, **kwargs)
780 result = []
781 for func in objs:
--> 782 result.append(func(*args, **kwargs))
783 if len(self) == 1:
784 return result[0]
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/proplot/axes/plot.py in <lambda>(self, _func, _method, *args, **kwargs)
4468 method = functools.wraps(method)(
4469 lambda self, *args, _func=func, _method=method, **kwargs:
-> 4470 _func(self, *args, _method=_method, **kwargs)
4471 )
4472
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/proplot/axes/plot.py in default_transform(self, transform, *args, **kwargs)
521 if transform is None:
522 transform = PlateCarree()
--> 523 return method(self, *args, transform=transform, **kwargs)
524
525
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/proplot/axes/plot.py in <lambda>(self, _func, _method, *args, **kwargs)
4468 method = functools.wraps(method)(
4469 lambda self, *args, _func=func, _method=method, **kwargs:
-> 4470 _func(self, *args, _method=_method, **kwargs)
4471 )
4472
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/proplot/axes/plot.py in standardize_2d(self, data, autoformat, order, globe, *args, **kwargs)
1270
1271 # Call function
-> 1272 return method(self, x, y, *zs, **kwargs)
1273
1274
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/proplot/axes/plot.py in <lambda>(self, _func, _method, *args, **kwargs)
4468 method = functools.wraps(method)(
4469 lambda self, *args, _func=func, _method=method, **kwargs:
-> 4470 _func(self, *args, _method=_method, **kwargs)
4471 )
4472
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/proplot/internals/warnings.py in deprecate_kwargs(*args, **kwargs)
102 'removed in the next major release. Please use {key_new!r} instead.'
103 )
--> 104 return func_orig(*args, **kwargs)
105 return deprecate_kwargs
106 return decorator
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/proplot/axes/plot.py in apply_cmap(self, cmap, cmap_kw, norm, norm_kw, extend, levels, N, values, vmin, vmax, locator, locator_kw, symmetric, positive, negative, nozero, discrete, edgefix, labels, labels_kw, fmt, precision, inbounds, colorbar, colorbar_kw, *args, **kwargs)
3478 else:
3479 kw.update(levels=levels, values=values, cmap=cmap, minlength=2 - int(contour))
-> 3480 norm, cmap, levels, ticks = _build_discrete_norm(self, *args, **kw)
3481
3482 # Call function with correct keyword args
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/proplot/axes/plot.py in _build_discrete_norm(self, levels, values, cmap, norm, norm_kw, extend, vmin, vmax, minlength, *args, **kwargs)
3158 else:
3159 # Determine levels automatically
-> 3160 levels, locator = _auto_levels_locator(
3161 self, *args, N=levels, norm=norm, vmin=vmin, vmax=vmax, extend=extend, **kwargs # noqa: E501
3162 )
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/proplot/axes/plot.py in _auto_levels_locator(self, N, norm, norm_kw, extend, vmin, vmax, locator, locator_kw, symmetric, positive, negative, nozero, inbounds, centers, counts, *args)
2962 z = ma.masked_invalid(z, copy=False)
2963 if automin:
-> 2964 vmin = float(z.min())
2965 if automax:
2966 vmax = float(z.max())
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/numpy/ma/core.py in min(self, axis, out, fill_value, keepdims)
5698 # No explicit output
5699 if out is None:
-> 5700 result = self.filled(fill_value).min(
5701 axis=axis, out=out, **kwargs).view(type(self))
5702 if result.ndim:
~/miniconda3/envs/basicf4/lib/python3.8/site-packages/numpy/core/_methods.py in _amin(a, axis, out, keepdims, initial, where)
42 def _amin(a, axis=None, out=None, keepdims=False,
43 initial=_NoValue, where=True):
---> 44 return umr_minimum(a, axis, None, out, keepdims, initial, where)
45
46 def _sum(a, axis=None, dtype=None, out=None, keepdims=False,
ValueError: zero-size array to reduction operation minimum which has no identity
```
### Proplot version
matplotlib 3.3.4
proplot 0.7.0
| closed | 2021-08-05T21:26:40Z | 2021-08-18T20:45:18Z | https://github.com/proplot-dev/proplot/issues/267 | [
"bug"
] | kinyatoride | 1 |
coqui-ai/TTS | deep-learning | 3,000 | [Bug] AttributeError: 'VoiceBpeTokenizer' object has no attribute 'preprocess' | ### Describe the bug
When I execute the Hugging Face demo (https://huggingface.co/spaces/Olivier-Truong/XTTS_V1_CPU_working) on my local PC, it loads the model fine and opens a web GUI on localhost. However, when I select an audio file, type some text, and click the submit button to start the voice cloning process, the demo shows it is processing for a few seconds and then gives an attribute error. (I'm trying to run XTTS V1.)
### To Reproduce
1. python3 app.py ( The hugging face space app script)
2. Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1431, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.9/dist-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/home/test/XTTS_V1_CPU_working/app.py", line 49, in predict
tts.tts_to_file(
File "/home/test/.local/lib/python3.9/site-packages/TTS/api.py", line 390, in tts_to_file
wav = self.tts(text=text, speaker=speaker, language=language, speaker_wav=speaker_wav, **kwargs)
File "/home/test/.local/lib/python3.9/site-packages/TTS/api.py", line 337, in tts
wav = self.synthesizer.tts(
File "/home/test/.local/lib/python3.9/site-packages/TTS/utils/synthesizer.py", line 375, in tts
outputs = self.tts_model.synthesize(
File "/home/test/.local/lib/python3.9/site-packages/TTS/tts/models/xtts.py", line 428, in synthesize
return self.inference_with_config(text, config, ref_audio_path=speaker_wav, language=language, **kwargs)
File "/home/test/.local/lib/python3.9/site-packages/TTS/tts/models/xtts.py", line 450, in inference_with_config
return self.inference(text, ref_audio_path, language, **settings)
File "/usr/local/lib/python3.9/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/test/.local/lib/python3.9/site-packages/TTS/tts/models/xtts.py", line 529, in inference
text_tokens = torch.IntTensor(self.tokenizer.encode(text, lang=language)).unsqueeze(0).to(self.device)
File "/home/test/.local/lib/python3.9/site-packages/TTS/tts/layers/xtts/tokenizer.py", line 274, in encode
if self.preprocess:
AttributeError: 'VoiceBpeTokenizer' object has no attribute 'preprocess'
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
Python version = 3.10.3
TTS = 0.17.5
Pytorch version = latest
transformers and numpy versions = latest
Ram available is 16gb ram
GPU is gtx 1050 ti which has 4gb Vram
It's an intel i7 7th gen laptop (I'm running this as CPU only on a kali Linux machine in virtualbox with 14gb ram. Your_TTS runs very well on this same machine.
```
### Additional context
_No response_ | closed | 2023-09-26T16:29:01Z | 2023-09-28T09:55:33Z | https://github.com/coqui-ai/TTS/issues/3000 | [
"bug"
] | Lenos500 | 3 |
sunscrapers/djoser | rest-api | 646 | Is there a way to customize HTTP Response Content (More Specifically, Code)? | For Password Reset, `POST http://localhost:8000/auth/users/reset_password/`,
1. First of all, the response header doesn't seem to tell me whether user is already registered or not. Of course the password reset link only gets send to the ones who are registered, but is there some other way to automatically tell if user is registered beforehand?
2. One idea for 1 is to return a customized HTTP response, or at least change the HTTP Code. So far, regardless of whether user is registered, the response code is `204`, the response headers are always the same, and there's no response body. Can we customized this?
Thanks!
--------
Edit:
It looks like, in `djoser/djoser/views.py`, line 236-247:
```py
@action(["post"], detail=False)
def reset_password(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
user = serializer.get_user()
if user:
context = {"user": user}
to = [get_user_email(user)]
settings.EMAIL.password_reset(self.request, context).send(to)
return Response(status=status.HTTP_204_NO_CONTENT)
```
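For illustration, one hedged way to change this behaviour (a sketch assuming djoser 2.x with DRF, routed in place of djoser's default URLs; the 404 status is only an example and does reveal whether an email is registered) is to subclass `UserViewSet` and override the action:
```python
from djoser.views import UserViewSet
from rest_framework import status
from rest_framework.decorators import action
from rest_framework.response import Response

class CustomUserViewSet(UserViewSet):
    @action(["post"], detail=False)
    def reset_password(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        if not serializer.get_user():
            # caution: a distinct status reveals registration state (user enumeration)
            return Response(status=status.HTTP_404_NOT_FOUND)
        # fall back to djoser's normal behaviour for registered users
        return super().reset_password(request, *args, **kwargs)
```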
The function would return the same response regardless of whether the user exists... is there a way to customize/override this? (Or would it be acceptable to add another return statement under the if statement with a different status, as in the sketch above?) | closed | 2022-01-08T00:38:44Z | 2022-01-12T22:52:44Z | https://github.com/sunscrapers/djoser/issues/646 | [] | anonymousDog12 | 4 |
gevent/gevent | asyncio | 1,434 | AttributeError: '_wrefsocket' object has no attribute 'read' | * gevent version: 0.4.15
* Python version: Python 3.6.4
* Operating System: Windows 10 Home x64
### Description:
I'm trying to listen on a socket created from the socket class in gevent.socket. This works fine with small payloads, yet breaks for large ones, which would at first lead me to believe this is an error on my side. However, the traceback is incredibly bizarre, at least to me.
```
Connection Made!
Traceback (most recent call last):
File "src\gevent\greenlet.py", line 766, in gevent._greenlet.Greenlet.run
File "C:\Users\User\PycharmProjects\zoey\zoey\chain.py", line 112, in collect_frames
frame = Frame.load(self.client.socket)
File "C:\Users\User\PycharmProjects\zoey\zoey\framing.py", line 70, in load
code_header = unpack_from("!B", stream)[0]
File "C:\Users\User\PycharmProjects\zoey\zoey\framing.py", line 15, in unpack_from
data = stream.read(size)
File "C:\Users\User\PycharmProjects\zoey\venv\lib\site-packages\gevent\_socket3.py", line 128, in __getattr__
return getattr(self._sock, name)
AttributeError: '_wrefsocket' object has no attribute 'read'
2019-06-30T03:32:08Z <Greenlet at 0x443dd78: <bound method WSConstructor.collect_frames of <zoey.chain.WSConstructor object at 0x03A9F7B0>>> failed with AttributeError
```
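The last frames show `Frame.load()` being handed the raw socket and `unpack_from()` calling `.read()` on it; plain sockets expose `recv()` and `makefile()`, not `read()`, so one possible bridge (a sketch, assuming blocking reads are acceptable here) is a file-like wrapper:
```python
# hypothetical adaptation of the calling code in chain.py
stream = self.client.socket.makefile("rb")  # file-like object exposing .read(n)
frame = Frame.load(stream)                  # unpack_from() can now call stream.read(size)
```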
### What I've run:
Here is a static URL to the repository with the code I've run. I run the `test.py` file in the root directory. The socket is created in `zoey/client.py` file but is used in the `zoey/framing.py`.
https://github.com/Zwork101/Zoey/tree/bc0d5d927e4306b4d54fdf6cb0d868293849bf45
Any help would be greatly appreciated. I tried to look for previous similar issue but couldn't find anything, maybe I didn't look hard enough. | closed | 2019-06-30T03:35:56Z | 2019-06-30T12:38:43Z | https://github.com/gevent/gevent/issues/1434 | [
"Type: Question"
] | Zwork101 | 2 |
qubvel-org/segmentation_models.pytorch | computer-vision | 138 | Related paper? | Do you have any research paper written on this library? I want to write a paper where my experiments are based on this library, but I don't know how to cite your models. | closed | 2020-02-03T07:42:20Z | 2020-02-06T15:24:42Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/138 | [
"question"
] | mobassir94 | 1 |
openapi-generators/openapi-python-client | rest-api | 1,030 | Duplicate key in Enum when using both lowercase and uppercase strings | **Describe the bug**
enums are case-sensitive in OpenAPI spec[[1](https://stackoverflow.com/questions/60772786/case-insensitive-string-parameter-in-schema-of-openapi)] but the openapi generator does not treat them that way. When I try to use both lowercase and uppercase values in enum like so:
```yaml
components:
schemas:
DocumentType:
type: string
enum: [txt, TXT]
```
I get this error:
```
Traceback (most recent call last):
File "/Users/xxx/.local/bin/openapi-python-client", line 8, in <module>
sys.exit(app())
^^^^^
File "/Users/xxx/.local/pipx/venvs/openapi-python-client/lib/python3.12/site-packages/openapi_python_client/cli.py", line 175, in update
errors = update_existing_client(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/pipx/venvs/openapi-python-client/lib/python3.12/site-packages/openapi_python_client/__init__.py", line 336, in update_existing_client
project = _get_project_for_url_or_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/pipx/venvs/openapi-python-client/lib/python3.12/site-packages/openapi_python_client/__init__.py", line 295, in _get_project_for_url_or_path
openapi = GeneratorData.from_dict(data_dict, config=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/pipx/venvs/openapi-python-client/lib/python3.12/site-packages/openapi_python_client/parser/openapi.py", line 503, in from_dict
schemas = build_schemas(components=openapi.components.schemas, schemas=schemas, config=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/pipx/venvs/openapi-python-client/lib/python3.12/site-packages/openapi_python_client/parser/properties/__init__.py", line 386, in build_schemas
schemas = _create_schemas(components=components, schemas=schemas, config=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/pipx/venvs/openapi-python-client/lib/python3.12/site-packages/openapi_python_client/parser/properties/__init__.py", line 307, in _create_schemas
schemas_or_err = update_schemas_with_data(ref_path=ref_path, data=data, schemas=schemas, config=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/pipx/venvs/openapi-python-client/lib/python3.12/site-packages/openapi_python_client/parser/properties/schemas.py", line 114, in update_schemas_with_data
prop, schemas = property_from_data(
^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/pipx/venvs/openapi-python-client/lib/python3.12/site-packages/openapi_python_client/parser/properties/__init__.py", line 178, in property_from_data
return EnumProperty.build(
^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/pipx/venvs/openapi-python-client/lib/python3.12/site-packages/openapi_python_client/parser/properties/enum_property.py", line 124, in build
values = EnumProperty.values_from_list(value_list)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.local/pipx/venvs/openapi-python-client/lib/python3.12/site-packages/openapi_python_client/parser/properties/enum_property.py", line 203, in values_from_list
raise ValueError(f"Duplicate key {key} in Enum")
ValueError: Duplicate key TXT in Enum
```
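For context, the collision appears to happen because enum member names are upper-cased before being used as keys, so `txt` and `TXT` map to the same name; one possible de-duplication strategy (a sketch only, not the project's actual code) would suffix repeated keys:
```python
# illustrative only: derive a unique Python enum member name per enum value
def unique_enum_key(value: str, seen: set[str]) -> str:
    key = value.upper().replace("-", "_") or "VALUE"
    while key in seen:
        key += "_"          # "txt" -> "TXT", then "TXT" -> "TXT_"
    seen.add(key)
    return key
```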
**OpenAPI Spec File**
see above
**Desktop (please complete the following information):**
- OS: macOS 15.x
- Python Version: 3.12
- openapi-python-client version 0.19.1
| open | 2024-04-18T18:13:02Z | 2024-04-18T18:13:02Z | https://github.com/openapi-generators/openapi-python-client/issues/1030 | [] | siddhsql | 0 |
pytest-dev/pytest-cov | pytest | 14 | TypeError: %d format: a number is required, not NoneType | I get this exception.
Versions:
pytest==2.5.2
pytest-cov==1.7.0
Python 2.7.3
Please tell me if you need further information to solve this.
Thank you
```
Traceback (most recent call last):
File "/localhome/foo_vums_dtg/bin/py.test", line 9, in <module>
load_entry_point('pytest==2.5.2', 'console_scripts', 'py.test')()
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/config.py", line 20, in main
return config.hook.pytest_cmdline_main(config=config)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/core.py", line 377, in __call__
return self._docall(methods, kwargs)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/core.py", line 388, in _docall
res = mc.execute()
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/core.py", line 289, in execute
res = method(**kwargs)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/main.py", line 112, in pytest_cmdline_main
return wrap_session(config, _main)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/main.py", line 105, in wrap_session
exitstatus=session.exitstatus)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/core.py", line 377, in __call__
return self._docall(methods, kwargs)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/core.py", line 388, in _docall
res = mc.execute()
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/core.py", line 289, in execute
res = method(**kwargs)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/terminal.py", line 338, in pytest_sessionfinish
self.config.hook.pytest_terminal_summary(terminalreporter=self)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/core.py", line 377, in __call__
return self._docall(methods, kwargs)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/core.py", line 388, in _docall
res = mc.execute()
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/_pytest/core.py", line 289, in execute
res = method(**kwargs)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/pytest_cov.py", line 130, in pytest_terminal_summary
self.cov_controller.summary(terminalreporter._tw)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/cov_core.py", line 166, in summary
CovController.summary(self, stream)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/cov_core.py", line 123, in summary
self.cov.html_report(ignore_errors=True)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/coverage/control.py", line 662, in html_report
return reporter.report(morfs)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/coverage/html.py", line 113, in report
self.report_files(self.html_file, morfs, self.config.html_dir)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/coverage/report.py", line 84, in report_files
report_fn(cu, self.coverage._analyze(cu))
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/coverage/control.py", line 592, in _analyze
return Analysis(self, it)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/coverage/results.py", line 24, in __init__
self.statements, self.excluded = self.parser.parse_source()
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/coverage/parser.py", line 210, in parse_source
self._raw_parse()
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/coverage/parser.py", line 167, in _raw_parse
self.statement_starts.update(self.byte_parser._find_statements())
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/coverage/parser.py", line 73, in _get_byte_parser
ByteParser(text=self.text, filename=self.filename)
File "/localhome/foo_vums_dtg/local/lib/python2.7/site-packages/coverage/parser.py", line 354, in __init__
(filename, synerr.msg, synerr.lineno)
TypeError: %d format: a number is required, not NoneType
foo_vums_dtg@berry:~$
```
| closed | 2014-07-14T09:43:36Z | 2014-08-15T12:47:56Z | https://github.com/pytest-dev/pytest-cov/issues/14 | [
"invalid"
] | guettli | 2 |
albumentations-team/albumentations | deep-learning | 2,059 | [Documentation] Doc on mapping from torchaudio to Albumentations | Was told that noone is using Albumentation in the audio community, although all transforms from torchaudio exist in Albumentations, although may have different names.
Need document / blog post. | open | 2024-11-05T22:47:38Z | 2024-11-05T22:54:54Z | https://github.com/albumentations-team/albumentations/issues/2059 | [
"good first issue",
"documentation"
] | ternaus | 0 |
widgetti/solara | jupyter | 1,004 | Having problem with `use_change` | I am trying to build a python console on the web using Solara but having problems with implementing `use_change` correctly.
Here is my python file:
```python
from typing import Callable, List, Tuple, cast
import ipyvue
import solara
import sys
import io
import code
from solara.components.input import use_change
class Interpreter(code.InteractiveInterpreter):
def __init__(self):
super().__init__()
self.output_buffer = io.StringIO()
def run_code(self, command: str) -> str:
"""Execute code and capture output including errors."""
if not command.strip():
return ""
sys.stdout = self.output_buffer
sys.stderr = self.output_buffer
try:
result = self.runsource(command)
output = self.output_buffer.getvalue()
return output.strip() if output else ""
except Exception as e:
error_output = self.output_buffer.getvalue()
if error_output:
return error_output
return f"{type(e).__name__}: {str(e)}"
finally:
sys.stdout = sys.__stdout__
sys.stderr = sys.__stderr__
self.output_buffer.truncate(0)
self.output_buffer.seek(0)
class ConsoleHistory:
def __init__(self):
self.history: List[Tuple[str, str]] = []
def add_entry(self, command: str, output: str) -> None:
self.history.append((f">>> {command}", output))
def clear(self) -> None:
self.history.clear()
def get_entries(self) -> List[Tuple[str, str]]:
return self.history
class OutputFormatter:
@staticmethod
def format_error_output(output: str) -> str:
"""Clean up error output to display only the relevant error message."""
if not output:
return ""
error_lines = output.strip().splitlines()
if len(error_lines) >= 1:
for line in reversed(error_lines):
if "line" in line and "File" in line:
continue
if ": " in line:
return line.strip()
return output
@staticmethod
def format_entry(command: str, result: str) -> str:
"""Format a single console entry for display."""
escaped_result = result.replace("<", "<").replace(">", ">")
is_error = any(err in result for err in [
"Error", "Exception", "TypeError",
"ValueError", "NameError", "ZeroDivisionError"
])
if result:
return f"""
<div style="margin: 0px 0 0 0;">
<div style="background-color: #f5f5f5; padding: 6px 8px; border-radius: 4px; font-family: 'Consolas', monospace; font-size: 0.9em;">
<span style="color: #2196F3;">{">>> "}</span><span>{command.removeprefix(">>> ")}</span>
</div>
<div style="background-color: #ffffff; padding: 6px 8px; border-left: 3px solid {'#ff3860' if is_error else '#2196F3'}; margin-top: 2px; font-family: 'Consolas', monospace; font-size: 0.9em; {'color: #ff3860;' if is_error else ''}">
{escaped_result}
</div>
</div>
"""
else:
return f"""
<div style="margin: 0px 0 0 0;">
<div style="background-color: #f5f5f5; padding: 6px 8px; border-radius: 4px; font-family: 'Consolas', monospace; font-size: 0.9em;">
<span style="color: #2196F3;">{">>> "}</span><span>{command.removeprefix(">>> ")}</span>
</div>
</div>
"""
class ConsoleManager:
def __init__(self):
self.interpreter = Interpreter()
self.history = ConsoleHistory()
self.formatter = OutputFormatter()
def execute_code(self, input_text: str, set_input_text: Callable) -> None:
"""Execute code and update history with cleaned output."""
if input_text.strip():
output = self.interpreter.run_code(input_text)
cleaned_output = self.formatter.format_error_output(output)
if "Traceback" in cleaned_output:
cleaned_output = cleaned_output.splitlines()[-1]
self.history.add_entry(input_text, f"Error ({cleaned_output})")
set_input_text("")
def clear_console(self) -> None:
"""Clear the console history."""
self.history.clear()
console_manager = ConsoleManager()
@solara.component
def ConsoleSidebar():
input_text, set_input_text = solara.use_state("")
_, set_refresh = solara.use_state(0)
with solara.Sidebar():
solara.Markdown("## Console")
with solara.Column(style={
"height": "300px",
"overflow-y": "auto",
"gap": "0px",
"box-shadow": "inset 0 0 10px rgba(0,0,0,0.1)",
"border": "3px solid #e0e0e0",
"border-radius": "6px",
"padding": "8px"
}):
for cmd, result in console_manager.history.get_entries():
solara.Markdown(console_manager.formatter.format_entry(cmd, result))
input_element = solara.v.TextField(
v_model=input_text,
on_v_model=set_input_text,
flat=True,
style_="font-family: monospace;",
label=">>>",
outlined=True,
placeholder="Enter Python code...",
attributes={"spellcheck": "false"},
)
use_change(input_element, console_manager.execute_code(input_text, set_input_text), update_events=["keyup.enter"])
with solara.Row():
solara.Button(
"Run",
on_click=lambda: console_manager.execute_code(input_text, set_input_text),
size="small"
)
solara.Button(
"Clear",
on_click=lambda: [console_manager.clear_console(), set_refresh(lambda x: x + 1)],
size="small"
)
@solara.component
def Page():
ConsoleSidebar()
solara.Markdown("# Main Content")
Page()
```
Error Message:
```python
Traceback (most recent call last):
File "C:\MASTER-FOLDER\GitHub\mesa-task\env\Lib\site-packages\reacton\core.py", line 1900, in _reconsolidate
effect()
~~~~~~^^
File "C:\MASTER-FOLDER\GitHub\mesa-task\env\Lib\site-packages\reacton\core.py", line 1131, in __call__
self._cleanup = self.callable()
~~~~~~~~~~~~~^^
File "C:\MASTER-FOLDER\GitHub\mesa-task\env\Lib\site-packages\solara\components\input.py", line 24, in add_events
widget = cast(ipyvue.VueWidget, solara.get_widget(el))
~~~~~~~~~~~~~~~~~^^^^
File "C:\MASTER-FOLDER\GitHub\mesa-task\env\Lib\site-packages\reacton\core.py", line 766, in get_widget
raise KeyError(f"Element {el} not found in all known widgets") # for the component {context.widgets}")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: "Element ipyvuetify.TextField(v_model = '', on_v_model = <function ...E77A728E0>, flat = True, style_ = 'font-family: monospace;', label = '>>>', outlined = True, placeholder = 'Enter Python code...', attributes = {'spellcheck': 'false'}) not found in all known widgets"
```
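One detail that stands out (it may or may not be the root cause of the `get_widget` error): the second argument to `use_change` is the immediate result of calling `console_manager.execute_code(...)`, i.e. `None`, rather than a callback. A hedged correction of just that call, reusing the names from the code above, would be:
```python
use_change(
    input_element,
    # pass a callback; the previous code invoked execute_code immediately
    lambda *_args: console_manager.execute_code(input_text, set_input_text),
    update_events=["keyup.enter"],
)
```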
Any kind of help is appreciated! | closed | 2025-02-15T08:17:58Z | 2025-03-17T22:32:15Z | https://github.com/widgetti/solara/issues/1004 | [] | Sahil-Chhoker | 2 |
matplotlib/matplotlib | data-science | 29,131 | [Bug]: Automated test failing | ### Bug summary
One of the tests in the automated suite has been failing during PRs since this morning: https://github.com/matplotlib/matplotlib/actions/workflows/tests.yml
Tests #25844 and on
### Code for reproduction
```Python
=========================== short test summary info ============================
FAILED lib/matplotlib/tests/test_backends_interactive.py::test_interactive_backend[toolmanager-MPLBACKEND=wxagg-BACKEND_DEPS=wx] - Failed: Subprocess failed to test intended behavior
<frozen _collections_abc>:982: UserWarning: Treat the new Tool classes introduced in v1.5 as experimental for now; the API and rcParam may change in future versions.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/runner/work/matplotlib/matplotlib/lib/matplotlib/tests/test_backends_interactive.py", line 232, in _test_interactive_impl
assert result.getvalue() == result_after.getvalue()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
```
### Actual outcome
=========================== short test summary info ============================
FAILED lib/matplotlib/tests/test_backends_interactive.py::test_interactive_backend[toolmanager-MPLBACKEND=wxagg-BACKEND_DEPS=wx] - Failed: Subprocess failed to test intended behavior
<frozen _collections_abc>:982: UserWarning: Treat the new Tool classes introduced in v1.5 as experimental for now; the API and rcParam may change in future versions.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/runner/work/matplotlib/matplotlib/lib/matplotlib/tests/test_backends_interactive.py", line 232, in _test_interactive_impl
assert result.getvalue() == result_after.getvalue()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
### Expected outcome
No hard failures.
### Additional information
https://github.com/matplotlib/matplotlib/actions/workflows/tests.yml
### Operating system
_No response_
### Matplotlib Version
GitHub Repo / Dev Version
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
git checkout | closed | 2024-11-13T01:57:33Z | 2024-11-13T02:24:39Z | https://github.com/matplotlib/matplotlib/issues/29131 | [
"status: duplicate"
] | NGWi | 2 |
d2l-ai/d2l-en | computer-vision | 1,722 | Batch Normalization with batch size of 1. | In 7.5.1: "Note that if we tried to apply batch normalization with minibatches of size 1, we would not be able to learn anything. That is because after subtracting the means, each hidden unit would take value 0!".
I think a hidden unit wouldn't necessarily take the value 0, since we compute means and variances per channel (axis-wise) and subtract them elementwise.
Minimal example:
`x = torch.tensor([[[[50.], [5.]]]])  # shape (1, 1, 2, 1)`
`means = x.mean(dim=(0, 2, 3)) # [27.5000]`
`x - means # [[[[ 22.5000], [-22.5000]]]]` | open | 2021-04-14T22:00:40Z | 2021-04-14T22:00:40Z | https://github.com/d2l-ai/d2l-en/issues/1722 | [] | bsuleymanov | 0 |
deepfakes/faceswap | deep-learning | 582 | ERROR :Caught exception in child process: 14128 | GUI Extract error
### GUI log
Loading...
01/08/2019 21:48:29 INFO Log level set to: INFO
01/08/2019 21:48:31 INFO Output Directory: F:\Python\faceswap-master\output
01/08/2019 21:48:31 INFO Input Video: F:\Python\faceswap-master\input\1.mp4
01/08/2019 21:48:31 INFO Loading Detect from Mtcnn plugin...
01/08/2019 21:48:31 INFO Loading Align from Fan plugin...
01/08/2019 21:48:31 INFO NB: Parallel processing disabled.You may get faster extraction speeds by enabling it with the -mp switch
01/08/2019 21:48:31 INFO Starting, this may take a while...
01/08/2019 21:48:32 INFO Initializing MTCNN Detector...
**01/08/2019 21:48:32 ERROR Caught exception in child process: 14128**
01/08/2019 21:49:31 INFO Waiting for Detector... Time out in 4 minutes
01/08/2019 21:50:31 INFO Waiting for Detector... Time out in 3 minutes
01/08/2019 21:51:31 INFO Waiting for Detector... Time out in 2 minutes
01/08/2019 21:52:31 INFO Waiting for Detector... Time out in 1 minutes
### crash_report
01/08/2019 21:48:32 Detector.run MainThread mtcnn initialize INFO Initializing MTCNN Detector...
01/08/2019 21:48:32 Detector.run MainThread _base run ERROR Caught exception in child process: 14128
01/08/2019 21:49:31 MainProcess MainThread extract launch_detector INFO Waiting for Detector... Time out in 4 minutes
01/08/2019 21:50:31 MainProcess MainThread extract launch_detector INFO Waiting for Detector... Time out in 3 minutes
01/08/2019 21:51:31 MainProcess MainThread extract launch_detector INFO Waiting for Detector... Time out in 2 minutes
01/08/2019 21:52:31 MainProcess MainThread extract launch_detector INFO Waiting for Detector... Time out in 1 minutes
Traceback (most recent call last):
File "F:\Python\faceswap-master\lib\cli.py", line 90, in execute_script
process.process()
File "F:\Python\faceswap-master\scripts\extract.py", line 49, in process
self.run_extraction()
File "F:\Python\faceswap-master\scripts\extract.py", line 143, in run_extraction
self.run_detection(to_process)
File "F:\Python\faceswap-master\scripts\extract.py", line 194, in run_detection
self.plugins.launch_detector()
File "F:\Python\faceswap-master\scripts\extract.py", line 379, in launch_detector
raise ValueError("Error initializing Detector")
ValueError: Error initializing Detector
============ System Information ============
git_branch: Not Found
git_commits: Not Found
gpu_cuda: 9.0
gpu_cudnn: 7.4.2
gpu_devices: GPU_0: GeForce GTX 750
gpu_driver: 417.22
gpu_vram: GPU_0: 1024MB
os_machine: AMD64
os_platform: Windows-10-10.0.17134-SP0
os_release: 10
py_command: F:\Python\faceswap-master\faceswap.py extract -i F:/Python/faceswap-master/input/1.mp4 -o F:/Python/faceswap-master/output -l 0.6 --serializer json -D mtcnn -A fan -mtms 20 -mtth 0.6 0.7 0.7 -mtsc 0.709 -sz 256 -L INFO
py_conda_version: N/A
py_implementation: CPython
py_version: 3.6.6
py_virtual_env: False
sys_cores: 4
sys_processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
sys_ram: Total: 8129MB, Available: 3269MB, Used: 4860MB, Free: 3269MB
-------------------------------
| closed | 2019-01-08T14:19:09Z | 2019-01-11T07:49:28Z | https://github.com/deepfakes/faceswap/issues/582 | [] | dream80 | 3 |
miguelgrinberg/Flask-SocketIO | flask | 800 | ws http |
Excuse me, does Flask-SocketIO support the ws:// protocol? | closed | 2018-09-25T07:25:08Z | 2018-09-30T02:19:29Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/800 | [
"question"
] | zhangatao | 9 |
mljar/mercury | data-visualization | 466 | importError DLL (cryptography) when running demos | I have python v3.9.
I installed Mercury (using pip) on Windows10, on a dedicated env.
When running demo examples, I have an importError (cryptography python module) message.
Could you please help what's wrong ?

| open | 2024-09-15T12:43:24Z | 2024-10-15T14:04:38Z | https://github.com/mljar/mercury/issues/466 | [] | yvanblanchard | 4 |
davidsandberg/facenet | tensorflow | 532 | how to update the model to recognize the 3D face like Apple faceid | how to update the model to recognize the 3D face like Apple faceid? thanks | closed | 2017-11-16T03:09:21Z | 2018-04-01T21:10:46Z | https://github.com/davidsandberg/facenet/issues/532 | [] | xiaochongs | 0 |
miguelgrinberg/Flask-Migrate | flask | 233 | stuck: cannot migrate, upgrade or downgrade etc | After making a minor change to my models (added last_seen column), running flask db migrate was not working. After some googling I found a couple people who said deleting the alembic_version table from their db helped, so I tried that.
It didn't work, and now when I try to run **flask db migrate** I receive the following:
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
ERROR [alembic.env] Target database is not up to date.
When I try **flask db upgrade** I receive the following error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "user" already exists
[SQL: '\nCREATE TABLE "user" (\n\tid SERIAL NOT NULL, \n\tusername VARCHAR(40), \n\temail VARCHAR(120), \n\tpassword_hash VARCHAR(128), \n\tPRIMARY KEY (id)\n)\n\n'] (Background on this error at: http://sqlalche.me/e/f405)
I've also tried upgrading/downgrading to specific versions, but I also receive errors.
When I try to run **flask db current** I receive no info like so:
$ flask db current
....../__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
My guess is that it has to do with the fact that I dropped the alembic_version table.
I couuuuuuld drop my db and start fresh as I'm in development, but if there is another fix that would be ideal .
Cheers, | closed | 2018-10-11T05:42:05Z | 2022-05-16T18:22:07Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/233 | [] | aherzfeld | 4 |
lanpa/tensorboardX | numpy | 113 | RuntimeError: getTracingState: Assertion `var_state == state` failed. | Hi @lanpa , thanks for this amazing tool. I'm trying to use add_graph in my own project, where I met some problems.
My version is pytorch==0.3.1 and tensorboard==1.6.0.
Here is the error message:
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-8cbfabcfa5a0> in <module>()
1 from tensorboardX import SummaryWriter
2 writer = SummaryWriter()
----> 3 writer.add_graph(model, (inputs,), verbose=True)
~/anaconda3/lib/python3.6/site-packages/tensorboardX/writer.py in add_graph(self, model, input_to_model, verbose)
398 print('add_graph() only supports PyTorch v0.2.')
399 return
--> 400 self.file_writer.add_graph(graph(model, input_to_model, verbose))
401
402 def add_embedding(self, mat, metadata=None, label_img=None, global_step=None, tag='default'):
~/anaconda3/lib/python3.6/site-packages/tensorboardX/graph.py in graph(model, args, verbose)
50 import torch
51 with torch.onnx.set_training(model, False):
---> 52 trace, _ = torch.jit.trace(model, args)
53 if LooseVersion(torch.__version__) >= LooseVersion("0.4"):
54 torch.onnx._optimize_trace(trace, False)
~/anaconda3/lib/python3.6/site-packages/torch/jit/__init__.py in trace(f, args, kwargs, nderivs)
239 if not isinstance(args, tuple):
240 args = (args,)
--> 241 return TracedModule(f, nderivs=nderivs)(*args, **kwargs)
242
243
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
353 hook(self, input)
354 if torch.jit._tracing:
--> 355 result = self._slow_forward(*input, **kwargs)
356 else:
357 result = self.forward(*input, **kwargs)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in _slow_forward(self, *input, **kwargs)
331 def _slow_forward(self, *input, **kwargs):
332 input_vars = tuple(torch.autograd.function._iter_variables(input))
--> 333 tracing_state = torch.jit.get_tracing_state(input_vars)
334 if not tracing_state:
335 return self.forward(*input, **kwargs)
~/anaconda3/lib/python3.6/site-packages/torch/jit/__init__.py in get_tracing_state(args)
41 if not torch._C._is_tracing(args):
42 return None
---> 43 return torch._C._get_tracing_state(args)
44
45
RuntimeError: /opt/conda/conda-bld/pytorch_1518243271935/work/torch/csrc/jit/tracer.h:105: getTracingState: Assertion `var_state == state` failed.
``` | closed | 2018-03-25T08:56:34Z | 2018-05-08T17:16:32Z | https://github.com/lanpa/tensorboardX/issues/113 | [
"onnx"
] | Xeaver | 1 |
plotly/dash-bio | dash | 385 | Sequence Viewer app doesn't save selection/coverage data due to dcc.Loading component | To fix: Remove the wrapping `dcc.Loading` from the `SequenceViewer` component in `app_sequence_viewer.py`. | closed | 2019-07-03T15:53:41Z | 2019-07-12T14:12:25Z | https://github.com/plotly/dash-bio/issues/385 | [] | shammamah-zz | 0 |
miguelgrinberg/microblog | flask | 214 | Chapter 4: Database (v0.4) - How to convert SQL result into JSON format variable? | Hi Miguel, I copied the Chapter 4 zip file and added some code in `/microblog-0.4/app/routes.py` file.
https://github.com/miguelgrinberg/microblog/archive/v0.4.zip
How can I convert my SQL result **userz99** `User.query.all()` into a JSON format variable?
**routes.py**
```
from flask import render_template, flash, redirect, url_for
from app import app
from app.forms import LoginForm
from app.models import User
import json
@app.route('/')
@app.route('/index')
def index():
user = {'username': 'Miguel'}
posts = [
{
'author': {'username': 'John'},
'body': 'Beautiful day in Portland!'
},
{
'author': {'username': 'Susan'},
'body': 'The Avengers movie was so cool!'
}
]
#...............................................................................
userz99 = User.query.all()
print("\n * userz99")
print(userz99)
print("")
#...............................................................................
return render_template('index.html', title='Home', user=user, posts=posts)
```
| closed | 2020-03-06T19:50:47Z | 2020-03-30T13:22:38Z | https://github.com/miguelgrinberg/microblog/issues/214 | [
"question"
] | mrbiggleswirth | 5 |
OpenBB-finance/OpenBB | python | 6,903 | [🕹️] Copilot for Terminal Code Side-QUuest | ### What side quest or challenge are you solving?
Copilot for Terminal
### Points
300 - 750
### Description
Create a custom copilot that integrates a new language model (e.g., Cohere, Llama3.2, etc.) into OpenBB's Terminal.
### Provide proof that you've completed the task
... | closed | 2024-10-28T15:21:45Z | 2024-10-30T20:54:33Z | https://github.com/OpenBB-finance/OpenBB/issues/6903 | [] | FloatinggOnion | 7 |
OFA-Sys/Chinese-CLIP | computer-vision | 75 | ModuleNotFoundError: No module named 'torch._C._distributed_rpc'; 'torch._C' is not a package | Can the fine-tuning code be run on Windows? When debugging on Windows I hit this torch error; is it because torch differs between Linux and Windows? | closed | 2023-03-26T07:15:45Z | 2023-06-04T09:20:06Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/75 | [] | yourfathermyson | 1 |
tartiflette/tartiflette | graphql | 284 | GraphiQL JS error with tartiflette 1.0RC1 | I've been testing the tartiflette RC1 (https://github.com/tartiflette/tartiflette/pull/272) with tartiflette-aiohttp and noticed that GraphiQL crashes with this error:
```
GraphQLError: Syntax Error: Expected <EOF>, found Name "longer"
at syntaxError (https://cdn.jsdelivr.net/npm/graphiql@0.12.0/graphiql.js:23522:10)
at expect (https://cdn.jsdelivr.net/npm/graphiql@0.12.0/graphiql.js:28513:32)
at parseValue (https://cdn.jsdelivr.net/npm/graphiql@0.12.0/graphiql.js:27279:3)
at buildInputValue (https://cdn.jsdelivr.net/npm/graphiql@0.12.0/graphiql.js:33676:118)
at https://cdn.jsdelivr.net/npm/graphiql@0.12.0/graphiql.js:25955:31
at Array.reduce (<anonymous>)
at keyValMap (https://cdn.jsdelivr.net/npm/graphiql@0.12.0/graphiql.js:25954:15)
at buildInputValueDefMap (https://cdn.jsdelivr.net/npm/graphiql@0.12.0/graphiql.js:33669:36)
at buildDirective (https://cdn.jsdelivr.net/npm/graphiql@0.12.0/graphiql.js:33696:13)
at Array.map (<anonymous>)
``` | closed | 2019-09-03T08:09:16Z | 2019-09-11T14:51:26Z | https://github.com/tartiflette/tartiflette/issues/284 | [
"bug"
] | aljinovic | 2 |
timkpaine/lantern | plotly | 170 | can use other library for emails? | https://github.com/lavr/python-emails | closed | 2018-07-25T19:09:10Z | 2018-08-07T14:13:40Z | https://github.com/timkpaine/lantern/issues/170 | [
"feature",
"question"
] | timkpaine | 1 |
donnemartin/data-science-ipython-notebooks | scikit-learn | 11 | Add simplified Spark installation instructions from the repo: https://github.com/donnemartin/dev-setup | Mac users can benefit from a much simplified installation method thanks to Homebrew.
| closed | 2015-07-21T11:39:23Z | 2015-08-20T10:47:44Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/11 | [
"enhancement"
] | donnemartin | 1 |
google-deepmind/graph_nets | tensorflow | 144 | Performance issue in /graph_nets/tests (by P3) | Hello! I've found a performance issue in /graph_nets/tests/utils_tf_test.py: `with tf.Session() as sess`[(here)](https://github.com/deepmind/graph_nets/blob/64771dff0d74ca8e77b1f1dcd5a7d26634356d61/graph_nets/tests/utils_tf_test.py#L587) is repeatedly called in the loop `for graph_dict in self.graphs_dicts_in`[(here)](https://github.com/deepmind/graph_nets/blob/64771dff0d74ca8e77b1f1dcd5a7d26634356d61/graph_nets/tests/utils_tf_test.py#L582).
`tf.Session` being defined repeatedly could lead to incremental overhead. If you define `tf.Session` out of the loop and pass `tf.Session` as a parameter to the loop, your program would be much more efficient. Here is [the Stack Overflow post](https://stackoverflow.com/questions/48051647/tensorflow-how-to-perform-image-categorisation-on-multiple-images) to support it.
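For illustration, the suggested restructuring might look like this (a sketch; the loop names are taken from the linked test file):
```python
with tf.Session() as sess:                    # create the session once
    for graph_dict in self.graphs_dicts_in:   # reuse it on every iteration
        ...                                   # run the per-graph ops with sess.run(...)
```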
Looking forward to your reply. Btw, I am very glad to create a PR to fix it if you are too busy. | closed | 2021-08-25T10:27:10Z | 2021-12-14T11:06:36Z | https://github.com/google-deepmind/graph_nets/issues/144 | [] | DLPerf | 2 |
gradio-app/gradio | python | 10,066 | login to server failed: tls: failed to verify certificate: x509: certificate has expired or is not yet valid | ### Describe the bug
Since yesterday, Gradio has not been launching properly. It keeps printing the error below and displays an unexpected UI.
```
Running Gradio in a Colab notebook requires sharing enabled. Automatically setting share=True (you can turn this off by setting share=False in launch() explicitly).
Colab notebook detected. To show errors in colab notebook, set debug=True in launch()
```

```
Could not create share link. Please check your internet connection or our status page: https://status.gradio.app/.
2024/11/28 12:45:01 [W] [service.go:132] login to server failed: tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2024-11-28T12:45:01Z is after 2024-11-28T06:24:31Z
Running on https://localhost:7860/ (proxied at https://obk54oqrggb-496ff2e9c6d22116-7860-colab.googleusercontent.com/)
```
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as demo:
gr.Markdown("# Hello World")
demo.launch()
```
### Screenshot

### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.7.0
gradio_client version: 1.5.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.5.0 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.11
packaging: 24.2
pandas: 2.2.2
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.8.0
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit==0.12.0 is not installed.
typer: 0.13.0
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
Blocking usage of gradio | closed | 2024-11-28T12:52:56Z | 2024-11-28T14:14:39Z | https://github.com/gradio-app/gradio/issues/10066 | [
"bug"
] | hasanshahid5678 | 2 |
betodealmeida/shillelagh | sqlalchemy | 194 | Don't log warning for "Couldn't load adapter" if adapter isn't specified | Currently I get a bunch of warnings like:
```
Couldn't load adapter datasetteapi = shillelagh.adapters.api.datasette:DatasetteAPI
```
even though I am explicitly passing in a list of adapters and not specifying that adapter.
These warnings should be printed only if the adapter is in the `adapters` list, or if no list was given at all. The current logging call is at the permalink below; a rough sketch of the guard I have in mind follows (names are guesses, not the real variables):
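```python
import logging

_logger = logging.getLogger(__name__)

def warn_if_relevant(adapter_name, load_error, adapters=None):
    """Hypothetical sketch: only warn when the user asked for this adapter
    (or did not restrict the adapter list at all)."""
    if adapters is None or adapter_name in adapters:
        _logger.warning("Couldn't load adapter %s (%s)", adapter_name, load_error)
```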
https://github.com/betodealmeida/shillelagh/blob/a427de0b2d1ac27402d70b8a2ae69468f1f3dcad/src/shillelagh/backends/apsw/db.py#L510-L511 | closed | 2022-03-10T19:06:13Z | 2022-03-10T23:27:26Z | https://github.com/betodealmeida/shillelagh/issues/194 | [
"bug",
"help wanted",
"good first issue"
] | cancan101 | 1 |
python-gino/gino | asyncio | 350 | Rollback nested transactions | * Gino 0.7.5:
* Python 3.6:
* asyncpg version:
* asyncpg 0.17.0:
* PostgreSQL version:
I have a TestCase; in this test case, I want to roll back everything after each test.
For that, I start a manual transaction in setUpAsync
```
self.conn = await db.acquire()
self.trans = await self.conn.transaction()
```
and then roll back the transaction and release the connection in tearDownAsync
```
await self.trans.rollback()
await self.conn.release()
```
But when, as a first step, I run
```
await User.create(name='name')
```
and then, in another piece of code,
```
User.query.where(User.name == 'name')
```
this query returns None.
I think it's because the ORM always creates a new connection.
If I don't start the transaction, everything is OK.
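One direction I'm considering (only a sketch; I'm not sure these are the right keyword arguments and methods) is to run every statement explicitly on the connection acquired in setUpAsync instead of letting the ORM grab its own:
```python
# Sketch only -- assumes create() accepts a bind and the acquired connection can execute queries.
user = await User.create(name='name', bind=self.conn)
row = await self.conn.first(User.query.where(User.name == 'name'))
```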
Can I use global rollback and ORM together? | closed | 2018-09-27T14:58:35Z | 2018-10-23T09:20:33Z | https://github.com/python-gino/gino/issues/350 | [
"question"
] | Deniallugo | 3 |
vimalloc/flask-jwt-extended | flask | 44 | Accessing get_jwt_identity() in another decorator. | Hey, really like the library. Very useful!
I want to use the identity from the JWT in another decorator.
Currently I have (details left out, but you get the gist):
```
@app.route('/api/...', methods=['GET'])
@jwt_required
@service_supported(str(get_jwt_identity()), "SERVICE")
def method1():
pass
```
In this case the get_jwt_identity() returns {}.
When placed inside method1(), I get the correct result.
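One workaround sketch I'm considering (presumably the decorator argument is evaluated at import time, before any request/JWT context exists, so the identity lookup has to move inside the wrapper):
```python
from functools import wraps
from flask_jwt_extended import get_jwt_identity

def service_supported(service):  # hypothetical decorator, mirroring the snippet above
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            identity = str(get_jwt_identity())  # runs per request, after @jwt_required
            # ... check here that `identity` is allowed to use `service` ...
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```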
Any ideas? | closed | 2017-05-20T12:46:49Z | 2017-05-20T17:41:23Z | https://github.com/vimalloc/flask-jwt-extended/issues/44 | [] | genie137 | 2 |
freqtrade/freqtrade | python | 11,012 | Use same data with multiple bots | * Operating system: Kubuntu
* Python Version: 3.12.3 (`python -V`)
* CCXT version: 4.3.68
* Freqtrade Version: 2024.8-dev
## Your question
I have multiple bots running on one PC. The config file is identical and the strategies are different, but they use the same timeframes.
My question is:
Can bot 2 reuse the same data that bot 1 already fetched, instead of downloading it again from the exchange, and can the other instances do the same?
This would reduce bandwidth consumption and lower the pressure on the network, because sometimes there are more than 5 bots running at the same time, and there will probably be more.
Thank you very much. | closed | 2024-12-01T23:10:47Z | 2024-12-02T15:01:32Z | https://github.com/freqtrade/freqtrade/issues/11012 | [
"Question"
] | Mohammad699 | 7 |
TencentARC/GFPGAN | deep-learning | 277 | How to generate eye_mouth_landmarks for new training data | FFHQ contains relatively few images with glasses, so I want to add some glasses-wearing images for training. How should I generate the eye_mouth_landmarks for those images? | open | 2022-09-30T09:29:02Z | 2022-10-08T07:21:29Z | https://github.com/TencentARC/GFPGAN/issues/277 | [] | nnmaitian | 1 |
lexiforest/curl_cffi | web-scraping | 77 | Content-type header | There's a bug with the `content-type` header: it is not overridden when you add it to your headers, but duplicated instead, unlike, for example, the `user-agent` header. I don't know about other content types; I only tested it with `requests.AsyncSession` and `content-type: application/json` | closed | 2023-07-08T11:34:08Z | 2023-11-02T11:40:48Z | https://github.com/lexiforest/curl_cffi/issues/77 | [] | mafuyuuu1 | 9 |
plotly/dash | plotly | 2,718 | [BUG] use Patch to append children, but init ui disappears |

the example code:
```py
from dash import Dash, html, Input, Output, Patch, callback
def init_ui():
ui = html.Div([
"init ui"
])
return ui
def add_ui():
ui = html.Div([
"add ui"
])
return ui
app = Dash(__name__)
app.layout = html.Div([
html.Button("Add element", id="dynamic-add-filter-btn", n_clicks=0),
html.Div(id='dynamic-dropdown-container-div', children=[]),
])
@callback(
Output('dynamic-dropdown-container-div', 'children'),
Input('dynamic-add-filter-btn', 'n_clicks')
)
def display_dropdowns(n_clicks):
patched_children = Patch()
if n_clicks ==0:
return init_ui()
else:
new_element = add_ui()
patched_children.append(new_element)
return patched_children
if __name__ == '__main__':
app.run(debug=True)
``` | closed | 2023-12-24T13:08:35Z | 2023-12-25T09:49:45Z | https://github.com/plotly/dash/issues/2718 | [] | Liripo | 2 |
Gerapy/Gerapy | django | 101 | Creating new tasks from configuration | I installed the latest Gerapy via git clone, but it seems that creating new tasks through the configuration does not work. | closed | 2019-03-12T05:42:44Z | 2019-11-20T19:46:24Z | https://github.com/Gerapy/Gerapy/issues/101 | [] | whyfunction | 1 |
rio-labs/rio | data-visualization | 189 | Verify That `project-files` in `rio.toml` Works as Intended | `rio.toml` allows specifying which files are part of the project, and which ones aren't. This is used for change detection / reloading. I've frequently seen Rio not reload even when it should, though I don't have a specific case available. Play around with this and see if it works as intended.
For example, does it reload when an asset changes? | open | 2024-12-06T21:18:46Z | 2024-12-06T21:18:47Z | https://github.com/rio-labs/rio/issues/189 | [
"bug"
] | mad-moo | 0 |
healthchecks/healthchecks | django | 350 | TimeZone | Morning,
Maybe I'm being slightly dense but my timezone is now an hour out and it's causing my notifications to continually trigger.
I'm in the London BST timezone using docker to run the Healthchecks container.
How can I update my timezone, or set up my system so that the alerts are not triggered when the timezones of the container and the running check do not line up?
Thanks in advance :) | closed | 2020-04-01T07:17:24Z | 2020-04-06T09:18:20Z | https://github.com/healthchecks/healthchecks/issues/350 | [] | Rustymage | 9 |
mwaskom/seaborn | pandas | 3,627 | Performance Issue: Seaborn Lineplot Execution Time Discrepancy with and without Timezones | **Issue Description:**
Hello. I encountered a notable performance difference when using Seaborn's `lineplot` function to visualize time series data, particularly when comparing plots with and without timezones.
**Code and Observation:**
```python
import numpy as np
import pandas as pd
import seaborn as sns
n = 1_000_000  # number of rows (placeholder; the exact value is not essential)
data = np.random.randn(n)
# Prepare DataFrames
dates_no_tz = pd.date_range('2019-01-01', periods=n, freq='T')
dates_with_tz = pd.date_range('2019-01-01', periods=n, freq='T', tz='UTC')
df_no_tz = pd.DataFrame({'Time': dates_no_tz, 'Value': data})
df_with_tz = pd.DataFrame({'Time': dates_with_tz, 'Value': data})
# Plot Time Series without timezone using Seaborn
%time sns.lineplot(x='Time', y='Value', data=df_no_tz).set_title('No Timezone')
# Plot Time Series with timezone using Seaborn
%time sns.lineplot(x='Time', y='Value', data=df_with_tz).set_title('With UTC Timezone')
# Output of the two %time calls above:
# CPU times: user 782 ms, sys: 101 ms, total: 884 ms
# Wall time: 898 ms
# CPU times: user 6.93 s, sys: 437 ms, total: 7.37 s
# Wall time: 6.89 s
```
This represents approximately a 10-fold performance difference. Would you kindly consider conducting an analysis?
Thank you!
**Additional Info**
matplotlib == 3.8.0
seaborn == 0.12.2
pandas == 2.1.4 | closed | 2024-01-30T06:45:52Z | 2024-02-10T19:38:29Z | https://github.com/mwaskom/seaborn/issues/3627 | [] | HarryCollins2 | 5 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,228 | Information about output on visdom? | Hello, I am a beginner with CycleGAN.
I am training CycleGAN on my own dataset.
At runtime I see that there are 8 pictures on visdom.
I would like to ask what rec_A, idt_B and rec_B, idt_A mean, respectively?
This is the command I used:
`python train.py --dataroot ./datasets/fishboat --name fishboat_cyclegan --model cycle_gan`
Thank you for your attention and answers | closed | 2021-01-20T12:29:23Z | 2021-01-27T05:42:43Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1228 | [] | yenai3726 | 2 |
graphql-python/graphql-core | graphql | 106 | GraphQLError double wrapping returning result | I found out something confusing and wanted to know if maybe this could be a bug or if I'm missing something and this is an expected behavior.
This is my `example.py` file. It only has one resolver that raises an error.
```python
from graphql import (GraphQLSchema, GraphQLObjectType, GraphQLField, GraphQLString, graphql_sync, )
def resolve_fail(*args, **kwargs):
raise ValueError("Some error")
schema = GraphQLSchema(
query=GraphQLObjectType(
name='RootQueryType',
fields={
'hello': GraphQLField(
GraphQLString,
resolve=resolve_fail)
}))
query = '{ hello }'
result = graphql_sync(schema, query)
```
I'm aware that `result.errors[0]` is a `GraphQLError` exception. But, I was expecting `result.errors[0].original_error` to be `ValueError`.
However, I can see that `result.errors[0].original_error` is a `GraphQLError` and `result.errors[0].original_error.original_error` is `ValueError`.
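For now, continuing from `example.py` above, I am unwrapping it manually with a small loop (just a sketch based on the attributes shown here):
```python
from graphql import GraphQLError

err = result.errors[0]
while isinstance(err, GraphQLError) and err.original_error is not None:
    err = err.original_error
# err is now the innermost exception, e.g. the ValueError raised by the resolver
```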
**Is this ok?**
```python
>>> print(type(result.errors[0]))
<class 'graphql.error.graphql_error.GraphQLError'>
>>> print(type(result.errors[0].original_error))
<class 'graphql.error.graphql_error.GraphQLError'>
>>> print(type(result.errors[0].original_error.original_error))
<class 'ValueError'>
``` | closed | 2020-09-04T15:14:41Z | 2021-02-08T19:44:59Z | https://github.com/graphql-python/graphql-core/issues/106 | [
"bug"
] | Checho3388 | 6 |
HIT-SCIR/ltp | nlp | 678 | Error after adding words | With ltp 4.2.0, an error occurs in a special case after using add_words. My intuition is that after adding a word (such as 'abc'), feeding in a token like 'xabc' triggers this problem:
Input
```
ltp.add_words(['800000股'])
ltp.pipeline(['3800000股'], tasks=["cws", "pos"])
```
The error message is a KeyError
```
Traceback (most recent call last)
Cell In[57], line 1
----> 1 ltp.pipeline(['3800000股'], tasks=["cws", "pos"])
File D:\software\anaconda\envs\EDEE\lib\site-packages\ltp\nerual.py:24, in no_grad.<locals>.wrapper(*args, **kwargs)
22 def wrapper(*args, **kwargs):
23 with torch.no_grad():
---> 24 return func(*args, **kwargs)
File D:\software\anaconda\envs\EDEE\lib\site-packages\ltp\nerual.py:185, in LTP.pipeline(self, inputs, tasks, raw_format, return_dict)
183 cache[cache_key] = (hidden_state, attention_mask)
184 result = self.model.task_heads[task](hidden_state, attention_mask)
--> 185 store[task] = self.post[task](result, hidden, store, inputs, tokenized)
187 if not raw_format:
188 if is_split_into_words:
File D:\software\anaconda\envs\EDEE\lib\site-packages\ltp\nerual.py:24, in no_grad.<locals>.wrapper(*args, **kwargs)
22 def wrapper(*args, **kwargs):
23 with torch.no_grad():
---> 24 return func(*args, **kwargs)
File D:\software\anaconda\envs\EDEE\lib\site-packages\ltp\nerual.py:293, in LTP._cws_post(self, result, hidden, store, inputs, tokenized)
291 for i, e in enumerate(word_end):
292 if i == 0:
--> 293 entities[-1].append((0, length2index[e]))
294 else:
295 entities[-1].append((length2index[word_end[i - 1]] + 1, length2index[e]))
KeyError: 1
```
| open | 2023-11-15T03:32:37Z | 2023-11-15T03:32:37Z | https://github.com/HIT-SCIR/ltp/issues/678 | [] | Jing-XING | 0 |
shibing624/text2vec | nlp | 155 | Does it support ollama? | - [ ] I checked to make sure that this is not a duplicate issue
### Describe the solution you'd like
Deployment currently does not support ollama, and deploying it is rather difficult | open | 2024-10-11T05:18:22Z | 2024-10-12T06:42:43Z | https://github.com/shibing624/text2vec/issues/155 | [
"enhancement"
] | smileyboy2019 | 1 |
huggingface/datasets | deep-learning | 7,142 | Specifying datatype when adding a column to a dataset. | ### Feature request
There should be a way to specify the datatype of a column in `datasets.add_column()`.
### Motivation
To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desired type to the `datasets.add_column()` function.
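For reference, the current two-step workaround looks roughly like this (the column name, values and dtype are just placeholders):
```python
from datasets import Dataset, Value

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
scores = [1, 2, 3]

ds = ds.add_column("score", scores)              # dtype is inferred here (int64)
ds = ds.cast_column("score", Value("float32"))   # second pass just to fix the dtype
print(ds.features)
```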
IMO this functionality should be natively supported.
https://discuss.huggingface.co/t/add-column-with-a-particular-type-in-datasets/95674
### Your contribution
I can submit a PR for this. | closed | 2024-09-08T07:34:24Z | 2024-09-17T03:46:32Z | https://github.com/huggingface/datasets/issues/7142 | [
"enhancement"
] | varadhbhatnagar | 1 |
python-gitlab/python-gitlab | api | 3,001 | Support for Related Issues in Python-GitLab Merge Requests API | ## Description of the problem, including code/CLI snippet
According to the [Merge requests API | GitLab](https://docs.gitlab.com/ee/api/merge_requests.html#list-issues-related-to-the-merge-request) documentation, GitLab supports listing the issues related to a merge request. However, I confirmed that the [Merge requests - python-gitlab v4.11.1](https://python-gitlab.readthedocs.io/en/v4.11.1/gl_objects/merge_requests.html) documentation does not mention this feature. Is there any plan to support it in the future?
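A possible interim workaround (only a sketch; the URL, token, project ID and MR IID are placeholders) would be to call the REST endpoint through the generic HTTP helper:
```python
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="glpat-...")
# No dedicated python-gitlab helper for this yet, so hit the documented REST path directly:
related = gl.http_list("/projects/123/merge_requests/45/related_issues")
for issue in related:
    print(issue["iid"], issue["title"])
```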
## Specifications
- python-gitlab version: 4.11.1
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): 17.3
| closed | 2024-09-30T03:58:16Z | 2024-09-30T05:21:46Z | https://github.com/python-gitlab/python-gitlab/issues/3001 | [] | kkc-tonywu | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 644 | ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed. | I have installed all of the requirements. I have installed the VS community extensions, but I don't know what the issue is here. TensorFlow is 1.15.
```
Traceback (most recent call last):
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "demo_cli.py", line 4, in <module>
from synthesizer.inference import Synthesizer
File "C:\Users\iRazur\Desktop\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\synthesizer\inference.py", line 1, in <module>
from synthesizer.tacotron2 import Tacotron2
File "C:\Users\iRazur\Desktop\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\synthesizer\tacotron2.py", line 3, in <module>
from synthesizer.models import create_model
File "C:\Users\iRazur\Desktop\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\synthesizer\models\__init__.py", line 1, in <module>
from .tacotron import Tacotron
File "C:\Users\iRazur\Desktop\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py", line 1, in <module>
import tensorflow as tf
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow\__init__.py", line 99, in <module>
from tensorflow_core import *
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow_core\__init__.py", line 28, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
module = self._load()
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow\__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow_core\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\iRazur\miniconda3\envs\voice-clone\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
``` | closed | 2021-01-31T16:53:14Z | 2021-02-14T16:44:10Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/644 | [] | Fly-sudo | 3 |
uxlfoundation/scikit-learn-intelex | scikit-learn | 2,321 | Deprecation warnings when using patch_sklearn() |
**Describe the bug**
When enabling Intel optimizations via `patch_sklearn()` from scikit-learn-intelex, several `FutureWarning` messages are printed indicating that `'force_all_finite' was renamed to 'ensure_all_finite' in 1.6 and will be removed in 1.8`. These warnings do not appear when the patch is commented out, suggesting an unintended side effect of the extension.
**To Reproduce**
Steps to reproduce the behavior:
1. Install the following packages:
- scikit-learn 1.6.1
- scikit-learn-intelex 2025.1.0
2. Create a Python script with the following code:
```python
import platform
import sys
from sklearnex import patch_sklearn
patch_sklearn()  # Enable Intel optimizations for scikit-learn
from sklearn.svm import SVC
from sklearn.datasets import make_classification
# Create data and train an SVC model
X, y = make_classification(n_samples=100, n_features=10, random_state=42)
clf = SVC()
clf.fit(X, y)
# Print system details
print("### System Info ###")
print(f"Python Version: {sys.version}")
print(f"Platform: {platform.platform()}")
print(f"Processor: {platform.processor()}")
```
3. Run the script.
4. Observe that the output includes the Intel extension banner and multiple `FutureWarning` messages regarding the `force_all_finite` parameter.
5. Comment out the `patch_sklearn()` call and re-run the script to see that the warnings are not present.
**Expected behavior**
Enabling scikit-learn-intelex via `patch_sklearn()` should not trigger deprecation warnings from scikit-learn. The extension should seamlessly integrate Intel optimizations without surfacing warnings related to parameter renaming.
**Output/Screenshots**
With `patch_sklearn()` enabled:
```
Intel(R) Extension for Scikit-learn* enabled (https://github.com/intel/scikit-learn-intelex)
c:\Users\david\anaconda3\envs\mfs_1\Lib\site-packages\sklearn\utils\deprecation.py:151: FutureWarning: 'force_all_finite' was renamed to 'ensure_all_finite' in 1.6 and will be removed in 1.8.
  warnings.warn(
... (similar warnings repeated)
### System Info ###
Python Version: 3.13.2 | packaged by conda-forge | (main, Feb 17 2025, 13:52:56) [MSC v.1942 64 bit (AMD64)]
Platform: Windows-11-10.0.22000-SP0
Processor: Intel64 Family 6 Model 151 Stepping 2, GenuineIntel
```
With `patch_sklearn()` commented out:
No warnings are produced, only the system information is printed.
**Environment:**
- OS: Windows 11 (Version: Windows-11-10.0.22000-SP0)
- Python: 3.13.2 (packaged by conda-forge)
- scikit-learn: 1.6.1
- scikit-learn-intelex: 2025.1.0
- Processor: Intel64 Family 6 Model 151 Stepping 2, GenuineIntel
| open | 2025-02-19T08:28:49Z | 2025-02-21T09:06:40Z | https://github.com/uxlfoundation/scikit-learn-intelex/issues/2321 | [] | DavidCohen2 | 1 |
PaddlePaddle/PaddleHub | nlp | 2,313 | Enable Private Vulnerability Reporting in GitHub |
In your repository, we have found a bug that may require your attention. We do not want to disclose the details. Therefore, we request you to enable private vulnerability reporting in your repository.
### Sponsorship and Support:
This work is done by the security researchers from OpenRefactory and is supported by the [Open Source Security Foundation (OpenSSF)](https://openssf.org/): [Project Alpha-Omega](https://alpha-omega.dev/). Alpha-Omega is a project partnering with open source software project maintainers to systematically find new, as-yet-undiscovered vulnerabilities in open source code - and get them fixed - to improve global software supply chain security.
The bug is found by running the iCR tool by [OpenRefactory, Inc.](https://openrefactory.com/) and then manually triaging the results.
| closed | 2023-11-16T10:48:54Z | 2024-03-04T03:48:02Z | https://github.com/PaddlePaddle/PaddleHub/issues/2313 | [] | ZuhairORZaki | 0 |
serengil/deepface | machine-learning | 755 | Automatic download of shape_predictor_5_landmarks.dat file wouldn't work -> My solution | Apparently, the code that automatically downloads the shape_predictor_5_landmarks.dat file wouldn't work.
It was always stuck at `"shape_predictor_5_landmarks.dat" is going to be downloaded`. I ran the corresponding code separately and it said something about "content-type", but I could not figure out what it was about.
Finally, I manually downloaded the file, put it in the `.deepface/weights` folder in my home directory on my MacBook, and it worked. 👍 | closed | 2023-05-15T17:56:43Z | 2023-05-15T18:00:49Z | https://github.com/serengil/deepface/issues/755 | [
"dependencies"
] | moerv9 | 1 |
automl/auto-sklearn | scikit-learn | 1,197 | [Request] Allow portfolio and selector models to be set through hyperparameters in ASKL2 | As per the title, it will be useful for ASKL2 to have a configurable `portfolio` and `policy selector`.
It's beneficial for research (avoiding 'cheating' through meta-learning in a benchmark) or for customization.
Issue opened at the request of @mfeurer | open | 2021-07-30T13:37:21Z | 2022-10-11T15:55:16Z | https://github.com/automl/auto-sklearn/issues/1197 | [
"enhancement"
] | PGijsbers | 4 |
aleju/imgaug | machine-learning | 52 | Unexpected determinism | Hi, I've got the following code:
```python
import numpy as np
from PIL import Image
from imgaug import augmenters as iaa

def augment(im, y):
im_arr = np.array(im)
# See documentation for details regarding transformations: https://github.com/aleju/imgaug
fliplr_rate = 0.5
angle = 10
additive, contrast_norm = (45, 0.1)
gaussian_noise, dropout = (0.05, 0.01)
shear, shift = (2, 20)
aug_img_only = iaa.Sequential([
iaa.Sometimes(0.5, iaa.OneOf([
iaa.Add((-additive, additive)),
iaa.ContrastNormalization((1 - contrast_norm, 1 + contrast_norm))
])),
iaa.Sometimes(0.5, iaa.OneOf([
iaa.AdditiveGaussianNoise(scale=gaussian_noise * 255, per_channel=True),
iaa.Dropout(dropout)
]))
])
aug_img_mask = iaa.Sequential([
iaa.Fliplr(fliplr_rate),
iaa.Affine(rotate=(-angle, angle)),
iaa.Sometimes(0.5, iaa.Affine(
shear=(-shear, shear),
translate_px={'x': (-shift, shift), 'y': (-shift, shift)})
)
])
aug_img_only.reseed()
aug_img_only_det, aug_img_mask_det = aug_img_only.to_deterministic(), aug_img_mask.to_deterministic()
im_arr = aug_img_only_det.augment_images([im_arr])[0]
im_arr = aug_img_mask_det.augment_images([im_arr])[0]
y = aug_img_mask_det.augment_images([y])[0]
im = Image.fromarray(im_arr)
return im, y
```
I've got a ML system which has input images and known masks of areas of interest, which I later want to predict. I want to augment the images and the masks in the same way for some transformations, and apply other transformations (such as dropout, etc.) only to the original image.
Here, in the code, `im` is the original image in PIL object format, `im_arr` is the original image transformed to numpy array, and `y` is the mask numpy array.
Now, every time I run this code, for example 5 times with the same picture and mask, I get the same 5 augmentations. Meaning that the first picture comes out the same every time, and so does the second, and so on.
Just to clarify, here is the code I use to run it:
```
for i in range(5):
im = Image.open('image.jpg')
y = np.load('mask.npy')
im, y = augment(im, y)
```
Why would this behavior happen? I reinstantiate the augmenters every time the function is called (as can be seen in the code), and only after the reinstantiation do I call to_deterministic().
What am I missing?
Thanks in advance! | closed | 2017-08-06T15:52:04Z | 2017-08-07T09:02:39Z | https://github.com/aleju/imgaug/issues/52 | [] | itai-icx | 4 |
Ehco1996/django-sspanel | django | 364 | Add issue template | closed | 2020-08-02T12:11:05Z | 2020-08-02T23:24:19Z | https://github.com/Ehco1996/django-sspanel/issues/364 | [] | Ehco1996 | 0 |
AirtestProject/Airtest | automation | 855 | Getting iOS element locations is very slow after updating | Xcode 11.4, Airtest 1.2.6. Fetching elements on the iOS phone is extremely slow and it keeps reconnecting. Is this a version mismatch on my side? With the previous Xcode 9 and ios-tagent, before updating, this situation did not occur.

| closed | 2021-01-20T08:44:58Z | 2021-02-21T03:52:42Z | https://github.com/AirtestProject/Airtest/issues/855 | [] | zuiqingfengyang | 2 |
yezz123/authx | pydantic | 254 | MongoDBBackend has no attribute client | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to AuthX but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to AuthX but to [FastAPI](https://github.com/tiangolo/fastapi).
### Example Code
```python
from authx import Authentication, MongoDBBackend
import motor.motor_asyncio
import asyncio
auth = Authentication(
backend=MongoDBBackend(
client=motor.motor_asyncio.AsyncIOMotorClient(
'mongodb://localhost:27017',
io_loop=asyncio.get_event_loop()
),
database='authx',
collection='users'
)
)
```
### Description
This should ideally create an auth object that can be used to include routers. Instead this gives an error
```
backend=MongoDBBackend(
TypeError: __init__() got an unexpected keyword argument 'client'
```
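Based on the class source quoted under Additional Context below, it looks like the client has to be attached after construction. A sketch of what I think the intended call sequence is (I'm not sure this is how it was meant to be used):
```python
import motor.motor_asyncio
from authx import Authentication, MongoDBBackend

backend = MongoDBBackend(database_name="authx")
backend.set_client(motor.motor_asyncio.AsyncIOMotorClient("mongodb://localhost:27017"))
auth = Authentication(backend=backend)
```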
### Operating System
Windows
### Operating System Details
_No response_
### FastAPI Version
0.77.1
### Python Version
Python 3.9.0
### Additional Context
This problem arises because the MongoDBBackend class does not accept any parameters other than database_name:
```
class MongoDBBackend(BaseDBBackend):
"""
Setup Database for authx using MongoDB & Motor
"""
def __init__(self, database_name: str = "test") -> None:
self._database_name = database_name
def set_client(self, client: AsyncIOMotorClient) -> None:
self._client = client
self.init()
def init(self) -> None:
self._db: AsyncIOMotorDatabase = self._client[self._database_name]
self._users: AsyncIOMotorCollection = self._db["users"]
self._email_confirmations: AsyncIOMotorCollection = self._db[
"email_confirmations"
]
self._counters: AsyncIOMotorCollection = self._db["counters"]
self._settings: AsyncIOMotorCollection = self._db["settings"]
``` | closed | 2022-07-09T05:44:40Z | 2022-09-09T15:50:43Z | https://github.com/yezz123/authx/issues/254 | [
"bug",
"question"
] | YogeshUpdhyay | 1 |
plotly/plotly.py | plotly | 4,475 | Shape labels missing/not showing | I have used Plotly 5.18.0 to create this Gantt chart to which I have added two rectangle shapes:

Both rectangles come with labels that are not being displayed, no matter how hard I try. `fig["layout"]["shapes"]` looks like this:
```python
[{'fillcolor': 'LightSalmon',
'label': {'text': 'G/g', 'textposition': 'top left'},
'layer': 'below',
'line': {'width': 0},
'opacity': 0.5,
'type': 'rect',
'x0': -7.0,
'x1': 1.0,
'y0': -0.5,
'y1': 9.5},
{'label': {'font': {'color': 'black', 'size': 20},
'text': 'Keys of G or g',
'textposition': 'top left'},
'showlegend': True,
'type': 'rect',
'x0': -4,
'x1': -2,
'y0': 3,
'y1': 1}]
```
I've spent a lot of time playing around with the options from [the manual](https://plotly.com/python/shapes/), but Plotly will not display any of the desired labels. Am I doing something wrong, or is it a bug?
On a side note, since it might point to the solution of the problem, the small black rectangle is set to `showlegend=True` but it does not appear in the legend.
<details><summary>Code to reproduce</summary>
<p>
```python
import plotly.graph_objects as go
figure = {'data': [{'fill': 'toself',
'fillcolor': 'rgb(103, 232, 249)',
'hoverinfo': 'name',
'legendgroup': 'rgb(103, 232, 249)',
'mode': 'none',
'name': '2',
'x': [-1.0,
-0.0,
-0.0,
-1.0,
-1.0,
-7.0,
-6.0,
-6.0,
-7.0,
-7.0,
-2.0,
-1.0,
-1.0,
-2.0,
-2.0,
-4.0,
-3.0,
-3.0,
-4.0,
-4.0,
-6.0,
-5.0,
-5.0,
-6.0],
'y': [5.8,
5.8,
6.2,
6.2,
None,
5.8,
5.8,
6.2,
6.2,
None,
5.8,
5.8,
6.2,
6.2,
None,
5.8,
5.8,
6.2,
6.2,
None,
5.8,
5.8,
6.2,
6.2],
'type': 'scatter'},
{'fill': 'toself',
'fillcolor': 'rgb(120, 113, 108)',
'hoverinfo': 'name',
'legendgroup': 'rgb(120, 113, 108)',
'mode': 'none',
'name': 'b7 (7)',
'x': [-7.0, -6.0, -6.0, -7.0],
'y': [1.8, 1.8, 2.2, 2.2],
'type': 'scatter'},
{'fill': 'toself',
'fillcolor': 'rgb(134, 25, 143)',
'hoverinfo': 'name',
'legendgroup': 'rgb(134, 25, 143)',
'mode': 'none',
'name': 'b3 (3)',
'x': [-3.0,
-2.5,
-2.5,
-3.0,
-3.0,
-5.0,
-4.0,
-4.0,
-5.0,
-5.0,
-2.5,
-2.0,
-2.0,
-2.5],
'y': [0.8,
0.8,
1.2,
1.2,
None,
0.8,
0.8,
1.2,
1.2,
None,
0.8,
0.8,
1.2,
1.2],
'type': 'scatter'},
{'fill': 'toself',
'fillcolor': 'rgb(192, 38, 211)',
'hoverinfo': 'name',
'legendgroup': 'rgb(192, 38, 211)',
'mode': 'none',
'name': '3 (#3)',
'x': [-0.0, 1.0, 1.0, -0.0],
'y': [7.8, 7.8, 8.2, 8.2],
'type': 'scatter'},
{'fill': 'toself',
'fillcolor': 'rgb(239, 68, 68)',
'hoverinfo': 'name',
'legendgroup': 'rgb(239, 68, 68)',
'mode': 'none',
'name': '4',
'x': [-2.0, -1.0, -1.0, -2.0, -2.0, -2.5, -2.0, -2.0, -2.5],
'y': [2.8, 2.8, 3.2, 3.2, None, 2.8, 2.8, 3.2, 3.2],
'type': 'scatter'},
{'fill': 'toself',
'fillcolor': 'rgb(250, 204, 21)',
'hoverinfo': 'name',
'legendgroup': 'rgb(250, 204, 21)',
'mode': 'none',
'name': 'b6 (6)',
'x': [-3.0,
-2.5,
-2.5,
-3.0,
-3.0,
-2.5,
-2.0,
-2.0,
-2.5,
-2.5,
-2.0,
-1.0,
-1.0,
-2.0],
'y': [-0.2,
-0.2,
0.2,
0.2,
None,
-0.2,
-0.2,
0.2,
0.2,
None,
-0.2,
-0.2,
0.2,
0.2],
'type': 'scatter'},
{'fill': 'toself',
'fillcolor': 'rgb(34, 197, 94)',
'hoverinfo': 'name',
'legendgroup': 'rgb(34, 197, 94)',
'mode': 'none',
'name': '1',
'x': [0.0,
0.0,
0.0,
0.0,
0.0,
-5.0,
-4.0,
-4.0,
-5.0,
-5.0,
-0.0,
1.0,
1.0,
-0.0,
-0.0,
-2.5,
-2.0,
-2.0,
-2.5,
-2.5,
-2.0,
-1.0,
-1.0,
-2.0,
-2.0,
-3.0,
-2.5,
-2.5,
-3.0],
'y': [6.8,
6.8,
7.2,
7.2,
None,
3.8,
3.8,
4.2,
4.2,
None,
3.8,
3.8,
4.2,
4.2,
None,
3.8,
3.8,
4.2,
4.2,
None,
3.8,
3.8,
4.2,
4.2,
None,
3.8,
3.8,
4.2,
4.2],
'type': 'scatter'},
{'fill': 'toself',
'fillcolor': 'rgb(37, 99, 235)',
'hoverinfo': 'name',
'legendgroup': 'rgb(37, 99, 235)',
'mode': 'none',
'name': '7 (#7)',
'x': [-4.0,
-3.0,
-3.0,
-4.0,
-4.0,
-6.0,
-5.0,
-5.0,
-6.0,
-6.0,
-1.0,
-0.0,
-0.0,
-1.0],
'y': [8.8,
8.8,
9.2,
9.2,
None,
8.8,
8.8,
9.2,
9.2,
None,
8.8,
8.8,
9.2,
9.2],
'type': 'scatter'},
{'fill': 'toself',
'fillcolor': 'rgb(76, 29, 149)',
'hoverinfo': 'name',
'legendgroup': 'rgb(76, 29, 149)',
'mode': 'none',
'name': '5',
'x': [-6.0,
-5.0,
-5.0,
-6.0,
-6.0,
-5.0,
-4.0,
-4.0,
-5.0,
-5.0,
-0.0,
1.0,
1.0,
-0.0,
-0.0,
-1.0,
-0.0,
-0.0,
-1.0,
-1.0,
-7.0,
-6.0,
-6.0,
-7.0,
-7.0,
-4.0,
-3.0,
-3.0,
-4.0],
'y': [4.8,
4.8,
5.2,
5.2,
None,
4.8,
4.8,
5.2,
5.2,
None,
4.8,
4.8,
5.2,
5.2,
None,
4.8,
4.8,
5.2,
5.2,
None,
4.8,
4.8,
5.2,
5.2,
None,
4.8,
4.8,
5.2,
5.2],
'type': 'scatter'},
{'legendgroup': 'rgb(103, 232, 249)',
'marker': {'color': 'rgb(103, 232, 249)', 'opacity': 0, 'size': 1},
'mode': 'markers',
'name': '',
'showlegend': False,
'text': ['V', 'V', 'v', 'v', 'ii%65', 'ii%65', 'V', 'V', 'V', 'V'],
'x': [-1.0, -0.0, -7.0, -6.0, -2.0, -1.0, -4.0, -3.0, -6.0, -5.0],
'y': [6, 6, 6, 6, 6, 6, 6, 6, 6, 6],
'type': 'scatter'},
{'legendgroup': 'rgb(120, 113, 108)',
'marker': {'color': 'rgb(120, 113, 108)', 'opacity': 0, 'size': 1},
'mode': 'markers',
'name': '',
'showlegend': False,
'text': ['v', 'v'],
'x': [-7.0, -6.0],
'y': [2, 2],
'type': 'scatter'},
{'legendgroup': 'rgb(134, 25, 143)',
'marker': {'color': 'rgb(134, 25, 143)', 'opacity': 0, 'size': 1},
'mode': 'markers',
'name': '',
'showlegend': False,
'text': ['VI', 'VI', 'i', 'i', 'iv65', 'iv65'],
'x': [-3.0, -2.5, -5.0, -4.0, -2.5, -2.0],
'y': [1, 1, 1, 1, 1, 1],
'type': 'scatter'},
{'legendgroup': 'rgb(192, 38, 211)',
'marker': {'color': 'rgb(192, 38, 211)', 'opacity': 0, 'size': 1},
'mode': 'markers',
'name': '',
'showlegend': False,
'text': ['I', 'I'],
'x': [-0.0, 1.0],
'y': [8, 8],
'type': 'scatter'},
{'legendgroup': 'rgb(239, 68, 68)',
'marker': {'color': 'rgb(239, 68, 68)', 'opacity': 0, 'size': 1},
'mode': 'markers',
'name': '',
'showlegend': False,
'text': ['ii%65', 'ii%65', 'iv65', 'iv65'],
'x': [-2.0, -1.0, -2.5, -2.0],
'y': [3, 3, 3, 3],
'type': 'scatter'},
{'legendgroup': 'rgb(250, 204, 21)',
'marker': {'color': 'rgb(250, 204, 21)', 'opacity': 0, 'size': 1},
'mode': 'markers',
'name': '',
'showlegend': False,
'text': ['VI', 'VI', 'iv65', 'iv65', 'ii%65', 'ii%65'],
'x': [-3.0, -2.5, -2.5, -2.0, -2.0, -1.0],
'y': [0, 0, 0, 0, 0, 0],
'type': 'scatter'},
{'legendgroup': 'rgb(34, 197, 94)',
'marker': {'color': 'rgb(34, 197, 94)', 'opacity': 0, 'size': 1},
'mode': 'markers',
'name': '',
'showlegend': False,
'text': ['<NA>',
'<NA>',
'i',
'i',
'I',
'I',
'iv65',
'iv65',
'ii%65',
'ii%65',
'VI',
'VI'],
'x': [0.0, 0.0, -5.0, -4.0, -0.0, 1.0, -2.5, -2.0, -2.0, -1.0, -3.0, -2.5],
'y': [7, 7, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
'type': 'scatter'},
{'legendgroup': 'rgb(37, 99, 235)',
'marker': {'color': 'rgb(37, 99, 235)', 'opacity': 0, 'size': 1},
'mode': 'markers',
'name': '',
'showlegend': False,
'text': ['V', 'V', 'V', 'V', 'V', 'V'],
'x': [-4.0, -3.0, -6.0, -5.0, -1.0, -0.0],
'y': [9, 9, 9, 9, 9, 9],
'type': 'scatter'},
{'legendgroup': 'rgb(76, 29, 149)',
'marker': {'color': 'rgb(76, 29, 149)', 'opacity': 0, 'size': 1},
'mode': 'markers',
'name': '',
'showlegend': False,
'text': ['V', 'V', 'i', 'i', 'I', 'I', 'V', 'V', 'v', 'v', 'V', 'V'],
'x': [-6.0,
-5.0,
-5.0,
-4.0,
-0.0,
1.0,
-1.0,
-0.0,
-7.0,
-6.0,
-4.0,
-3.0],
'y': [5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
'type': 'scatter'}],
'layout': {'height': 600,
'hovermode': 'x unified',
'showlegend': True,
'title': {'text': 'Gantt chart'},
'xaxis': {'rangeselector': {'buttons': [{'count': 7,
'label': '1w',
'step': 'day',
'stepmode': 'backward'},
{'count': 1, 'label': '1m', 'step': 'month', 'stepmode': 'backward'},
{'count': 6, 'label': '6m', 'step': 'month', 'stepmode': 'backward'},
{'count': 1, 'label': 'YTD', 'step': 'year', 'stepmode': 'todate'},
{'count': 1, 'label': '1y', 'step': 'year', 'stepmode': 'backward'},
{'step': 'all'}]},
'showgrid': True,
'zeroline': False},
'yaxis': {'autorange': False,
'range': [-1, 11],
'showgrid': True,
'ticktext': ['Eb', 'Bb', 'F', 'C', 'G', 'D', 'A', 'E', 'B', 'F#'],
'tickvals': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
'zeroline': False},
'template': {'data': {'histogram2dcontour': [{'type': 'histogram2dcontour',
'colorbar': {'outlinewidth': 0, 'ticks': ''},
'colorscale': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'],
[1.0, '#f0f921']]}],
'choropleth': [{'type': 'choropleth',
'colorbar': {'outlinewidth': 0, 'ticks': ''}}],
'histogram2d': [{'type': 'histogram2d',
'colorbar': {'outlinewidth': 0, 'ticks': ''},
'colorscale': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'],
[1.0, '#f0f921']]}],
'heatmap': [{'type': 'heatmap',
'colorbar': {'outlinewidth': 0, 'ticks': ''},
'colorscale': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'],
[1.0, '#f0f921']]}],
'heatmapgl': [{'type': 'heatmapgl',
'colorbar': {'outlinewidth': 0, 'ticks': ''},
'colorscale': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'],
[1.0, '#f0f921']]}],
'contourcarpet': [{'type': 'contourcarpet',
'colorbar': {'outlinewidth': 0, 'ticks': ''}}],
'contour': [{'type': 'contour',
'colorbar': {'outlinewidth': 0, 'ticks': ''},
'colorscale': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'],
[1.0, '#f0f921']]}],
'surface': [{'type': 'surface',
'colorbar': {'outlinewidth': 0, 'ticks': ''},
'colorscale': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'],
[1.0, '#f0f921']]}],
'mesh3d': [{'type': 'mesh3d',
'colorbar': {'outlinewidth': 0, 'ticks': ''}}],
'scatter': [{'fillpattern': {'fillmode': 'overlay',
'size': 10,
'solidity': 0.2},
'type': 'scatter'}],
'parcoords': [{'type': 'parcoords',
'line': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}}],
'scatterpolargl': [{'type': 'scatterpolargl',
'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}}],
'bar': [{'error_x': {'color': '#2a3f5f'},
'error_y': {'color': '#2a3f5f'},
'marker': {'line': {'color': '#E5ECF6', 'width': 0.5},
'pattern': {'fillmode': 'overlay', 'size': 10, 'solidity': 0.2}},
'type': 'bar'}],
'scattergeo': [{'type': 'scattergeo',
'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}}],
'scatterpolar': [{'type': 'scatterpolar',
'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}}],
'histogram': [{'marker': {'pattern': {'fillmode': 'overlay',
'size': 10,
'solidity': 0.2}},
'type': 'histogram'}],
'scattergl': [{'type': 'scattergl',
'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}}],
'scatter3d': [{'type': 'scatter3d',
'line': {'colorbar': {'outlinewidth': 0, 'ticks': ''}},
'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}}],
'scattermapbox': [{'type': 'scattermapbox',
'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}}],
'scatterternary': [{'type': 'scatterternary',
'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}}],
'scattercarpet': [{'type': 'scattercarpet',
'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}}],
'carpet': [{'aaxis': {'endlinecolor': '#2a3f5f',
'gridcolor': 'white',
'linecolor': 'white',
'minorgridcolor': 'white',
'startlinecolor': '#2a3f5f'},
'baxis': {'endlinecolor': '#2a3f5f',
'gridcolor': 'white',
'linecolor': 'white',
'minorgridcolor': 'white',
'startlinecolor': '#2a3f5f'},
'type': 'carpet'}],
'table': [{'cells': {'fill': {'color': '#EBF0F8'},
'line': {'color': 'white'}},
'header': {'fill': {'color': '#C8D4E3'}, 'line': {'color': 'white'}},
'type': 'table'}],
'barpolar': [{'marker': {'line': {'color': '#E5ECF6', 'width': 0.5},
'pattern': {'fillmode': 'overlay', 'size': 10, 'solidity': 0.2}},
'type': 'barpolar'}],
'pie': [{'automargin': True, 'type': 'pie'}]},
'layout': {'autotypenumbers': 'strict',
'colorway': ['#636efa',
'#EF553B',
'#00cc96',
'#ab63fa',
'#FFA15A',
'#19d3f3',
'#FF6692',
'#B6E880',
'#FF97FF',
'#FECB52'],
'font': {'color': '#2a3f5f'},
'hovermode': 'closest',
'hoverlabel': {'align': 'left'},
'paper_bgcolor': 'white',
'plot_bgcolor': '#E5ECF6',
'polar': {'bgcolor': '#E5ECF6',
'angularaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''},
'radialaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}},
'ternary': {'bgcolor': '#E5ECF6',
'aaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''},
'baxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''},
'caxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}},
'coloraxis': {'colorbar': {'outlinewidth': 0, 'ticks': ''}},
'colorscale': {'sequential': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'],
[1.0, '#f0f921']],
'sequentialminus': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'],
[1.0, '#f0f921']],
'diverging': [[0, '#8e0152'],
[0.1, '#c51b7d'],
[0.2, '#de77ae'],
[0.3, '#f1b6da'],
[0.4, '#fde0ef'],
[0.5, '#f7f7f7'],
[0.6, '#e6f5d0'],
[0.7, '#b8e186'],
[0.8, '#7fbc41'],
[0.9, '#4d9221'],
[1, '#276419']]},
'xaxis': {'gridcolor': 'white',
'linecolor': 'white',
'ticks': '',
'title': {'standoff': 15},
'zerolinecolor': 'white',
'automargin': True,
'zerolinewidth': 2},
'yaxis': {'gridcolor': 'white',
'linecolor': 'white',
'ticks': '',
'title': {'standoff': 15},
'zerolinecolor': 'white',
'automargin': True,
'zerolinewidth': 2},
'scene': {'xaxis': {'backgroundcolor': '#E5ECF6',
'gridcolor': 'white',
'linecolor': 'white',
'showbackground': True,
'ticks': '',
'zerolinecolor': 'white',
'gridwidth': 2},
'yaxis': {'backgroundcolor': '#E5ECF6',
'gridcolor': 'white',
'linecolor': 'white',
'showbackground': True,
'ticks': '',
'zerolinecolor': 'white',
'gridwidth': 2},
'zaxis': {'backgroundcolor': '#E5ECF6',
'gridcolor': 'white',
'linecolor': 'white',
'showbackground': True,
'ticks': '',
'zerolinecolor': 'white',
'gridwidth': 2}},
'shapedefaults': {'line': {'color': '#2a3f5f'}},
'annotationdefaults': {'arrowcolor': '#2a3f5f',
'arrowhead': 0,
'arrowwidth': 1},
'geo': {'bgcolor': 'white',
'landcolor': '#E5ECF6',
'subunitcolor': 'white',
'showland': True,
'showlakes': True,
'lakecolor': 'white'},
'title': {'x': 0.05},
'mapbox': {'style': 'light'}}},
'shapes': [{'fillcolor': 'LightSalmon',
'label': {'text': 'G/g', 'textposition': 'top left'},
'layer': 'below',
'line': {'width': 0},
'opacity': 0.5,
'type': 'rect',
'x0': -7.0,
'x1': 1.0,
'y0': -0.5,
'y1': 9.5},
{'label': {'font': {'color': 'black', 'size': 20},
'text': 'Keys of G or g',
'textposition': 'top left'},
'showlegend': True,
'type': 'rect',
'x0': -4,
'x1': -2,
'y0': 3,
'y1': 1}],
'legend': {'traceorder': 'grouped'}}}
fig = go.Figure(figure)
fig.show()
```
</p>
</details> | closed | 2024-01-04T12:53:31Z | 2024-07-11T22:16:38Z | https://github.com/plotly/plotly.py/issues/4475 | [] | johentsch | 4 |
jacobgil/pytorch-grad-cam | computer-vision | 116 | Tuple Index Out of Range | Hello, when I try to run the script below I get an IndexError: tuple index out of range and I am not quite sure why.
```python
from pytorch_grad_cam import GradCAM, ScoreCAM, GradCAMPlusPlus, AblationCAM, XGradCAM, EigenCAM
from pytorch_grad_cam.utils.image import show_cam_on_image
trans=transforms.ToTensor()
model = model
target_layer = model.fc
input_tensor =trans(resize_image(image[14][0]))
input_tensor=input_tensor.unsqueeze(0)
input_tensor=input_tensor.to('cuda')
cam = GradCAM(model=model, target_layer=target_layer, use_cuda='args.use_cuda')
target_category = None
grayscale_cam = cam(input_tensor=input_tensor, target_category=target_category)
grayscale_cam = grayscale_cam[0,:]
visualization = show_cam_on_image(rgb_img, grayscale_cam)
```
**I then get the following error traceback:**
```
IndexError: tuple index out of range
IndexError Traceback (most recent call last)
<ipython-input-7-60b4cb2e59c1> in <module>
22
23 # You can also pass aug_smooth=True and eigen_smooth=True, to apply smoothing.
---> 24 grayscale_cam = cam(input_tensor=input_tensor, target_category=target_category)
25
26 # In this example grayscale_cam has only one image in the batch:
~/anaconda3/lib/python3.7/site-packages/pytorch_grad_cam/base_cam.py in __call__(self, input_tensor, target_category, aug_smooth, eigen_smooth)
127
128 return self.forward(input_tensor,
--> 129 target_category, eigen_smooth)
~/anaconda3/lib/python3.7/site-packages/pytorch_grad_cam/base_cam.py in forward(self, input_tensor, target_category, eigen_smooth)
75
76 cam = self.get_cam_image(input_tensor, target_category,
---> 77 activations, grads, eigen_smooth)
78
79 cam = np.maximum(cam, 0)
~/anaconda3/lib/python3.7/site-packages/pytorch_grad_cam/base_cam.py in get_cam_image(self, input_tensor, target_category, activations, grads, eigen_smooth)
44 grads,
45 eigen_smooth=False):
---> 46 weights = self.get_cam_weights(input_tensor, target_category, activations, grads)
47 weighted_activations = weights[:, :, None, None] * activations
48 if eigen_smooth:
~/anaconda3/lib/python3.7/site-packages/pytorch_grad_cam/grad_cam.py in get_cam_weights(self, input_tensor, target_category, activations, grads)
14 activations,
15 grads):
---> 16 return np.mean(grads, axis=(2, 3))
<__array_function__ internals> in mean(*args, **kwargs)
~/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py in mean(a, axis, dtype, out, keepdims)
3333
3334 return _methods._mean(a, axis=axis, dtype=dtype,
-> 3335 out=out, **kwargs)
3336
3337
~/anaconda3/lib/python3.7/site-packages/numpy/core/_methods.py in _mean(a, axis, dtype, out, keepdims)
136
137 is_float16_result = False
--> 138 rcount = _count_reduce_items(arr, axis)
139 # Make this warning show up first
140 if rcount == 0:
~/anaconda3/lib/python3.7/site-packages/numpy/core/_methods.py in _count_reduce_items(arr, axis)
55 items = 1
56 for ax in axis:
---> 57 items *= arr.shape[ax]
58 return items
59
IndexError: tuple index out of range
```
The image that I am feeding into it is 3,128,128 in dimension and I have added a 4th dimension with tensor.unsqueeze(0) as it would not be fed into the model properly without this pseudo "batch index". I do not understand which tuple it is finding to be out of range. | closed | 2021-07-21T18:51:07Z | 2021-07-23T19:20:58Z | https://github.com/jacobgil/pytorch-grad-cam/issues/116 | [] | juanpabloalfonzo | 2 |
autogluon/autogluon | computer-vision | 4,148 | [BUG] time_limit is displayed wrong in logs | **Describe the bug**
I wanted to run the example from https://auto.gluon.ai/stable/tutorials/tabular/tabular-quick-start.html
with:
- `time_limit = 3600`
- `preset = "best"`
In the logs, `time_limit` is divided by 4 (900s instead of 3600s), which seems to me like either a bug or an unclear message
```
Beginning AutoGluon training ... Time limit = 900s
...
Fitting model: KNeighborsUnif_BAG_L1 ... Training model for up to 599.53s of the 899.47s of remaining time.
..
```
Notes:
- there is a warning about ray not being installed.
- one message displays the time limit correctly: `Sub-fit(s) time limit is: 3600 seconds.`
**Expected behavior**
`time_limit` specified in `fit` should be the same as in logs
**To Reproduce**
Everything done on google colab with newest version:
```
!pip install autogluon.tabular
from autogluon.tabular import TabularDataset, TabularPredictor
data_url = 'https://raw.githubusercontent.com/mli/ag-docs/main/knot_theory/'
train_data = TabularDataset(f'{data_url}train.csv')
label = 'signature'
predictor = TabularPredictor(label=label).fit(train_data,
presets='best',
time_limit=3600)
```
**Screenshots / Logs**
```
No path specified. Models will be saved in: "AutogluonModels/ag-20240427_161329"
Preset alias specified: 'best' maps to 'best_quality'.
Presets specified: ['best']
Setting dynamic_stacking from 'auto' to True. Reason: Enable dynamic_stacking when use_bag_holdout is disabled. (use_bag_holdout=False)
Stack configuration (auto_stack=True): num_stack_levels=1, num_bag_folds=8, num_bag_sets=1
Dynamic stacking is enabled (dynamic_stacking=True). AutoGluon will try to determine whether the input data is affected by stacked overfitting and enable or disable stacking as a consequence.
Detecting stacked overfitting by sub-fitting AutoGluon on the input data. That is, copies of AutoGluon will be sub-fit on subset(s) of the data. Then, the holdout validation data is used to detect stacked overfitting.
Sub-fit(s) time limit is: 3600 seconds.
Starting holdout-based sub-fit for dynamic stacking. Context path is: AutogluonModels/ag-20240427_161329/ds_sub_fit/sub_fit_ho.
/usr/local/lib/python3.10/dist-packages/autogluon/tabular/predictor/predictor.py:1213: UserWarning: Failed to use ray for memory safe fits. Falling back to normal fit. Error: ImportError('ray is required to train folds in parallel for TabularPredictor or HPO for MultiModalPredictor. A quick tip is to install via `pip install ray==2.10.0`')
stacked_overfitting = self._sub_fit_memory_save_wrapper(
Beginning AutoGluon training ... Time limit = 900s
AutoGluon will save models to "AutogluonModels/ag-20240427_161329/ds_sub_fit/sub_fit_ho"
=================== System Info ===================
AutoGluon Version: 1.1.0
Python Version: 3.10.12
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
CPU Count: 2
Memory Avail: 11.15 GB / 12.67 GB (88.0%)
Disk Space Avail: 81.37 GB / 107.72 GB (75.5%)
===================================================
Train Data Rows: 8889
Train Data Columns: 18
Label Column: signature
Problem Type: multiclass
Preprocessing data ...
Warning: Some classes in the training set have fewer than 10 examples. AutoGluon will only keep 9 out of 13 classes for training and will not try to predict the rare classes. To keep more classes, increase the number of datapoints from these rare classes in the training data or reduce label_count_threshold.
Fraction of data from classes with at least 10 examples that will be kept for training models: 0.9983125210934863
Train Data Class Count: 9
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 11422.74 MB
Train Data (Original) Memory Usage: 1.22 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Note: Converting 5 features to boolean dtype as they only contain 2 unique values.
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Stage 5 Generators:
Fitting DropDuplicatesFeatureGenerator...
Useless Original Features (Count: 1): ['Symmetry_D8']
These features carry no predictive signal and should be manually investigated.
This is typically a feature which has the same value for all rows.
These features do not need to be present at inference time.
Types of features in original data (raw dtype, special dtypes):
('float', []) : 14 | ['chern_simons', 'cusp_volume', 'injectivity_radius', 'longitudinal_translation', 'meridinal_translation_imag', ...]
('int', []) : 3 | ['Unnamed: 0', 'hyperbolic_adjoint_torsion_degree', 'hyperbolic_torsion_degree']
Types of features in processed data (raw dtype, special dtypes):
('float', []) : 9 | ['chern_simons', 'cusp_volume', 'injectivity_radius', 'longitudinal_translation', 'meridinal_translation_imag', ...]
('int', []) : 3 | ['Unnamed: 0', 'hyperbolic_adjoint_torsion_degree', 'hyperbolic_torsion_degree']
('int', ['bool']) : 5 | ['Symmetry_0', 'Symmetry_D3', 'Symmetry_D4', 'Symmetry_D6', 'Symmetry_Z/2 + Z/2']
0.4s = Fit runtime
17 features in original data used to generate 17 features in processed data.
Train Data (Processed) Memory Usage: 0.85 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.46s ...
AutoGluon will gauge predictive performance using evaluation metric: 'accuracy'
To change this, specify the eval_metric parameter of Predictor()
Large model count detected (112 configs) ... Only displaying the first 3 models of each family. To see all, set `verbosity=3`.
User-specified model hyperparameters to be fit:
{
'NN_TORCH': [{}, {'activation': 'elu', 'dropout_prob': 0.10077639529843717, 'hidden_size': 108, 'learning_rate': 0.002735937344002146, 'num_layers': 4, 'use_batchnorm': True, 'weight_decay': 1.356433327634438e-12, 'ag_args': {'name_suffix': '_r79', 'priority': -2}}, {'activation': 'elu', 'dropout_prob': 0.11897478034205347, 'hidden_size': 213, 'learning_rate': 0.0010474382260641949, 'num_layers': 4, 'use_batchnorm': False, 'weight_decay': 5.594471067786272e-10, 'ag_args': {'name_suffix': '_r22', 'priority': -7}}],
'GBM': [{'extra_trees': True, 'ag_args': {'name_suffix': 'XT'}}, {}, 'GBMLarge'],
'CAT': [{}, {'depth': 6, 'grow_policy': 'SymmetricTree', 'l2_leaf_reg': 2.1542798306067823, 'learning_rate': 0.06864209415792857, 'max_ctr_complexity': 4, 'one_hot_max_size': 10, 'ag_args': {'name_suffix': '_r177', 'priority': -1}}, {'depth': 8, 'grow_policy': 'Depthwise', 'l2_leaf_reg': 2.7997999596449104, 'learning_rate': 0.031375015734637225, 'max_ctr_complexity': 2, 'one_hot_max_size': 3, 'ag_args': {'name_suffix': '_r9', 'priority': -5}}],
'XGB': [{}, {'colsample_bytree': 0.6917311125174739, 'enable_categorical': False, 'learning_rate': 0.018063876087523967, 'max_depth': 10, 'min_child_weight': 0.6028633586934382, 'ag_args': {'name_suffix': '_r33', 'priority': -8}}, {'colsample_bytree': 0.6628423832084077, 'enable_categorical': False, 'learning_rate': 0.08775715546881824, 'max_depth': 5, 'min_child_weight': 0.6294123374222513, 'ag_args': {'name_suffix': '_r89', 'priority': -16}}],
'FASTAI': [{}, {'bs': 256, 'emb_drop': 0.5411770367537934, 'epochs': 43, 'layers': [800, 400], 'lr': 0.01519848858318159, 'ps': 0.23782946566604385, 'ag_args': {'name_suffix': '_r191', 'priority': -4}}, {'bs': 2048, 'emb_drop': 0.05070411322605811, 'epochs': 29, 'layers': [200, 100], 'lr': 0.08974235041576624, 'ps': 0.10393466140748028, 'ag_args': {'name_suffix': '_r102', 'priority': -11}}],
'RF': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'XT': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'KNN': [{'weights': 'uniform', 'ag_args': {'name_suffix': 'Unif'}}, {'weights': 'distance', 'ag_args': {'name_suffix': 'Dist'}}],
}
AutoGluon will fit 2 stack levels (L1 to L2) ...
Fitting 110 L1 models ...
Fitting model: KNeighborsUnif_BAG_L1 ... Training model for up to 599.53s of the 899.47s of remaining time.
0.2116 = Validation score (accuracy)
0.07s = Training runtime
0.11s = Validation runtime
Fitting model: KNeighborsDist_BAG_L1 ... Training model for up to 599.29s of the 899.24s of remaining time.
0.2214 = Validation score (accuracy)
0.05s = Training runtime
0.09s = Validation runtime
``` | closed | 2024-04-27T16:26:26Z | 2024-05-21T16:44:54Z | https://github.com/autogluon/autogluon/issues/4148 | [
"API & Doc",
"module: tabular"
] | mglowacki100 | 1 |
seleniumbase/SeleniumBase | pytest | 3,116 | Add a stealthier Recorder Mode (UC + Recorder) | ## Add a stealthier Recorder Mode (UC + Recorder)
Make it possible to create recordings in Stealth Mode / UC Mode.
Example:
```bash
sbase recorder --uc
```
(And then create recordings from there.)
Note that special UC Mode methods (such as `uc_gui_click_captcha()`, etc) will need to be added on afterward.
----
This will improve on https://github.com/seleniumbase/SeleniumBase/issues/3078, which let you generate a UC Mode boilerplate from the URL provided. Eg:
```bash
sbase mkfile bypass_cf.py --uc --url=https://gitlab.com/users/sign_in
```
```python
from seleniumbase import SB
with SB(uc=True) as sb:
url = "https://gitlab.com/users/sign_in"
sb.uc_open_with_reconnect(url, 4)
sb.uc_gui_click_captcha()
``` | closed | 2024-09-11T04:57:39Z | 2024-09-11T05:25:02Z | https://github.com/seleniumbase/SeleniumBase/issues/3116 | [
"enhancement",
"UC Mode / CDP Mode"
] | mdmintz | 1 |
gevent/gevent | asyncio | 1,675 | Issue with Greenlet 0.4.17 | * gevent version: 20.6.2
* Python version: Please be as specific as possible: "cPython 3.8.2"
* Operating System: Please be as specific as possible: "Ubuntu (Linux 8e4dd7170f65 5.4.0-47-generic #51-Ubuntu SMP Fri Sep 4 19:50:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux)"
### Description:
The latest version of greenlet (0.4.17) is causing a segmentation fault (core dumped) with gevent.
```
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
./entrypoint: line 10: 7 Segmentation fault (core dumped) talisker.gunicorn.gevent webapp.app:app --bind $1 --worker-class gevent --name talisker-`hostname`
```
| closed | 2020-09-22T13:02:29Z | 2020-09-22T13:03:54Z | https://github.com/gevent/gevent/issues/1675 | [] | jkfran | 1 |
LAION-AI/Open-Assistant | python | 3,271 | create minimal tutorial on using a plugin | - [x] research and gather what is needed
- [ ] create a blog showing how to use plugins
- [x] graduate some of that content into a proper place in /docs | open | 2023-05-31T17:24:56Z | 2023-05-31T21:40:22Z | https://github.com/LAION-AI/Open-Assistant/issues/3271 | [
"documentation",
"plugins"
] | andrewm4894 | 1 |
noirbizarre/flask-restplus | flask | 293 | Using JSON Schema models | I'm trying to use a JSON schema to generate a model to be used with `marshal_with`. Here's my MWE:
```python
#!/usr/bin/env python3
from flask import Flask, Blueprint
from flask_restplus import Resource, Api, fields
import json
app = Flask(__name__)
blueprint = Blueprint("api", __name__)
api = Api(blueprint, version="0.1", title="title")
app.register_blueprint(blueprint)
model = api.schema_model("Response", {"type": "string"})
@api.route("/hello")
class Analyze(Resource):
@api.marshal_with(model)
def get(self):
return 5
if __name__ == "__main__":
app.run(debug=True)
```
When you visit `localhost:5000/hello`, the following traceback occurs:
```
Traceback (most recent call last):
File "/nix/store/nl9a0l5dvrc3c8y8110qihfcbdzgy5zl-python3.6-flask-0.12/lib/python3.6/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/nix/store/nl9a0l5dvrc3c8y8110qihfcbdzgy5zl-python3.6-flask-0.12/lib/python3.6/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/nix/store/nhg8sdk9nwqkghk9xwwb69dybjxbj1gz-python3.6-flask-restplus-0.10.1/lib/python3.6/site-packages/flask_restplus/api.py", line 313, in wrapper
resp = resource(*args, **kwargs)
File "/nix/store/nl9a0l5dvrc3c8y8110qihfcbdzgy5zl-python3.6-flask-0.12/lib/python3.6/site-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/nix/store/nhg8sdk9nwqkghk9xwwb69dybjxbj1gz-python3.6-flask-restplus-0.10.1/lib/python3.6/site-packages/flask_restplus/resource.py", line 44, in dispatch_request
resp = meth(*args, **kwargs)
File "/nix/store/nhg8sdk9nwqkghk9xwwb69dybjxbj1gz-python3.6-flask-restplus-0.10.1/lib/python3.6/site-packages/flask_restplus/marshalling.py", line 110, in wrapper
return marshal(resp, self.fields, self.envelope, mask)
File "/nix/store/nhg8sdk9nwqkghk9xwwb69dybjxbj1gz-python3.6-flask-restplus-0.10.1/lib/python3.6/site-packages/flask_restplus/marshalling.py", line 54, in marshal
for k, v in list(fields.items()))
AttributeError: 'SchemaModel' object has no attribute 'items'
```
I'm just trying to use a `SchemaModel` in the same way I would use a `Model`, like in the following example (which works):
```python
model = api.model("Response", {"field": fields.String(required=True)})
@api.route("/hello")
class Analyze(Resource):
@api.marshal_with(model)
def get(self):
return {"field": "str"}
```
How can I use the JSON schema-generated model? | open | 2017-06-16T21:08:17Z | 2019-10-16T09:17:55Z | https://github.com/noirbizarre/flask-restplus/issues/293 | [] | langston-barrett | 16 |
pywinauto/pywinauto | automation | 1,066 | The same wrappers accessed in two different ways do not have the same parent. | ## Expected Behavior
The output should be:
```python
True
True
```
The first True is because wrapper_maximize_button1 == wrapper_maximize_button2
So I expect wrapper_maximize_button1.parent() == wrapper_maximize_button2.parent()
## Actual Behavior
The output is:
```python
True
False
```
False is the result of wrapper_maximize_button1.parent() == wrapper_maximize_button2.parent()
## Steps to Reproduce the Problem
1. Execute code
## Short Example of Code to Demonstrate the Problem
```python
import pywinauto
pywinauto.application.Application().start(cmd_line="explorer.exe")
desktop = pywinauto.Desktop(backend='uia', allow_magic_lookup=False)
if desktop['File Explorer'].is_maximized():
desktop['File Explorer'].restore()
window = desktop.windows(title='File Explorer', control_type='Window')[0]
wrapper_maximize_button1 = window.descendants(title='Maximize')[0]
pt = wrapper_maximize_button1.rectangle().mid_point()
wrapper_maximize_button2 = desktop.from_point(pt[0],pt[1])
print(wrapper_maximize_button1 == wrapper_maximize_button2) # True
print(wrapper_maximize_button1.parent() == wrapper_maximize_button2.parent()) # Should be True
```
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.8.3 64bit
- Platform and OS: PC Windows 10
| open | 2021-05-04T17:45:50Z | 2021-05-18T14:46:12Z | https://github.com/pywinauto/pywinauto/issues/1066 | [
"Priority-Low",
"need investigation"
] | beuaaa | 10 |
paperless-ngx/paperless-ngx | machine-learning | 7,361 | [BUG] Inconsistent custom field value validation | ### Description
https://github.com/paperless-ngx/paperless-ngx/blob/2312eba5b6640419facb566cf1dc2becdc875850/src/documents/models.py#L886-L902
`CustomFieldInstance.value_*` are configured to have `.blank=False`. However, this is not enforced by [`CustomFieldInstanceSerializer.validate`](https://github.com/paperless-ngx/paperless-ngx/blob/2312eba5b6640419facb566cf1dc2becdc875850/src/documents/serialisers.py#L566-L607).
As a result custom fields can have two possible values for “no data”. This is not a huge issue for GUI users, but a pain for API-based integration. This might also confuse contributors who look at the `CustomFieldInstance` and assume custom fields cannot be blank.
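For illustration, this is roughly the kind of guard I would have expected the serializer to apply to text-like values (a sketch only; the class body and field names here are assumptions, not the actual code):
```python
from rest_framework import serializers


class CustomFieldInstanceSerializer(serializers.Serializer):  # sketch, not the real class
    value_text = serializers.CharField(required=False, allow_null=True)

    def validate(self, data):
        # Reject empty strings so that "" and null don't both mean "no data".
        if data.get("value_text") == "":
            raise serializers.ValidationError("Custom field values may not be blank.")
        return data
```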
### Steps to reproduce
1. Create a custom field with name "test_custom_field" with type "text".
2. In the web interface, edit any document, add a "test_custom_field", **do not put anything in the box**, and click "save".
3. Verify that this document now has a "test_custom_field" with value `null`.
4. Edit this document again, enter something into the "test_custom_field" box, **delete it**, and click "save".
5. Verify that this document now has a "test_custom_field" with value `""`.
### Webserver logs
```bash
N/A, nothing special.
```
### Browser logs
_No response_
### Paperless-ngx version
dev
### Host OS
x86_64 Ubuntu 20.04.6 LTS
### Installation method
Bare metal
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-08-01T12:12:32Z | 2024-09-01T03:09:31Z | https://github.com/paperless-ngx/paperless-ngx/issues/7361 | [
"not a bug"
] | yichi-yang | 4 |
piskvorky/gensim | data-science | 3,352 | new word cannot be added to vocabulary by build_vocab | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
Hi, I have a question about the behavior of build_vocab.
I am Japanese and I am using gensim's Word2Vec model by loading the Japanese model [here](http://public.shiroyagi.s3.amazonaws.com/latest-ja-word2vec-gensim-model.zip).
I want to add a new word that is not in the vocabulary, so I created a corpus and tried build_vocab, but about 280 of the 320 or so new words were not registered and I got a key error.
Here is some simple code: build_vocab, then train, then check whether the vocab was updated, and I get a KeyError. I would like to know the cause of this.
```python
import gensim
# model's path
model_path='latest-ja-word2vec-gensim-model/word2vec.gensim.model'
model = gensim.models.Word2Vec.load(model_path)
model.wv["python"] # error occured because 'python' is not in the vocab
# adding new word
corpus_list=[["python"]]
# build_vocab and train
model.build_vocab(corpus_list, update=True)
model.train(corpus_list, total_examples=model.corpus_count, epochs=model.epochs)
model.wv["python"] # error occured
```
#### Versions
```python
gensim version 3.8.1
python version 3.8.9
```
| closed | 2022-06-09T06:07:57Z | 2023-11-12T22:25:28Z | https://github.com/piskvorky/gensim/issues/3352 | [] | Atsuyoshi-Funahashi | 4 |
shaikhsajid1111/social-media-profile-scrapers | web-scraping | 14 | 'ProfilePage' None | I'm getting a
"'ProfilePage'
None"
return for all attempts.

| open | 2022-10-10T13:50:05Z | 2022-10-10T13:50:05Z | https://github.com/shaikhsajid1111/social-media-profile-scrapers/issues/14 | [] | hamelcubsfan | 0 |
huggingface/datasets | pytorch | 6,564 | `Dataset.filter` missing `with_rank` parameter | ### Describe the bug
The issue shall be open: https://github.com/huggingface/datasets/issues/6435
When I try to pass `with_rank` to `Dataset.filter()`, I get this:
`Dataset.filter() got an unexpected keyword argument 'with_rank'`
### Steps to reproduce the bug
Run notebook:
https://colab.research.google.com/drive/1WUNKph8BdP0on5ve3gQnh_PE0cFLQqTn?usp=sharing
### Expected behavior
Should work?
### Environment info
NVIDIA RTX 4090 | closed | 2024-01-06T23:48:13Z | 2024-01-29T16:36:55Z | https://github.com/huggingface/datasets/issues/6564 | [] | kopyl | 2 |
Colin-b/pytest_httpx | pytest | 38 | async callback are not supported | When I register an async function as a callback to httpx_mock I get this error:
`TypeError: cannot unpack non-iterable coroutine object`
I suppose it's not awaited here:
https://github.com/Colin-b/pytest_httpx/blob/develop/pytest_httpx/_httpx_mock.py#L179
Is this a bug or am I using the library wrong?
Thanks! | closed | 2021-03-18T11:55:14Z | 2022-10-20T21:39:02Z | https://github.com/Colin-b/pytest_httpx/issues/38 | [
"question"
] | mkotsalainen | 4 |
aiogram/aiogram | asyncio | 566 | I suggest adding a new builtin filter | **Is your feature request related to a problem? Please describe.**
No
**Describe the solution you'd like**
In my handlers I often use StorageDataFilter. It's similar to StateFilter and helps me check the storage data for the current user and chat. Here are examples:
```python
dp = Dispatcher(bot)
# check if storage data for current user and chat has key 'regime' with value 'demo'
@dp.message_handler(storage_data={'regime': 'demo'})
async def test(msg: types.Message): ...
# check if has key 'regime' with value 'demo' and key 'game' with value 'football'
@dp.message_handler(storage_data={'regime': 'demo', 'game': 'football'})
async def test(msg: types.Message): ...
# check if has key 'game' with value 'football' or 'basketball'
# key 'game' with value ['football', 'basketball'] is also suitable
@dp.message_handler(storage_data={'game': ['football', 'basketball']})
async def test(msg: types.Message): ...
# check if has key 'regime' with value 'demo' and key 'game' with any value
@dp.message_handler(storage_data={'regime': 'demo', 'game': '*'})
async def test(msg: types.Message): ...
# check if has key 'game' with value '*' (or ['*'])
@dp.message_handler(storage_data={'game': ['*']})
async def test(msg: types.Message): ...
```
Here is code of filter:
```python
import typing
from contextvars import ContextVar
from typing import Optional

from aiogram.dispatcher.filters import BoundFilter
from aiogram.types import CallbackQuery


class StorageDataFilter(BoundFilter):
"""
Check if all items matches the relevant items in the current storage data.
"""
key = 'storage_data'
ctx_storage_data = ContextVar('user_storage_data')
def __init__(self, dispatcher, storage_data: dict):
from aiogram import Dispatcher
self.dispatcher: Dispatcher = dispatcher
self.storage_data = storage_data
@staticmethod
def get_target(obj) -> typing.Tuple[Optional[int], Optional[int]]:
if isinstance(obj, CallbackQuery):
try:
chat_id = obj.message.chat.id
except AttributeError:
chat_id = None
else:
try:
chat_id = obj.chat.id
except AttributeError:
chat_id = None
try:
user_id = obj.from_user.id
except AttributeError:
user_id = None
return chat_id, user_id
async def get_current_storage_data(self, obj) -> Optional[dict]:
try:
return self.ctx_storage_data.get()
except LookupError:
chat_id, user_id = self.get_target(obj)
if chat_id or user_id:
storage_data = await self.dispatcher.storage.get_data(chat=chat_id, user=user_id)
self.ctx_storage_data.set(storage_data)
return storage_data
async def check(self, obj) -> bool:
current_storage_data = await self.get_current_storage_data(obj)
if current_storage_data is None:
return False
for key, value in self.storage_data.items():
if key not in current_storage_data:
return False
if value == '*':
continue
if isinstance(value, (list, tuple, set)):
if current_storage_data[key] in value:
continue
if current_storage_data[key] == value:
continue
return False
return True
```
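For completeness, this is roughly how I wire it up today (whether the filters factory can inject the dispatcher argument this way is an implementation detail that may need adjusting):
```python
# Illustrative registration only; dispatcher injection may need adapting.
dp.filters_factory.bind(StorageDataFilter)

@dp.message_handler(storage_data={'regime': 'demo'})
async def demo_handler(msg: types.Message):
    await msg.answer("demo regime")
```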
What about including this filter in the built-in filters? | closed | 2021-04-16T10:49:20Z | 2023-08-04T18:11:49Z | https://github.com/aiogram/aiogram/issues/566 | [
"new feature",
"under discussion"
] | LDmitriy7 | 3 |
klen/mixer | sqlalchemy | 127 | Please update Faker version due security issue | How to check:
```bash
$ pip install safety
$ safety check -r requirements.txt
╒══════════════════════════════════════════════════════════════════════════════╕
│ │
│ /$$$$$$ /$$ │
│ /$$__ $$ | $$ │
│ /$$$$$$$ /$$$$$$ | $$ \__//$$$$$$ /$$$$$$ /$$ /$$ │
│ /$$_____/ |____ $$| $$$$ /$$__ $$|_ $$_/ | $$ | $$ │
│ | $$$$$$ /$$$$$$$| $$_/ | $$$$$$$$ | $$ | $$ | $$ │
│ \____ $$ /$$__ $$| $$ | $$_____/ | $$ /$$| $$ | $$ │
│ /$$$$$$$/| $$$$$$$| $$ | $$$$$$$ | $$$$/| $$$$$$$ │
│ |_______/ \_______/|__/ \_______/ \___/ \____ $$ │
│ /$$ | $$ │
│ | $$$$$$/ │
│ by pyup.io \______/ │
│ │
╞══════════════════════════════════════════════════════════════════════════════╡
│ REPORT │
│ checked 1 packages, using default DB │
╞════════════════════════════╤═══════════╤══════════════════════════╤══════════╡
│ package │ installed │ affected │ ID │
╞════════════════════════════╧═══════════╧══════════════════════════╧══════════╡
│ faker │ 0.9.1 │ <2.1.2 │ 37658 │
╘══════════════════════════════════════════════════════════════════════════════╛
``` | closed | 2020-04-01T07:28:55Z | 2020-12-30T20:17:31Z | https://github.com/klen/mixer/issues/127 | [] | sirkonst | 1 |
cobrateam/splinter | automation | 391 | error: ChromeDriver executable needs to be available in the path | I downloaded chromedriver.zip
extracted chromedriver.exe into N:\
added N:\; to PATH.
I get the above message from:
from splinter import Browser
b = Browser("chrome")
From a command prompt it works:
C:\Python33\Scripts>chromedriver
Starting ChromeDriver (v2.9.248315) on port 9515
| closed | 2015-04-18T23:03:19Z | 2018-08-27T00:55:49Z | https://github.com/cobrateam/splinter/issues/391 | [] | ghost | 0 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,282 | Mypy: @declared_attr crash mypy when using "--follow-imports=silent" | ### Ensure stubs packages are not installed
- [X] No sqlalchemy stub packages is installed (both `sqlalchemy-stubs` and `sqlalchemy2-stubs` are not compatible with v2)
### Verify if the api is typed
- [X] The api is not in a module listed in [#6810](https://github.com/sqlalchemy/sqlalchemy/issues/6810) so it should pass type checking
### Describe the typing issue
When mypy is configured to [follow imports but suppress error messages](https://mypy.readthedocs.io/en/stable/running_mypy.html#follow-imports) (e.g. with `--follow-imports=silent`), any `@declared_attr` in an imported module will crash mypy >= 1.4.0.
### To Reproduce
```python
# example.py
from sqlalchemy import Column, String
from sqlalchemy.orm import Mapped, declared_attr, declarative_mixin
@declarative_mixin
class Foo:
@declared_attr
def bar(cls) -> Mapped[str]:
return Column(String)
# example2.py
from example import Foo
```
### Error
```
example.py:-1: error: INTERNAL ERROR -- Please try using mypy master on GitHub:
https://mypy.readthedocs.io/en/stable/common_issues.html#using-a-development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 1.5.1
Traceback (most recent call last):
File "mypy/checkexpr.py", line 5141, in accept
File "mypy/nodes.py", line 2207, in accept
File "mypy/checkexpr.py", line 4633, in visit_lambda_expr
File "mypy/nodes.py", line 2200, in expr
IndexError: list index out of range
example.py:-1: : note: use --pdb to drop into pdb
```
### Versions
- OS: linux
- Python: 3.11.3
- SQLAlchemy: 2.0.20 & 1.4.49
- Type checker: mypy >= 1.4.0 (for some reason, it works with mypy 1.3.1)
### Additional context
The error happens in mypy [at this line](https://github.com/python/mypy/blob/v1.5.1/mypy/nodes.py#L2200). This is because the `Block` passed when creating the `LambdaExpr` [here](https://github.com/sqlalchemy/sqlalchemy/blob/rel_2_0_20/lib/sqlalchemy/ext/mypy/decl_class.py#L340-L342) is empty. | open | 2023-08-25T17:16:55Z | 2024-02-26T14:18:44Z | https://github.com/sqlalchemy/sqlalchemy/issues/10282 | [
"bug",
"PRs (with tests!) welcome",
"SQLA mypy plugin"
] | k4nar | 5 |
ijl/orjson | numpy | 24 | Can not load exception class: {}.{}json.JSONDecodeError | I have some interesting situation with `orjson==2.0.6` in my project. I wasn't able to reproduce this issue outside of the project, it means there's probably some strange conflict with existing dependencies or environment. Clean module outside of the project but in the same venv and same imports works absolutely fine.
Any idea what it can be?
*Python*: 3.6.3
PyCharm output:
```
thread '<unnamed>' panicked at 'Can not load exception class: {}.{}json.JSONDecodeError: PyErr { type: Py(0x9d0c40, PhantomData) }', src/libcore/result.rs:999:5
stack backtrace:
0: <unknown>
1: <unknown>
2: <unknown>
3: <unknown>
4: <unknown>
5: <unknown>
6: <unknown>
7: <unknown>
8: <unknown>
9: PyInit_orjson
10: _PyImport_LoadDynamicModuleWithSpec
11: <unknown>
12: PyCFunction_Call
13: _PyEval_EvalFrameDefault
14: <unknown>
15: <unknown>
16: _PyEval_EvalFrameDefault
17: <unknown>
18: <unknown>
19: _PyEval_EvalFrameDefault
20: <unknown>
21: <unknown>
22: _PyEval_EvalFrameDefault
23: <unknown>
24: <unknown>
25: _PyEval_EvalFrameDefault
26: <unknown>
27: <unknown>
28: _PyEval_EvalFrameDefault
29: <unknown>
30: _PyFunction_FastCallDict
31: _PyObject_FastCallDict
32: _PyObject_CallMethodIdObjArgs
33: PyImport_ImportModuleLevelObject
34: _PyEval_EvalFrameDefault
35: <unknown>
36: PyEval_EvalCode
37: <unknown>
38: PyCFunction_Call
39: _PyEval_EvalFrameDefault
40: <unknown>
41: <unknown>
42: _PyEval_EvalFrameDefault
43: <unknown>
44: <unknown>
45: _PyEval_EvalFrameDefault
46: <unknown>
47: <unknown>
48: _PyEval_EvalFrameDefault
49: <unknown>
50: <unknown>
51: _PyEval_EvalFrameDefault
52: <unknown>
53: _PyFunction_FastCallDict
54: _PyObject_FastCallDict
55: _PyObject_CallMethodIdObjArgs
56: PyImport_ImportModuleLevelObject
57: _PyEval_EvalFrameDefault
58: <unknown>
59: PyEval_EvalCode
60: <unknown>
61: PyCFunction_Call
62: _PyEval_EvalFrameDefault
63: <unknown>
64: <unknown>
65: _PyEval_EvalFrameDefault
66: <unknown>
67: <unknown>
68: _PyEval_EvalFrameDefault
69: <unknown>
70: <unknown>
71: _PyEval_EvalFrameDefault
72: <unknown>
73: <unknown>
74: _PyEval_EvalFrameDefault
75: <unknown>
76: _PyFunction_FastCallDict
77: _PyObject_FastCallDict
78: _PyObject_CallMethodIdObjArgs
79: PyImport_ImportModuleLevelObject
80: _PyEval_EvalFrameDefault
81: <unknown>
82: PyEval_EvalCode
83: <unknown>
84: PyCFunction_Call
85: _PyEval_EvalFrameDefault
86: <unknown>
87: <unknown>
88: _PyEval_EvalFrameDefault
89: <unknown>
90: <unknown>
91: _PyEval_EvalFrameDefault
92: <unknown>
93: <unknown>
94: _PyEval_EvalFrameDefault
95: <unknown>
96: <unknown>
97: _PyEval_EvalFrameDefault
98: <unknown>
99: _PyFunction_FastCallDict
``` | closed | 2019-08-01T17:32:38Z | 2019-08-01T19:31:49Z | https://github.com/ijl/orjson/issues/24 | [] | leobuskin | 2 |
ultralytics/ultralytics | computer-vision | 19,179 | train method, object detection rate & No detect on background(no object env) rather than box iou | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello,
I perform a video test using best.pt saved from the model I am training.
I think the performance of the saved models should be ordered like this: 100 epochs > 90 epochs > 80 epochs.
But in the real-time test environment, the model doesn't always seem to perform the way I described above.
I think that when updating the best.pt model in the current training method, it is done by calculating the iou with the label box. The training method for the model I actually want is as follows.
1. iou does not need to be high.
2. just need to detect the object well.
3. I hope no false positives occur in background images without objects.
I think we can adjust the box loss, cls loss, and dfl loss weights in the training parameters. What do you think? I'm currently training with the default settings: box 7.5, cls 0.5, dfl 1.5.
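For reference, this is the kind of adjustment I have in mind (a rough sketch only; the values are guesses and `my_dataset.yaml` stands in for my data config):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # or the checkpoint I am currently training from

# Guessed weights: de-emphasize tight localization (box, dfl) and emphasize
# classification (cls), since I mainly care about detecting objects and
# avoiding false positives on background images.
model.train(
    data="my_dataset.yaml",  # placeholder for my dataset config
    epochs=100,
    box=3.0,  # default 7.5
    cls=1.5,  # default 0.5
    dfl=1.0,  # default 1.5
)
```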
### Additional
_No response_ | open | 2025-02-11T08:31:13Z | 2025-02-13T23:47:28Z | https://github.com/ultralytics/ultralytics/issues/19179 | [
"question"
] | yeonhyochoi | 3 |
tiangolo/uwsgi-nginx-flask-docker | flask | 111 | best approach to include custom supervisord.conf ? | Hello,
I've used this configured image before for a small project and it worked like a charm!
For a new project I'd like to include a custom configuration for supervisord. Looking through the base image Dockerfile
(https://hub.docker.com/r/tiangolo/uwsgi-nginx), I found:
# Custom Supervisord config
> COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
I've tried building my own base images before, but this takes up a lot of time for every config change.
What if I leave the base image as it is, and instead COPY a custom supervisord.conf to that location (/etc/supervisor/conf.d/supervisord.conf) in the final Dockerfile?
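Concretely, something like this is what I have in mind (the base image tag and app path here are just examples):
```dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.6

# Overwrite the default Supervisor config shipped by the base image
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# Regular app setup as before
COPY ./app /app
```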
Would this be a valid approach?
Thank you | closed | 2018-11-23T11:59:54Z | 2019-01-01T19:51:41Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/111 | [] | AYEG | 7 |
modin-project/modin | pandas | 6,518 | BUG: converting string columns to interchange protocol changes values to NaN | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
import pandas
import modin.pandas as pd
print(pandas.api.interchange.from_dataframe(pd.DataFrame({'fips': ['01001']}).__dataframe__()))
```
### Issue Description
BUG: converting string columns to interchange protocol changes values to NaN
### Expected Behavior
Should convert to strings, as pandas would
### Error Logs
N/A
### Installed Versions
<details>
```
INSTALLED VERSIONS
------------------
commit : 38110bb65643babc748e9ed59f6e7780d80c539e
python : 3.8.16.final.0
python-bits : 64
OS : Darwin
OS-release : 22.5.0
Version : Darwin Kernel Version 22.5.0: Mon Apr 24 20:51:50 PDT 2023; root:xnu-8796.121.2~5/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.23.0+67.g38110bb65
ray : 2.4.0
dask : 2023.4.1
distributed : 2023.4.1
hdk : None
pandas dependencies
-------------------
pandas : 2.0.2
numpy : 1.24.3
pytz : 2023.3
dateutil : 2.8.2
setuptools : 66.0.0
pip : 23.0.1
Cython : 0.29.34
pytest : 7.3.1
hypothesis : None
sphinx : 7.0.0
blosc : None
feather : 0.4.1
xlsxwriter : None
lxml.etree : 4.9.2
html5lib : None
pymysql : None
psycopg2 : 2.9.6
jinja2 : 3.1.2
IPython : 8.12.1
pandas_datareader: None
bs4 : 4.12.2
bottleneck : None
brotli : 1.0.9
fastparquet : 2022.12.0
fsspec : 2023.4.0
gcsfs : None
matplotlib : 3.7.1
numba : None
numexpr : 2.8.4
odfpy : None
openpyxl : 3.0.10
pandas_gbq : 0.19.1
pyarrow : 11.0.0
pyreadstat : None
pyxlsb : None
s3fs : 0.4.2
scipy : 1.10.1
snappy : None
sqlalchemy : 1.4.45
tables : 3.8.0
tabulate : None
xarray : 2023.1.0
xlrd : 2.0.1
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
```
</details>
| closed | 2023-08-28T20:12:15Z | 2023-08-31T17:07:14Z | https://github.com/modin-project/modin/issues/6518 | [
"bug 🦗",
"Integration ➕➕",
"P1"
] | mvashishtha | 0 |
ray-project/ray | python | 51,379 | [Data] Ray read_tfrecords allow ray_remote_args configs | ### Description
For the parquet reader, it's possible to pass additional Ray task configs via `ray_remote_args`, as the docs suggest [here](https://docs.ray.io/en/latest/data/performance-tips.html#tuning-read-resources). However, such an option is not available for the read_tfrecords loader.
### Use case
To fine-tune the loader configs, we need to offer `ray_remote_args` as init args for read_tfrecords.
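For illustration, the proposed usage would mirror the parquet reader (the path and values below are only placeholders):
```python
import ray

ds = ray.data.read_tfrecords(
    "s3://my-bucket/tfrecords/",      # placeholder path
    ray_remote_args={"num_cpus": 2},  # proposed: forwarded to the underlying read tasks
)
```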
| open | 2025-03-14T19:40:34Z | 2025-03-18T17:11:57Z | https://github.com/ray-project/ray/issues/51379 | [
"enhancement",
"triage",
"data"
] | shaowei-su | 0 |
microsoft/unilm | nlp | 1,251 | [Kosmos-2] Will depth information be incorporated in the future? | Hi,
I wonder whether anyone has successfully incorporated Kosmos-2 with depth information. Will this be a future goal, to give the model more spatial awareness?
Model I am using: Kosmos-2
| open | 2023-08-11T21:56:39Z | 2023-08-21T03:13:46Z | https://github.com/microsoft/unilm/issues/1251 | [] | quantingxie | 2 |
hzwer/ECCV2022-RIFE | computer-vision | 336 | Reproducing the HDv3 model |
flow, mask, merged = self.flownet(torch.cat((imgs, gt), 1), scale=scale, training=training)
**loss_l1** = (merged[2] - gt).abs().mean()
**loss_smooth** = self.sobel(flow[2], flow[2]*0).mean()
# loss_vgg = self.vgg(merged[2], gt)
if training:
self.optimG.zero_grad()
**loss_G = loss_cons + loss_smooth * 0.1**
loss_G.backward()
self.optimG.step()
else:
flow_teacher = flow[2]
return merged[2], {
'mask': mask,
'flow': flow[2][:, :2],
'loss_l1': loss_l1,
'loss_cons': loss_cons,
'loss_smooth': **loss_smooth,**
}
I'd like to ask how many losses you actually use. Is it the three losses "loss_l1 + loss_cons + loss_smooth", or only loss_cons + loss_smooth * 0.1?
I'd also like to ask: when I use HDv3 to reproduce the multi-frame interpolation model, training is not successful and the model's PSNR is only about 2. What could be the reason? | open | 2023-08-16T05:37:32Z | 2024-07-02T03:25:39Z | https://github.com/hzwer/ECCV2022-RIFE/issues/336 | [] | ZFU123456 | 5 |
httpie/cli | rest-api | 548 | Posting a form field string with spaces results in error | I'm currently trying to post to a form and the argument I'm trying to pass is a string with spaces in it.
```
$ http --form POST example.com name="John Smith"
```
But I keep getting this error back:
```
http: error: argument REQUEST_ITEM: "Smith" is not a valid value
```
I've seen this example on a couple of different blogs, so it must have worked at some point in time. Am I doing something wrong?
Debug printout
```
HTTPie 0.9.9
Requests 2.12.4
Pygments 2.1.3
Python 2.7.12 (default, Jun 29 2016, 09:13:05)
[GCC 4.9.2]
/usr/bin/python
Linux 4.4.27-moby
<Environment {
"colors": 8,
"config": {
"__meta__": {
"about": "HTTPie configuration file",
"help": "https://httpie.org/docs#config",
"httpie": "0.9.9"
},
"default_options": "[]"
},
"config_dir": "/root/.httpie",
"is_windows": false,
"stderr": "<open file '<stderr>', mode 'w' at 0x7ff88d3df1e0>",
"stderr_isatty": true,
"stdin": "<open file '<stdin>', mode 'r' at 0x7ff88d3df0c0>",
"stdin_encoding": "UTF-8",
"stdin_isatty": true,
"stdout": "<open file '<stdout>', mode 'w' at 0x7ff88d3df150>",
"stdout_encoding": "UTF-8",
"stdout_isatty": true
}>
```
| closed | 2016-12-16T18:07:43Z | 2016-12-16T22:44:21Z | https://github.com/httpie/cli/issues/548 | [] | thornycrackers | 4 |
pyg-team/pytorch_geometric | pytorch | 9,395 | GPU out of memory caused by eval() mode in TGN | ### 🐛 Describe the bug
I encountered an out-of-memory (OOM) issue during the evaluation phase, whereas the training procedure runs without any problems. I have verified that the OOM issue is caused solely by the eval() function, which should not be the case.
To reproduce the bug more directly, I have prepared the following code:
````
import os.path as osp
import torch
from sklearn.metrics import average_precision_score, roc_auc_score
from torch.nn import Linear
from torch_geometric.datasets import JODIEDataset
from torch_geometric.loader import TemporalDataLoader
from torch_geometric.nn import TGNMemory, TransformerConv
from torch_geometric.nn.models.tgn import (
IdentityMessage,
LastAggregator,
LastNeighborLoader,
)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
memory_dim = time_dim = embedding_dim = 200
memory = TGNMemory(
5000000,
200,
memory_dim,
time_dim,
message_module=IdentityMessage(32, memory_dim, time_dim),
aggregator_module=LastAggregator(),
).to(device)
memory.eval()
import time
time.sleep(3600)
````
Additionally, before encountering the OOM bug, I faced another issue with the following error message:
`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and CPU!`
Fortunately, this device assignment problem has been resolved by previously closed issues #7008 and #8926. After resolving the device assignment problem, I ran the above code, and the GPU memory usage exploded from 6GB to more than 40 GB.
However, if I comment on the _memory.eval()_ line, the GPU memory usage remains under 10GB. This is unexpected because model.eval() should not cause such a dramatic increase in GPU memory usage. I believe this is a bug.
Thank you for your assistance.
### Versions
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==1.13.1
[pip3] torch_cluster==1.6.3
[pip3] torch_geometric==2.5.3
[pip3] torch_scatter==2.1.2
[pip3] torch_sparse==0.6.18
[pip3] torch-spline-conv==1.2.2+pt113cu117
[pip3] torchaudio==0.13.1
[pip3] torchvision==0.14.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.8 py39h5eee18b_0
[conda] mkl_random 1.2.4 py39hdb19cb5_0
[conda] numpy 1.26.4 py39h5f9d8c6_0
[conda] numpy-base 1.26.4 py39hb5e798b_0
[conda] pytorch 1.13.1 py3.9_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-cluster 1.6.3 pypi_0 pypi
[conda] torch-geometric 2.5.3 pypi_0 pypi
[conda] torch-scatter 2.1.2 pypi_0 pypi
[conda] torch-sparse 0.6.18 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt113cu117 pypi_0 pypi
[conda] torchaudio 0.13.1 py39_cu117 pytorch
[conda] torchvision 0.14.1 py39_cu117 pytorch
| closed | 2024-06-05T20:01:15Z | 2024-06-12T22:08:31Z | https://github.com/pyg-team/pytorch_geometric/issues/9395 | [
"bug"
] | Joney-Yf | 1 |
pytest-dev/pytest-xdist | pytest | 160 | Regression in pip test suite | We went from https://travis-ci.org/pypa/pip/jobs/241184675#L218 to https://travis-ci.org/pypa/pip/jobs/241509953#L218 and the only thing that changed was pytest-xdist going from 1.16.0 to 1.17.0.
<details>
```
+tox -- -m integration -n 8 --duration=5
GLOB sdist-make: /home/travis/build/pypa/pip/setup.py
py34 inst-nodeps: /home/travis/build/pypa/pip/.tox/dist/pip-10.0.0.dev0.zip
py34 installed: apipkg==1.4,execnet==1.4.1,freezegun==0.3.9,mock==1.0.1,pretend==1.0.8,py==1.4.34,pytest==3.1.2,pytest-catchlog==1.2.2,pytest-rerunfailures==2.1.0,pytest-timeout==1.2.0,pytest-xdist==1.17.0,python-dateutil==2.6.0,scripttest==1.3,six==1.10.0,virtualenv==15.2.0.dev0
py34 runtests: PYTHONHASHSEED='472861946'
py34 runtests: commands[0] | py.test --timeout 300 -m integration -n 8 --duration=5
============================= test session starts ==============================
platform linux -- Python 3.4.4, pytest-3.1.2, py-1.4.34, pluggy-0.4.0
rootdir: /home/travis/build/pypa/pip, inifile: setup.cfg
plugins: xdist-1.17.0, timeout-1.2.0, rerunfailures-2.1.0, catchlog-1.2.2
timeout: 300.0s method: signal
gw0 I / gw1 I / gw2 I / gw3 I / gw4 I / gw5 I / gw6 I / gw7 I
gw0 C / gw1 I / gw2 I / gw3 I / gw4 I / gw5 I / gw6 I / gw7 I
gw0 C / gw1 C / gw2 I / gw3 I / gw4 I / gw5 I / gw6 I / gw7 I
gw0 C / gw1 C / gw2 C / gw3 I / gw4 I / gw5 I / gw6 I / gw7 I
gw0 C / gw1 C / gw2 C / gw3 C / gw4 I / gw5 I / gw6 I / gw7 I
gw0 C / gw1 C / gw2 C / gw3 C / gw4 C / gw5 I / gw6 I / gw7 I
gw0 C / gw1 C / gw2 C / gw3 C / gw4 C / gw5 C / gw6 I / gw7 I
gw0 C / gw1 C / gw2 C / gw3 C / gw4 C / gw5 C / gw6 C / gw7 I
gw0 C / gw1 C / gw2 C / gw3 C / gw4 C / gw5 C / gw6 C / gw7 C
gw0 ok / gw1 C / gw2 C / gw3 C / gw4 C / gw5 C / gw6 C / gw7 C
gw0 ok / gw1 ok / gw2 C / gw3 C / gw4 C / gw5 C / gw6 C / gw7 C
gw0 ok / gw1 ok / gw2 ok / gw3 C / gw4 C / gw5 C / gw6 C / gw7 C
gw0 ok / gw1 ok / gw2 ok / gw3 ok / gw4 C / gw5 C / gw6 C / gw7 C
gw0 ok / gw1 ok / gw2 ok / gw3 ok / gw4 ok / gw5 C / gw6 C / gw7 C
gw0 ok / gw1 ok / gw2 ok / gw3 ok / gw4 ok / gw5 ok / gw6 C / gw7 C
gw0 ok / gw1 ok / gw2 ok / gw3 ok / gw4 ok / gw5 ok / gw6 ok / gw7 C
gw0 ok / gw1 ok / gw2 ok / gw3 ok / gw4 ok / gw5 ok / gw6 ok / gw7 ok
gw0 [359] / gw1 ok / gw2 ok / gw3 ok / gw4 ok / gw5 ok / gw6 ok / gw7 ok
gw0 [359] / gw1 [359] / gw2 ok / gw3 ok / gw4 ok / gw5 ok / gw6 ok / gw7 ok
gw0 [359] / gw1 [359] / gw2 [359] / gw3 ok / gw4 ok / gw5 ok / gw6 ok / gw7 ok
gw0 [359] / gw1 [359] / gw2 [359] / gw3 [359] / gw4 ok / gw5 ok / gw6 ok / gw7 ok
gw0 [359] / gw1 [359] / gw2 [359] / gw3 [359] / gw4 [359] / gw5 ok / gw6 ok / gw7 ok
gw0 [359] / gw1 [359] / gw2 [359] / gw3 [359] / gw4 [359] / gw5 [359] / gw6 ok / gw7 ok
gw0 [359] / gw1 [359] / gw2 [359] / gw3 [359] / gw4 [359] / gw5 [359] / gw6 [359] / gw7 ok
gw0 [359] / gw1 [359] / gw2 [359] / gw3 [359] / gw4 [359] / gw5 [359] / gw6 [359] / gw7 [359]
scheduling tests via LoadScheduling
.....................................x........................sINTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/home/travis/build/pypa/pip/.tox/py34/lib/python3.4/site-packages/_pytest/main.py", line 105, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/home/travis/build/pypa/pip/.tox/py34/lib/python3.4/site-packages/_pytest/main.py", line 141, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "/home/travis/build/pypa/pip/.tox/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR> File "/home/travis/build/pypa/pip/.tox/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/home/travis/build/pypa/pip/.tox/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR> File "/home/travis/build/pypa/pip/.tox/py34/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/travis/build/pypa/pip/.tox/py34/lib/python3.4/site-packages/xdist/dsession.py", line 539, in pytest_runtestloop
INTERNALERROR> self.loop_once()
INTERNALERROR> File "/home/travis/build/pypa/pip/.tox/py34/lib/python3.4/site-packages/xdist/dsession.py", line 558, in loop_once
INTERNALERROR> call(**kwargs)
INTERNALERROR> File "/home/travis/build/pypa/pip/.tox/py34/lib/python3.4/site-packages/xdist/dsession.py", line 664, in slave_testreport
INTERNALERROR> self.sched.mark_test_complete(node, rep.item_index, rep.duration)
INTERNALERROR> File "/home/travis/build/pypa/pip/.tox/py34/lib/python3.4/site-packages/xdist/dsession.py", line 280, in mark_test_complete
INTERNALERROR> self.node2pending[node].remove(item_index)
INTERNALERROR> ValueError: list.remove(x): x not in list
```
</details> | closed | 2017-06-10T19:39:36Z | 2017-06-10T19:51:24Z | https://github.com/pytest-dev/pytest-xdist/issues/160 | [] | xavfernandez | 4 |
flasgger/flasgger | rest-api | 259 | when will Flasgger 0.9.2 be released? | Hi there :)
Starting with Flasgger 0.9.2 you can specify external URL locations for loading the JavaScript and CSS for the Swagger and jQuery libraries loaded in the Flasgger default templates.
when will Flasgger 0.9.2 be released? | closed | 2018-11-13T18:46:50Z | 2018-11-15T02:38:32Z | https://github.com/flasgger/flasgger/issues/259 | [] | wobeng | 2 |
mljar/mercury | data-visualization | 13 | Add scrolling if many parameters in the sidebar | In the case of many widgets in the sidebar, the Run and Download buttons are not available.
There should be some scroll available.

| closed | 2022-01-18T15:23:21Z | 2022-01-18T15:36:19Z | https://github.com/mljar/mercury/issues/13 | [] | pplonski | 0 |
netbox-community/netbox | django | 18,705 | Can never upgrade - Always migration errors. | ### Deployment Type
Self-hosted
### NetBox Version
4.1.10
### Python Version
3.11
### Steps to Reproduce
**Checkout latest release**
sudo git checkout v4.2.4
**Run upgrade script**
./upgrade.sh
### Expected Behavior
Successful upgrade to 4.2.4
### Observed Behavior
Error when running migrations. I have also disabled ALL plugins for testing/upgrading.
I have also tried stepping up to 4.1.11 but that also fails with the same error.
```
ipam.prefix... Traceback (most recent call last):
File "/opt/netbox/venv/lib/python3.11/site-packages/django/db/backends/utils.py", line 105, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/netbox/venv/lib/python3.11/site-packages/psycopg/server_cursor.py", line 294, in execute
raise ex.with_traceback(None)
psycopg.errors.UndefinedColumn: column ipam_prefix.site_id does not exist
LINE 1: ..."ipam_prefix"."comments", "ipam_prefix"."prefix", "ipam_pref...
^
HINT: Perhaps you meant to reference the column "ipam_prefix._site_id".
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/netbox/netbox/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/opt/netbox/venv/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
utility.execute()
File "/opt/netbox/venv/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/opt/netbox/venv/lib/python3.11/site-packages/django/core/management/base.py", line 413, in run_from_argv
self.execute(*args, **cmd_options)
File "/opt/netbox/venv/lib/python3.11/site-packages/django/core/management/base.py", line 459, in execute
output = self.handle(*args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/netbox/netbox/extras/management/commands/reindex.py", line 95, in handle
i = search_backend.cache(model.objects.iterator(), remove_existing=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/netbox/netbox/netbox/search/backends.py", line 197, in cache
for instance in instances:
File "/opt/netbox/venv/lib/python3.11/site-packages/django/db/models/query.py", line 518, in _iterator
    yield from iterable
  File "/opt/netbox/venv/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql
^^^^^^^^^^^^^^^^^^^^^
File "/opt/netbox/venv/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/opt/netbox/venv/lib/python3.11/site-packages/django/db/backends/utils.py", line 79, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/netbox/venv/lib/python3.11/site-packages/django/db/backends/utils.py", line 92, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/netbox/venv/lib/python3.11/site-packages/django/db/backends/utils.py", line 100, in _execute
with self.db.wrap_database_errors:
File "/opt/netbox/venv/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/opt/netbox/venv/lib/python3.11/site-packages/django/db/backends/utils.py", line 105, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/netbox/venv/lib/python3.11/site-packages/psycopg/server_cursor.py", line 294, in execute
raise ex.with_traceback(None)
django.db.utils.ProgrammingError: column ipam_prefix.site_id does not exist
LINE 1: ..."ipam_prefix"."comments", "ipam_prefix"."prefix", "ipam_pref...
^
HINT: Perhaps you meant to reference the column "ipam_prefix._site_id".
``` | closed | 2025-02-22T08:01:20Z | 2025-02-24T13:39:09Z | https://github.com/netbox-community/netbox/issues/18705 | [] | deanfourie1 | 0 |
TencentARC/GFPGAN | pytorch | 91 | How to generate the landmark file for my own data? | FFHQ_eye_mouth_landmarks_512.pth is the landmark file for the FFHQ dataset. If I want to train on my own dataset, how do I generate this file, for example for the CelebA dataset? | open | 2021-11-02T09:29:29Z | 2022-08-31T09:18:06Z | https://github.com/TencentARC/GFPGAN/issues/91 | [] | alexliyang | 5 |
apify/crawlee-python | web-scraping | 705 | Make the AutoscaledPool log understandable | AutoscaledPool periodically logs system load information in this function:
[AutoscaledPool._log_system_status](https://github.com/apify/crawlee-python/blob/07c138e07c8edb0fc3df58e5e39d3769bafe21ec/src/crawlee/_autoscaling/autoscaled_pool.py#L212)
This looks for example like this:
> 2024-11-06T15:11:50.471Z [crawlee._autoscaling.autoscaled_pool] INFO current_concurrency = 1; desired_concurrency = 1; cpu = 0.581; mem = 0.0; event_loop = 0.227; client_info = 0.0
It shows values that are internally used by the desired_concurrency controller, but those values are hard for humans to interpret and thus not very useful to show in the log. Make this log understandable.
On the other hand, the logged values should also stay connected to the values used by the mentioned controller. If the log becomes readable but detached from the controller, it is again not very usable. So there is a risk that making this more readable would require changing the controller itself.
See full discussion in: https://github.com/apify/crawlee-python/issues/662
| open | 2024-11-18T09:25:04Z | 2024-11-18T09:25:47Z | https://github.com/apify/crawlee-python/issues/705 | [
"enhancement",
"t-tooling"
] | Pijukatel | 0 |
ploomber/ploomber | jupyter | 245 | Jupyter extension support when entry point is a directory | Via env variable | closed | 2020-09-11T21:26:56Z | 2020-09-14T20:56:14Z | https://github.com/ploomber/ploomber/issues/245 | [] | edublancas | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,046 | Using nginx reverse-proxy Js Client connects with Flask socketio but doesn't receive any messages | For some reason my Angular client connects with the backend server (apparently successfully) but it doesn't receive any messages, neither direct nor broadcast ones.
This problem appeared after putting Nginx in front of the backend as a reverse proxy. I followed the latest official documentation of the Flask-SocketIO module but still haven't found any clue about what's going on.
On the client I connect and prepare to receive messages with:
```
const API = "http://146.250.180.213/api/"
::
socket: io;
::
this.socket = io(API);
::
this.socket.on('connect', function() {
console.log('Conection with server estblished - socketio');
});
::
this.socket.on('update_monitor', (data: any) => {
console.log('Broadcasted message received! Data:', data);
});
```
On Flask I start the server and define endpoints with:
```
app = Flask(__name__)
socketio = SocketIO(app)
if __name__ == '__main__':
socketio.run(app, port=5000, debug=True)
```
```
@app.route('/test_endpoint', methods=['GET'])
def test_endpoint():
socketio.emit('update_monitor', {"mrtp": app.config['MOST_RECENT_TIMESTAMP_PROCESSED'], 'updated_elements': ['ESICAS23C_ESICAS23']})
return jsonify({'message': 'ok'}), 200
@socketio.on('connect')
def connect():
print('Client connected')
@socketio.on('disconnect')
def disconnect():
print('Client disconnected')
```
I use the 'test_endpoint' to test the socketio mechanism by requesting it with Postman.
On Nginx, I followed the configuration provided by the socketio documentation:
```
server {
listen 0.0.0.0:80;
root /usr/share/nginx/html;
index index.html index.htm;
include /etc/nginx/mime.types;
location / {
try_files $uri /index.html;
}
location /socket.io {
# include proxy_params; dont know what this stands for
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://127.0.0.1:5000/socket.io;
}
location /grafana/ {
proxy_pass http://localhost:3000/;
proxy_hide_header X-Frame-Options;
}
location /api {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_hide_header X-Frame-Options;
proxy_connect_timeout 9999;
proxy_send_timeout 9999;
proxy_read_timeout 9999;
send_timeout 9999;
rewrite ^/api/(.*) /$1 break;
proxy_pass http://localhost:5000;
}
}
```
And I start the server with gunicorn (eventlet):
`gunicorn manage:app --worker-class eventlet -w 1 --bind 0.0.0.0:5000 --reload`
On both the client and the backend I can see that they connect, but the client never receives any message.


I have already checked the nginx output (docker logs) and nothing abnormal shows up. There isn't any error message anywhere. Any clue about what's happening? Suggestions? | closed | 2019-08-23T14:22:24Z | 2025-03-08T15:34:54Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1046 | [
"question"
] | denisb411 | 14 |
jupyter-book/jupyter-book | jupyter | 2,093 | Update boilerplate code to use main branch by default instead of master | ### Context
Github's new default branch name is "main": https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-branches#about-the-default-branch
Same goes for Gitlab: https://docs.gitlab.com/ee/user/project/repository/branches/default.html
Jupyterbook is still referencing "master".
### Proposal
Let's update the docs and boilerplate code to reflect Github's new default branch name.
### Tasks and updates
I'd be happy to take this on, but please let me know if there's someone who'd be willing to review it. | open | 2023-12-27T17:59:27Z | 2023-12-27T17:59:56Z | https://github.com/jupyter-book/jupyter-book/issues/2093 | [
"enhancement"
] | topspinj | 1 |
serengil/deepface | machine-learning | 1,033 | Custom Model Position | Deepface has achieved 99% excellence, but the location of the model files cannot be customized. I think that is not very user-friendly for users with network issues.
```python
home = functions.get_deepface_home()
if os.path.isfile(home + "/.deepface/weights/openface_weights.h5") != True:
    print("openface_weights.h5 will be downloaded...")
    output = home + "/.deepface/weights/openface_weights.h5"
    gdown.download(url, output, quiet=False)
``` | closed | 2024-02-23T03:48:40Z | 2024-02-23T07:40:24Z | https://github.com/serengil/deepface/issues/1033 | [
"question"
] | 2310794041 | 1 |
wkentaro/labelme | deep-learning | 861 | Add a way to load a predefined list of labels for a project [Feature] | Never mind, I found the --labels command line argument does what I want | closed | 2021-04-28T02:59:57Z | 2022-06-25T04:43:50Z | https://github.com/wkentaro/labelme/issues/861 | [] | hqm | 1 |
microsoft/unilm | nlp | 1,587 | Kosmos 2.5 for Volta GPU | I'm on a Volta GPU. I'm aware that flash attention is not compatible with Volta, but I've seen that the Kosmos 2.5 requirements.txt contains "xformers", yet I haven't seen it used anywhere in the Kosmos 2.5 code. Do you plan to use xformers as a fallback when flash_attn is not installed? | open | 2024-06-25T07:57:33Z | 2024-06-26T10:10:08Z | https://github.com/microsoft/unilm/issues/1587 | [] | Borobo | 0 |
seleniumbase/SeleniumBase | pytest | 2,768 | Add the `--ee` option for regular tests and Recorder mode | ## Add the `--ee` option for regular tests and Recorder mode
For regular tests that run with `pytest`, if adding `--ee` as a `pytest` command-line option, this will allow you to skip the current test by pressing the `ESC` key from the web browser of the active test. (Note that the test will end at the next safe moment, and the test will be marked as `Skipped`.)
For Recorder Mode, the `--ee` option enables concluding the Recording by pressing `SHIFT` followed by `ESC`, instead of the usual way of ending the recording by typing `c` in the command-prompt and pressing `ENTER` to continue from the `breakpoint()`. Note that pressing `ESC` without `SHIFT` in Recorder Mode will only pause the current Recording (until `~` is pressed). You'll need to use `SHIFT` followed by `ESC` to fully end the Recording. | closed | 2024-05-13T01:36:38Z | 2024-05-13T04:57:40Z | https://github.com/seleniumbase/SeleniumBase/issues/2768 | [
"enhancement",
"documentation"
] | mdmintz | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 741 | Resetting max_radii2D in densification_postfix() seems to make no gaussians pruned in densify_and_prune(). | In function
```
def densify_and_prune(self, max_grad, min_opacity, extent, max_screen_size):
grads = self.xyz_gradient_accum / self.denom
grads[grads.isnan()] = 0.0
self.densify_and_clone(grads, max_grad, extent)
self.densify_and_split(grads, max_grad, extent)
prune_mask = (self._opacity < min_opacity).squeeze()
print("opacity true =", torch.sum(prune_mask).item())
if max_screen_size:
big_points_vs = self.max_radii2D > max_screen_size
prune_mask = torch.logical_or(prune_mask, big_points_vs)
print("radii true = ", torch.sum(prune_mask).item())
big_points_ws = self.get_scaling.max(dim=1).values > 50 * extent
prune_mask = torch.logical_or(prune_mask, big_points_ws)
print("scaling true = ", torch.sum(prune_mask).item())
self.prune_points(prune_mask)
torch.cuda.empty_cache()
```
densify_and_split() will be called first, but this function will call densification_postfix():
```
def densification_postfix(self, new_xyz, new_features_dc, new_features_rest, new_opacities, new_scaling,
new_rotation, new_transform):
d = {"xyz": new_xyz,
"f_dc": new_features_dc,
"f_rest": new_features_rest,
"opacity": new_opacities,
"scaling": new_scaling,
"rotation": new_rotation,
"tau": new_transform}
optimizable_tensors, optimizable_tensors_t = self.cat_tensors_to_optimizer(d)
self._xyz = optimizable_tensors["xyz"]
self._features_dc = torch.cat((self._features_dc, new_features_dc), dim=0)
self._features_rest = torch.cat((self._features_rest, new_features_rest), dim=0)
self._opacity = torch.cat((self._opacity, new_opacities), dim=0)
# print("new opa=", new_opacities.shape)
self.tilted.tilted.tau = optimizable_tensors_t["tau"]
self._scaling = optimizable_tensors["scaling"]
self._rotation = optimizable_tensors["rotation"]
self.xyz_gradient_accum = torch.zeros((self.get_xyz.shape[0], 1), device="cuda")
self.denom = torch.zeros((self.get_xyz.shape[0], 1), device="cuda")
self.max_radii2D = torch.zeros((self.get_xyz.shape[0]), device="cuda")
        # Here it just sets self.max_radii2D to all zeros, discarding the radii tracked so far
```
densification_postfix() sets self.max_radii2D to all zeros. Will this result in no gaussians ever being pruned for having too large a max_radii2D value? When I ran this code from another project, I printed the number of True values in the mask that prunes gaussians whose max_radii2D is above the threshold, and it was always 0 throughout the whole training run. I wonder if I made a mistake somewhere or if I misunderstand the pruning process.
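For context, here is roughly how I understand max_radii2D gets filled before densify_and_prune() is called — a simplified sketch of the densification block in the original train.py, not my exact code (the opt.* names are the usual config fields):
```python
# Simplified sketch of the densification bookkeeping in the original train.py (not verbatim).
# radii and visibility_filter come from the render() output of the current iteration.
if iteration < opt.densify_until_iter:
    # Track the largest screen-space radius seen per gaussian since the last reset.
    gaussians.max_radii2D[visibility_filter] = torch.max(
        gaussians.max_radii2D[visibility_filter], radii[visibility_filter])
    gaussians.add_densification_stats(viewspace_point_tensor, visibility_filter)

    if iteration > opt.densify_from_iter and iteration % opt.densification_interval == 0:
        size_threshold = 20 if iteration > opt.opacity_reset_interval else None
        # densify_and_clone()/densify_and_split() run first inside densify_and_prune()
        # and call densification_postfix(), which re-creates max_radii2D as zeros --
        # so the max_screen_size check afterwards only ever compares zeros.
        gaussians.densify_and_prune(opt.densify_grad_threshold, 0.005,
                                    scene.cameras_extent, size_threshold)
```
If the reset really is the cause, I guess one option would be to keep the radii tracked so far and only zero-initialize the entries for the newly added gaussians inside densification_postfix(), something like this (untested sketch):
```python
# Untested sketch: append zeros only for the new gaussians instead of
# re-creating the whole max_radii2D buffer as zeros.
self.max_radii2D = torch.cat(
    (self.max_radii2D, torch.zeros(new_xyz.shape[0], device="cuda")), dim=0)
```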
Looking forward to your reply! Thanks so much :) | closed | 2024-04-07T11:58:43Z | 2024-04-19T01:25:41Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/741 | [] | StarSapph1re | 2 |
JaidedAI/EasyOCR | pytorch | 547 | minor bug when migrating from 1.4 to 1.4.1. | As stated in the 'What's new' section, in Version 1.4.1:
> Extend rotation_info argument to support all possible angle (thanks abde0103, see PR)
I run the detection and recognition on this specific [image](https://drive.google.com/file/d/1eSymuZlO1J4wib68RQ84_2exTZv2lii_/view?usp=sharing). As you can see, in order to get all the text, both a 90-degree and a 270-degree rotation are needed. For both versions I run the line below:
```
result = reader.recognize(np.array(im.convert('L')), merged_list, free_list[0], rotation_info=[0, 90, 180, 270])
```
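For completeness, this is roughly how `im`, `merged_list` and `free_list` were obtained before that call — a sketch from memory, so the exact detection/grouping arguments may differ from what I actually ran:
```python
import numpy as np
from PIL import Image
import easyocr

reader = easyocr.Reader(['en'], gpu=False)   # the English model is enough for this sample
im = Image.open('ccbe_logo.png')             # the image linked above

# detect() returns the horizontal boxes and the free-form boxes separately
# (one list per input image, hence the [0] indexing used in recognize()).
horizontal_list, free_list = reader.detect(np.array(im.convert('L')))
merged_list = horizontal_list[0]
```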
The results I am getting from one version to another are totally different.
Version 1.4:
```
([[145, 0], [882, 0], [882, 101], [145, 101]], 'EUROPEAN LAWYERS', 0.9799557536771372)
([[911, 107], [1014, 107], [1014, 906], [911, 906]], 'BARREAUX EUROPEENS', 0.646282488832031)
([[4, 124], [108, 124], [108, 876], [4, 876]], 'EUROPEAN BARS', 0.6906238763445064)
([[185, 401], [841, 401], [841, 583], [185, 583]], 'CCBE', 0.9803938127024359)
([[150, 893], [881, 893], [881, 1010], [150, 1010]], 'AVOCATS EUROPEENS', 0.7165664042811335)
```
Version 1.4.1:
```
[
([[145, 0], [882, 0], [882, 101], [145, 101]], 'EUROPEAN LAWYERS', 0.9799557536771372),
([[911, 107], [1014, 107], [1014, 906], [911, 906]], 'SNJJdo&n: XnVaudva', 0.09454260329171453),
([[4, 124], [108, 124], [108, 876], [4, 876]], 'EUROPEAN BARS', 0.9971172987203192),
([[185, 401], [841, 401], [841, 583], [185, 583]], '7833', 0.4892914593219757),
([[150, 893], [881, 893], [881, 1010], [150, 1010]], 'SNJAdoHn] SLVDOAV', 0.06483533410458987)]
```
When using just `rotation_info = [270]` I get all the text right except 'BARREAUX EUROPEENS'. When using `rotation_info = [90]` I get all the text right except 'EUROPEAN BARS'. This means that when a single element other than 0 is given in rotation_info, the horizontal text is also recognized correctly. The problem is that I need both the 90 and the 270 rotation, and when rotation_info has two or more elements I only get the results for the last element, which was not an issue in version 1.4.
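A workaround I considered (just a sketch, I haven't tested it thoroughly) is to call recognize() once per angle and keep the highest-confidence string for each box:
```python
# Sketch of a possible workaround: run recognize() once per angle and keep
# the highest-confidence result for every detected box.
best = {}
for angle in [0, 90, 180, 270]:
    partial = reader.recognize(np.array(im.convert('L')), merged_list, free_list[0],
                               rotation_info=[angle])
    for box, text, conf in partial:
        key = tuple(tuple(point) for point in box)
        if key not in best or conf > best[key][2]:
            best[key] = (box, text, conf)
result = list(best.values())
```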
For now I am just going to use Version 1.4, but I wanted to report this minor bug.
| closed | 2021-09-23T08:17:13Z | 2021-10-06T08:41:29Z | https://github.com/JaidedAI/EasyOCR/issues/547 | [] | beecadox | 2 |
FujiwaraChoki/MoneyPrinterV2 | automation | 54 | Error: Generated Title is too long. Retrying... | I'm getting this error, please help me?

| closed | 2024-03-03T13:07:12Z | 2024-03-04T10:46:13Z | https://github.com/FujiwaraChoki/MoneyPrinterV2/issues/54 | [] | alexdo83 | 1 |
Crinibus/scraper | web-scraping | 76 | Use plotly or something more prettier to visualize data | closed | 2020-09-28T21:11:40Z | 2020-11-14T23:43:30Z | https://github.com/Crinibus/scraper/issues/76 | [
"enhancement"
] | Crinibus | 1 |