repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
mitmproxy/mitmproxy | python | 7,038 | Crash for Non-HTTP/3 QUIC Protocols | #### Problem Description
See https://github.com/mitmproxy/mitmproxy/issues/7025#issuecomment-2248667622 for a traceback. HTTP/3 over QUIC works, but raw QUIC doesn't work here.
#### Steps to reproduce the behavior:
1. `mitmdump --mode reverse:quic://http3.is@443`
2. `curl --http3-only https://http3.is -k -vvv --resolve http3.is:443:127.0.0.1`
#### System Information
```
Mitmproxy: 11.0.0.dev (+47, commit e8db473)
Python: 3.12.3
OpenSSL: OpenSSL 3.2.1 30 Jan 2024
Platform: macOS-14.5-arm64-arm-64bit
``` | open | 2024-07-24T18:57:52Z | 2024-08-19T10:55:49Z | https://github.com/mitmproxy/mitmproxy/issues/7038 | [
"kind/bug",
"area/protocols"
] | mhils | 1 |
yunjey/pytorch-tutorial | deep-learning | 103 | Train Process so slow | I tried to run train.py on Ubuntu 16.04 with CUDA 8.0, cuDNN, and Python 3.5 installed. When I check using `torch.cuda.is_available()`, the output is
> True
but when I add code to check whether CUDA is available, like this:
```python
if torch.cuda.is_available():
    encoder.cuda()
    decoder.cuda()
    print("cuda is available")
else:
    print("None of cuda cant use")
```
and the output is
> None of cuda cant use
I also added `os.environ['CUDA_VISIBLE_DEVICES'] = '1'` after importing all packages.
Is CUDA unavailable, or does this training actually just take a long time?
Thanks for your answer
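*(Editor's aside, not from the thread: a common cause of exactly this symptom is that `CUDA_VISIBLE_DEVICES='1'` selects the **second** GPU, so on a single-GPU machine it hides the only device, and `torch.cuda.is_available()` flips to `False` after that line runs. A minimal sketch:)*
```python
import os

# Must be set before torch initializes CUDA. '1' would select the
# second GPU; on a single-GPU machine that hides the only device
# (index 0), so use '0' instead.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.is_available())  # True again once a visible device exists
```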
| closed | 2018-03-22T06:43:31Z | 2018-11-05T16:01:06Z | https://github.com/yunjey/pytorch-tutorial/issues/103 | [] | khaerulumam42 | 2 |
akfamily/akshare | data-science | 5,638 | AKShare Interface Issue Report: stock_zh_a_spot_em now returns only 200 rows | The akshare version currently in use is 1.6.34.
The calling code is as follows:
```python
import akshare as ak

df = ak.stock_zh_a_spot_em()
```
Until last week this still returned the full set of 5,000+ rows, but when called today (2025-02-17) it only returns 200 rows.
| closed | 2025-02-17T02:07:57Z | 2025-03-11T02:15:01Z | https://github.com/akfamily/akshare/issues/5638 | [
"bug"
] | kenjs | 4 |
LibreTranslate/LibreTranslate | api | 492 | local hosted server API keys not working | I am not sure if this is a bug, but every API key generated via the ltmanage command locally on our self-hosted server is reported as an invalid key.
LibreTranslate is launched as a service on CentOS 7 with the --ssl --api-keys flags.
We even tried adding the `--api-keys-db-path=/home/admin/web/"webdomain"/public_html/LibreTranslate/db/api_keys.db` flag as well.
The API works fine without the `--api-keys` flag, but we can't apply limits since the locally generated API keys are not accepted.
| closed | 2023-09-15T21:34:46Z | 2023-09-20T03:42:18Z | https://github.com/LibreTranslate/LibreTranslate/issues/492 | [] | snagi12 | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,140 | An issue with editing and saving questionnaires | Hi
When a user is editing a questionnaire, there is the possibility to save progress at different levels: questionnaire, step, or field level.
When a field is edited and the user saves at the questionnaire level, the change does not get saved.
If the user saves at the field level, then the change is preserved.
This is somewhat confusing, as users expect the change to get saved, and the change is visible in the WebUI for the lifetime of the session.
An example: the Attachment checkbox in the TOS question of a multi-step questionnaire.
Is this a known limitation or maybe a bug?
| closed | 2022-01-06T18:46:40Z | 2022-01-19T18:18:26Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3140 | [] | aetdr | 2 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 835 | Code errors out when run (CUDA 11.7, torch 1.11.0, torchvision 0.12.0) | Errors occur when running both MobileNetV2 and RPN+ResNet-50; the error messages are as follows:


| open | 2024-09-24T06:11:32Z | 2024-09-24T06:11:32Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/835 | [] | kusenanhai | 0 |
dsdanielpark/Bard-API | nlp | 31 | list index out of range | well, maybe google decided to change things a bit...
File "/home/runner/Bard-AI-bot-template-bardapi/venv/lib/python3.10/site-packages/bardapi/core.py", line 86, in get_answer
resp_dict = json.loads(resp.content.splitlines()[3])[0][2]
IndexError: list index out of range | closed | 2023-05-23T06:38:19Z | 2023-05-23T07:43:17Z | https://github.com/dsdanielpark/Bard-API/issues/31 | [] | sublimeclemency | 10 |
ionelmc/pytest-benchmark | pytest | 160 | Add AWS S3 or HTTP storage | 👋 First, thanks for this awesome library 🙏
I was wondering whether adding AWS S3 or simple HTTP storage would be something worth implementing here.
I could start a PR if that's 👌
Thanks | open | 2020-02-05T15:54:31Z | 2025-03-06T11:38:46Z | https://github.com/ionelmc/pytest-benchmark/issues/160 | [] | vincentsarago | 6 |
Lightning-AI/pytorch-lightning | machine-learning | 20,016 | Gathering a list of strings from multiple devices using Fabric | ### Bug description
I have a list of strings on each device during multi-GPU evaluation, and I want to be able to collect them from all devices into a single list available on every device.
```
m_preds = fabric.all_gather(all_preds)
m_gt = fabric.all_gather(all_gt)
```
When I try the above code (`all_preds` and `all_gt` are lists of strings), `m_preds` and `m_gt` come back as the same lists as `all_preds` and `all_gt` on each respective device. Am I doing something wrong?
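*(Editor's note, hedged: `Fabric.all_gather` is documented for tensors and collections of tensors, so lists of strings pass through unchanged on each rank. One workaround sketch using the lower-level `torch.distributed.all_gather_object`, which pickles arbitrary Python objects — names here are illustrative:)*
```python
import torch.distributed as dist

def gather_strings(local_strings, world_size):
    # One slot per rank; all_gather_object pickles each rank's list into it.
    gathered = [None] * world_size
    dist.all_gather_object(gathered, local_strings)
    # Flatten the rank-ordered lists into a single list of strings.
    return [s for rank_list in gathered for s in rank_list]
```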
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_
cc @borda | open | 2024-06-26T01:28:46Z | 2024-06-27T18:44:48Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20016 | [
"question",
"docs"
] | Haran71 | 1 |
koxudaxi/datamodel-code-generator | pydantic | 1,884 | Strict types don't enforce field constraints | **Describe the bug**
When using `--strict-types` and `--field-constraints` (or `--use-annotated`) together, the field constraints are not added to the Field.
**To Reproduce**
Example schema:
```yaml
components:
schemas:
Timestamp:
type: integer
minimum: 1
maximum: 9999999999
```
Used commandline:
```
$ datamodel-codegen --input api.yaml --output models.py --input-file-type openapi --output-model-type pydantic_v2.BaseModel --field-constraints --strict-types int
```
**Expected behavior**
Generates the model
```python
class Timestamp(RootModel[StrictInt]):
root: StrictInt = Field(..., ge=1, le=9999999999)
```
**Actual behavior**
Generates the model
```python
class Timestamp(RootModel[StrictInt]):
root: StrictInt
```
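An editor's aside: in pydantic v2, the constrained strict type the report expects can equivalently be written with `Annotated`, which is the shape `--use-annotated` targets (a sketch, not actual generator output):
```python
from typing import Annotated

from pydantic import Field, RootModel, StrictInt

ConstrainedTimestamp = Annotated[StrictInt, Field(ge=1, le=9999999999)]

class Timestamp(RootModel[ConstrainedTimestamp]):
    root: ConstrainedTimestamp
```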
**Version:**
- OS: RedHat Linux
- Python version: Python 3.11.7
- datamodel-code-generator version: 0.25.3
**Additional context**
There's a few workarounds, but both are undesirable:
1. Omit the `--field-constraints` option, which uses `conint` for enforcing constraints. But it fails to pass MyPy with the error "Invalid type comment or annotation".
2. Avoid using strict field attributes and enforce Pydantic strict mode via `Timestamp.model_validate('123', strict=True)`. But then the generated model is vulnerable if we forget to pass `strict=True`. | open | 2024-03-13T21:04:12Z | 2024-03-14T15:03:02Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1884 | [] | colinbr96 | 0 |
holoviz/panel | plotly | 7,377 | DeckGL/Pydeck tooltips incorrect | 
This is a screenshot from the documentation, so I haven't bothered to put together an MRE. It is only applicable in the example when using PyDeck. I had noticed this in previous versions of Panel as well.
https://panel.holoviz.org/reference/panes/DeckGL.html | open | 2024-10-09T03:41:38Z | 2025-01-20T19:18:30Z | https://github.com/holoviz/panel/issues/7377 | [] | mayonnaisecolouredbenz7 | 1 |
K3D-tools/K3D-jupyter | jupyter | 41 | Add an abstraction for transforms | Here's a suggestion for a design change and/or enhancement.
I think it will make the framework much more flexible at a fairly low cost.
The association of model_matrix (formerly view_matrix) with each object is a pretty low level feature and leaves a lot of both power and responsibility up to the user in terms of placing objects in the scene.
Traditionally scene graphs are used to position objects and groups of objects relative to each other, but K3D uses a flat list of objects which does have its merits in terms of simplicity.
If, instead of having a `model_matrix` attribute, each object had a `transform` attribute, which is a reference to an instance of e.g. a `Transform` class (widget), then one transform can be shared by many objects, and changes to that transform can automatically be propagated to the relevant objects.
If the `Transform` class has a reference to a child transform, we can effectively get back the composability of a full scene graph while keeping the objects in a flat list.
Example python/pseudo-code:
```python
class Transform(Widget):
    # (trait declarations below are schematic)
    translation = Array(shape=(3,))   # translation vector
    rotation = Array(shape=(4,))      # quaternion
    scaling = Array(shape=(3,))       # per-axis scale factors
    custom = Array(shape=(4, 4))      # custom matrix, defaulting to identity
    child = Instance(Transform)       # optional child transform, may be None

    def compute_matrix(self):
        "Compose transformations, for use on python side (client side duplicates this)."
        T = compute_translation_matrix(self.translation)
        R = compute_rotation_matrix(self.rotation)
        S = compute_scaling_matrix(self.scaling)
        H = self.custom
        C = self.child.compute_matrix() if self.child else identity
        M = T * R * S * H * C
        return M
```
The translate/rotate/scale options are for more convenient construction of the matrix, the custom matrix is an escape hatch for full flexibility (could for example be used to set up a custom projection to a plane), and the child reference is what makes it possible to compose these transforms like a standard scene graph.
Then you can setup e.g. a shared rotation transform and separate translations of subfigures, then update the rotation and have the effects propagate:
```python
R = K3D.transform(rotate=[...])
T00 = K3D.transform(translate=[-10, -10, 0], child=R)
T10 = K3D.transform(translate=[+10, -10, 0], child=R)
T01 = K3D.transform(translate=[-10, +10, 0], child=R)
T11 = K3D.transform(translate=[+10, +10, 0], child=R)
plot += K3D.someobject(..., transform=T00)
plot += K3D.someobject(..., transform=T10)
plot += K3D.someobject(..., transform=T01)
plot += K3D.someobject(..., transform=T11)
R.rotate = [...]  # propagates to Txy and to objects in plot
```
What do you think? | closed | 2017-05-31T07:44:21Z | 2017-10-21T14:01:07Z | https://github.com/K3D-tools/K3D-jupyter/issues/41 | [] | martinal | 5 |
pydata/xarray | pandas | 9,246 | Writing complex numbers to netCDF | ### Is your feature request related to a problem?
Currently, the only supported method for saving complex numbers to netCDF is to use the `h5netcdf` engine with `invalid_netcdf=True`.
The latest release of `netCDF4`, 1.7.1, now has [support for complex numbers](https://unidata.github.io/netcdf4-python/#support-for-complex-numbers). Also, because it's built on the up-coming netCDF 4.9.3, it includes support for reading files written using `h5netcdf` with `invalid_netcdf=True` (and I _think_ for pretty much all other types that fall under this, including enums and bools).
This means that there are now more options for users wanting to read/write complex numbers, although this depends on the version of `netCDF4` in use and the flags used to open files.
Just to note that the complex number support in `netCDF4` is done via converting to/from a compound datatype (or a `complex` dimension for netCDF-3 files), and is completely standard netCDF -- it's basically just a wrapper in the Python API.
### Describe the solution you'd like
Either expose the `auto_complex` keyword for `netCDF4.Dataset` to the backend or set this automatically. This will either require using `netCDF4>=1.7.1` or detecting the version.
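For context, a minimal sketch of the `netCDF4` 1.7.1 API being referred to (usage assumed from the release notes linked above, not tested here):
```python
import numpy as np
import netCDF4

# auto_complex stores complex values via a compound type (or a
# "complex" dimension for netCDF-3), transparently to the caller.
with netCDF4.Dataset("complex.nc", "w", auto_complex=True) as ds:
    ds.createDimension("x", 3)
    var = ds.createVariable("z", np.complex128, ("x",))
    var[:] = np.array([1 + 2j, 3 + 4j, 5 + 6j])
```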
Additionally, `invalid_netcdf` will no longer be strictly necessary for bools and complex numbers for `h5netcdf`, although again this will be dependent on the version of netCDF used -- this is sort of outside of xarray though.
I'm happy to implement this as it's pretty trivial, but is it better to expose the keyword or to set it automatically?
### Describe alternatives you've considered
_No response_
### Additional context
There is currently an issue mixing `netCDF4==1.7.1` and `h5py==3.11.0` in the same python process, see: https://github.com/Unidata/netcdf4-python/issues/1343 and https://github.com/h5py/h5py/issues/2453. This might need to be resolved before xarray can require `netCDF4==1.7.1`
Any insights on how to resolve this would be much appreciated! | closed | 2024-07-15T10:05:01Z | 2024-10-02T06:10:58Z | https://github.com/pydata/xarray/issues/9246 | [
"enhancement"
] | ZedThree | 6 |
tensorflow/tensor2tensor | machine-learning | 1,461 | AttributeError: module 'tensorflow' has no attribute 'bytes' while running tensorflow serving | ### Description
AttributeError: module 'tensorflow' has no attribute 'bytes' while running tensorflow serving
### Environment information
```
OS: Ubuntu 18.04.2 LTS
$ pip freeze | grep tensor
mesh-tensorflow==0.0.4
tensor2tensor==1.12.0
tensorboard==1.12.0
tensorflow==1.12.0
tensorflow-gpu==1.12.0
tensorflow-hub==0.2.0
tensorflow-metadata==0.9.0
tensorflow-plot==0.2.0
tensorflow-probability==0.5.0
tensorflow-serving-api==1.12.0
$ python -V
Python 3.6.7
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
1/ create a Text2Class model
2/ start the model server
3/ run t2t-query-server
```
```
# Error logs:
2019-02-21 13:39:55.628012: I tensorflow_serving/model_servers/server.cc:286] Running gRPC ModelServer at 0.0.0.0:9000 ...
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/tf_inspect.py:75: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() or inspect.getfullargspec()
  return _inspect.getargspec(target)
[the above DeprecationWarning line pair appears 10 times in the original log]
/usr/lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
  return f(*args, **kwds)
[the above ImportWarning line pair appears 20 times in the original log]
INFO:tensorflow:Importing user module t2t_try_text_to_class from path /home/jdroo
Traceback (most recent call last):
File "/usr/local/bin/t2t-query-server", line 17, in <module>
tf.app.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/usr/local/bin/t2t-query-server", line 12, in main
query.main(argv)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/serving/query.py", line 89, in main
outputs = serving_utils.predict([inputs], problem, request_fn)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/serving/serving_utils.py", line 156, in predict
for input_ids in input_ids_list]
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/serving/serving_utils.py", line 156, in <listcomp>
for input_ids in input_ids_list]
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/serving/serving_utils.py", line 75, in _make_example
if ftype.dtype == tf.bytes:
AttributeError: module 'tensorflow' has no attribute 'bytes'
```
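An editor's note, hedged: `tf.bytes` has never been a public TensorFlow attribute; the dtype for byte strings is `tf.string`, so the failing comparison in `serving_utils._make_example` presumably needs to read:
```python
if ftype.dtype == tf.string:  # tf.bytes does not exist
```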
| open | 2019-02-21T12:44:33Z | 2019-02-21T13:41:11Z | https://github.com/tensorflow/tensor2tensor/issues/1461 | [] | josd | 0 |
manrajgrover/halo | jupyter | 59 | Bug: Output gets hidden on stop_and_persist in Jupyter notebooks |
## Description
Currently, in jupyter notebooks, the final `stop_and_persist` output is not persisted. I've opened an issue in ipywidget for understanding the issue a little better (https://github.com/jupyter-widgets/ipywidgets/issues/2072).
### System settings
- Operating System: Mac OS 10.13.1
- Terminal in use: NA
- Jupyter version: 4.4.0
- Python version: 2.7.14
- Halo version: 9f9ea376d43f0d43859f7ddeee68e34834fdbf19
- `pip freeze` output: NA
### Error
The output display is not persisted on the `stop_and_persist` call.
### Expected behaviour
The output should be persisted on the `stop_and_persist` call.
## Steps to recreate

## People to notify
@JungWinter @ManrajGrover | open | 2018-05-14T17:55:41Z | 2019-10-15T23:54:53Z | https://github.com/manrajgrover/halo/issues/59 | [
"bug",
"help wanted",
"up-for-grabs",
"hacktoberfest",
"jupyter"
] | manrajgrover | 5 |
noirbizarre/flask-restplus | flask | 351 | doc= to hide API doc does not support a True value | Flask==0.12.2
flask-restplus==0.10.1
Using the doc= option needs to be hard coded with a False value if we do not want to show the documentation.
The value True is not supported, as it throws an exception, see this code snippet:
```
api = Api(blueprint_api,
version=app.config['API_VERSION'],
title='User API',
description='User API',
doc=app.config['API_DOC_ENABLE'], # set to True throws an error!
)
```
| open | 2017-11-12T16:37:06Z | 2017-11-12T16:38:06Z | https://github.com/noirbizarre/flask-restplus/issues/351 | [] | ptrdvds | 0 |
JaidedAI/EasyOCR | deep-learning | 1,012 | Poor performance after training on custom data ℹ️ | Greetings! 👋🏼
I used your repository (and not the [deep-text-recognition-benchmark](https://github.com/clovaai/deep-text-recognition-benchmark)), so I think it would be better to ask this here.
### **I hope the issue does not get lost in the void 😢**
* Can you please give some insight into how the accuracy can be as good as `>90%` with a really low `validation loss` like `<0.01`, yet when using the trained model in production (`easyocr.Reader`) the extracted text is just nonsense and not even close to the actual text in the image? :confused:
* I saw comments on other issues like this suggesting that a dataset close to your domain would help; I used similar images for training, validation, and inference, but still no change. ❌ 🙅🏼‍♂️
* Moreover, if you just train one model for, let's say, 30000 iterations, take the `best_accuracy.pth` and train it again for another 30000 iterations, would it ultimately make the model better? :suspect:
* In conclusion, I would like to know all of your opinions (especially from the contributors of this repository, since they know best what they developed) on why the performance at inference is worse than what the training process shows. 🤝
* If it helps to provide you with anything, let me know. 🗒️
* Also note that before giving any image to the model at inference time, I do image processing to make sure the image is more readable for the model. 😏
Have a good one! | open | 2023-05-09T21:03:49Z | 2024-07-10T17:20:39Z | https://github.com/JaidedAI/EasyOCR/issues/1012 | [] | amir2628 | 7 |
wandb/wandb | tensorflow | 9,174 | [Feature]: Want option to revert to old wandb workspace settings | ### Description
The new workspace settings is a pop-up drawer on the right of the window, which
1) is quite laggy to open and close especially if many runs and curves are visualized.
2) occupies a large space previously available for visualizations.
3) needs 3 clicks (1 click on the settings button, then waiting for a slow sliding animation, then 1 click on the "line plots" card and 1 click changing the setting), compared to 2 clicks and no slow animation in previous versions.
4) the sub-panel settings use the same drawer space as the workspace settings, leading to much confusion and inconvenience when trying to set different values for different panels.
### Suggested Solution
| open | 2025-01-03T00:30:33Z | 2025-01-07T01:08:12Z | https://github.com/wandb/wandb/issues/9174 | [
"ty:feature",
"a:app"
] | eliphatfs | 1 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 683 | vit的某些模型参数没有提供下载 不知道是否可以增加模型权重链接 感谢~ | 比如。。。vit_base_patch16_224_in21k等?没有看到链接。。。啊 以及非常感谢这份代码,很好用! | closed | 2022-11-14T08:27:29Z | 2023-02-23T13:58:15Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/683 | [] | Arsmart1 | 2 |
graphdeco-inria/gaussian-splatting | computer-vision | 838 | Non uniform image as background | Hi !
How can I optimize the Gaussians in the render process while having a non-uniform background such as an image?
I mean, should I modify this part in the cuda_rasterizer?
https://github.com/graphdeco-inria/diff-gaussian-rasterization/blob/59f5f77e3ddbac3ed9db93ec2cfe99ed6c5d121d/cuda_rasterizer/backward.cu#L530-L534
Does anyone have an idea?
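For context (an editor's sketch of the compositing involved, not from the thread): the rasterizer blends the background behind the accumulated Gaussians as

$$C(p) \;=\; \sum_{i} c_i\,\alpha_i\,T_i(p) \;+\; T_{\text{final}}(p)\,B(p), \qquad T_i(p)=\prod_{j<i}\bigl(1-\alpha_j\bigr),$$

where the stock code uses a constant color $B$; with an image background, $B(p)$ would instead be sampled per pixel, both in the forward pass and in the linked backward lines.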
Thank you | open | 2024-06-05T10:25:50Z | 2024-06-27T14:51:19Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/838 | [] | MatteoMarengo | 4 |
microsoft/JARVIS | pytorch | 186 | After running "python models_server.py --config configs/config.default.yaml" I get the following error | ```
~/JARVIS/server$ python models_server.py --config configs/config.default.yaml
Fetching 27 files: 100%|█████████████████████████████████████████████████████████████| 27/27 [00:00<00:00, 28662.67it/s]
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
WARNING:datasets.builder:Found cached dataset cmu-arctic-xvectors (/home/bitnomad/.cache/huggingface/datasets/cmu-arctic-xvectors/default/0.0.1/a62fea1f9415e240301ea0042ffad2a3aadf4d1caa7f9a8d9512d631723e781f)
Killed
```
Not even chatgpt could help me with this. :/
Any suggestions? | open | 2023-04-26T22:13:55Z | 2023-05-03T21:00:17Z | https://github.com/microsoft/JARVIS/issues/186 | [] | IvanSchroll | 1 |
DistrictDataLabs/yellowbrick | scikit-learn | 701 | Matplotlib version (>=3.0.0) backends don't support Yellowbrick | **Describe the issue**
A clear and concise description of what the issue is.
<!-- If you have a question, note that you can email us via our listserve:
https://groups.google.com/forum/#!forum/yellowbrick -->
<!-- This line alerts the Yellowbrick maintainers, feel free to use this
@ address to alert us directly in follow up comments -->
@DistrictDataLabs/team-oz-maintainers
The error says Yellowbrick 0.9 requires matplotlib >=1.5.1,<3.0. So the version of matplotlib prior to updating works fine. I recommend users not update matplotlib, as version 3.0.2's backends don't support Yellowbrick.
"type: question"
] | dnabanita7 | 2 |
fastapi-users/fastapi-users | fastapi | 846 | Delete response header not setting cookie_samesite as defined in CookieTransport | For my Angular project I need to set the cookie's SameSite attribute to 'None' so that 'Set-Cookie' works for the Angular app.
I defined the cookie transport:
```
cookie_transport = CookieTransport(cookie_max_age=3600, cookie_samesite='None')
```
On a login call the browser is receiving the following cookie:
> fastapiusersauth=yofyP7VSA3YZ6seMk3YzX075majNRys4vV7N1kO83a0; HttpOnly; Max-Age=3600; Path=/; SameSite=None; Secure
and the cookie will be set.
On a logout call the browser is receiving the following cookie:
> fastapiusersauth=""; expires=Fri, 07 Jan 2022 11:38:53 GMT; Max-Age=0; Path=/; SameSite=lax
**Setting the cookie is blocked by the SameSite attribute lax!**
## To Reproduce
Steps to reproduce the behavior:
1. Set up the fastapi-users cookie backend
2. Set up an Angular project to call login and logout
3. Inspect the network calls
## Expected behavior
On the delete cookie endpoint return response headers with samesite attribute as defined in the cookie transport.
## Configuration
- Python version : 3.10
- FastAPI version : 0.70.1
- FastAPI Users version : 9.2.0
### FastAPI Users configuration
see [repo](https://github.com/Hazedd/fastapi-users-angular-example) for config.
Edit:
I have found the issue:
CookieTransport implements set and delete cookie via Starlette. In Starlette's set_cookie all possible options are present (secure and samesite, for this issue). Starlette's delete_cookie produces an expired cookie by calling set_cookie, but at that point only key, expires, max_age, path, and domain are passed through.
I'm not sure whether deleted cookies should have those parameters; if so, the bug is in the starlette package.
```python
def delete_cookie(self, key: str, path: str = "/", domain: str = None) -> None:
self.set_cookie(key, expires=0, max_age=0, path=path, domain=domain)
```
Fix for fastapi-users:
in the CookieTransport class, make sure get_logout_response uses Starlette's set_cookie method, setting the cookie value to '' and max_age to 0:
```python
async def get_logout_response(self, response: Response) -> Any:
response.set_cookie(
self.cookie_name,
'',
max_age=0,
path=self.cookie_path,
domain=self.cookie_domain,
secure=self.cookie_secure,
httponly=self.cookie_httponly,
samesite=self.cookie_samesite,
)
```
commit 113bcf0 PR: #848 | closed | 2022-01-07T11:45:35Z | 2022-01-10T11:59:47Z | https://github.com/fastapi-users/fastapi-users/issues/846 | [
"bug"
] | Hazedd | 0 |
widgetti/solara | flask | 825 | Vue components unable to find template vue files when using frozen/pyinstaller application on Windows | I have been using [PyInstaller](https://pyinstaller.org/en/stable/) to create an executable .exe file for my solara application, and that has, in general, worked very well. However, recently I started using the [Menu](https://github.com/widgetti/solara/blob/8ef0826818ae3e08026c0904c2acdec77aeef195/solara/lab/components/menu.py#L8) component and that caused the following issue when I was building the application to an executable using PyInstaller:
```log
Traceback (most recent call last):
File "reacton\core.py", line 388, in _create_widget
File "ipyvue\VueTemplateWidget.py", line 144, in __init__
File "solara\server\patch.py", line 250, in wrapper
File "ipyvue\Template.py", line 47, in get_template
FileNotFoundError: [Errno 2] No such file or directory: 'solara\\lab\\components\\menu.vue'
```
On the other hand, the solara application was working completely fine if I ran it as a normal python program from the terminal.
I believe I have traced the problem to the [component_vue](https://github.com/widgetti/solara/blob/8ef0826818ae3e08026c0904c2acdec77aeef195/solara/components/component_vue.py#L64) decorator, which in turn calls a [wrapper](https://github.com/widgetti/solara/blob/8ef0826818ae3e08026c0904c2acdec77aeef195/solara/components/component_vue.py#L48) function that uses `inspect.getfile` to get the path to the file where the decorated function is defined. It looks as follows:
```python
def _widget_vue(vue_path: str, vuetify=True) -> Callable[[Callable[P, None]], Type[v.VuetifyTemplate]]:
def decorator(func: Callable[P, None]):
class VuetifyWidgetSolara(v.VuetifyTemplate):
template_file = (inspect.getfile(func), vue_path)
class VueWidgetSolara(vue.VueTemplate):
template_file = (inspect.getfile(func), vue_path)
base_class = VuetifyWidgetSolara if vuetify else VueWidgetSolara
widget_class = _widget_from_signature("VueWidgetSolaraSub", base_class, func, "vue_")
return widget_class
return decorator
```
We can see here that the call `inspect.getfile(func)` is expected to provide the *full absolute path* to the file. When not using a frozen executable on Windows (or when using some other platform like Mac), this works as expected, but when using the frozen executable on Windows, `inspect.getfile(func)` will return a relative path, leading to the vue file not being found.
A simple solution (which I have tested already) is to surround the `inspect.getfile(func)` with `os.path.abspath`, as this will correctly resolve the path, no matter if the inspect module returns a relative path, or not.
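Rendered as code, the reporter's proposed fix looks like this (the helper name is illustrative):
```python
import inspect
import os

def resolve_template_file(func, vue_path):
    # inspect.getfile(func) may return a *relative* path inside a frozen
    # (PyInstaller) Windows executable; abspath resolves it either way.
    return (os.path.abspath(inspect.getfile(func)), vue_path)
```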
| closed | 2024-10-21T11:38:16Z | 2024-10-25T09:55:04Z | https://github.com/widgetti/solara/issues/825 | [] | suhren | 3 |
frappe/frappe | rest-api | 31,116 | Default Value: Problem with German |
## Description of the issue
We defined a Child Table Doctype with a Field of type Currency and a default value of 0.3.
But when language "German" is used, this field receives the default value 3.0.
## Context information (for bug reports)
Default Value 0.3 in DocType:
<img width="893" alt="Image" src="https://github.com/user-attachments/assets/b0a7d5c1-7ef3-4040-b70c-21a918402529" />
Wrong Default Value 3,0 in Form View:
<img width="703" alt="Image" src="https://github.com/user-attachments/assets/44033ac0-7824-475b-a6ec-eaf93349bb7f" />
**Output of `bench version`**
```
erpnext 15.50.0
frappe 15.54.1
```
## Steps to reproduce the issue
1. Create Child Table DocType with Currency field and Default Value = 0.3
2. Change Language to German
3. In Form View add a new row to Child Table.
### Observed result
Default value 3,0 is applied instead of 0,3.
### Expected result
Default value 0,3 is applied.
### Stacktrace / full error message
```
Does not occur
```
## Additional information
OS version / distribution, `Frappe` install method, etc.
debian bullseye, manual install | open | 2025-02-04T14:43:33Z | 2025-02-11T07:44:22Z | https://github.com/frappe/frappe/issues/31116 | [
"bug"
] | zongo811 | 1 |
serengil/deepface | deep-learning | 1,077 | AttributeError: 'NoneType' object has no attribute 'xy' | Simply, what I'm trying to run is this:
```
from deepface import DeepFace
result = DeepFace.verify(model_name='VGG-Face', detector_backend='yolov8', img1_path=img1, img2_path=img2)
```
and I'm facing this error:
`AttributeError: 'NoneType' object has no attribute 'xy'`
Why is it happening and how to solve it? | closed | 2024-03-09T09:24:26Z | 2024-03-10T09:10:54Z | https://github.com/serengil/deepface/issues/1077 | [
"bug"
] | freedom9393 | 7 |
keras-team/keras | data-science | 20,323 | EarlyStopping() does not return the model weights corresponding to the epoch with the best value of the monitored quantity | When using `early_stopping = keras.callbacks.EarlyStopping(monitor="loss", mode="auto", patience=50, verbose=1, restore_best_weights=True)` with `restore_best_weights=True` the restored weights do not correspond to the epoch that produced the best value of the monitored quantity but rather to the epoch after the best epoch.
For a demonstration of the aforementioned observation, consider the log:
1/1 ━━━━━━━━━━━━━━━━━━━━ 2s 2s/step - error_metric: 3.1894 - loss: -5.8280
Epoch 2/6000
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 55ms/step - error_metric: 3.8508 - loss: -1.1713
.
.
.
**omitted for clarity**
.
.
.
Epoch 22/6000
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 137ms/step - error_metric: 0.2297 - loss: -15.9473
Epoch 23/6000
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 161ms/step - error_metric: 0.8144 - loss: -15.3367
Epoch 24/6000
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 126ms/step - error_metric: 0.6884 - loss: -15.5262
**Epoch 25/6000
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 135ms/step - error_metric: 0.0093 - loss: -15.9999**
Epoch 26/6000
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 122ms/step - error_metric: 0.6519 - loss: -15.5750
Epoch 27/6000
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 99ms/step - error_metric: 0.6260 - loss: -15.6082
.
.
.
**omitted for clarity**
.
.
.
Epoch 75/6000
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 32ms/step - error_metric: 0.0595 - loss: -15.9965
Epoch 75: early stopping
Restoring model weights from the end of the best epoch: **25.**
Evidently, the epoch that produced the minimum loss is epoch 25 with measured loss = -15.9999; however, when using `model.predict(X)` (note: X is exactly the same validation set), the produced loss = -15.5750, which is the loss measured at epoch 26, the epoch after the best epoch.
It seems `keras.callbacks.EarlyStopping()` restores the model weights from the epoch following the best epoch!
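A minimal self-contained repro sketch (editor's addition, not from the reporter; model and data are arbitrary). Note that the loss logged per epoch is a running average over the batches within that epoch, so some discrepancy with a post-hoc `evaluate` is expected even when restoring works correctly:
```python
import numpy as np
import keras

rng = np.random.default_rng(0)
x, y = rng.random((64, 4)), rng.random((64, 1))

model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

early_stopping = keras.callbacks.EarlyStopping(
    monitor="loss", mode="auto", patience=5, verbose=1, restore_best_weights=True
)
history = model.fit(x, y, epochs=200, callbacks=[early_stopping], verbose=0)

print("best logged loss  :", min(history.history["loss"]))
print("loss after restore:", model.evaluate(x, y, verbose=0))
```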
| closed | 2024-10-04T18:38:23Z | 2024-11-11T04:07:50Z | https://github.com/keras-team/keras/issues/20323 | [
"keras-team-review-pending",
"type:Bug"
] | GeorgeOrfanidis | 11 |
horovod/horovod | deep-learning | 3,260 | Building Horovod 0.23.0 w HOROVOD_GPU=CUDA on a system with ROCM also installed-- Build tries to use ROCM too | **Environment:**
1. Framework: TensorFlow, PyTorch
2. Framework version: 2.7.0, 1.9.1
3. Horovod version: 0.23.0
4. MPI version: MPICH 3.4.2
5. CUDA version: 11.4.2
6. NCCL version: 2.11.4
7. Python version: 3.9.7
8. Spark / PySpark version: NA
9. Ray version: NA
10. OS and version: Ubuntu 20.04
11. GCC version: GCC 9.3.0
12. CMake version: 3.21.4
**Bug report:**
Trying to build Horovod w/ CUDA, on a system that also has ROCM 4.3.1 installed, and despite setting `HOROVOD_GPU=CUDA` it looks like the install is trying to build against ROCM too:
```
$> HOROVOD_WITH_TENSORFLOW=1 \
HOROVOD_WITH_PYTORCH=1 \
HOROVOD_WITH_MPI=1 \
HOROVOD_GPU_OPERATIONS=NCCL \
HOROVOD_BUILD_CUDA_CC_LIST=35,70,80 \
HOROVOD_BUILD_ARCH_FLAGS="-march=x86-64" \
HOROVOD_CUDA_HOME=/usr/local/cuda-11.4 \
HOROVOD_GPU=CUDA \
pip install horovod[tensorflow,pytorch]
...
[ 74%] Building CXX object horovod/torch/CMakeFiles/pytorch.dir/__/common/common.cc.o
cd /tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/build/temp.linux-x86_64-3.9/RelWithDebInfo/horovod/torch && /usr/bin/c++ -DEIGEN_MPL2_ONLY=1 -DHAVE_CUDA=1 -DHAVE_GLOO=1 -DHAVE_GPU=1 -DHAVE_MPI=1 -DHAVE_NCCL=1 -DHAVE_NVTX=1 -DHAVE_ROCM=1 -DHOROVOD_GPU_ALLGATHER=78 -DHOROVOD_GPU_ALLREDUCE=78 -DHOROVOD_GPU_ALLTOALL=78 -DHOROVOD_GPU_BROADCAST=78 -DTORCH_API_INCLUDE_EXTENSION_H=1 -DTORCH_VERSION=1009001000 -Dpytorch_EXPORTS -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/HTTPRequest/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/assert/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/config/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/core/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/detail/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/iterator/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/lockfree/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/mpl/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/parameter/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/predef/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/preprocessor/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/static_assert/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/type_traits/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/boost/utility/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/lbfgs/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/gloo -I/usr/local/miniconda3/envs/cuda/lib/python3.9/site-packages/tensorflow/include -I/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/third_party/flatbuffers/include -isystem /spack/opt/spack/linux-ubuntu20.04-x86_64/gcc-9.3.0/mpich-3.4.2-qfhacakdkcdmvjzstuukmphjr4khbdgn/include -isystem /usr/local/cuda-11.4/include -isystem /usr/local/miniconda3/envs/cuda/lib/python3.9/site-packages/torch/include -isystem /usr/local/miniconda3/envs/cuda/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /usr/local/miniconda3/envs/cuda/lib/python3.9/site-packages/torch/include/TH -isystem /usr/local/miniconda3/envs/cuda/lib/python3.9/site-packages/torch/include/THC -isystem /usr/local/miniconda3/envs/cuda/include/python3.9 No ROCm runtime is found, using ROCM_HOME='/opt/rocm-4.3.1' -MD -MT horovod/torch/CMakeFiles/pytorch.dir/__/common/common.cc.o -MF CMakeFiles/pytorch.dir/__/common/common.cc.o.d -o CMakeFiles/pytorch.dir/__/common/common.cc.o -c /tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/horovod/common/common.cc
c++: error: No: No such file or directory
c++: error: ROCm: No such file or directory
c++: error: runtime: No such file or directory
c++: error: is: No such file or directory
c++: error: found,: No such file or directory
c++: error: using: No such file or directory
c++: error: ROCM_HOME=/opt/rocm-4.3.1: No such file or directory
make[2]: *** [horovod/torch/CMakeFiles/pytorch.dir/build.make:76: horovod/torch/CMakeFiles/pytorch.dir/__/common/common.cc.o] Error 1
make[2]: Leaving directory '/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/build/temp.linux-x86_64-3.9/RelWithDebInfo'
make[1]: *** [CMakeFiles/Makefile2:446: horovod/torch/CMakeFiles/pytorch.dir/all] Error 2
make[1]: Leaving directory '/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/build/temp.linux-x86_64-3.9/RelWithDebInfo'
make: *** [Makefile:136: all] Error 2
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/setup.py", line 167, in <module>
setup(name='horovod',
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/tmp/pip-install-bs7lwyxo/horovod_9543edab589b4acfbafcd2a92c02c4c3/setup.py", line 100, in build_extensions
subprocess.check_call([cmake_bin, '--build', '.'] + cmake_build_args,
File "/usr/local/miniconda3/envs/cuda/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'RelWithDebInfo', '--', 'VERBOSE=1']' returned non-zero exit status 2.
----------------------------------------
ERROR: Failed building wheel for horovod
Running setup.py clean for horovod
Failed to build horovod
...
```
| open | 2021-11-05T15:42:58Z | 2021-11-16T12:24:57Z | https://github.com/horovod/horovod/issues/3260 | [
"bug"
] | eugeneswalker | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 447 | Pytorch synthesizer | Splitting this off from #370, which will remain for tensorflow2 conversion. I would prefer this route if we can get it to work. Asking for help from the community on this one.
One example of a pytorch-based tacotron is: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2
Another option is to manually convert the code and pretrained models which would be extremely time-consuming, but also an awesome learning experience. | closed | 2020-07-24T06:40:58Z | 2021-12-01T09:31:35Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/447 | [
"dependencies"
] | ghost | 74 |
mwaskom/seaborn | data-visualization | 3,536 | On the future deprecation of palette without hue | Hi,
Regarding https://seaborn.pydata.org/whatsnew/v0.12.0.html and the recent _"FutureWarning: Passing `palette` without assigning `hue` is deprecated"_ message: assigning the same variable to `x` (or `y`) and to `hue` applies a color palette.
However, by doing so, the figure objects become very stretched and much less readable. See the example below:
with some variable assigned to `x` and none to `hue`:

with the same variable assigned to `x` **and** `hue`:

What is the solution to keep the first behavior while providing both arguments, `hue` and `x`?
Here is the _seaborn_ function call used to produce such behavior:
```python
for kind in ["box", "violin", "bar", "strip"]:
g = sns.catplot(
data=df_sns[(df_sns["component"] == "weight")],
kind=kind,
x="ent_coef",
y="value",
# hue="ent_coef",
col="norm",
row="neural_network",
errorbar="sd",
palette=sns_palette,
sharey=False)
```
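An editor's sketch of the commonly suggested pattern (reusing the reporter's `df_sns` and `sns_palette`; `legend=False` is the key addition, and in seaborn >= 0.13 the default `dodge="auto"` detects that `hue` is redundant with `x`, so whether this removes the stretching depends on the version):
```python
g = sns.catplot(
    data=df_sns[df_sns["component"] == "weight"],
    kind="box",
    x="ent_coef",
    y="value",
    hue="ent_coef",   # same variable as x, silencing the FutureWarning
    legend=False,     # hue duplicates x, so drop the redundant legend
    col="norm",
    row="neural_network",
    palette=sns_palette,
    sharey=False,
)
```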
Thanks for your support,
| closed | 2023-10-21T12:55:08Z | 2023-10-28T12:47:37Z | https://github.com/mwaskom/seaborn/issues/3536 | [] | ReHoss | 3 |
sqlalchemy/alembic | sqlalchemy | 1,536 | Enable Private Vulnerability Reporting | ## Summary
In your repository, we have found a bug that may require your attention. We do not want to disclose the details publicly. Therefore, we request that you enable private vulnerability reporting in your repository.
## Sponsorship and Support
This work is done by the security researchers from OpenRefactory and is supported by the [Open Source Security Foundation (OpenSSF)](https://openssf.org/): [Project Alpha-Omega](https://alpha-omega.dev/). Alpha-Omega is a project partnering with open source software project maintainers to systematically find new, as-yet-undiscovered vulnerabilities in open source code - and get them fixed – to improve global software supply chain security.
The bug is found by running the Intelligent Code Repair (iCR) tool by OpenRefactory and then manually triaging the results. | closed | 2024-09-12T09:32:58Z | 2024-09-12T10:15:53Z | https://github.com/sqlalchemy/alembic/issues/1536 | [] | rokydas-OR | 2 |
2noise/ChatTTS | python | 330 | What is the difference between [lbreak] and [uv_break]? | As the title asks.
"stale"
] | Walle1493 | 3 |
dfki-ric/pytransform3d | matplotlib | 220 | Remove dependency on nosetests | Will be replaced by pytest in the future
```
pytransform3d/test/test_urdf.py
15:from nose.tools import assert_raises, assert_equal, assert_true, assert_in
16:from nose import SkipTest
pytransform3d/test/test_transformations.py
33:from nose.tools import (assert_equal, assert_almost_equal,
pytransform3d/test/test_rotations.py
4:from nose.tools import (assert_almost_equal, assert_equal, assert_true,
pytransform3d/test/test_transform_manager.py
15:from nose.tools import (assert_raises_regexp, assert_equal, assert_true,
17:from nose import SkipTest
pytransform3d/test/test_coordinates.py
3:from nose.tools import assert_less_equal
pytransform3d/test/test_camera.py
6:from nose.tools import (assert_raises_regexp, assert_false, assert_in,
pytransform3d/test/test_batch_rotations.py
4:from nose.tools import (assert_almost_equal, assert_raises_regexp,
pytransform3d/test/test_plot_utils.py
2:from nose import SkipTest
11:from nose.tools import assert_equal, assert_less, assert_greater_equal
pytransform3d/transformations/_testing.py
109: This function needs the dependency nose.
149: from nose.tools import assert_almost_equal
``` | closed | 2023-02-07T22:08:46Z | 2023-02-09T15:55:05Z | https://github.com/dfki-ric/pytransform3d/issues/220 | [] | AlexanderFabisch | 0 |
mithi/hexapod-robot-simulator | plotly | 85 | Refactoring Suggestions | Replace:
https://github.com/mithi/hexapod-robot-simulator/blob/808534d769476342ae56e88ea865c33b36c89490/index.py#L14
With:
```python
div_header = html.Div(
[
html.A(html.H6("👾"), href=URL_REPO, target="_blank", style=icon_link_style),
html.A(html.H6("☕"), href=URL_KOFI, target="_blank", style=icon_link_style),
dcc.Link(html.H6("●"), href="/", style=icon_link_style),
dcc.Link(html.H6("●"), href="/inverse-kinematics", style=icon_link_style),
dcc.Link(html.H6("●"), href="/kinematics", style=icon_link_style),
],
style={"display": "flex", "flex-direction": "row"}
)
```
So that the page does not refresh.
"feature request",
"good first issue",
"low hanging fruit",
"code quality"
] | mithi | 4 |
prkumar/uplink | rest-api | 2 | urlparse.urljoin restricts URL patterns | The [documentation for urljoin](https://docs.python.org/2/library/urlparse.html#urlparse.urljoin) reveals that it has some very strange behavior. Calling `build(... , base_url="https://example.com")` and also having a method `@get("//unrelated.com/users")` means that the method would execute on "https://unrelated.com/users".
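For reference, this is standard `urljoin` semantics, easy to confirm directly (a quick illustration, not from the original issue):
```python
from urllib.parse import urljoin  # urlparse.urljoin on Python 2

# A scheme-relative ("network-path") reference replaces the host entirely:
print(urljoin("https://example.com", "//unrelated.com/users"))
# -> https://unrelated.com/users

# An absolute path discards the base URL's own path segments:
print(urljoin("https://example.com/feature", "/api.php?id=1"))
# -> https://example.com/api.php?id=1
```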
Is this the intended behavior? I personally think it is very confusing. The following code snippet does not work as one might expect:
```python
request = utils.Request("METHOD", "/api.php?id={id}", {}, None)
uplink_builder.client = http_client_mock
uplink_builder.base_url = "https://example.com/feature"
request_preparer = builder.RequestPreparer(
uplink_builder, request_definition
)
return_value = request_preparer.prepare_request(request)
assert return_value[0] == "METHOD"
assert return_value[1] == "https://example.com/feature/api.php?id={id}"
assert return_value[2] == {}
```
The above code snippet fails, as `return_value[1]` is `"https://example.com/api.php?id={id}"` — the `/feature` segment has been trimmed.
Should we consider using a simple `path.join()` on URLs? Or should we allow for the complex, yet fairly convoluted behavior? If we want to allow for convoluted behavior (and continue using urljoin), I think it's worth raising an exception if `base_url` is not a true "base" (since its child paths will be trimmed anyway) | closed | 2017-10-20T04:13:56Z | 2017-10-20T05:29:16Z | https://github.com/prkumar/uplink/issues/2 | [] | brandonio21 | 4 |
gradio-app/gradio | python | 10,204 | SSR does not work with `auth` enabled | ### Describe the bug
Failed to start while auth and ssr on
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def greet(name):
return f"Hello, {name}!"
# Define the username and password for authentication
auth = ('username', 'password')
# Create the Gradio interface
iface = gr.Interface(fn=greet, inputs="text", outputs="text")
# Launch the interface with SSR enabled
iface.launch(server_name="0.0.0.0", ssr_mode=True, auth=auth)
```
### Screenshot
_No response_
### Logs
```shell
Error: Error: Login credentials are required to access this space.
at Client._resolve_config (file:///project/.venv/lib/python3.10/site-packages/gradio/templates/node/build/server/chunks/2-9Q2E4iJ-.js:39575:15)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Client.init (file:///project/.venv/lib/python3.10/site-packages/gradio/templates/node/build/server/chunks/2-9Q2E4iJ-.js:39491:5)
at async Client.connect (file:///project/.venv/lib/python3.10/site-packages/gradio/templates/node/build/server/chunks/2-9Q2E4iJ-.js:39530:5)
at async load$1 (file:///project/.venv/lib/python3.10/site-packages/gradio/templates/node/build/server/chunks/2-9Q2E4iJ-.js:41454:15)
at async load_data (file:///project/.venv/lib/python3.10/site-packages/gradio/templates/node/build/server/index.js:1178:18)
at async file:///project/.venv/lib/python3.10/site-packages/gradio/templates/node/build/server/index.js:2618:18
> raise ValueError(
"When localhost is not accessible, a shareable link must be created. Please set share=True or check your proxy settings to allow access to localhost."
)
E ValueError: When localhost is not accessible, a shareable link must be created. Please set share=True or check your proxy settings to allow access to localhost.
```
### System Info
```shell
❯ gradio environment
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.8.0
gradio_client version: 1.5.1
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.7.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.4.0
gradio-client==1.5.1 is not installed.
httpx: 0.28.1
huggingface-hub: 0.26.5
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.2.0
orjson: 3.10.12
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart: 0.0.19
pyyaml: 6.0.2
ruff: 0.8.3
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.38.6
tomlkit: 0.12.0
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.28.1
huggingface-hub: 0.26.5
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.1
```
### Severity
Blocking usage of gradio | open | 2024-12-16T02:49:53Z | 2024-12-17T17:53:52Z | https://github.com/gradio-app/gradio/issues/10204 | [
"bug",
"SSR"
] | laoshancun | 0 |
aminalaee/sqladmin | fastapi | 361 | Two relations on model (foreign key), but one field on model | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
There are two relations to the users table (seller, buyer), but a form is generated where there is only one users field.
```python
class DealModel(ormar.Model, BaseMixin):
"""DealModel."""
class Meta(ormar.ModelMeta):
"""Meta."""
tablename: str = "deals"
database: databases.Database = database
metadata: sqlalchemy.MetaData = metadata
stage: DealStage = ormar.Enum(
enum_class=DealStage,
default=DealStage.created,
default_server=DealStage.created,
nullable=False,
)
buyer: UserModel = ormar.ForeignKey(UserModel, nullable=False, related_name="buys")
seller: UserModel = ormar.ForeignKey(UserModel, nullable=False, related_name="sells")
pair: CurrencyPairModel = ormar.ForeignKey(CurrencyPairModel, nullable=False, related_name="deals")
session: SessionModel = ormar.ForeignKey(SessionModel, nullable=False, related_name="deals")
count: int = ormar.Integer(minimum=1, nullable=False)
rate: float = ormar.Float(minimum=0.01, nullable=False)
```
### Steps to reproduce the bug
_No response_
### Expected behavior
I expect two fields, Seller and Buyer.
### Actual behavior
_No response_
### Debugging material
No
### Environment
- Ubuntu 20.04
- Python 3.10
### Additional context
_No response_ | closed | 2022-10-19T13:53:22Z | 2022-11-08T10:59:28Z | https://github.com/aminalaee/sqladmin/issues/361 | [] | Egnod | 2 |
strawberry-graphql/strawberry | graphql | 3,658 | Support chunked transfers (file upload) | ## Describe the Bug
Strawberry-django does not seem to support chunked transfers when using multipart uploads.
Thanks to @enisdanjo (https://github.com/ardatan/graphql-mesh/issues/7701):
You can reproduce the issue by appending `Transfer-Encoding: chunked` to the upload request:
```shell
curl localhost:8000 \
  -H 'Transfer-Encoding: chunked' \
  -F operations='{ "query": "mutation ($file: Upload!) { uploadFile(file: $file) { id } }", "variables": { "file": null } }' \
  -F map='{ "0": ["variables.file"] }' \
  -F 0=@hello.txt
```
This is problematic when using Strawberry after a proxy or a gateway that cannot calculate the `content-length`. | open | 2024-10-06T16:27:59Z | 2025-03-20T15:56:53Z | https://github.com/strawberry-graphql/strawberry/issues/3658 | [
"bug"
] | MaximeDetne | 3 |
nalepae/pandarallel | pandas | 267 | Pandarallel is failing with SSLContext error | I have upgraded openai from 0.28.0 to openai==1.23.5.
My parallel calls to OpenAI with Pandarallel were working well with the openai==0.28.0 version, but they fail with the error below after upgrading to openai==1.23.5:
File "/app/imssumm/Summ_parallel.py", line 239, in call_iterative_summ_logic
prompt_df_1["result"] = prompt_df_1.parallel_apply(lambda x: self.summarize(x["to_be_summarized"],x["token_len"]), axis=1)
File "/usr/local/lib/python3.8/site-packages/pandarallel/core.py", line 265, in closure
dilled_user_defined_function = dill.dumps(user_defined_function)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 263, in dumps
dump(obj, file, protocol, byref, fmode, recurse, **kwds)#, strictio)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 235, in dump
Pickler(file, protocol, **_kwds).dump(obj)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/usr/lib64/python3.8/pickle.py", line 487, in dump
self.save(obj)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1824, in save_function
_save_with_postproc(pickler, (_create_function, (
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1089, in _save_with_postproc
pickler.save_reduce(*reduction)
File "/usr/lib64/python3.8/pickle.py", line 692, in save_reduce
save(args)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib64/python3.8/pickle.py", line 886, in save_tuple
save(element)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib64/python3.8/pickle.py", line 717, in save_reduce
save(state)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib64/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib64/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib64/python3.8/pickle.py", line 717, in save_reduce
save(state)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib64/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib64/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib64/python3.8/pickle.py", line 717, in save_reduce
save(state)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib64/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib64/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib64/python3.8/pickle.py", line 717, in save_reduce
save(state)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib64/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib64/python3.8/pickle.py", line 1002, in _batch_setitems
save(v)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib64/python3.8/pickle.py", line 717, in save_reduce
save(state)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib64/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib64/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/lib64/python3.8/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'SSLContext' object
```
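If it helps while this is open: one workaround for this class of dill/pickle errors (a sketch — names like `prompt_df_1` mirror the traceback, and I haven't verified it against this exact setup) is to construct the OpenAI client inside the applied function, so the closure never captures an `SSLContext`:

```python
from openai import OpenAI
from pandarallel import pandarallel

pandarallel.initialize()

def summarize_row(row):
    # Build the client per worker call so dill never has to pickle it.
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": row["to_be_summarized"]}],
    )
    return resp.choices[0].message.content

prompt_df_1["result"] = prompt_df_1.parallel_apply(summarize_row, axis=1)
```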
| open | 2024-04-26T12:15:42Z | 2024-04-29T06:01:00Z | https://github.com/nalepae/pandarallel/issues/267 | [] | madhvs | 4 |
netbox-community/netbox | django | 17,695 | Rack face not clear when site is cleared | ### Deployment Type
Self-hosted
### NetBox Version
v4.1.3 Community
### Python Version
3.12
### Steps to Reproduce
1. Create a device.
2. Assign it to a site
3. Assign it to a location
4. Assign it to a rack
5. Set rack face
6. Set Position
7. Save
8. Edit device again
9. Press x next to Site (note: all fields clear except rack face)
10. Press save
11. Error prompt "Rack face: Cannot select a rack face without assigning a rack"
12. Press x for Rack face.
13. Press save as a workaround
### Expected Behavior
All fields associated with the location should auto-clear when the parent item is removed, i.e. when Site is cleared, Rack face should also auto-clear.
### Observed Behavior
When clearing the Site field with the x, Rack face doesn't clear.
"type: bug",
"status: needs owner",
"severity: low",
"netbox"
] | Theyouth1 | 1 |
SYSTRAN/faster-whisper | deep-learning | 81 | [mp3float] Header missing | Hi first of all thank you for this great repo!
## Issue
Transcribing some mp3 files, I got the following error: "[mp3float] Header missing".
## The cause:
"faster_whisper/audio.py", line 42, in decode_audio", I saw faster whisper uses PyAv instead of the FFmpeg package whisper uses, I'm a total noob when it comes to FFmpeg, but I'd be more than happy to help if I can.
## How I got it working:
I'm using the following Load audio from whisper, to first take the mp3 file as input:
```python
import ffmpeg
import numpy as np

SAMPLE_RATE = 16000  # whisper's default sample rate

def load_audio(file: str, sr: int = SAMPLE_RATE):
"""
Open an audio file and read as mono waveform, resampling as necessary
Parameters
----------
file: str
The audio file to open
sr: int
The sample rate to resample the audio if necessary
Returns
-------
A NumPy array containing the audio waveform, in float32 dtype.
"""
try:
# This launches a subprocess to decode audio while down-mixing and resampling as necessary.
# Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
out, _ = (
ffmpeg.input(file, threads=0)
.output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sr)
.run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
)
except ffmpeg.Error as e:
raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}") from e
return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0
```
I then pass the returned np array to faster-whisper:
```python
audio = load_audio(file)
segments, info = model.transcribe(audio, **options_dict)
```
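For comparison, here is a rough PyAV-only decode sketch (untested — it assumes PyAV ≥ 9, where `resample()` returns a list of frames, and I don't know whether it dodges the mp3 header problem):

```python
import av
import numpy as np

def load_audio_pyav(file: str, sr: int = 16000):
    resampler = av.AudioResampler(format="s16", layout="mono", rate=sr)
    raw = b""
    with av.open(file, metadata_errors="ignore") as container:
        for frame in container.decode(audio=0):
            frame.pts = None  # let the resampler assign timestamps
            for out in resampler.resample(frame):
                raw += out.to_ndarray().tobytes()
    return np.frombuffer(raw, np.int16).astype(np.float32) / 32768.0
```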
## Fix?
Using the FFmpeg-based `load_audio` above solves my problem, but I'd like to know if there are tweaks we can do with PyAV instead, so I don't have to use two different packages. | closed | 2023-03-26T10:10:18Z | 2023-03-27T08:19:24Z | https://github.com/SYSTRAN/faster-whisper/issues/81 | [] | Hannes1 | 2 |
jpadilla/django-rest-framework-jwt | django | 488 | DeprecationWarning: The following fields will be removed in the future: `email` and `user_id`. | Hello,
I am using `email` instead of `username` for authentication, so my custom user model does not have a username field. My code is based on [this](https://www.fomfus.com/articles/how-to-use-email-as-username-for-django-authentication-removing-the-username).
Now I am setting up jwt authentication and I receive the following warning:
```
rest_framework_jwt/utils.py:39: DeprecationWarning: The following fields will be removed in the future: `email` and `user_id`.
DeprecationWarning
```
Is there something that I can do to overcome this (other than using `username`, which will not be an option)?
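In case it's useful context, my current idea (untested — the setting and handler names follow the drf-jwt convention) is to point `JWT_PAYLOAD_HANDLER` at a custom handler that never adds the deprecated `email`/`user_id` fields:

```python
# settings.py
JWT_AUTH = {
    'JWT_PAYLOAD_HANDLER': 'myapp.utils.jwt_payload_handler',  # "myapp" is a placeholder
}

# myapp/utils.py
from datetime import datetime
from rest_framework_jwt.settings import api_settings

def jwt_payload_handler(user):
    # get_username() returns the USERNAME_FIELD, i.e. the email in my setup.
    return {
        'username': user.get_username(),
        'exp': datetime.utcnow() + api_settings.JWT_EXPIRATION_DELTA,
    }
```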
thanks! | open | 2019-08-06T19:16:36Z | 2019-08-08T13:08:29Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/488 | [] | ajendrex | 1 |
pandas-dev/pandas | data-science | 60,794 | BUG: Bug in mask method when handling pd.NA with Int64Dtype | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
from pandas import Series
from pandas import Int64Dtype
series = Series([None, 1, 2, None, 3, 4, None], dtype=Int64Dtype())
result = series.mask(series <= 2, -99)
print(result)
```
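A possible workaround (a sketch, but it should keep the `pd.NA` positions untouched) is to make the condition explicitly `False` where the comparison itself is NA:

```python
result = series.mask((series <= 2).fillna(False), -99)
```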
### Issue Description
I am encountering an issue with the mask method in pandas when it is used with a Series of type Int64Dtype. Specifically, when trying to mask pd.NA values, they are being replaced, which is not the expected behavior. I expected the pd.NA values to remain unchanged, but they are being incorrectly filled.
### Expected Behavior
Series([None, -99, -99, None, 3, 4, None], dtype=Int64Dtype())
### Installed Versions
python: 3.11.1
pandas: 2.1.3
| closed | 2025-01-26T11:53:00Z | 2025-01-27T21:36:49Z | https://github.com/pandas-dev/pandas/issues/60794 | [
"Bug",
"Duplicate Report",
"NA - MaskedArrays"
] | IceyDuan | 2 |
kizniche/Mycodo | automation | 805 | PID Controller Output: Min On Duration, Max On Duration, Min Off Duration not saved | Unable to save PID controller output attributes (Min On Duration, Max On Duration, Min Off Duration).
### Versions:
- Mycodo Version: 8.6.4
- Raspberry Pi Version: 3B
- Raspbian OS Version: Buster Lite
### Reproducibility
1. Deactivate existing PID controller
2. Edit controller Output Min, Max durations
3. Save
4. View saved form and observe system behavior
5. Durations not saved.
### Expected behavior
PID controller output durations should be saved.
### Daemon Log
```
Jul 28 22:51:30 martha gunicorn[465]: 2020-07-28 22:51:30,771 Exception during reset or similar
Jul 28 22:51:30 martha gunicorn[465]: Traceback (most recent call last):
Jul 28 22:51:30 martha gunicorn[465]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 697, in _finalize_fairy
Jul 28 22:51:30 martha gunicorn[465]: fairy._reset(pool)
Jul 28 22:51:30 martha gunicorn[465]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 893, in _reset
Jul 28 22:51:30 martha gunicorn[465]: pool._dialect.do_rollback(self)
Jul 28 22:51:30 martha gunicorn[465]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 543, in do_rollback
Jul 28 22:51:30 martha gunicorn[465]: dbapi_connection.rollback()
Jul 28 22:51:30 martha gunicorn[465]: sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 1949430880 and this is thread id 1938551904.
Jul 28 22:51:30 martha gunicorn[465]: 2020-07-28 22:51:30,776 Exception closing connection <sqlite3.Connection object at 0x730a4920>
Jul 28 22:51:30 martha gunicorn[465]: Traceback (most recent call last):
Jul 28 22:51:30 martha gunicorn[465]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 697, in _finalize_fairy
Jul 28 22:51:30 martha gunicorn[465]: fairy._reset(pool)
Jul 28 22:51:30 martha gunicorn[465]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 893, in _reset
Jul 28 22:51:30 martha gunicorn[465]: pool._dialect.do_rollback(self)
Jul 28 22:51:30 martha gunicorn[465]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 543, in do_rollback
Jul 28 22:51:30 martha gunicorn[465]: dbapi_connection.rollback()
Jul 28 22:51:30 martha gunicorn[465]: sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 1949430880 and this is thread id 1938551904.
Jul 28 22:51:30 martha gunicorn[465]: During handling of the above exception, another exception occurred:
Jul 28 22:51:30 martha gunicorn[465]: Traceback (most recent call last):
Jul 28 22:51:30 martha gunicorn[465]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 270, in _close_connection
Jul 28 22:51:30 martha gunicorn[465]: self._dialect.do_close(connection)
Jul 28 22:51:30 martha gunicorn[465]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 549, in do_close
Jul 28 22:51:30 martha gunicorn[465]: dbapi_connection.close()
Jul 28 22:51:30 martha gunicorn[465]: sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 1949430880 and this is thread id 1938551904.
``` | closed | 2020-07-28T21:53:24Z | 2020-08-10T22:57:27Z | https://github.com/kizniche/Mycodo/issues/805 | [] | emuehlstein | 11 |
kensho-technologies/graphql-compiler | graphql | 781 | Dealing with vertex types and edge types with invalid names | When reflecting the GraphQL schema from a database, we ignore properties with invalid names because GraphQL core will raise an error if any of the GraphQL types has an invalid name. However, do not ignore vertex types and edge types with invalid names.
It would be ideal if we could implement a fix that would solve this issue for all the different possible backends. Ideally, in a pre-processing step that happens when the `SchemaGraph` is being built. However, the `SchemaElement` objects are all immutable and there are connections between them.
So removing `SchemaElement` objects in a pre-processing step might require a larger refactor.
In the future, we might also want to implement a function that attempts to sanitize the names by removing invalid characters instead of ignoring them.
An edge type/vertex type name is invalid if it doesn't match
`graphql.utilities.assert_valid_name.re_name`
| open | 2020-03-24T16:58:30Z | 2020-03-24T16:58:30Z | https://github.com/kensho-technologies/graphql-compiler/issues/781 | [] | pmantica1 | 0 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 119 | this project can change Black-and-white picture to Color photograph? | OK, I can't find it from the code; I just want to know....
Can this project turn a black-and-white picture into a color photograph? | closed | 2021-03-03T06:48:14Z | 2021-03-03T06:51:16Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/119 | [] | cuiweipeng | 1 |
vitalik/django-ninja | rest-api | 1,190 | The CSRF header name needs to be configurable | We use a different header name than `X-CSRFToken`, which causes testing via swagger-ui to fail against our API.
Either use the Django [setting](https://docs.djangoproject.com/en/5.0/ref/settings/#csrf-header-name) or provide a way to override it.
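For illustration, a small helper that derives the JavaScript header name from Django's setting could look like this (a sketch — per the Django docs, `CSRF_HEADER_NAME` is stored in WSGI form, e.g. `HTTP_X_CSRFTOKEN`):

```python
from django.conf import settings

def csrf_header_name() -> str:
    # "HTTP_X_CSRFTOKEN" -> "X-CSRFTOKEN"
    name = settings.CSRF_HEADER_NAME
    if name.startswith("HTTP_"):
        name = name[len("HTTP_"):]
    return name.replace("_", "-")
```

The swagger template could then interpolate `csrf_header_name()` instead of the hardcoded string.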
Code in question is
https://github.com/vitalik/django-ninja/blob/c6d44b62a180fcf8ddfd73d67e0274a77b9d30ae/ninja/templates/ninja/swagger_cdn.html#L28 | open | 2024-06-12T19:49:10Z | 2024-06-14T16:28:10Z | https://github.com/vitalik/django-ninja/issues/1190 | [] | vegaed | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 708 | Unpaired training noobie question | I apologize for such a simple question. I am very new to pix2pix, and still discovering it's power.
My initial target is following.
There is specific style of photos turned to paintings that a CoinTelegraph.com is using

or

I want to train pix2pix with 1000s of such paintings to turn into a painting like that ones from up.
Because literally I am not able to create a-b pictures and then train, as I understood I would need unpaired training.
Could somebody from you hint me to the script example or doc where I can find ready examples of such a training?
As I understood I should have like 1000 images in a specific folder, then via command line train the model on them , and then whenever I will put in a photo it will be translated into painting similar to this one. Am I right?
| closed | 2019-07-17T12:59:27Z | 2019-07-19T13:18:24Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/708 | [] | gelinger777 | 4 |
nonebot/nonebot2 | fastapi | 2,417 | Plugin: BlueArchive Title Generator | ### PyPI project name
nonebot-plugin-batitle
### Plugin import package name
nonebot_plugin_batitle
### Tags
[{"label" : "碧蓝档案", "color" : "#00D7FB"}]
### Plugin configuration options
_No response_ | closed | 2023-10-13T17:17:27Z | 2023-10-14T12:32:21Z | https://github.com/nonebot/nonebot2/issues/2417 | [
"Plugin"
] | MerCuJerry | 4 |
huggingface/datasets | pytorch | 7,226 | Add R as a How to use from the Polars (R) Library as an option | ### Feature request
The boilerplate code to access a dataset via the Hugging Face file system is very useful. Please add:
## Add Polars (R) option
The equivalent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has the Hugging Face functionality as well.
```r
library(polars)
df <- pl$read_parquet("hf://datasets/SALURBAL/core__admin_cube_public/core__admin_cube_public.parquet")
```
## Polars (python) option

## Libraries Currently

### Motivation
There are many data/analysis/research/statistics teams (particularly in academia and pharma) that use R as the default language. R has great integration with most of the newer data techs (arrow, parquet, polars) and having this included could really help in bringing this community into the Hugging Face ecosystem.
**This is a small/low-hanging-fruit front end change but would make a big impact expanding the community**
### Your contribution
I am not sure which repository this should be in, but I have experience in R, Python and JS and am happy to submit a PR in the appropriate repository. | open | 2024-10-14T19:56:07Z | 2024-10-14T19:57:13Z | https://github.com/huggingface/datasets/issues/7226 | [
"enhancement"
] | ran-codes | 0 |
kubeflow/katib | scikit-learn | 2,381 | Fix Katib UI Tests | Currently, Katib UI tests are broken.
Check this PR for more info: https://github.com/kubeflow/katib/pull/2313.
We should update the `cypress` version to fix it.
/good-first-issue
/area testing
| closed | 2024-07-12T21:53:29Z | 2024-12-03T14:03:00Z | https://github.com/kubeflow/katib/issues/2381 | [
"help wanted",
"good first issue",
"area/testing",
"kind/bug"
] | andreyvelich | 5 |
thp/urlwatch | automation | 101 | Use sensible-editor | Hello,
It is unusual to rely only on environment variables (standard, but not common) to select an editor. There is a standard `sensible-editor` on Linux for that, and the `editor` executable is very common too.
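A sketch of the resolution order I have in mind (the function name and fallback chain are just suggestions):

```python
import os
import shutil

def find_editor() -> str:
    # Honor the standard environment variables first...
    for var in ("VISUAL", "EDITOR"):
        if os.environ.get(var):
            return os.environ[var]
    # ...then fall back to the system-provided shims.
    for cmd in ("sensible-editor", "editor"):
        if shutil.which(cmd):
            return cmd
    return "vi"
```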
Greetings,
| closed | 2016-10-10T01:52:22Z | 2016-10-24T13:58:56Z | https://github.com/thp/urlwatch/issues/101 | [] | ad-m | 3 |
tensorflow/tensor2tensor | machine-learning | 1,843 | I got the errors about jaxlib when I was trying to install tensor2tensor on windows. | Hello
I got errors about jaxlib when I was trying to install tensor2tensor on Windows.
The error message was as below:
```
ERROR: Could not find a version that satisfies the requirement jaxlib>=0.1.51 (from dopamine-rl->tensor2tensor) (from versions: none)
ERROR: No matching distribution found for jaxlib>=0.1.51 (from dopamine-rl->tensor2tensor)
```
How can I solve these issues?
Please help. | open | 2020-08-11T17:03:14Z | 2020-08-24T02:09:12Z | https://github.com/tensorflow/tensor2tensor/issues/1843 | [] | ys23 | 3 |
onnx/onnxmltools | scikit-learn | 214 | An error (Elu function) occured while loading model with winmltools which uses onnxmltools. | Hi, I use the elu activation function in my model and I want to convert the model (from Keras/TensorFlow) to ONNX format in order to use it with a C# Windows app.
I try to load the model with winmltools using the code below:
```
from winmltools import convert_keras
model_onnx = convert_keras(model, target_opset=8, name="cropper")
```
It gave me an error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-8-156ad53f924e> in <module>()
1 from winmltools import convert_keras
2
----> 3 model_onnx = convert_keras(model, target_opset=8, name="cropper")
4 print(type(model_onnx))
5 if False:
/usr/local/lib/python3.6/dist-packages/winmltools/convert/main.py in convert_keras(model, target_opset, name, initial_types, doc_string, default_batch_size, channel_first_inputs, custom_conversion_functions, custom_shape_calculators)
151 return _convert_keras(model, name=name, target_opset=target_opset, initial_types=initial_types, doc_string=doc_string,
152 default_batch_size=default_batch_size, channel_first_inputs=channel_first_inputs,
--> 153 custom_conversion_functions=custom_conversion_functions, custom_shape_calculators=custom_shape_calculators)
154
155
/usr/local/lib/python3.6/dist-packages/onnxmltools/convert/keras/convert.py in convert(model, name, default_batch_size, initial_types, doc_string, target_opset, targeted_onnx, channel_first_inputs, custom_conversion_functions, custom_shape_calculators)
46 name = str(uuid4().hex)
47
---> 48 onnx_model = convert_topology(topology, name, doc_string, target_opset, targeted_onnx, channel_first_inputs)
49 return onnx_model
/usr/local/lib/python3.6/dist-packages/onnxmltools/convert/common/_topology.py in convert_topology(topology, model_name, doc_string, target_opset, targeted_onnx, channel_first_inputs)
719 else:
720 # Convert the selected operator into some ONNX objects and save them into the container
--> 721 _registration.get_converter(operator.type)(scope, operator, container)
722
723 # When calling ModelComponentContainer's add_initializer(...), nothing is added into the input list. However, in
/usr/local/lib/python3.6/dist-packages/onnxmltools/convert/keras/operator_converters/Conv.py in convert_keras_conv2d(scope, operator, container)
144 def convert_keras_conv2d(scope, operator, container):
145 is_transpose, n_dims, input_perm, output_perm, weight_perm = get_converter_config(2, False)
--> 146 convert_keras_conv_core(scope, operator, container, is_transpose, n_dims, input_perm, output_perm, weight_perm)
147
148
/usr/local/lib/python3.6/dist-packages/onnxmltools/convert/keras/operator_converters/Conv.py in convert_keras_conv_core(scope, operator, container, is_transpose, n_dims, input_perm_axes, output_perm_axes, weight_perm_axes)
117 # The construction of convolution is done. Now, we create an activation operator to apply the activation specified
118 # in this Keras layer.
--> 119 apply_activation_function = _activation_map[op.activation]
120 activation_output_name = scope.get_unique_variable_name('activation_output')
121 apply_activation_function(scope, intermediate_output_name, activation_output_name, container)
KeyError: <function elu at 0x7f48da1697b8>
```
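In the meantime, a workaround I might try (untested) is to move the activation out of the `Conv2D` layer into a separate Keras `ELU` layer, so the converter never consults the inline-activation map:

```python
from keras.layers import Conv2D, ELU

# Instead of Conv2D(..., activation="elu"):
x = Conv2D(32, (3, 3))(x)  # no inline activation
x = ELU()(x)               # standalone advanced-activation layer
```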
I did some research and found out that the ONNX format supports the Elu operator. Is there anything I'm doing wrong, or is it a bug? | closed | 2019-01-21T13:54:50Z | 2019-02-14T19:09:24Z | https://github.com/onnx/onnxmltools/issues/214 | [] | faruknane | 14 |
kornia/kornia | computer-vision | 2,969 | `utils.draw_convex_polygon` is not in-place update | ### Describe the bug
The function in the title does not in-place update the `images` argument, which is suggested by the document. Instead, one has to take the returned value to acquire the updated images.
The code to blame is at https://github.com/kornia/kornia/blob/4bd1bd172d27ae0ffb5a811b7338150b65f404dc/kornia/utils/draw.py#L354 | closed | 2024-07-28T10:27:00Z | 2024-08-28T15:10:10Z | https://github.com/kornia/kornia/issues/2969 | [
"help wanted"
] | riaqn | 3 |
youfou/wxpy | api | 7 | “用微信监控你的程序”还是需要扫码? | 如果还是需要扫码,可用性就没有那么强了啊 | closed | 2017-03-17T09:07:40Z | 2017-03-17T09:21:00Z | https://github.com/youfou/wxpy/issues/7 | [] | ylqfp | 6 |
fastapi/sqlmodel | fastapi | 363 | Defining Classes with Second Relationship to the same Foreign Key Causes RunTimeError:dictionary changed size during iteration | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional, List
from sqlmodel import Field, SQLModel, Relationship
class Team(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
players: List["Player"] = Relationship(back_populates="player_team")
class Player(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
player_team_id: Optional[int] = Field(default=None, foreign_key="team.id")
player_team: Optional["Team"] = Relationship(back_populates="players")
class Participant(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
participant_team_id: Optional[int] = Field(default=None, foreign_key="team.id")
participant_team: Optional[Team] = Relationship(sa_relationship_args=(participant_team_id == Team.id))
```
### Description
Running the above code (just defining the classes) causes a RunTimeError: dictionary changed size during iteration. The code works if you either:
1) comment out players line in class Team and player_team line in class Player
or,
2) comment out participant_team in Participant class
### Operating System
Windows
### Operating System Details
Windows 10
### SQLModel Version
0.0.6
### Python Version
3.10
### Additional Context
_No response_ | closed | 2022-06-14T20:52:34Z | 2022-06-21T14:10:32Z | https://github.com/fastapi/sqlmodel/issues/363 | [
"question"
] | lwanger | 1 |
huggingface/datasets | numpy | 7,400 | 504 Gateway Timeout when uploading large dataset to Hugging Face Hub | ### Description
I encountered consistent 504 Gateway Timeout errors while attempting to upload a large dataset (approximately 500GB) to the Hugging Face Hub. The upload fails during the process with a Gateway Timeout error.
I will continue trying to upload. While it might succeed in future attempts, I wanted to report this issue in the meantime.
### Reproduction
- I attempted the upload 3 times
- Each attempt resulted in the same 504 error during the upload process (not at the start, but in the middle of the upload)
- Using `dataset.push_to_hub()` method
### Environment Information
```
- huggingface_hub version: 0.28.0
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
- Python version: 3.11.10
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Running in Google Colab Enterprise ?: No
- Token path ?: /home/hotchpotch/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: hotchpotch
- Configured git credential helpers: store
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.5.1
- Jinja2: 3.1.5
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: 10.4.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.26.4
- pydantic: 2.10.6
- aiohttp: 3.11.11
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/hotchpotch/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/hotchpotch/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/hotchpotch/.cache/huggingface/token
- HF_STORED_TOKENS_PATH: /home/hotchpotch/.cache/huggingface/stored_tokens
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```
### Full Error Traceback
```python
Traceback (most recent call last):
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/hotchpotch/fineweb-2-edu-japanese.git/info/lfs/objects/batch
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/create_edu_japanese_ds/upload_edu_japanese_ds.py", line 12, in <module>
ds.push_to_hub("hotchpotch/fineweb-2-edu-japanese", private=True)
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/datasets/dataset_dict.py", line 1665, in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 5301, in _push_parquet_shards_to_hub
api.preupload_lfs_files(
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4215, in preupload_lfs_files
_upload_lfs_files(
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/_commit_api.py", line 395, in _upload_lfs_files
batch_actions_chunk, batch_errors_chunk = post_lfs_batch_info(
^^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/lfs.py", line 168, in post_lfs_batch_info
hf_raise_for_status(resp)
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/hotchpotch/fineweb-2-edu-japanese.git/info/lfs/objects/batch
```
| open | 2025-02-14T02:18:35Z | 2025-02-14T23:48:36Z | https://github.com/huggingface/datasets/issues/7400 | [] | hotchpotch | 4 |
microsoft/nni | pytorch | 5,522 | GPU usage via NNI is different from running programs separately. | **Describe the issue**:
I was running a script with `trial_gpu_number: 1` and `trial_concurrency: 5`. I noticed that all of my trials were failing due to CUDA out of memory errors.
However, when I run the same trials separately (i.e., with the same hyperparameters but simply by doing `python ./main.py`) it works fine.
Is there something that's using GPU memory that I'm not aware of?
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu
- Server OS (for remote mode only):
- Python version: Python 3.10.9
- PyTorch/TensorFlow version: PyTorch 1.12.1
- Is conda/virtualenv/venv used?: Yes, conda is used.
- Is running in Docker?: No.
**Configuration**:
- Experiment config (remember to remove secrets!):
```
experiment_name: resnest_hpo
search_space_file: search_space.json
use_annotation: False
trial_command: bash ./scripts/resnest_nni_hpo.sh
# trial_command: bash ./scripts/resnest_debug.sh
trial_gpu_number: 1
trial_concurrency: 5
max_experiment_duration: 15h
max_trial_number: 500
tuner:
name: TPE
class_args:
optimize_mode: maximize
training_service:
platform: local
use_active_gpu: True
```
- Search space:
```
{
"lr": {"_type": "choice", "_value": [0.0001, 0.0003, 0.0005, 0.001, 0.003, 0.005]},
"epochs": {"_type": "choice", "_value": [30, 50, 100, 150]},
"optim_type": {"_type": "choice", "_value": ["sgd", "adam"]},
"batch_size": {"_type": "choice", "_value": [32, 64, 128, 256, 512]}
}
```
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | closed | 2023-04-19T07:01:57Z | 2023-05-10T07:01:50Z | https://github.com/microsoft/nni/issues/5522 | [] | seanswyi | 3 |
yzhao062/pyod | data-science | 522 | ABOD weighting schemes / not matching the definition? | The ABOD implementation of PyOD tends to perform very similar to kNN, because the scoring is dominated by the squared (!) distances.
The following lines:
https://github.com/yzhao062/pyod/blob/6c77e27a7a95fa928af37ff48c3dc607fa9408fa/pyod/models/abod.py#L50-L53
while closely reflecting the original paper, may be reflecting a issue and inconsistency of that paper.
$$ABOF=VAR\left(\frac{\langle AB,AC \rangle}{||AB||^2 \cdot ||AC||^2}\right)$$
uses the product of the squared weights for normalization, which puts an extreme high emphasis on distance (hence causing a **high similarity to kNN**).
But if you then look how the equation is expanded, another factor is added:
$$=\frac{\sum_B\sum_C \left(\frac{1}{||AB||\cdot ||AC||}\cdot\frac{\langle AB, AC \rangle}{||AB||^2 \cdot ||AC||^2}\right)^2}{\sum_B\sum_C \frac{1}{||AB||\cdot ||AC||}} - \left(\frac{\sum_B\sum_C \frac{1}{||AB||\cdot ||AC||}\cdot\frac{\langle AB, AC \rangle}{||AB||^2 \cdot ||AC||^2}}{\sum_B\sum_C \frac{1}{||AB||\cdot ||AC||}}\right)^2$$
(clearly resembling $E[X^2]-E[X]^2$, with each point weighted by $\frac{1}{||AB||\cdot ||AC||}$, which would match the text).
Note that this would simplify to having even the cubic(!) distances in there.
The IMHO correct version would be the following:
$$ABOF=VAR_{weighted}\left(\frac{\langle AB,AC \rangle}{||AB|| \cdot ||AC||}\right)$$ $$=\frac{\sum_B\sum_C \left(\frac{1}{||AB||\cdot ||AC||}\cdot\frac{\langle AB, AC \rangle}{||AB|| \cdot ||AC||}\right)^2}{\sum_B\sum_C \frac{1}{||AB||\cdot ||AC||}} - \left(\frac{\sum_B\sum_C \frac{1}{||AB||\cdot ||AC||}\cdot\frac{\langle AB, AC \rangle}{||AB|| \cdot ||AC||}}{\sum_B\sum_C \frac{1}{||AB||\cdot ||AC||}}\right)^2$$
which then can further be simplified to:
$$=\frac{\sum_B\sum_C \left(\frac{\langle AB, AC \rangle}{||AB||^2 \cdot ||AC||^2}\right)^2}{\sum_B\sum_C \frac{1}{||AB||\cdot ||AC||}} - \left(\frac{\sum_B\sum_C \frac{\langle AB, AC \rangle}{||AB||^2 \cdot ||AC||^2}}{\sum_B\sum_C \frac{1}{||AB||\cdot ||AC||}}\right)^2$$
We now got the squares in the equation seen before, one coming from the weighting scheme and one from the angle.
Note that this computation is likely to produce numerical instabilities, though, and I would rather use $1/\sqrt{||AB|| \cdot ||AC||}$ for weighting (which uses the geometric mean distance, instead of the product of distances).
Hence,
1. there may be an issue in the original ABOD paper
2. the "Var" is not the standard variance call, but a weighted variance, as the expansion of the equation shows. The current PyOD implementation does *not* implement this:
https://github.com/yzhao062/pyod/blob/6c77e27a7a95fa928af37ff48c3dc607fa9408fa/pyod/models/abod.py#L89
3. it may be good to allow the user to choose among the two "angle" computations (with the extra square, and the standard definition of angle), and three weighting schemes ((1) unweighted variance, as currently implemented in PyOD, (2) variance weighted with 1/(d1 * d2) as in the ABOD paper, and (3) with the geometric mean distance to the neighbors, 1/sqrt(d1 * d2), which makes the weighting weaker)
4. the code needs to be carefully optimized for numerial issues, in particular as some of these values can become very small fast due to products of two powers of three.
Note that also, the "norm" function in the top snipped involves a square root that is then followed by a square.
Now I argue that the *most meaningful* variant is using standard angles is very simple. It is the only version that is invariant to scaling the data set with a constant. If we multiply all lengths by 2,
$$\left(\frac{\langle 2AB,2AC \rangle}{||2AB||^2 \cdot ||2AC||^2}\right)=\left(\frac{4\langle AB,AC \rangle}{4||AB||^2 \cdot 4||AC||^2}\right)=\frac{1}{4} \left(\frac{\langle AB,AC \rangle}{||AB||^2 \cdot ||AC||^2}\right)$$ Removing the extra squares - using the standard angle - resolve this and makes the angle invariant to scaling with a constant. | open | 2023-08-16T09:38:19Z | 2023-08-29T06:32:31Z | https://github.com/yzhao062/pyod/issues/522 | [] | kno10 | 1 |
scikit-learn/scikit-learn | machine-learning | 30,762 | DOC JupyterLite link _query_package() got multiple values for argument 'index_urls' | Clicking on the Jupyterlite button of [this example](https://scikit-learn.org/stable/auto_examples/release_highlights/plot_release_highlights_1_5_0.html#sphx-glr-download-auto-examples-release-highlights-plot-release-highlights-1-5-0-py) for example and executing the first cell.
This is broken on 1.6 and dev website but works on 1.5 website.
From the browser console log:
```
Uncaught (in promise) PythonError: Traceback (most recent call last):
File "/lib/python312.zip/_pyodide/_base.py", line 574, in eval_code_async
await CodeRunner(
File "/lib/python312.zip/_pyodide/_base.py", line 396, in run_async
await coroutine
File "<exec>", line 3, in <module>
File "/lib/python3.12/site-packages/piplite/piplite.py", line 121, in _install
return await micropip.install(
^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/micropip/_commands/install.py", line 142, in install
await transaction.gather_requirements(requirements)
File "/lib/python3.12/site-packages/micropip/transaction.py", line 55, in gather_requirements
await asyncio.gather(*requirement_promises)
File "/lib/python3.12/site-packages/micropip/transaction.py", line 62, in add_requirement
return await self.add_requirement_inner(Requirement(req))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/micropip/transaction.py", line 151, in add_requirement_inner
await self._add_requirement_from_package_index(req)
File "/lib/python3.12/site-packages/micropip/transaction.py", line 186, in _add_requirement_from_package_index
metadata = await package_index.query_package(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: _query_package() got multiple values for argument 'index_urls'
O https://cdn.jsdelivr.net/pyodide/v0.26.0/full/pyodide.asm.js:10
new_error https://cdn.jsdelivr.net/pyodide/v0.26.0/full/pyodide.asm.js:10
_PyEM_TrampolineCall_JS https://cdn.jsdelivr.net/pyodide/v0.26.0/full/pyodide.asm.js:10
Te https://cdn.jsdelivr.net/pyodide/v0.26.0/full/pyodide.asm.js:10
callPyObjectMaybePromising https://cdn.jsdelivr.net/pyodide/v0.26.0/full/pyodide.asm.js:10
wrapper https://cdn.jsdelivr.net/pyodide/v0.26.0/full/pyodide.asm.js:10
onmessage https://cdn.jsdelivr.net/pyodide/v0.26.0/full/pyodide.asm.js:10
[304.fed478b20183b113f417.js:2:3520](https://scikit-learn.org/stable/lite/extensions/@jupyterlite/pyodide-kernel-extension/static/304.fed478b20183b113f417.js?v=fed478b20183b113f417)
``` | closed | 2025-02-03T14:40:46Z | 2025-02-04T06:04:52Z | https://github.com/scikit-learn/scikit-learn/issues/30762 | [
"Bug",
"Documentation"
] | lesteve | 1 |
frappe/frappe | rest-api | 31,239 | Print Language Not Working in ERPNext v15.51.0 / Frappe v15.55.1 | ### Information about bug
Description
After upgrading to ERPNext v15.51.0 and Frappe v15.55.1, the "Print Language" setting in document printing is no longer functional. The selected language does not affect the printed document, and it always follows the UI language of the logged-in user instead of the chosen print language.
Expected Behavior
When selecting a different "Print Language" in a document, the printed output should reflect the chosen language, not the UI language of the user.
Actual Behavior
The print output always follows the UI language of the user, ignoring the "Print Language" selection.
Steps to Reproduce
Go to any printable document (e.g., Sales Invoice, Quotation, Purchase Order, etc.).
Select a different "Print Language" from the print settings.
Generate the print preview.
The document remains in the user's UI language instead of the selected print language.
Current Setup
ERPNext Version: 15.51.0
Frappe Version: 15.55.1
Deployment Type: (e.g., self-hosted / cloud / Docker)
Browser: (mention if it's browser-specific)
Additional Information
This issue was not present in previous versions (please specify the last working version if known).
No relevant custom scripts are interfering with the print language setting.
The issue persists across different browsers and user accounts.
### Module
selling
### Version
ERPNext Version: 15.51.0
Frappe Version: 15.55.1
### Installation method
None
### Relevant log output / Stack trace / Full Error Message.
```shell
``` | open | 2025-02-07T02:18:50Z | 2025-02-23T13:16:25Z | https://github.com/frappe/frappe/issues/31239 | [
"bug"
] | kanedai | 3 |
PeterL1n/RobustVideoMatting | computer-vision | 55 | How long does it take you to train the model on 4 v100? | closed | 2021-09-28T13:52:39Z | 2023-08-08T13:10:38Z | https://github.com/PeterL1n/RobustVideoMatting/issues/55 | [] | FengMu1995 | 3 |
|
recommenders-team/recommenders | machine-learning | 1,914 | [ASK] How to load LSTUR checkpoint | ### Description
I have the saved models as per https://github.com/microsoft/recommenders/blob/8ee1ed3ac0db04321b064edb6f10d6af0bb318fd/examples/00_quick_start/lstur_MIND.ipynb
For MIND large, I would need to save and load after each epoch due to resource constraints
How do I load the saved checkpoint to further train the model?
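My current guess (untested) is to mirror the `model.model.save_weights(...)` call at the end of that notebook when resuming:

```python
import os
from recommenders.models.newsrec.models.lstur import LSTURModel

# hparams/iterator built exactly as in the notebook
model = LSTURModel(hparams, iterator, seed=seed)
model.model.load_weights(os.path.join(model_path, "lstur_ckpt"))  # path used when saving
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
```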
### Other Comments
| open | 2023-04-07T03:43:38Z | 2023-04-07T03:43:38Z | https://github.com/recommenders-team/recommenders/issues/1914 | [
"help wanted"
] | dishankpoddar | 0 |
errbotio/errbot | automation | 1,324 | callback_stream not working for Telegram Backend | In order to let us help you better, please fill out the following fields as best you can:
### I am requesting help with running my bot for file uploads.
### I am running...
* Errbot version: 5.2.0
* OS version: ubuntu 16.04
* Python version: 3.5
* Using a virtual environment: yes
### Issue description
Below is the code that I am using for listening to incoming file transfers using the Telegram backend.
```python
import logging
from errbot import botcmd, BotPlugin
from config import BOT_DATA_DIR
from os import path, mkdir, walk
from io import open
import shutil

FILESHARE_PATH = path.join(BOT_DATA_DIR, "public")
if not path.exists(FILESHARE_PATH):
    mkdir(FILESHARE_PATH)


class FileShare(BotPlugin):
    min_err_version = '2.2.0-beta'  # for file transfers

    def target(self, name):
        full_path = path.join(FILESHARE_PATH, name)
        if full_path != path.abspath(full_path):
            logging.warn('Refused the filename "%s", is it an injection attempt?' % name)
            return ''
        return full_path

    def callback_stream(self, stream):
        super(FileShare, self).callback_stream(stream)
        import pdb; pdb.set_trace()
        if not stream.name:
            logging.info("Anonymous stream, I can't save that")
            return
        logging.debug('Received the file "%s"' % stream.name)
        destination_path = self.target(stream.name)
        if not destination_path:
            self.send(stream.identity, "Invalid filename %s." % stream.name)
            return
        with open(destination_path, "wb") as destination:
            shutil.copyfileobj(stream, destination)
        self.send(stream.identity, "File %s well received." % stream.name)

    @botcmd
    def download(self, mess, args):
        target = self.target(args)
        if not target:
            return 'Invalid filename "%s"' % target
        if not path.exists(target):
            return 'File not found %s' % args
        self.send_stream_request(mess.frm, open(target, 'rb'), name=args,
                                 size=path.getsize(target), stream_type='document')
        return 'File request sent'

    @botcmd
    def upload(self, mess, args):
        return "upload your file"

    @botcmd
    def upload_file(self, mess, args):
        target = '/home/beast/Downloads/errbot.pdf'
        self.send_stream_request(mess.frm, open(target, 'rb'), name=args,
                                 size=path.getsize(target), stream_type='document')
        return "file uploaded successfully"

    @botcmd
    def ls(self, mess, args):
        return '\n'.join(['\n'.join([n for n in f]) for p, _, f in walk(FILESHARE_PATH)])
```
The problem is that execution never reaches the callback_stream method and fails with an error message saying 'Message ignored (not a text message)'.
My question is: does the errbot Telegram backend support this feature? If yes, how can I make it work?
| closed | 2019-04-13T08:54:04Z | 2024-01-05T16:55:31Z | https://github.com/errbotio/errbot/issues/1324 | [
"type: support/question",
"backend: Slack",
"backend: Telegram"
] | ajay1mg | 5 |
ageitgey/face_recognition | python | 1,290 | OpenCL | Hello everyone, how do I run the code on the embedded GPU? I understand correctly that I should use OpenCL? | open | 2021-03-05T12:10:56Z | 2021-04-22T12:59:54Z | https://github.com/ageitgey/face_recognition/issues/1290 | [] | RarDay | 2 |
httpie/cli | rest-api | 1,415 | Incorrect references to benchmarking scripts | ## What's the issue?
There're bunch of places where we still have reference of old paths of benchmarking script.
1. In **Running benchmarks** section of `CONTRIBUTING.md` [[link-1]](https://github.com/httpie/httpie/blob/master/CONTRIBUTING.md#running-benchmarks)
2. In `extras/profiling/README.md` [[link-2]](https://github.com/httpie/httpie/tree/master/extras/profiling#usage)
3. Bunch of mentions in docstring of `https://github.com/httpie/httpie/blob/master/extras/profiling/run.py` [[link-3]](https://github.com/httpie/httpie/blob/master/extras/profiling/run.py)
The incorrect path is:
```bash
python extras/benchmarks/run.py
```
The correct path should be:
```bash
python extras/profiling/run.py
``` | closed | 2022-06-19T06:15:31Z | 2022-06-19T07:20:55Z | https://github.com/httpie/cli/issues/1415 | [
"new"
] | letmerecall | 0 |
aleju/imgaug | deep-learning | 260 | using imgaug not in main thread | Hi,
I tried to use imgaug in a secondary thread of my program but then i got this error :
RuntimeError: main thread is not in main loop Tcl_AsyncDelete: async handler deleted by the wrong thread
From what i found, it seem that imgaug use matplotlib that use tkinter and tkinter want to be on the main thread. But matplotlib already know this and made a call to correct this just in case. So to solve this problem i had to use :
import matplotlib
matplotlib.use('Agg')
from this stack overflow solution :
https://stackoverflow.com/questions/27147300/matplotlib-tcl-asyncdelete-async-handler-deleted-by-the-wrong-thread
So i open this issue in case other have this problem or if you didnt know and want to mention it somewhere.
Thank for your great library!
Martin | open | 2019-02-15T17:06:46Z | 2019-02-16T20:58:40Z | https://github.com/aleju/imgaug/issues/260 | [] | robert405 | 1 |
MilesCranmer/PySR | scikit-learn | 74 | [Feature] Units in equations | Each of my features has units [kg, m, N, ...], but the output equations don't take units into account. Most output equations fail an unit check even if setting the units of constants as required. This feature would allow defining units of each X and y feature, probably in SI, and only allow equations which pass an unit check. Any constants could still have arbitrary units. | closed | 2021-09-08T14:49:22Z | 2021-09-16T18:16:17Z | https://github.com/MilesCranmer/PySR/issues/74 | [
"enhancement"
] | euhruska | 5 |
vitalik/django-ninja | pydantic | 325 | Foreignkey assigning is searching for instance | Hello! I create my schemas with create_schema(). When I try to perform a POST request to create an object that has many foreign keys in its model, I get an error like "UserIndividuel.id_document_type must be a TypePiece instance" for the field id_document_type, which is renamed id_document_type_id in the generated schema. And when I remove the "id" string from the foreign keys in the OpenAPI schema, it works fine. | closed | 2022-01-14T11:17:50Z | 2023-11-24T13:04:51Z | https://github.com/vitalik/django-ninja/issues/325 | [] | Kimmyungetouh | 2 |
apache/airflow | automation | 47,778 | [Regression]Missing Asset Alias dependency graph | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
In the current UI implementation, there is no way to see Asset Alias dependencies with DAGs; in Airflow 2.10.5 we were able to see that in the dependency graph.
**AF2**
<img width="694" alt="Image" src="https://github.com/user-attachments/assets/b082371e-615c-417f-bd92-48132efe8030" />
Now in AF3 there is only the DAG graph, where we can see alias info, but that will not show dependencies with other DAGs.
<img width="759" alt="Image" src="https://github.com/user-attachments/assets/723a2d45-0ce2-4a1e-85f0-4f44724ffcfd" />
### What you think should happen instead?
_No response_
### How to reproduce
Add the DAG below; per it, the dependency should be `example_dataset_alias_mapped` (DAG) --> `alias-dataset-1` (alias) --> `downstream_alias` (DAG). In the current UI there is no way to see this dependency as we were able to in AF2.
```
"""
###
"""
from airflow.decorators import dag, task
from airflow.datasets import Dataset, DatasetAlias
from airflow.datasets.metadata import Metadata
from pendulum import datetime
my_alias_name = "alias-dataset-1"
@dag(
dag_display_name="example_dataset_alias_mapped",
start_date=datetime(2024, 8, 1),
schedule=None,
catchup=False,
tags=["datasets"],
)
def dataset_alias_dynamic_test():
@task
def upstream_task():
return ["a", "b"]
@task(outlets=[DatasetAlias(my_alias_name)])
def use_metadata(name):
yield Metadata(
Dataset(name),
alias=my_alias_name,
extra={} # extra is NOT optional
)
use_metadata.expand(name=upstream_task())
dataset_alias_dynamic_test()
@dag(
start_date=datetime(2024, 8, 1),
schedule=[DatasetAlias(my_alias_name)],
catchup=False,
tags=["dataset"]
)
def downstream_alias():
@task
def t1():
return 0
t1()
downstream_alias()
```
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-14T10:47:15Z | 2025-03-18T22:33:26Z | https://github.com/apache/airflow/issues/47778 | [
"kind:bug",
"priority:high",
"area:core",
"area:UI",
"area:datasets",
"affected_version:3.0.0beta"
] | vatsrahul1001 | 3 |
mwaskom/seaborn | data-visualization | 2,918 | map_offdiag custom func | Hi, I want to annotate the linear regression equation in PairGrid (I mean `g.map_offdiag(sns.regplot)`). I tried to create a function by referring to [this link](https://stackoverflow.com/questions/45902739/seaborn-annotate-the-linear-regression-equation), but failed :(
I would be grateful if someone could help me😀
```
x_vars = ["A", "B", "C", "D", "E", "F"]
y_vars = ["C", "D", "F"]
g = sns.PairGrid(data=df, x_vars=x_vars, y_vars=y_vars)
g.map_diag(sns.histplot, color=".3")
g.map_offdiag(sns.regplot)
g.add_legend()
```

| closed | 2022-07-23T02:13:52Z | 2022-07-31T06:31:50Z | https://github.com/mwaskom/seaborn/issues/2918 | [] | FQMei | 2 |
Miserlou/Zappa | flask | 1,965 | Set_Cookie option sets duplicate cookies on AWS Lambda | ## Context
I have an API running Python3.7 and Zappa (in a virtualenv).
I am setting 6 cookies by using the option "set_cookie" in flask. It looks something like this:
```
resp = make_response(jsonify({'success':'true', 'message': 'Successfully authenticated!'}), 200)
resp.set_cookie("1", value="1", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("2", value="2", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("3", value="3", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("4", value="4", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("5", value="5", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("6", value="6", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
return resp
```
On localhost testing Flask, this works as expected.
If I deploy the same code to AWS using Zappa, the response header will show 36 "set-cookie" headers. So the formula here is n^2. So if I add 4 cookies using the above method, it will show 16 in the request header.
The browser takes care of duplicate cookies, but the response from the API is still huge because of this issue.
Same thing happens if I use:
`resp.headers.add("set-cookie""1"="1; Domain=.example.com; Max-Age=3600; Secure; Path=/; SameSite=Lax")`
## Expected Behavior
I believe Zappa or something at AWS is at fault here. Expected behaviour is to send 6 "set-cookie" headers and not 36.
## Actual Behavior
Sets n^2 cookies as response.
## Steps to Reproduce
Deploy a Flask route using Zappa which sets the cookies. Use the code above.
## Your Environment
* Zappa version used: 0.48.2
* Operating System and Python version: Ubuntu 18.04, Python3.7
* The output of `pip freeze`: https://pastebin.com/d4QTaTuG
* Your `zappa_settings.py`: https://pastebin.com/d1GK8sbe | closed | 2019-11-18T08:22:45Z | 2020-02-12T21:10:33Z | https://github.com/Miserlou/Zappa/issues/1965 | [] | ZappaUserMan | 0 |
harry0703/MoneyPrinterTurbo | automation | 336 | Please change the source video (background video) from 16:9 to 9:16 | Please change the source video (background video) from 16:9 to 9:16
I noticed that when the input video is 16:9, the output video will not be cropped
You can refer here: https://github.com/elebumm/RedditVideoMakerBot

| open | 2024-05-02T10:10:03Z | 2024-05-06T09:14:26Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/336 | [
"enhancement"
] | alexdo83 | 1 |
sktime/sktime | scikit-learn | 7,049 | [ENH] Hurst exponent | Add the Hurst exponent calculation feature, providing a tool for analyzing long-term memory effects in time series data based on:
[Hurst exponent](https://en.m.wikipedia.org/wiki/Hurst_exponent) | closed | 2024-08-27T21:13:09Z | 2024-09-13T16:50:33Z | https://github.com/sktime/sktime/issues/7049 | [
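A minimal sketch of the classic rescaled-range (R/S) estimator such a feature could wrap; the function name and API here are placeholders, not a proposed sktime interface:
```python
import numpy as np

def hurst_rs(x, min_chunk=8, n_sizes=10):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.floor(
        np.logspace(np.log10(min_chunk), np.log10(n // 2), n_sizes)).astype(int))
    rs_means = []
    for size in sizes:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()  # range of cumulative deviations
            s = chunk.std()            # standard deviation of the chunk
            if s > 0:
                rs.append(r / s)
        rs_means.append(np.mean(rs))
    # The Hurst exponent is the slope of log(R/S) against log(window size).
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope
```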
"feature request",
"module:transformations",
"enhancement",
"module:parameter-estimators"
] | ncooder | 4 |
serengil/deepface | machine-learning | 787 | current yolo logic is not working for 8.0.122 | Laird Foret raised this issue. He is using yolov8 with exact version '8.0.122'
Current code block:
```python
if align:
# Extract landmarks
left_eye, right_eye, _, _, _ = result.keypoints.tolist()
# Check the landmarks confidence before alignment
if (left_eye[2] > LANDMARKS_CONFIDENCE_THRESHOLD and
right_eye[2] > LANDMARKS_CONFIDENCE_THRESHOLD):
detected_face = FaceDetector.alignment_procedure(
detected_face, left_eye[:2], right_eye[:2]
)
```
He mentioned that the following should be used instead to avoid the exception:
```python
if align:
# Extract landmarks
keypoints_xy = result.keypoints.xy # Extract the keypoints coordinates
keypoints_conf = result.keypoints.conf # Extract the confidence values for keypoints
left_eye = (keypoints_xy[0][0], keypoints_conf[0][0]) # Tuple of x,y and confidence for left eye
right_eye = (keypoints_xy[0][1], keypoints_conf[0][1]) # Tuple of x,y and confidence for right eye
# Check the landmarks confidence before alignment
if (left_eye[1] > LANDMARKS_CONFIDENCE_THRESHOLD and
right_eye[1] > LANDMARKS_CONFIDENCE_THRESHOLD):
detected_face = FaceDetector.alignment_procedure(
detected_face, left_eye[0].cpu(), right_eye[0].cpu()
)
```
@Vincent-Stragier you may have a comment. Besides, why did we use a specific GitHub version instead of `pip install ultralytics`? | closed | 2023-06-26T13:07:27Z | 2023-06-29T08:10:22Z | https://github.com/serengil/deepface/issues/787 | [
"dependencies"
] | serengil | 3 |
google-research/bert | tensorflow | 1,135 | module 'tensorflow_estimator.python.estimator.api._v1.estimator.tpu' has no attribute 'CrossShardOptimizer' | I am trying to pretrain BERT from Google's pretrained checkpoint on a Colab TPU. Until yesterday everything was fine, but today I kept hitting this 'CrossShardOptimizer' error all day. I am wondering whether this is caused by a codebase change or a version migration.
tf version: 1.15.2
python: 3.6
bert-tensorflow: 1.0.3
> INFO:tensorflow:*** Input Files (MSL-128) ***
> INFO:tensorflow: gs://vbert/input/vmware-docs-2020-reddit_non-wwm_msl-128_vocab-vmware-unused.tfrecord
> INFO:tensorflow:*** Input Files (MSL-512) ***
> INFO:tensorflow: gs://vbert/input/vmware-docs-2020-reddit_non-wwm_msl-512_vocab-vmware-unused.tfrecord
> WARNING:tensorflow:Estimator's model_fn (<function model_fn_builder.<locals>.model_fn at 0x7f5054197bf8>) includes params argument, but params are not passed to Estimator.
> INFO:tensorflow:Using config: {'_model_dir': 'gs://vbert/liuyi-vbert-docs-reddit/base/vocab-vmware-unused', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 10000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
> cluster_def {
> job {
> name: "worker"
> tasks {
> key: 0
> value: "10.47.24.194:8470"
> }
> }
> }
> isolate_session_state: true
> , '_keep_checkpoint_max': 10000, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f505413deb8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': 'grpc://10.47.24.194:8470', '_evaluation_master': 'grpc://10.47.24.194:8470', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=10000, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None, eval_training_input_configuration=2, experimental_host_call_every_n_steps=1), '_cluster': <tensorflow.python.distribute.cluster_resolver.tpu_cluster_resolver.TPUClusterResolver object at 0x7f505413dc50>}
> INFO:tensorflow:_TPUContext: eval_on_tpu True
> INFO:tensorflow:***** Running training *****
> INFO:tensorflow: Batch size = 32
> INFO:tensorflow:Querying Tensorflow master (grpc://10.47.24.194:8470) for TPU system metadata.
> INFO:tensorflow:Found TPU system:
> INFO:tensorflow:*** Num TPU Cores: 8
> INFO:tensorflow:*** Num TPU Workers: 1
> INFO:tensorflow:*** Num TPU Cores Per Worker: 8
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 18293633603678532293)
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 16754746863277155707)
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 12168993875110325416)
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 5785133627713800739)
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 531464872121750804)
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 13610383926908237188)
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 3588204162670013970)
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 5523440629424163654)
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 9311023021754933234)
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 8589934592, 17907827073552055203)
> INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 5163179840106115260)
> INFO:tensorflow:Calling model_fn.
> WARNING:tensorflow:Entity <function input_fn_builder.<locals>.input_fn.<locals>.<lambda> at 0x7f50541971e0> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Str'
> WARNING: Entity <function input_fn_builder.<locals>.input_fn.<locals>.<lambda> at 0x7f50541971e0> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Str'
> INFO:tensorflow:Found small feature: next_sentence_labels [4, 1]
> INFO:tensorflow:Found small feature: next_sentence_labels [4, 1]
> INFO:tensorflow:Found small feature: next_sentence_labels [4, 1]
> INFO:tensorflow:Found small feature: next_sentence_labels [4, 1]
> INFO:tensorflow:Found small feature: next_sentence_labels [4, 1]
> INFO:tensorflow:Found small feature: next_sentence_labels [4, 1]
> INFO:tensorflow:Found small feature: next_sentence_labels [4, 1]
> INFO:tensorflow:Found small feature: next_sentence_labels [4, 1]
> INFO:tensorflow:*** Features ***
> INFO:tensorflow: name = input_ids, shape = (4, 128)
> INFO:tensorflow: name = input_mask, shape = (4, 128)
> INFO:tensorflow: name = masked_lm_ids, shape = (4, 20)
> INFO:tensorflow: name = masked_lm_positions, shape = (4, 20)
> INFO:tensorflow: name = masked_lm_weights, shape = (4, 20)
> INFO:tensorflow: name = next_sentence_labels, shape = (4, 1)
> INFO:tensorflow: name = segment_ids, shape = (4, 128)
> INFO:tensorflow:**** Trainable Variables ****
> ERROR:tensorflow:Error recorded from training_loop: module 'tensorflow_estimator.python.estimator.api._v1.estimator.tpu' has no attribute 'CrossShardOptimizer'
> INFO:tensorflow:training_loop marked as finished
> WARNING:tensorflow:Reraising captured error
>
> ---------------------------------------------------------------------------
>
> AttributeError Traceback (most recent call last)
>
> <ipython-input-23-5adfa9741e65> in <module>()
> 3 start_time = datetime.now()
> 4 FLAGS.training_start_time = start_time
> ----> 5 main()
> 6 print("Pretraining took", datetime.now() - start_time)
>
> 25 frames
>
> <ipython-input-22-85dd70a98293> in main()
> 93 max_predictions_per_seq=FLAGS.max_predictions_per_seq,
> 94 is_training=True)
> ---> 95 estimator.train(input_fn=train_input_fn, max_steps=FLAGS.num_train_steps, saving_listeners=[listener])
> 96
> 97 FLAGS.loop_times = loop_times
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py in train(self, input_fn, hooks, steps, max_steps, saving_listeners)
> 3033 finally:
> 3034 rendezvous.record_done('training_loop')
> -> 3035 rendezvous.raise_errors()
> 3036
> 3037 def evaluate(self,
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/tpu/error_handling.py in raise_errors(self, timeout_sec)
> 134 else:
> 135 logging.warn('Reraising captured error')
> --> 136 six.reraise(typ, value, traceback)
> 137
> 138 for k, (typ, value, traceback) in kept_errors:
>
> /usr/local/lib/python3.6/dist-packages/six.py in reraise(tp, value, tb)
> 701 if value.__traceback__ is not tb:
> 702 raise value.with_traceback(tb)
> --> 703 raise value
> 704 finally:
> 705 value = None
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py in train(self, input_fn, hooks, steps, max_steps, saving_listeners)
> 3028 steps=steps,
> 3029 max_steps=max_steps,
> -> 3030 saving_listeners=saving_listeners)
> 3031 except Exception: # pylint: disable=broad-except
> 3032 rendezvous.record_error('training_loop', sys.exc_info())
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/estimator.py in train(self, input_fn, hooks, steps, max_steps, saving_listeners)
> 368
> 369 saving_listeners = _check_listeners_type(saving_listeners)
> --> 370 loss = self._train_model(input_fn, hooks, saving_listeners)
> 371 logging.info('Loss for final step: %s.', loss)
> 372 return self
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/estimator.py in _train_model(self, input_fn, hooks, saving_listeners)
> 1159 return self._train_model_distributed(input_fn, hooks, saving_listeners)
> 1160 else:
> -> 1161 return self._train_model_default(input_fn, hooks, saving_listeners)
> 1162
> 1163 def _train_model_default(self, input_fn, hooks, saving_listeners):
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/estimator.py in _train_model_default(self, input_fn, hooks, saving_listeners)
> 1189 worker_hooks.extend(input_hooks)
> 1190 estimator_spec = self._call_model_fn(
> -> 1191 features, labels, ModeKeys.TRAIN, self.config)
> 1192 global_step_tensor = training_util.get_global_step(g)
> 1193 return self._train_with_estimator_spec(estimator_spec, worker_hooks,
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py in _call_model_fn(self, features, labels, mode, config)
> 2855 else:
> 2856 return super(TPUEstimator, self)._call_model_fn(features, labels, mode,
> -> 2857 config)
> 2858 else:
> 2859 if mode == _INFERENCE_ON_TPU_MODE:
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/estimator.py in _call_model_fn(self, features, labels, mode, config)
> 1147
> 1148 logging.info('Calling model_fn.')
> -> 1149 model_fn_results = self._model_fn(features=features, **kwargs)
> 1150 logging.info('Done calling model_fn.')
> 1151
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py in _model_fn(features, labels, mode, config, params)
> 3157 if mode == model_fn_lib.ModeKeys.TRAIN:
> 3158 compile_op, loss, host_call, scaffold_fn, training_hooks = (
> -> 3159 _train_on_tpu_system(ctx, model_fn_wrapper, dequeue_fn))
> 3160 if ctx.embedding_config:
> 3161 g = ops.get_default_graph()
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py in _train_on_tpu_system(ctx, model_fn_wrapper, dequeue_fn)
> 3602 num_shards=ctx.num_replicas,
> 3603 outputs_from_all_shards=False,
> -> 3604 device_assignment=ctx.device_assignment)
> 3605
> 3606 loss = loss[0]
>
> /tensorflow-1.15.2/python3.6/tensorflow_core/python/tpu/tpu.py in split_compile_and_shard(computation, inputs, num_shards, input_shard_axes, outputs_from_all_shards, output_shard_axes, infeed_queue, device_assignment, name)
> 1275 infeed_queue=infeed_queue,
> 1276 device_assignment=device_assignment,
> -> 1277 name=name)
> 1278
> 1279 # There must be at least one shard since num_shards > 0.
>
> /tensorflow-1.15.2/python3.6/tensorflow_core/python/tpu/tpu.py in split_compile_and_replicate(***failed resolving arguments***)
> 990 vscope.set_custom_getter(custom_getter)
> 991
> --> 992 outputs = computation(*computation_inputs)
> 993
> 994 vscope.set_use_resource(saved_use_resource)
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py in multi_tpu_train_steps_on_single_shard(replica_id)
> 3587 lambda i, loss: i < iterations_per_loop_var,
> 3588 lambda i, loss: [i + 1, single_tpu_train_step(i)],
> -> 3589 inputs=[0, _INITIAL_LOSS])
> 3590 return outputs[1:]
> 3591
>
> /tensorflow-1.15.2/python3.6/tensorflow_core/python/tpu/training_loop.py in while_loop(***failed resolving arguments***)
> 176 inputs = [array_ops.constant(0)]
> 177 return control_flow_ops.while_loop(
> --> 178 condition_wrapper, body_wrapper, inputs, name="", parallel_iterations=1)
> 179
> 180
>
> /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/control_flow_ops.py in while_loop(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, name, maximum_iterations, return_same_structure)
> 2751 ops.add_to_collection(ops.GraphKeys.WHILE_CONTEXT, loop_context)
> 2752 result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants,
> -> 2753 return_same_structure)
> 2754 if maximum_iterations is not None:
> 2755 return result[1]
>
> /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/control_flow_ops.py in BuildLoop(self, pred, body, loop_vars, shape_invariants, return_same_structure)
> 2243 with ops.get_default_graph()._mutation_lock(): # pylint: disable=protected-access
> 2244 original_body_result, exit_vars = self._BuildLoop(
> -> 2245 pred, body, original_loop_vars, loop_vars, shape_invariants)
> 2246 finally:
> 2247 self.Exit()
>
> /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/control_flow_ops.py in _BuildLoop(self, pred, body, original_loop_vars, loop_vars, shape_invariants)
> 2168 expand_composites=True)
> 2169 pre_summaries = ops.get_collection(ops.GraphKeys._SUMMARY_COLLECTION) # pylint: disable=protected-access
> -> 2170 body_result = body(*packed_vars_for_body)
> 2171 post_summaries = ops.get_collection(ops.GraphKeys._SUMMARY_COLLECTION) # pylint: disable=protected-access
> 2172 if not nest.is_sequence_or_composite(body_result):
>
> /tensorflow-1.15.2/python3.6/tensorflow_core/python/tpu/training_loop.py in body_wrapper(*inputs)
> 119 else:
> 120 dequeue_ops = []
> --> 121 outputs = body(*(inputs + dequeue_ops))
> 122
> 123 # If the computation only returned one value, make it a tuple.
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py in <lambda>(i, loss)
> 3586 outputs = training_loop.while_loop(
> 3587 lambda i, loss: i < iterations_per_loop_var,
> -> 3588 lambda i, loss: [i + 1, single_tpu_train_step(i)],
> 3589 inputs=[0, _INITIAL_LOSS])
> 3590 return outputs[1:]
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py in train_step(step)
> 1713
> 1714 estimator_spec = self._verify_estimator_spec(
> -> 1715 self._call_model_fn(features, labels))
> 1716 loss, train_op = estimator_spec.loss, estimator_spec.train_op
> 1717
>
> /tensorflow-1.15.2/python3.6/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py in _call_model_fn(self, features, labels, is_export_mode)
> 1992 _add_item_to_params(params, _CTX_KEY, user_context)
> 1993
> -> 1994 estimator_spec = self._model_fn(features=features, **kwargs)
> 1995 if (running_on_cpu and
> 1996 isinstance(estimator_spec, model_fn_lib._TPUEstimatorSpec)): # pylint: disable=protected-access
>
> <ipython-input-21-bc7abb17e900> in model_fn(features, labels, mode, params)
> 67 if mode == tf.estimator.ModeKeys.TRAIN:
> 68 train_op = optimization.create_optimizer(
> ---> 69 total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
> 70
> 71 output_spec = tf.contrib.tpu.TPUEstimatorSpec(
>
> /usr/local/lib/python3.6/dist-packages/bert/optimization.py in create_optimizer(loss, init_lr, num_train_steps, num_warmup_steps, use_tpu)
> 66
> 67 if use_tpu:
> ---> 68 optimizer = tf.estimator.tpu.CrossShardOptimizer(optimizer)
> 69
> 70 tvars = tf.trainable_variables()
>
> /tensorflow-1.15.2/python3.6/tensorflow_core/python/util/module_wrapper.py in __getattr__(self, name)
> 191 def __getattr__(self, name):
> 192 try:
> --> 193 attr = getattr(self._tfmw_wrapped_module, name)
> 194 except AttributeError:
> 195 if not self._tfmw_public_apis:
>
> AttributeError: module 'tensorflow_estimator.python.estimator.api._v1.estimator.tpu' has no attribute 'CrossShardOptimizer'
>
Any insights and discussions are appreciated. Thanks. | closed | 2020-08-07T20:04:19Z | 2020-08-10T03:26:30Z | https://github.com/google-research/bert/issues/1135 | [] | liuyibox | 1 |
pytest-dev/pytest-xdist | pytest | 512 | Xdist sends invalid URL to selenium? | Hello. Please help me figure out an issue launching Selenium-based tests in parallel via pytest and xdist.
I have two Firefox instances in Docker and launch 2 independent tests this way:
`pytest -k "crypto" -n2`
My Pipfile:
```
PyPOM==2.2.0
pytest==5.1.2
pytest-bdd==3.2.1
pytest-xdist==1.31.0
pytest-base-url==1.4.1
pytest-selenium==1.17.0
gcloud==0.18.3
firebase==3.0.1
python_jwt==3.2.4
sseclient-py==1.7
py-postgresql==1.2.1
requests-toolbelt==0.9.1
arrow==0.15.4
kombu==4.6.7
```
My pytest.ini
```
[pytest]
bdd_features_base_dir = tests
base_url = https://projectname.com
sensitive_url = https://projectname.com
addopts = --driver Remote
--port 4444
--capability browserName firefox
--html tests/logs/report.html
--cucumberjson=tests/uploads/cucumber.json
--cucumberjson-expanded
-v
filterwarnings =
ignore::DeprecationWarning
markers =
smoke: fast result
crypto: add calculations
```
Here is the stack trace:
```
result = call_fixture_func(fixturefunc, request, kwargs)
../../.pyenv/versions/3.6.8/envs/projectname/lib/python3.6/site-packages/_pytest/fixtures.py:778: in call_fixture_func
res = fixturefunc(**kwargs)
tests/functional/test_02_deposit_by_dash.py:20: in user_signed_in
page_main.open()
../../.pyenv/versions/3.6.8/envs/projectname/lib/python3.6/site-packages/pypom/page.py:130: in open
self.driver_adapter.open(self.seed_url)
../../.pyenv/versions/3.6.8/envs/projectname/lib/python3.6/site-packages/pypom/selenium_driver.py:48: in open
self.driver.get(url)
../../.pyenv/versions/3.6.8/envs/projectname/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py:333: in get
self.execute(Command.GET, {'url': url})
../../.pyenv/versions/3.6.8/envs/projectname/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py:321: in execute
self.error_handler.check_response(response)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <selenium.webdriver.remote.errorhandler.ErrorHandler object at 0x10faa6f98>
response = {'status': 400, 'value': '{"value":{"error":"invalid argument","message":"Malformed URL: / is not a valid URL.","stack...dArgumentError@chrome://marionette/content/error.js:304:5\\nget@chrome://marionette/content/listener.js:1132:19\\n"}}'}
E selenium.common.exceptions.InvalidArgumentException: Message: Malformed URL: / is not a valid URL.
```
See the screenshots below.
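Meanwhile, a hedged workaround I'm testing: overriding the `base_url` fixture (which pytest-base-url normally resolves from pytest.ini) in `conftest.py` so every xdist worker gets the value explicitly. I'm not sure yet whether this addresses the root cause:
```
# conftest.py
import pytest

@pytest.fixture
def base_url():
    # Pin the value instead of relying on pytest.ini resolution per worker.
    return "https://projectname.com"
```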
It works without -n2:
`pytest -k "crypto"`
<img width="1385" alt="Screenshot 2020-03-09 at 16 14 51" src="https://user-images.githubusercontent.com/54805270/76216432-a59a0a00-6221-11ea-87a6-7637bfe7079f.png">
<img width="1387" alt="Screenshot 2020-03-09 at 16 15 18" src="https://user-images.githubusercontent.com/54805270/76216461-ad59ae80-6221-11ea-8bd3-265dc41fd13c.png"> | open | 2020-03-09T13:19:07Z | 2020-03-19T14:58:30Z | https://github.com/pytest-dev/pytest-xdist/issues/512 | [] | RuslanBM | 1 |
huggingface/datasets | tensorflow | 6,484 | [Feature Request] Dataset versioning | **Is your feature request related to a problem? Please describe.**
I am working on a project where I would like to test different preprocessing methods for my ML data, so I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the `revision` keyword: it was not redownloading the data and kept reading cached data, even though the revision was different, until I added `download_mode="force_redownload"`.
Of course, I may have done something wrong or missed a setting somewhere!
**Describe the solution you'd like**
The solution would allow me to easily work with revisions:
- create a new dataset (by combining things, different preprocessing, ..) and give it a new revision (v.1.2.3), maybe like this:
`dataset_audio.push_to_hub('kenfus/xy', revision='v1.0.2')`
- then, get the current revision as follows:
```
dataset = load_dataset(
'kenfus/xy', revision='v1.0.2',
)
```
this downloads the new version (rather than loading a different cached revision), and all future map, filter, etc. operations are done on this dataset, not loaded from a cache produced for a different revision.
- if I rerun the script, the caching should be smart enough at every step not to reuse a mapping operation from a different revision.
**Describe alternatives you've considered**
I created my own caching, putting `download_mode="force_redownload"` and `load_from_cache_file=False,` everywhere.
**Additional context**
Thanks a lot for your great work! Creating NLP datasets and training a model with them is really easy and straightforward with huggingface.
This is the data loading in my script:
```
## CREATE PATHS
prepared_dataset_path = os.path.join(
DATA_FOLDER, str(DATA_VERSION), "prepared_dataset"
)
os.makedirs(os.path.join(DATA_FOLDER, str(DATA_VERSION)), exist_ok=True)
## LOAD DATASET
if os.path.exists(prepared_dataset_path):
print("Loading prepared dataset from disk...")
dataset_prepared = load_from_disk(prepared_dataset_path)
else:
print("Loading dataset from HuggingFace Datasets...")
dataset = load_dataset(
PATH_TO_DATASET, revision=DATA_VERSION, download_mode="force_redownload"
)
print("Preparing dataset...")
dataset_prepared = dataset.map(
prepare_dataset,
remove_columns=["audio", "transcription"],
num_proc=os.cpu_count(),
load_from_cache_file=False,
)
dataset_prepared.save_to_disk(prepared_dataset_path)
del dataset
if CHECK_DATASET:
## CHECK DATASET
dataset_prepared = dataset_prepared.map(
check_dimensions, num_proc=os.cpu_count(), load_from_cache_file=False
)
dataset_filtered = dataset_prepared.filter(
lambda example: not example["incorrect_dimension"],
load_from_cache_file=False,
)
for example in dataset_prepared.filter(
lambda example: example["incorrect_dimension"], load_from_cache_file=False
):
print(example["path"])
print(
f"Number of examples with incorrect dimension: {len(dataset_prepared) - len(dataset_filtered)}"
)
print("Number of examples train: ", len(dataset_filtered["train"]))
print("Number of examples test: ", len(dataset_filtered["test"]))
```
| open | 2023-12-08T16:01:35Z | 2023-12-11T19:13:46Z | https://github.com/huggingface/datasets/issues/6484 | [] | kenfus | 2 |
ckan/ckan | api | 7,617 | Customize R/W datastore resource `url_type`s | The datastore API returns an error when `datastore_create`, `datastore_upsert`, `datastore_delete` are used on resources that have a resource_type value different than `"datastore"`, unless `force=True` is passed.
This is a *good* thing for the common case of data in the datastore created by xloader/datapusher because any updated data will be lost on the next reload.
Other cases (like table designer resources) should be able to allow use without `force=True`, so I'd like to make it possible for plugins to define new resource types that allow datastore modifications. | closed | 2023-05-29T18:29:03Z | 2023-06-01T10:23:09Z | https://github.com/ckan/ckan/issues/7617 | [] | wardi | 0 |
gee-community/geemap | streamlit | 652 | Deleting RoI |
### Environment Information
- geemap version: 0.8.18
- Python version: 3.9
- Operating System: Debian
### Description
If the user has drawn many ROIs, everything works as expected when you apply any kind of processing to the last drawn ROI. However, if you delete the last drawn ROI, the processing keeps being applied to the deleted ROI instead of falling back to the previous one.
| closed | 2021-08-31T17:40:47Z | 2021-09-03T12:57:37Z | https://github.com/gee-community/geemap/issues/652 | [
"bug"
] | Gedeon-m-gedus | 2 |
slackapi/bolt-python | fastapi | 612 | Setting the logger for the Slack Bolt APP with a NullHandler still seems to log | ## Reproducible in:
```
slack_bolt >= 1.11.1
slack_sdk>=3.9.0,<4
```
## Python runtime version
`python3.9
`
## OS info
Not relevant
## Steps to reproduce:
```python
import logging
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
from slack_sdk import WebClient
app_token = os.environ.get("SLACK_APP_TOKEN")
bot_token = os.environ.get("SLACK_BOT_TOKEN")
http_proxy = os.environ.get("SLACK_HTTP_PROXY")
signing_secret = os.environ.get("SLACK_SIGNING_SECRET")
# Our Logger
logger = logging.getLogger("our_logger")
# Logger passed into slack to silence internal noise
slack_logger = logging.getLogger("internal_slack")
slack_logger.addHandler(logging.NullHandler())
def create_app() -> App:
client = WebClient(
token=bot_token,
proxy=http_proxy,
)
app = App(
client=client,
token=bot_token,
ssl_check_enabled=False,
signing_secret=signing_secret,
logger=slack_logger, # Expected this would silence the internal logs
)
@app.event("message")
def handle_other_message_events(message, say) -> None:
logger.info("Expect to see this as it is my logger")
# ... Some Logic
return app
if __name__ == "__main__":
app = create_app()
handler = SocketModeHandler(app, app_token, proxy=http_proxy)
handler.start()
```
## Expected/Actual result
As a follow-up to https://github.com/slackapi/bolt-python/issues/605, we decided to set our own logger and pass it into the app, as shown here: https://github.com/slackapi/bolt-python/blob/main/slack_bolt/app/app.py#L90. But this still led to us seeing internal logs when running:
```
2022-03-04T18:56:25.130465+0000 WARNING [1:MainThread] app.py:203 As you gave `client` as well, `token` will be unused.
2022-03-04T18:56:25.763592+0000 INFO [1:MainThread] client.py:197 A new session has been established (session id: 719380c4-be12-4d31-80b7-2c4bacbfa608)
2022-03-04T18:56:25.763798+0000 INFO [1:MainThread] base_handler.py:50 ⚡️ Bolt app is running!
2022-03-04T18:56:25.800199+0000 INFO [1:Thread-1] client.py:278 Starting to receive messages from a new connection (session id: 719380c4-be12-4d31-80b7-2c4bacbfa608)
```
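For completeness, the fallback we're considering: silencing the library loggers by name via the standard logging API. The logger names `slack_bolt` and `slack_sdk` are our guess at what the libraries use:
```python
import logging

for name in ("slack_bolt", "slack_sdk"):
    lib_logger = logging.getLogger(name)
    lib_logger.addHandler(logging.NullHandler())
    lib_logger.propagate = False  # keep records out of the root handlers
```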
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules. | closed | 2022-03-04T18:56:55Z | 2022-03-05T17:22:21Z | https://github.com/slackapi/bolt-python/issues/612 | [
"question",
"area:sync",
"need info"
] | mwilli20 | 2 |
kizniche/Mycodo | automation | 739 | Integration with Home Assistant and OpenHAB | What scenario should be used for such an integration?
My Mycodo instance has many sensors connected; how can I use their data in Home Assistant?
I thought I could use an MQTT broker. Is that the correct way? | closed | 2020-02-04T11:18:07Z | 2023-02-25T14:27:27Z | https://github.com/kizniche/Mycodo/issues/739 | [] | cohe4ko | 4 |
adbar/trafilatura | web-scraping | 672 | spider: restrict search to given URL pattern | Both on the CLI and with Python the spider component stores and retrieves URLs which are possibly out of scope if the input URL is restricted to a portion of a domain, e.g. `https://www.example.org/news/en/`.
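Something as simple as a prefix check on stored and retrieved URLs might be enough. A hedged sketch of the idea, not the actual spider API:
```python
def in_scope(url: str, base: str = "https://www.example.org/news/en/") -> bool:
    # Keep only URLs under the user-supplied base path.
    return url.startswith(base)
```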
This behavior should be further investigated, tested and/or improved. | closed | 2024-08-13T11:15:45Z | 2024-08-13T17:22:48Z | https://github.com/adbar/trafilatura/issues/672 | [
"enhancement"
] | adbar | 0 |
httpie/cli | api | 1,044 | shadow file name 'ssl.py' may cause run time error when tring to run the program from source code | 
As you can see in the PyCharm screenshot, it reports that `ssl` has no attribute named 'CERT_REQUIRED'. But when I looked up the official docs for urllib and pyOpenSSL, I became sure that the attribute should exist.
I reinstalled the requirements, plus OpenSSL and urllib, in the venv, but it still did not work. Then I thought maybe the shadowing name of `ssl.py` caused the problem. After renaming `ssl.py`, everything went back to normal.
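A minimal illustration of the shadowing effect (file names made up):
```python
# project/ssl.py  - an unrelated local module with this unlucky name
# project/main.py:
import ssl  # resolves to the local ssl.py, not the stdlib module

print(ssl.CERT_REQUIRED)  # AttributeError when the local ssl.py wins
```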
So I just renamed `ssl.py` in the httpie folder and updated the corresponding `ssl` imports in the httpie folder to solve this. | closed | 2021-03-06T06:38:12Z | 2021-10-13T08:50:10Z | https://github.com/httpie/cli/issues/1044 | [
"invalid"
] | Matrix-Cain | 3 |
rougier/scientific-visualization-book | numpy | 29 | Transparency example (fig 12.1) not showing properly in pdf | In figure 12.1 on page 149, the rightmost 3 markers are shown as white in the pdf.
When I run the [script](https://github.com/rougier/scientific-visualization-book/blob/master/code/optimization/transparency.py) and save as .pdf the same happens, but when I save that figure as .png the markers are shades of grey (as expected).
Might just be a bug in matplotlib, since one would expect that saving as .pdf produces the same image as saving as .png. Besides the fact that one is vector based and the other is raster based. | closed | 2021-11-28T11:04:54Z | 2022-01-03T20:55:29Z | https://github.com/rougier/scientific-visualization-book/issues/29 | [] | RElbers | 10 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 181 | Replacing the cookie has no effect | I saw it mentioned that you can substitute your own cookie, but in my tests replacing the s_v_web_id value with the one from my browser does not work, while the value from the author's project does, which is strange. I have tested to rule out interference from other cookies. Under what conditions did the author obtain the s_v_web_id value?
matplotlib/matplotlib | data-visualization | 29,774 | [Bug]: triage_tests.py is brittle against failures in test modules that have only check_figures_equal test | ### Bug summary
If a check_figures_equal test fails in test_pickle.py (which has no baseline image tests at all), then running triage_tests.py fails with "ValueError: Can't find baseline dir for test_pickle".
### Code for reproduction
```Python
Make test_pickle::test_complete fail, e.g. by commenting out the `fig_test.figimage(loaded.canvas.renderer.buffer_rgba())` line; run that test (which should fail), and run tools/triage_tests.py.
```
### Actual outcome
ValueError: Can't find baseline dir for test_pickle
### Expected outcome
The test triager succeeds.
### Additional information
Fixing involves either creating an empty test_pickle directory in the baseline_images directory, or allowing missing directories in triage_tests (which already interprets missing *baselines* as arising from check_figures_equal tests):
```patch
diff --git i/tools/triage_tests.py w/tools/triage_tests.py
index 5153b1c712..6df720f29d 100644
--- i/tools/triage_tests.py
+++ w/tools/triage_tests.py
@@ -263,7 +263,7 @@ class Entry:
]
self.thumbnails = [self.dir / x for x in self.thumbnails]
- if not Path(self.destdir, self.generated).exists():
+ if self.destdir is None or not Path(self.destdir, self.generated).exists():
# This case arises from a check_figures_equal test.
self.status = 'autogen'
elif ((self.dir / self.generated).read_bytes()
@@ -281,7 +281,6 @@ class Entry:
path = self.source / baseline_dir / reldir
if path.is_dir():
return path
- raise ValueError(f"Can't find baseline dir for {reldir}")
@property
def display(self):
```
Opening as an issue because I don't really mind about which solution to pick, so I'll let whoever cares pick one.
### Operating system
_No response_
### Matplotlib Version
3.11.0.dev525+g9f7b3dd205
### Matplotlib Backend
_No response_
### Python version
3.13
### Jupyter version
_No response_
### Installation
git checkout | open | 2025-03-18T23:17:10Z | 2025-03-18T23:17:10Z | https://github.com/matplotlib/matplotlib/issues/29774 | [
"topic: testing"
] | anntzer | 0 |
JoeanAmier/TikTokDownloader | api | 418 | [Malfunction] Cover image cannot be downloaded |
When "original_cover": true is enabled, the cover image cannot be downloaded.
| open | 2025-03-04T08:10:30Z | 2025-03-04T10:26:43Z | https://github.com/JoeanAmier/TikTokDownloader/issues/418 | [] | w7335213 | 7 |
matplotlib/matplotlib | data-visualization | 29,648 | [Bug]: AttributeError: 'Legend' object has no attribute 'legendHandles' | ### Bug summary
While creating a simple plot with two lines, I am getting the error: `AttributeError: 'Legend' object has no attribute 'legendHandles'`
### Code for reproduction
```Python
import pandas as pd
import numpy as np
df = pd.DataFrame({
'x': np.arange(0, 10, 1),
'y': np.random.rand(10)
})
df2 = pd.DataFrame({
'x': np.arange(5, 15, 1),
'y': np.random.rand(10)
})
ax = df.plot(x='x', y='y', color='blue', figsize=(30, 6), marker='o', title="title")
df2.plot(x='x', y='y', marker='o', ax=ax, color='orange', markersize=15 , label='my-label')
```
### Actual outcome
<img width="1294" alt="Image" src="https://github.com/user-attachments/assets/cdadf9cc-2f2a-49e6-a4f6-f10c107f13fb" />
### Expected outcome
No error, and a figure with two labels.
### Additional information
_No response_
### Operating system
Debian
### Matplotlib Version
3.10.0
### Matplotlib Backend
module://matplotlib_inline.backend_inline
### Python version
3.10.16
### Jupyter version
_No response_
### Installation
pip | closed | 2025-02-20T09:58:25Z | 2025-02-20T17:30:05Z | https://github.com/matplotlib/matplotlib/issues/29648 | [
"Community support"
] | camilaagw | 2 |
tortoise/tortoise-orm | asyncio | 1,190 | How do I use XA in Tortoise-orm? | I wanted to use XA + MySQL, but so far I have only found `tortoise.transactions.in_transaction(connection_name=None)` in the docs. What should I do?
"enhancement"
] | tufbel | 6 |
QuivrHQ/quivr | api | 2,747 | Implement Realtime on Knowledge Table | Realtime with only the knowledge of the specific brain of the user
[TECH-47](https://linear.app/getquivr/issue/TECH-47/add-a-processing-status-on-the-knowledge) required before that | closed | 2024-06-26T08:50:30Z | 2024-07-16T15:00:49Z | https://github.com/QuivrHQ/quivr/issues/2747 | [] | StanGirard | 1 |
coqui-ai/TTS | python | 3,433 | [Feature request] CLI or importable script for generating dataset & fine-tuning XTTS |
**🚀 Feature Description**
Right now (and correct me if I'm wrong) it seems that the [Colab notebook](https://colab.research.google.com/drive/1GiI4_X724M8q2W-zZ-jXo7cWTV7RfaH-?usp=sharing#scrollTo=zd2xo_7a8wyj) is the only way to fine-tune XTTS from zero (i.e. no dataset).
I would love to see the code from this notebook extracted and separated from Gradio and Colab so that it can be used via the CLI or imported into a custom script.
Ideally this would involve one function/CLI command that takes 1 or more audio files as input and generates the dataset (with some kind of optional config, so we can specify format, etc?) from that, then another to train based on that dataset.
| closed | 2023-12-14T22:57:05Z | 2024-02-24T10:55:07Z | https://github.com/coqui-ai/TTS/issues/3433 | [
"wontfix",
"feature request"
] | platform-kit | 5 |
graphql-python/gql | graphql | 342 | Fails to honour HTTPS_PROXY environment variable | **Describe the bug**
`gql` does not handle proxy usage when using the asyncio API. Other libraries such as httpx support this out of the box by honouring the HTTPS_PROXY / HTTP_PROXY environment variable.
**To Reproduce**
- deploy a squid proxy.
- set the HTTPS_PROXY variable to that corresponding proxy
- make a graphql request to github
**Expected behavior**
The request goes through the proxy. We block all outgoing network requests that don't go through the proxy, so this means we get a timeout when making graphql requests.
**System info (please complete the following information):**
- OS: debian
- Python version: 3.8
- gql version: 3.3.0
- graphql-core version: 3.2.1
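In the meantime, a hedged workaround sketch: falling back to the sync transport, on the assumption that it inherits requests' standard `HTTPS_PROXY` handling:
```python
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport

# requests reads HTTP_PROXY/HTTPS_PROXY from the environment by default.
transport = RequestsHTTPTransport(url="https://api.github.com/graphql")
client = Client(transport=transport, fetch_schema_from_transport=False)
result = client.execute(gql("{ __typename }"))
```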
ps. Alternatively, this could be addressed by supporting async httpx. | closed | 2022-07-14T19:50:10Z | 2022-12-07T10:39:04Z | https://github.com/graphql-python/gql/issues/342 | [
"type: question or discussion"
] | jonathan-boudreau-work | 3 |
apache/airflow | automation | 47,220 | on_failure_callback functions specified in task aren't executed | ### Apache Airflow version
2.10.5
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
on_failure_callback is not called when defined in a `@task`.
### What you think should happen instead?
Simple example below included for clarity. On running this DAG, I would expect the `on_callback_failure` function to be called twice. However based on the logs, only the function call specified within the `@dag` decorator is called.
```
"""dags/some_dag.py"""
from functools import partial
import logging
from airflow.decorators import dag, task
logger = logging.getLogger(__name__)
def failure_callback(context, message):
logger.warning(f"+++ {message} +++")
@dag(on_failure_callback=partial(failure_callback, message="Called from DAG"))
def some_dag() -> None:
@task(on_failure_callback=partial(failure_callback, message="Called from within task"))
def task_to_fail():
raise ValueError
task_to_fail()
some_dag()
```
Checking the logs at `airflow/logs/scheduler/latest/dags/some_dag.py.log` shows
```
[2025-02-28T15:44:46.563+0000] {logging_mixin.py:190} INFO - [2025-02-28T15:44:46.563+0000] {some_dag.py:10} WARNING - +++ Called from DAG +++
```
But it does not show the message `+++ Called from within task +++`.
### How to reproduce
Create DAG file as specified above. Run Dag and inspect logs.
### Operating System
Debian GNU/Linux 12 (Bookworm)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.16.0
apache-airflow-providers-apache-spark==4.11.0
apache-airflow-providers-celery==3.8.5
apache-airflow-providers-common-compat==1.3.0
apache-airflow-providers-common-io==1.5.0
apache-airflow-providers-common-sql==1.21.0
apache-airflow-providers-fab==1.5.2
apache-airflow-providers-ftp==3.12.0
apache-airflow-providers-http==5.0.0
apache-airflow-providers-imap==3.8.0
apache-airflow-providers-postgres==5.13.0
apache-airflow-providers-slack==7.3.1
apache-airflow-providers-smtp==1.9.0
apache-airflow-providers-sqlite==4.0.0
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-02-28T15:54:52Z | 2025-02-28T16:09:18Z | https://github.com/apache/airflow/issues/47220 | [
"kind:bug",
"area:core",
"needs-triage"
] | CEardley | 2 |
samuelcolvin/watchfiles | asyncio | 245 | Newly created subdirectories themselves arent getting watched recursively | ### Description
Not sure if this is the desired behavior, but when a new subdirectory is created, I want files within that subdirectory to trigger watch events. Right now it seems they do not. Say I watch parentDirectoryA and create childDirectoryB: I see the file event for the newly created directory fire, but when I add a file to childDirectoryB, the file watcher does not seem to trigger.
If this is actually the desired behavior, what is the best practice for watching these additional subdirectories as they are created?
### Example Code
_No response_
### Watchfiles Output
_No response_
### Operating System & Architecture
OS X Monterey 12.6
### Environment
Docker
### Python & Watchfiles Version
python:3.9.16 watchfiles: 0.19.0
### Rust & Cargo Version
_No response_ | closed | 2023-08-30T15:55:38Z | 2023-10-13T12:58:47Z | https://github.com/samuelcolvin/watchfiles/issues/245 | [
"bug"
] | nicholasbulka | 4 |
d2l-ai/d2l-en | data-science | 1,917 | The training process of CNN implemented by Pytorch may have something wrong on the master branch. | In [7.6.2](http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_convolutional-neural-networks/lenet.html#training), [8.1.3](http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_convolutional-modern/alexnet.html#training), [8.2.3](http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_convolutional-modern/vgg.html#training), [8.3.3](http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_convolutional-modern/nin.html#training), and [8.4.3](http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_convolutional-modern/googlenet.html#training), the loss of the CNN implemented in PyTorch either does not decrease at all or does not decrease in the first few epochs, which differs from the MXNet and TensorFlow results. | closed | 2021-10-01T07:41:33Z | 2022-01-07T00:14:26Z | https://github.com/d2l-ai/d2l-en/issues/1917 | [] | nnnnnzy | 2 |
freqtrade/freqtrade | python | 11,034 | Conditional Hyperopting of Parameters - Possible? |
## Describe your environment
* Operating system: Ubuntu 24.04 LTS WSL
* Python Version: Python 3.10.12
* CCXT version: ccxt==4.4.25
* Freqtrade Version: freqtrade 2024.11
## Your question
*Ask the question you have not been able to find an answer in the [Documentation](https://www.freqtrade.io/en/latest/)*
Let's say I have this strategy
```
from freqtrade.strategy import IStrategy, IntParameter
import talib.abstract as ta
import pandas as pd
import freqtrade.vendor.qtpylib.indicators as qtpylib
class SimpleEMA(IStrategy):
timeframe = '5m'
startup_candle_count = 720
stoploss = -0.99
minimal_roi = {"0": 0.2}
a_timeperiod = IntParameter(5, 18, default=10, space="buy")
b_timeperiod = IntParameter(60, 80, default=10, space="buy")
x_timeperiod = IntParameter(5, 18, default=10, space="buy")
y_timeperiod = IntParameter(60, 80, default=10, space="buy")
c_timeperiod = IntParameter(5, 18, default=10, space="buy")
d_timeperiod = IntParameter(60, 80, default=10, space="buy")
def leverage(self, pair: str, current_time, current_rate, proposed_leverage: float, **kwargs) -> float:
return 8.0
    def populate_indicators(self, dataframe: pd.DataFrame, metadata: dict) -> pd.DataFrame:
        # 'ha_close' is assumed to come from a heikin-ashi step omitted here for brevity
dataframe['a'] = ta.EMA(dataframe['ha_close'], timeperiod=self.a_timeperiod.value)
dataframe['b'] = ta.EMA(dataframe['ha_close'], timeperiod=self.b_timeperiod.value)
dataframe['x'] = ta.EMA(dataframe['ha_close'], timeperiod=self.x_timeperiod.value)
dataframe['y'] = ta.EMA(dataframe['ha_close'], timeperiod=self.y_timeperiod.value)
dataframe['c'] = ta.EMA(dataframe['ha_close'], timeperiod=self.c_timeperiod.value)
dataframe['d'] = ta.EMA(dataframe['ha_close'], timeperiod=self.d_timeperiod.value)
return dataframe
def populate_entry_trend(self, dataframe, metadata):
        if (condition_1):  # condition_1 / condition_2 are placeholders for my real regime checks
long_condition = (
qtpylib.crossed_above(dataframe['a'], dataframe['b'])
)
elif (condition_2):
long_condition = (
qtpylib.crossed_below(dataframe['x'], dataframe['y'])
)
else:
long_condition = (
qtpylib.crossed_above(dataframe['c'], dataframe['d'])
)
dataframe.loc[long_condition, 'enter_long'] = 1
return dataframe
def populate_exit_trend(self, dataframe, metadata):
return dataframe
```
My questions:
- During hyperopting, will all the parameters be set, or only the matching condition values are tweaked on the fly?
- If not, is there a way to let hyperopt only spend resource on the matching entry conditions params only at any iteration in time? As in, if `condition_1` matches, THEN hyperopt tweaks the values of the corresponding `a_timeperiod` and `b_timeperiod`
- If none of the above is anywhere close to what actually happens with hyperopting, kindly enlighten me
My current understanding of hyperopting:
Hyperopting is an automated version of backtesting which swaps out the range of parameter values in millions (if not thousands) of combinations, and picks the best outputs in the end. | closed | 2024-12-03T17:48:28Z | 2024-12-03T18:46:13Z | https://github.com/freqtrade/freqtrade/issues/11034 | [
"Question",
"Hyperopt"
] | seanmavley | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 614 | Remove `indices_tuple` argument from CrossBatchMemory | It doesn't make sense anymore | open | 2023-04-17T21:13:15Z | 2023-04-17T21:13:15Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/614 | [
"enhancement"
] | KevinMusgrave | 0 |
InstaPy/InstaPy | automation | 6,014 | InstaPy liking all available posts within parameters, disregards my set limits |
As the title suggests, I'm having an issue where InstaPy likes every post it sees UNLESS the post violates my **session.set_dont_like** tags or my **session.set_delimit_liking** parameters. This is a problem because I have set:
> session.set_do_like(enabled=True, percentage=30)
## Expected Behavior
InstaPy likes ~%30 of valid posts
## Current Behavior
InstaPy likes ALL available valid posts
## Possible Solution (optional)
I'm thinking I may have configured wrong but I've checked. Any help would be appreciated! Thank you.
## InstaPy configuration
```
# !/usr/bin/python3.9
import random
from instapy import InstaPy
from instapy import smart_run
# get a session!
session = InstaPy(username='username', password='password', headless_browser=True)
# let's go! :>
with smart_run(session):
hashtags = []
random.shuffle(hashtags)
my_hashtags = hashtags[:5]
# general settings
session.set_dont_like([])
session.set_do_follow(enabled=True, percentage=30, times=1)
session.set_do_comment(enabled=False, percentage=10)
session.set_comments([
u'What an amazing shot! :heart_eyes: What do '
u'you think of my recent shot?',
u'What an amazing shot! :heart_eyes: I think '
u'you might also like mine. :wink:',
u'Wonderful!! :heart_eyes: Would be awesome if '
u'you would checkout my photos as well!',
u'Wonderful!! :heart_eyes: I would be honored '
u'if you would checkout my images and tell me '
u'what you think. :wink:',
u'This is awesome!! :heart_eyes: Any feedback '
u'for my photos? :wink:',
u'This is awesome!! :heart_eyes: maybe you '
u'like my photos, too? :wink:',
u'I really like the way you captured this. I '
u'bet you like my photos, too :wink:',
u'I really like the way you captured this. If '
u'you have time, check out my photos, too. I '
u'bet you will like them. :wink:',
u'Great capture!! :smiley: Any feedback for my '
u'recent shot? :wink:',
u'Great capture!! :smiley: :thumbsup: What do '
u'you think of my recent photo?'],
media='Photo')
# ~30% of the by InstaPy viewed posts will be liked
session.set_do_like(enabled=True, percentage=30)
session.set_delimit_liking(enabled=True, max_likes=300)
session.set_delimit_commenting(enabled=True, max_comments=20, min_comments=0)
session.set_relationship_bounds(enabled=True,
potency_ratio=1.25,
delimit_by_numbers=True,
max_followers=50000,
max_following=1000,
min_followers=10,
min_following=10)
session.set_quota_supervisor(enabled=True, sleep_after=["likes_h", "comments_h", "follows", "unfollows", "server_calls_h"],
sleepyhead=True,
stochastic_flow=True,
notify_me=False,
peak_likes_hourly=50,
peak_likes_daily=300,
peak_comments_hourly=0,
peak_comments_daily=0,
peak_follows_hourly=30,
peak_follows_daily=None,
peak_unfollows_hourly=30,
peak_unfollows_daily=50,
peak_server_calls_hourly=270,
peak_server_calls_daily=None)
session.set_skip_users(skip_private=True,
private_percentage=100,
skip_no_profile_pic=True,
no_profile_pic_percentage=100,
skip_business=False,
skip_non_business=False,
business_percentage=100,
skip_business_categories=[],
dont_skip_business_categories=[],
skip_bio_keyword=[],
mandatory_bio_keywords=[])
# customized settings
session.set_action_delays(enabled=True, like=10, randomize=True, random_range_from=2, random_range_to=100)
session.set_relationship_bounds(min_posts=10)
session.set_simulation(enabled=False, percentage=34)
session.set_mandatory_language(enabled=True, character_set=['LATIN', 'CYRILLIC', 'GREEK', 'KATAKANA',
'HIRAGANA', 'HANGUL', 'CJK'])
# activity
session.like_by_tags(my_hashtags, amount=40, media="Photo", skip_top_posts=False)
session.set_user_interact(amount=2, randomize=True, percentage=40)
session.unfollow_users(amount=24, instapy_followed_enabled=True, instapy_followed_param="nonfollowers",
style="FIFO",
unfollow_after=12 * 60 * 60, sleep_delay=501)
session.unfollow_users(amount=24, instapy_followed_enabled=True, instapy_followed_param="all",
style="FIFO", unfollow_after=24 * 60 * 60,
sleep_delay=501)
``` | open | 2021-01-08T17:24:16Z | 2021-07-21T00:19:37Z | https://github.com/InstaPy/InstaPy/issues/6014 | [
"wontfix"
] | rouncewell | 3 |