repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (sequence) | user_login (string) | comments_count (int64)
---|---|---|---|---|---|---|---|---|---|---|---|
allure-framework/allure-python | pytest | 274 | Robotframework keyword args improvement | [//]: # (
. **Note: for support questions, please use Stackoverflow or Gitter.**
. This repository's issues are reserved for feature requests and bug reports.
.
. **In case of any problems with the Allure Jenkins plugin**, please use the following repository
. to create an issue: https://github.com/jenkinsci/allure-plugin/issues
.
. Make sure you have a clear name for your issue. The name should start with a capital
. letter and no dot is required at the end of the sentence. Examples of good issue names:
.
. - The report is broken in IE11
. - Add an ability to disable default plugins
. - Support emoji in test descriptions
)
#### I'm submitting a ...
- [ ] bug report
- [x] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
Keyword arguments are displayed as `arg1 123 ... argN 456`,
or if the argument is named - `arg1 some_arg=123 ...`

#### What is the expected behavior?
Arguments should be named correctly - `some_arg 123`
| closed | 2018-07-10T13:56:02Z | 2024-02-01T18:12:15Z | https://github.com/allure-framework/allure-python/issues/274 | [
"type:enhancement",
"theme:robotframework"
] | skhomuti | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,305 | Enums are not created in new type alias syntax | ### Describe the bug
Using the new type alias syntax with a literal causes errors during model creation. I would expect both syntaxes to work OOTB, as both are type aliases (see Additional context for screenshots).
Tried adding a `type_annotation_map` (code as seen in Additional context); same error.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
https://docs.sqlalchemy.org/en/20/orm/declarative_tables.html#using-python-enum-or-pep-586-literal-types-in-the-type-map
### SQLAlchemy Version in Use
2.0.29
### DBAPI (i.e. the database driver)
NA - Happens at model creation
### Database Vendor and Major Version
NA - Happens at model creation
### Python Version
3.12
### Operating system
OSX
### To Reproduce
```python
from typing import Literal, reveal_type
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
type Options484 = Literal["one", "two"]
Options = Literal["one", "two"]
reveal_type(Options) # Runtime type is '_LiteralGenericAlias'
reveal_type(Options484) # Runtime type is 'TypeAliasType'
class Base(DeclarativeBase): ...
class MyTable(Base):
    __tablename__ = "my_table"
    id: Mapped[int] = mapped_column(primary_key=True)
    option_1: Mapped[Options]  # works
    option_2: Mapped[Options484]  # does not work
```
### Error
```
sqlalchemy.exc.ArgumentError: Could not locate SQLAlchemy Core type for Python type typing.Literal['one', 'two'] inside the 'option_2' attribute Mapped annotation
```
### Additional context
Hovering over the aliases in reveal_type shows they are both picked up by Pylance (Pyright) as type aliases.
**For `Options`**
<img width="564" alt="image" src="https://github.com/sqlalchemy/sqlalchemy/assets/45509143/f58e7612-d8ea-47f5-8c2e-91115f71f177">
**For `Options484`**
<img width="568" alt="image" src="https://github.com/sqlalchemy/sqlalchemy/assets/45509143/b8b14b99-7a17-4f1e-8601-5406ee82fdeb">
Tried adding `type_annotation_map` for `TypeAliasType`, same error as shown in "Error".
```py
from typing import Literal, TypeAliasType
from enum import Enum
from sqlalchemy import Enum as SAEnum
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
type Options484 = Literal["one", "two"]
Options = Literal["one", "two"]
class Base(DeclarativeBase): ...
class MyTable(Base):
    __tablename__ = "my_table"
    id: Mapped[int] = mapped_column(primary_key=True)
    option_1: Mapped[Options]  # works
    option_2: Mapped[Options484]  # does not work
    type_annotation_map = {
        TypeAliasType: SAEnum(Enum),
    }
```
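For context on why the two annotations behave differently: a PEP 695 `type` statement produces a `TypeAliasType` object that wraps the underlying `Literal` in its `__value__` attribute, so it is not the same object as a plain `Literal[...]` assignment. A minimal illustration (plain Python, no SQLAlchemy; same names as above):
```python
from typing import Literal

type Options484 = Literal["one", "two"]  # PEP 695: creates a TypeAliasType
Options = Literal["one", "two"]          # plain Literal alias

print(type(Options484).__name__)  # TypeAliasType
print(Options484.__value__)       # typing.Literal['one', 'two'] -- the wrapped Literal
print(Options)                    # typing.Literal['one', 'two']
```
So any lookup keyed on the raw `Literal` (or on the `TypeAliasType` class itself) will not see through the alias unless the wrapped `__value__` is resolved first.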
PS: 484 should actually be 695, a reference to https://peps.python.org/pep-0695/ which describes the syntax. I got the PEPs confused. 🙃 | closed | 2024-04-23T02:26:42Z | 2025-01-12T19:00:57Z | https://github.com/sqlalchemy/sqlalchemy/issues/11305 | [
"bug",
"near-term release",
"orm - annotated declarative"
] | Alc-Alc | 23 |
sepandhaghighi/samila | matplotlib | 48 | Test Coverage Enhancement | #### Description
Codecov percentage is currently `88%` and I believe I can cover more than that with tests. | closed | 2021-10-08T16:31:42Z | 2021-10-14T08:50:14Z | https://github.com/sepandhaghighi/samila/issues/48 | [
"enhancement"
] | sadrasabouri | 0 |
plotly/dash | flask | 2,487 | [BUG] Updating options of a dropdown triggers update on value of the dropdown for pattern-matching callbacks | If there are two pattern-matching callbacks for a dropdowns' options and value, the callback that changes the dropdown's options also triggers the other callback that uses the dropdown's value.
```
import dash
from dash import dcc, html, Input, Output, MATCH, ALL
import random
app = dash.Dash(__name__)
app.layout = html.Div([
dcc.Dropdown(id="d1", options=[1, 2, 3]),
dcc.Dropdown(id={'type': 'd2', "id": 1}, options=[]),
dcc.Dropdown(id={'type': 'd2', "id": 2}, options=[]),
html.Pre(id={"type": 'out', "id": 1}, style={'margin-top': '100px'}),
html.Pre(id={"type": 'out', "id": 2}, style={'margin-top': '100px'})
])
@app.callback(
Output({"type": 'd2', "id": ALL}, 'options'),
Input('d1', 'value'),
prevent_initial_call=True
)
def update_options(val):
return [{
1: ['a', 'b'],
2: ['A', 'B'],
3: ['x', 'y'],
}[val] for _ in range(2)]
@app.callback(
Output({"type": 'out', "id": MATCH}, 'children'),
Input({"type": 'd2', "id": MATCH}, 'value'),
prevent_initial_call=True
)
def update(_):
return f'got_triggered_{random.randint(1, 10000)}'
if __name__ == "__main__":
app.run_server(debug=True)
```
In the example above, when I use the dropdown 'd1' to change the options of the 'd2' dropdowns, the first callback triggers as expected, but after that the second callback also triggers, which it is not supposed to do.
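A workaround that sometimes helps in this situation (a sketch, not a fix for the underlying behavior): have the value callback bail out when it fires without a real selection, e.g. with `PreventUpdate`:
```python
from dash.exceptions import PreventUpdate

@app.callback(
    Output({"type": 'out', "id": MATCH}, 'children'),
    Input({"type": 'd2', "id": MATCH}, 'value'),
    prevent_initial_call=True
)
def update(value):
    if value is None:
        # the options-only update arrives with no selected value; skip the cascade
        raise PreventUpdate
    return f'got_triggered_{random.randint(1, 10000)}'
```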
**pip list**
- asyncstdlib 3.10.6
- boto3 1.26.96
- botocore 1.29.96
- Brotli 1.0.9
- cachelib 0.9.0
- cachetools 5.3.0
- certifi 2022.12.7
- charset-normalizer 3.1.0
- click 8.1.3
- colorama 0.4.6
- confuse 2.0.0
- Cython 0.29.33
- dash 2.9.2
- dash-bootstrap-components 1.4.1
- dash-core-components 2.0.0
- dash-html-components 2.0.0
- dash-table 5.0.0
- dnspython 2.3.0
- EditorConfig 0.12.3
- Flask 2.2.3
- Flask-Caching 2.0.2
- Flask-Compress 1.13
- gunicorn 20.1.0
- idna 3.4
- itsdangerous 2.1.2
- Jinja2 3.1.2
- jmespath 1.0.1
- jsbeautifier 1.14.7
- MarkupSafe 2.1.2
- more-itertools 9.1.0
- numpy 1.24.2
- pandas 1.5.3
- pip 23.0.1
- plotly 5.13.1
- pymongo 4.3.3
- python-amazon-sp-api 0.18.2
- python-dateutil 2.8.2
- python-dotenv 1.0.0
- pytz 2023.2
- PyYAML 6.0
- requests 2.28.2
- s3transfer 0.6.0
- setuptools 65.5.0
- six 1.16.0
- tenacity 8.2.2
- urllib3 1.26.15
- Werkzeug 2.2.3 | closed | 2023-03-31T09:13:17Z | 2023-05-30T15:41:27Z | https://github.com/plotly/dash/issues/2487 | [] | ltsimple | 4 |
Kanaries/pygwalker | matplotlib | 448 | [BUG] Interpret Data meet Oops! | **Describe the bug**
I rendered a bar graph; when I click 'Interpret Data', an 'Oops' error appears.
**To Reproduce**
Steps to reproduce the behavior:
1. Render a bar graph
2. Click 'Interpret Data' on a bar; an 'Oops' error appears
**Expected behavior**
**Screenshots**


**Versions**
- pygwalker version: 0.4.6
- python version: 3.8
- browser: chrome latest
**Additional context**
| closed | 2024-02-29T08:58:48Z | 2024-03-09T03:54:33Z | https://github.com/Kanaries/pygwalker/issues/448 | [
"bug",
"P1"
] | DonYum | 2 |
geex-arts/django-jet | django | 162 | Unable to remove or hide or rename models (resources) that are not necessary. | Unable to remove or hide or rename models (resources) that are not necessary. | open | 2017-01-01T20:14:41Z | 2017-08-15T15:48:45Z | https://github.com/geex-arts/django-jet/issues/162 | [] | ManojDatt | 4 |
serengil/deepface | machine-learning | 678 | Batch Predictions | How can we get facial attributes (gender, age, etc.) for multiple images (batch predictions)?
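A straightforward way to run the attribute models over several images is to loop over file paths with `DeepFace.analyze` (a sketch; the file names are placeholders, and the per-call model build cost described below still applies):
```python
from deepface import DeepFace

image_paths = ["img1.jpg", "img2.jpg", "img3.jpg"]  # hypothetical file names

results = []
for path in image_paths:
    # analyze one image at a time; `actions` limits which attribute models run
    results.append(DeepFace.analyze(img_path=path, actions=("age", "gender")))
```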
Currently, "analyze" function builds the model when run for each image individually - leading to longer processing time. | closed | 2023-02-16T17:26:32Z | 2023-08-01T14:42:40Z | https://github.com/serengil/deepface/issues/678 | [
"question"
] | swapnika92 | 2 |
thewhiteh4t/pwnedOrNot | api | 33 | invalid syntax when trying to run | 
| closed | 2019-07-17T23:49:53Z | 2019-07-18T14:53:35Z | https://github.com/thewhiteh4t/pwnedOrNot/issues/33 | [] | Tyferiusk | 4 |
microsoft/qlib | deep-learning | 1,799 | Backtest too slow when using my own data | I'm using CSMAR's data for chinese market and use following backtest code:
```python
from pprint import pprint
import qlib
import pandas as pd
from qlib.utils.time import Freq
from qlib.utils import flatten_dict
from qlib.backtest import backtest, executor
from qlib.contrib.evaluate import risk_analysis
from qlib.contrib.strategy import TopkDropoutStrategy

if __name__ == "__main__":
    qlib.init(provider_uri=r"../../benchmark/cn_data/qlib_data/")
    score_df = pd.read_csv("../pred.csv")
    score_df["datetime"] = pd.to_datetime(score_df["datetime"])
    pred_score = score_df.set_index(["datetime", "instrument"])["score"]
    CSI300_BENCH = "SH000300"
    FREQ = "day"
    STRATEGY_CONFIG = {
        "topk": 50,
        "n_drop": 10,
        "signal": pred_score,
    }
    EXECUTOR_CONFIG = {
        "time_per_step": "day",
        "generate_portfolio_metrics": True,
        "verbose": True,
    }
    backtest_config = {
        "start_time": "2016-01-01",
        "end_time": "2016-12-31",
        "account": 100000000,
        "benchmark": CSI300_BENCH,
        "exchange_kwargs": {
            "trade_unit": 100,
            "freq": FREQ,
            "limit_threshold": 0.095,
            "deal_price": "close",
            "open_cost": 0.0015,
            "close_cost": 0.0025,
            "min_cost": 5,
        },
    }
    strategy_obj = TopkDropoutStrategy(**STRATEGY_CONFIG)
    executor_obj = executor.SimulatorExecutor(**EXECUTOR_CONFIG)
    portfolio_metric_dict, indicator_dict = backtest(executor=executor_obj, strategy=strategy_obj, **backtest_config)
    analysis_freq = "{0}{1}".format(*Freq.parse(FREQ))
    report_normal, positions_normal = portfolio_metric_dict.get(analysis_freq)
    analysis = dict()
    analysis["excess_return_without_cost"] = risk_analysis(
        report_normal["return"] - report_normal["bench"], freq=analysis_freq
    )
    analysis["excess_return_with_cost"] = risk_analysis(
        report_normal["return"] - report_normal["bench"] - report_normal["cost"], freq=analysis_freq
    )
    analysis_df = pd.concat(analysis)  # type: pd.DataFrame
    # log metrics
    analysis_dict = flatten_dict(analysis_df["risk"].unstack().T.to_dict())
    # print out results
    pprint(f"The following are analysis results of benchmark return({analysis_freq}).")
    pprint(risk_analysis(report_normal["bench"], freq=analysis_freq))
    pprint(f"The following are analysis results of the excess return without cost({analysis_freq}).")
    pprint(analysis["excess_return_without_cost"])
    pprint(f"The following are analysis results of the excess return with cost({analysis_freq}).")
    pprint(analysis["excess_return_with_cost"])
```
The whole process was so slow that it took about 1 hour to test on just 1 year. Wonder what could be the reason? I do see the "future error", "no common_infra" and "nan in close" warning errors. | closed | 2024-05-28T06:32:00Z | 2024-05-30T06:03:16Z | https://github.com/microsoft/qlib/issues/1799 | [
"question"
] | TompaBay | 1 |
retentioneering/retentioneering-tools | data-visualization | 63 | Strange results in StepSankey variable time_to_next_sum | Seems like time_to_next_sum in StepSankey graph edges contains incorrect negative values, which I could calculate correctly myself using Python and the same dataset.
<img width="300" alt="Screenshot 2024-07-30 at 15 54 54" src="https://github.com/user-attachments/assets/484aec5f-9e96-4e30-95d0-da5c9cc72171">
| closed | 2024-07-30T13:00:33Z | 2024-09-16T08:25:26Z | https://github.com/retentioneering/retentioneering-tools/issues/63 | [] | AnastasiiaTrimasova | 2 |
google-research/bert | tensorflow | 983 | Codes | CONTRIBUTING.md
| open | 2020-01-07T14:13:50Z | 2020-01-07T14:13:50Z | https://github.com/google-research/bert/issues/983 | [] | vicyalo | 0 |
taverntesting/tavern | pytest | 897 | DeprecationWarning with jsonschema > 4 | I am getting this error when running my tavern tests:
```
lib/python3.11/site-packages/tavern/_core/schema/jsonschema.py:85: DeprecationWarning: Passing a schema to Validator.is_valid is deprecated and will be removed in a future release. Call validator.evolve(schema=new_schema).is_valid(...) instead.
```
I used to be able to make this go away by installing jsonschema < 4, but now I have another dependency that requires a higher version of jsonschema. | closed | 2023-12-11T23:44:16Z | 2024-01-06T14:19:49Z | https://github.com/taverntesting/tavern/issues/897 | [] | immesys | 1 |
MycroftAI/mycroft-core | nlp | 3,131 | Allow using the Whisper speech recognition model | **Is your feature request related to a problem? Please describe.**
While Mimic has been continually improving, OpenAI just released their [Whisper speech recognition model](https://github.com/openai/whisper) under the MIT license; it seems to be superior, yet still usable offline.
**Describe the solution you'd like**
It'd be great if Mycroft could either replace Mimic with Whisper or offer Whisper as an option. | closed | 2022-09-25T21:19:51Z | 2024-09-08T08:21:32Z | https://github.com/MycroftAI/mycroft-core/issues/3131 | [
"enhancement"
] | 12people | 13 |
tfranzel/drf-spectacular | rest-api | 1,208 | drf-spectacular-sidecar does not work in custom HTML | I've recently integrated drf-spectacular-sidecar into my Django project to manage internal documentation, utilizing cloud services like an S3 bucket for hosting. After completing the integration, I added drf-spectacular-sidecar to the installed apps section of my Django settings and ran the collectstatic command. This process generated a swagger-ui-dist folder containing several necessary files:
favicon-32x32.png
oauth2-redirect.html
swagger-ui-bundle.js
swagger-ui-bundle.js.LICENSE.txt
swagger-ui-bundle.js.map
swagger-ui-standalone-preset.js
swagger-ui-standalone-preset.js.map
swagger-ui.css
swagger-ui.css.map
However, upon attempting to display Swagger UI documentation using a custom HTML file hosted on an S3 bucket, nothing appears. Here's the code snippet I've utilized:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Swagger UI Documentation</title>
<!-- Include Swagger UI CSS -->
<link rel="stylesheet" type="text/css" href="swagger-ui-dist/swagger-ui.css">
</head>
<body>
<!-- Swagger UI Container -->
<div id="swagger-ui"></div>
<!-- Include Swagger UI JavaScript -->
<script src="swagger-ui-dist/swagger-ui-bundle.js"></script>
<script src="swagger-ui-dist/swagger-ui-standalone-preset.js"></script>
</body>
</html>
```
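For reference, a static page like the one above only loads the Swagger UI assets; it never calls the Swagger UI initializer, so nothing gets rendered into the `#swagger-ui` div, and the relative asset paths must also resolve on S3. Below is the standard drf-spectacular wiring served by Django itself, shown for comparison (a sketch following the drf-spectacular docs; route names and paths are illustrative):
```python
# urls.py -- serve the schema and a sidecar-backed Swagger UI from Django itself
from django.urls import path
from drf_spectacular.views import SpectacularAPIView, SpectacularSwaggerView

urlpatterns = [
    path("api/schema/", SpectacularAPIView.as_view(), name="schema"),
    path("api/docs/", SpectacularSwaggerView.as_view(url_name="schema"), name="swagger-ui"),
]
```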
I would appreciate any guidance or insights on resolving this issue and ensuring proper integration of Swagger UI with Django REST Framework Spectacular in this context. Additionally, if there are any suggestions for automating the endpoint documentation update process in standalone central location like s3 bucket, they would be highly valuable. Thank you for your assistance.
| open | 2024-03-21T19:37:10Z | 2024-03-23T11:30:15Z | https://github.com/tfranzel/drf-spectacular/issues/1208 | [] | meenumathew | 1 |
ipython/ipython | jupyter | 14,358 | TypeError: in interactiveshell.py, InteractiveShell class, line 331 | Error:
"/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 331, in InteractiveShell
ast_transformers: List[ast.NodeTransformer] = List(
TypeError: 'type' object is not subscriptable
```
Solution:
Line 331 should be changed to:
`ast_transformers: List = List(` | closed | 2024-03-01T19:32:44Z | 2024-07-18T20:12:31Z | https://github.com/ipython/ipython/issues/14358 | [] | manish-abio | 6 |
plotly/dash | jupyter | 2,874 | Can't build custom components with react-docgen 7 | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.17.0
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dash-testing-stub 0.0.2
```
- if frontend related, tell us your Browser, Version and OS
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Describe the bug**
I developed a library with some custom components. I'm trying to update some dependencies, and it turns out that I can't update react-docgen because of dash's extract-meta.js, which uses a deprecated `require()`
A clear and concise description of what the bug is.
```
$ npm run build
> dao-analyzer-components@0.0.23 build
> npm run build:js && npm run build:backends
> dao-analyzer-components@0.0.23 build:js
> webpack --mode production
asset dao_analyzer_components.min.js 13.4 KiB [emitted] [compared for emit] [minimized] (name: main) 1 related asset
orphan modules 21.9 KiB [orphan] 6 modules
runtime modules 2.42 KiB 5 modules
./src/lib/index.js + 6 modules 22.2 KiB [not cacheable] [built] [code generated]
webpack 5.91.0 compiled successfully in 1287 ms
> dao-analyzer-components@0.0.23 build:backends
> dash-generate-components ./src/lib/components dao_analyzer_components -p package-info.json --ignore \.test\.
/home/davo/Documents/GRASIA/dao-analyzer/.direnv/python-3.12/lib/python3.12/site-packages/dash/extract-meta.js:15
const reactDocs = require('react-docgen');
^
Error [ERR_REQUIRE_ESM]: require() of ES Module /home/davo/Documents/GRASIA/dao-analyzer/dao_analyzer_components/node_modules/react-docgen/dist/main.js from /home/davo/Documents/GRASIA/dao-analyzer/.direnv/python-3.12/lib/python3.12/site-packages/dash/extract-meta.js not supported.
Instead change the require of main.js in /home/davo/Documents/GRASIA/dao-analyzer/.direnv/python-3.12/lib/python3.12/site-packages/dash/extract-meta.js to a dynamic import() which is available in all CommonJS modules.
at Object.<anonymous> (/home/davo/Documents/GRASIA/dao-analyzer/.direnv/python-3.12/lib/python3.12/site-packages/dash/extract-meta.js:15:19) {
code: 'ERR_REQUIRE_ESM'
}
Node.js v22.2.0
Error generating metadata in dao_analyzer_components (status=1)
```
**Expected behavior**
To be able to build my library
| open | 2024-06-04T10:57:57Z | 2024-08-13T19:51:30Z | https://github.com/plotly/dash/issues/2874 | [
"bug",
"infrastructure",
"sev-4",
"P3"
] | daviddavo | 0 |
tflearn/tflearn | tensorflow | 901 | Generate sequence from tensor seed | I am building a SequenceGenerator using a network of LSTMs. The input of these LSTMs is different from what I've seen in most implementations.
I want to have embedding information besides the character one-hot encoding. Therefore, instead of having an input shape as [instance_nb, sequence_length, char_idx_size], my 3D tensor is: [instance_nb, sequence_length+1, embedding_size]. The +1 here is for the embedding vector.
When I am testing the generated model, tflearn can't feed a simple sequence to the model to generate output. I get the following error:
```
File "lstm.py", line 92, in <module>
print(m.generate(50, temperature=1.0, seq_seed=seed))
File "/export/home/fiorinin/autocomplete/lib/python3.4/site-packages/tflearn/models/generator.py", line 216, in generate
preds = self._predict(x)[0].tolist()
File "/export/home/fiorinin/autocomplete/lib/python3.4/site-packages/tflearn/models/generator.py", line 180, in _predict
return self.predictor.predict(feed_dict)
File "/export/home/fiorinin/autocomplete/lib/python3.4/site-packages/tflearn/helpers/evaluator.py", line 69, in predict
return self.session.run(self.tensors[0], feed_dict=feed_dict)
File "/export/home/fiorinin/autocomplete/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/export/home/fiorinin/autocomplete/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 975, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 1, 38) for Tensor 'InputData/X:0', which has shape '(?, 2, 200)'
```
In my case, the sequence size is one, a single character. 38 is the char_idx_size and 200 is the embedding_size. How can I work around this? There aren't many examples for embeddings on tflearn, so I might also just be using it the wrong way. | open | 2017-09-11T17:55:48Z | 2017-09-11T17:56:59Z | https://github.com/tflearn/tflearn/issues/901 | [] | fiorinin | 0 |
davidsandberg/facenet | computer-vision | 1,098 | KeyError: "The name 'batch_join:0' refers to a Tensor which does not exist. The operation, 'batch_join', does not exist in the graph." | Hi,
I am facing this issue when running the real_time_face_recognition.py script:
```
File "/mnt/work/rna_payload/ros2_ws/build/video_analysis/video_analysis/facenet/contributed/face.py", line 86, in identify
face.embedding = self.encoder.generate_embedding(face)
File "/mnt/work/rna_payload/ros2_ws/build/video_analysis/video_analysis/facenet/contributed/face.py", line 128, in generate_embedding
images_placeholder = tf.compat.v1.get_default_graph().get_tensor_by_name("batch_join:0")
File "/home/jaiganesh/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3972, in get_tensor_by_name
return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
File "/home/jaiganesh/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3796, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/home/jaiganesh/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3838, in _as_graph_element_locked
"graph." % (repr(name), repr(op_name)))
KeyError: "The name 'batch_join:0' refers to a Tensor which does not exist. The operation, 'batch_join', does not exist in the graph
May I know how to fix this issue? | open | 2019-10-21T09:57:36Z | 2021-09-23T07:28:01Z | https://github.com/davidsandberg/facenet/issues/1098 | [] | jaiatncson7 | 1 |
pydantic/pydantic-core | pydantic | 818 | v2 URL cannot be cast to DB | ```
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'pydantic_core._pydantic_core.Url'
```
This used to work in v1, translating to str automatically.
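A minimal workaround on the application side (a sketch; it simply casts the URL back to `str` before it reaches the DBAPI):
```python
from pydantic import AnyUrl, BaseModel

class Link(BaseModel):
    url: AnyUrl

link = Link(url="https://example.com/path")
value_for_db = str(link.url)  # plain str, which psycopg2 can adapt
```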
Selected Assignee: @adriangb | closed | 2023-07-24T17:55:08Z | 2023-07-25T08:23:47Z | https://github.com/pydantic/pydantic-core/issues/818 | [
"unconfirmed"
] | gaborbernat | 3 |
autogluon/autogluon | data-science | 4,795 | Low GPU utilization with TabPFNMix model when using presets | I'm using the TabPFNMix model with AutoGluon and noticed a significant difference in GPU utilization depending on whether presets are used in the fit() function.
**Steps to Reproduce:**
Define the hyperparameters for TabPFNMix:
```python
tabpfnmix_default = {
"model_path_classifier": "autogluon/tabpfn-mix-1.0-classifier",
"model_path_regressor": "autogluon/tabpfn-mix-1.0-regressor",
"n_ensembles": 1,
"max_epochs": 30,
}
hyperparameters = {
"TABPFNMIX": [
tabpfnmix_default,
],
}
```
**Train the TabPFNMix model without any preset:**
```python
predictor = TabularPredictor(label='label', path='model_save_path', eval_metric='accuracy', problem_type='binary')
predictor.fit(train_data, hyperparameters=hyperparameters, verbosity=3, time_limit=3600, num_gpus=1)
```
GPU utilization: ~11.6 GB VRAM + 9 GB on my dataset.
RAM on CPU: ~2 GB.
Train the same model with a preset (e.g., best_quality):
```python
predictor = TabularPredictor(label='label', path='model_save_path', eval_metric='accuracy', problem_type='binary')
predictor.fit(train_data, presets='best_quality', hyperparameters=hyperparameters, verbosity=3, time_limit=3600, num_gpus=1)
```
GPU utilization: <2 GB VRAM.
Training and inference are significantly slower.
**Expected Behavior:**
When using a preset like best_quality, GPU utilization should remain high (similar to the no-preset scenario), ensuring faster training and inference times.
**Observed Behavior:**
Using presets reduces GPU usage drastically, leading to slower training and inference.
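One hedged guess worth ruling out: `best_quality` enables bagged/stacked fitting, so each TabPFNMix model is trained per fold under AutoGluon's resource scheduling rather than as a single fit, and the GPU may not be requested for every fold worker. A sketch of pinning GPU resources at the model level via `ag_args_fit` (parameter names as documented by AutoGluon; adjust to your version):
```python
hyperparameters = {
    "TABPFNMIX": [
        {
            **tabpfnmix_default,
            "ag_args_fit": {"num_gpus": 1},  # request one GPU for each individual model fit
        },
    ],
}
```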
**Questions:**
- Is there a way to ensure high GPU utilization when using presets with TabPFNMix?
- Are there specific parameters or configurations that could mitigate this issue?
- Is this a known limitation or a bug related to the presets' implementation? | open | 2025-01-14T16:15:03Z | 2025-02-23T14:16:48Z | https://github.com/autogluon/autogluon/issues/4795 | [
"bug",
"module: tabular"
] | Killer3048 | 4 |
3b1b/manim | python | 1,919 | Errno 2 Help | ### Describe the error
<!-- A clear and concise description of what you want to make. -->
Hello, I've been trying to install Manim, but I keep running into the same errors. I've followed the instructions as they were laid out, but I can't open any of the example scenes. I'm running macOS Ventura on an M1 Pro. Anyway, this is what I get when I input <manim example_scenes.py OpeningManimExample>.
**Error**
```
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pydub/utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manimpango/__init__.py", line 14, in <module>
from .cmanimpango import * # noqa: F403,F401
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manimpango/cmanimpango.cpython-310-darwin.so, 0x0002): Library not loaded: /opt/homebrew/opt/pango/lib/libpangocairo-1.0.0.dylib
Referenced from: <AC8274D2-5D8D-36CC-8260-E72A6F493B1E> /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manimpango/cmanimpango.cpython-310-darwin.so
Reason: tried: '/opt/homebrew/opt/pango/lib/libpangocairo-1.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/opt/pango/lib/libpangocairo-1.0.0.dylib' (no such file), '/opt/homebrew/opt/pango/lib/libpangocairo-1.0.0.dylib' (no such file), '/usr/lib/libpangocairo-1.0.0.dylib' (no such file, not in dyld cache)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/bin/manim", line 5, in <module>
from manim.__main__ import main
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manim/__init__.py", line 76, in <module>
from .mobject.table import *
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manim/mobject/table.py", line 77, in <module>
from manim.mobject.text.text_mobject import Paragraph
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manim/mobject/text/text_mobject.py", line 64, in <module>
import manimpango
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manimpango/__init__.py", line 35, in <module>
raise ImportError(msg)
ImportError:
ManimPango could not import and load the necessary shared libraries.
This error may occur when ManimPango and its dependencies are improperly set up.
Please make sure the following versions are what you expect:
* ManimPango v0.4.2, Python v3.10.7
If you believe there is a greater problem,
feel free to contact us or create an issue on GitHub:
* Discord: https://www.manim.community/discord/
* GitHub: https://github.com/ManimCommunity/ManimPango/issues
Original error: dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manimpango/cmanimpango.cpython-310-darwin.so, 0x0002): Library not loaded: /opt/homebrew/opt/pango/lib/libpangocairo-1.0.0.dylib
Referenced from: <AC8274D2-5D8D-36CC-8260-E72A6F493B1E> /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/manimpango/cmanimpango.cpython-310-darwin.so
Reason: tried: '/opt/homebrew/opt/pango/lib/libpangocairo-1.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/opt/pango/lib/libpangocairo-1.0.0.dylib' (no such file), '/opt/homebrew/opt/pango/lib/libpangocairo-1.0.0.dylib' (no such file), '/usr/lib/libpangocairo-1.0.0.dylib' (no such file, not in dyld cache)
```
| open | 2022-11-26T16:02:58Z | 2022-11-26T16:02:58Z | https://github.com/3b1b/manim/issues/1919 | [] | zunzun08 | 0 |
plotly/dash | dash | 2,262 | Method for generating component IDs | Hi there,
I wanted to know the method used to auto-generate component ids in Dash. Does it use UUIDs or something else?
Thanks | closed | 2022-10-07T07:21:20Z | 2022-10-11T14:31:19Z | https://github.com/plotly/dash/issues/2262 | [] | anu0012 | 1 |
apify/crawlee-python | web-scraping | 1,102 | Call for feedback on Crawlee for Python [Win 5 Scale plans for a month and exclusive Crawlee goodies] 🎁 |
Hey everyone 👋
We launched Crawlee for Python around this time last year. Since then, we've received tremendous positive responses from our developer community, along with valuable suggestions that have helped us enhance the platform even further.
Now, we're taking Crawlee for Python to the next level and invite your input on our upcoming major release.
- Share what functionality you find missing in the library
- Describe any bugs you've encountered while using it in your projects
- Suggest feature additions that would improve your workflow
We'll reward the five most valuable contributions with Scale plans and exclusive Crawlee merch. 🎁
Reply to this issue with your feedback, thanks! | open | 2025-03-18T13:05:56Z | 2025-03-18T13:06:12Z | https://github.com/apify/crawlee-python/issues/1102 | [
"t-tooling"
] | souravjain540 | 0 |
microsoft/nni | data-science | 5,464 | Error: tuner_command_channel: Tuner loses responsive | **Describe the issue**:
Unable to operate stably.
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 20.04
- Server OS (for remote mode only): \
- Python version: Python 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] on linux
- PyTorch/TensorFlow version: \
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
```python
experiment = Experiment('local')
experiment.id = '*'
experiment.config.trial_command = 'python model.py'
experiment.config.trial_code_directory = '.'
experiment.config.search_space = search_space
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
experiment.config.max_trial_number = 5000
experiment.config.trial_concurrency = 4
experiment.run(58000)
experiment.stop()
```
- Search space: quniform and choice
**Log message**:
- nnimanager.log:
```
[2023-03-21 10:20:57] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 141,
hyperParameters: {
value: '{"parameter_id": 141, "parameter_source": "algorithm", "parameters": {"RelSamplingDistance": 0.37, "KeyPointFraction": 0.77, "max_overlap_dist_rel": 0.9, "pose_ref_num_steps": 1.0, "pose_ref_sub_sampling": 7.0, "pose_ref_dist_threshold_rel": 0.1, "pose_ref_scoring_dist_rel": 0.2, "pose_ref_use_scene_normals": "false"}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2023-03-21 10:21:05] INFO (NNIManager) Trial job GJKyK status changed from WAITING to RUNNING
[2023-03-21 10:21:17] ERROR (tuner_command_channel.WebSocketChannel) Error: Error: tuner_command_channel: Tuner loses responsive
at WebSocketChannelImpl.heartbeat (/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:119:30)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7)
[2023-03-21 10:21:51] INFO (main) Start NNI manager
[2023-03-21 10:21:51] INFO (NNIDataStore) Datastore initialization done
[2023-03-21 10:21:51] INFO (RestServer) Starting REST server at port 8080, URL prefix: "/"
[2023-03-21 10:21:51] WARNING (NNITensorboardManager) Tensorboard may not installed, if you want to use tensorboard, please check if tensorboard installed.
[2023-03-21 10:21:51] INFO (RestServer) REST server started.
[2023-03-21 10:21:52] INFO (NNIManager) Resuming experiment: star_TPE_quniform
[2023-03-21 10:21:52] INFO (NNIManager) Setup training service...
[2023-03-21 10:21:52] INFO (LocalTrainingService) Construct local machine training service.
[2023-03-21 10:21:52] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: VIEWED
```
- dispatcher.log:
```
[2023-03-21 10:08:52] INFO (nni.tuner.tpe/MainThread) Using random seed 1596889983
[2023-03-21 10:08:52] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher started
[2023-03-21 10:21:17] WARNING (nni.runtime.tuner_command_channel.channel/MainThread) Exception on receiving: ConnectionClosedError(None, None, None)
[2023-03-21 10:21:17] WARNING (nni.runtime.tuner_command_channel.channel/MainThread) Connection lost. Trying to reconnect...
[2023-03-21 10:21:17] INFO (nni.runtime.tuner_command_channel.channel/MainThread) Attempt #0, wait 0 seconds...
[2023-03-21 10:21:17] INFO (nni.runtime.msg_dispatcher_base/MainThread) Report error to NNI manager: Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/selector_events.py", line 862, in _read_ready__data_received
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/__main__.py", line 61, in main
dispatcher.run()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/msg_dispatcher_base.py", line 69, in run
command, data = self._channel._receive()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/channel.py", line 94, in _receive
command = self._retry_receive()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/channel.py", line 104, in _retry_receive
self._channel.connect()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 62, in connect
self._ws = _wait(_connect_async(self._url))
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 111, in _wait
return future.result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 125, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
return fut.result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 144, in read_http_response
raise InvalidMessage("did not receive a valid HTTP response") from exc
websockets.exceptions.InvalidMessage: did not receive a valid HTTP response
[2023-03-21 10:21:17] WARNING (nni.runtime.tuner_command_channel.channel/MainThread) Exception on sending: AttributeError("'NoneType' object has no attribute 'send'")
[2023-03-21 10:21:17] ERROR (nni.runtime.tuner_command_channel.channel/MainThread) 'NoneType' object has no attribute 'send'
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/selector_events.py", line 862, in _read_ready__data_received
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/__main__.py", line 61, in main
dispatcher.run()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/msg_dispatcher_base.py", line 69, in run
command, data = self._channel._receive()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/channel.py", line 94, in _receive
command = self._retry_receive()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/channel.py", line 104, in _retry_receive
self._channel.connect()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 62, in connect
self._ws = _wait(_connect_async(self._url))
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 111, in _wait
return future.result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 125, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
return fut.result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 144, in read_http_response
raise InvalidMessage("did not receive a valid HTTP response") from exc
websockets.exceptions.InvalidMessage: did not receive a valid HTTP response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/channel.py", line 62, in _send
self._channel.send(command)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 81, in send
_wait(self._ws.send(message))
AttributeError: 'NoneType' object has no attribute 'send'
[2023-03-21 10:21:17] WARNING (nni.runtime.tuner_command_channel.channel/MainThread) Connection lost. Trying to reconnect...
[2023-03-21 10:21:17] INFO (nni.runtime.tuner_command_channel.channel/MainThread) Attempt #0, wait 0 seconds...
[2023-03-21 10:21:17] ERROR (nni.runtime.msg_dispatcher_base/MainThread) Connection to NNI manager is broken. Failed to report error.
[2023-03-21 10:21:17] ERROR (nni.main/MainThread) did not receive a valid HTTP response
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/selector_events.py", line 862, in _read_ready__data_received
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/__main__.py", line 85, in <module>
main()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/__main__.py", line 61, in main
dispatcher.run()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/msg_dispatcher_base.py", line 69, in run
command, data = self._channel._receive()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/channel.py", line 94, in _receive
command = self._retry_receive()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/channel.py", line 104, in _retry_receive
self._channel.connect()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 62, in connect
self._ws = _wait(_connect_async(self._url))
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 111, in _wait
return future.result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 125, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
return fut.result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 144, in read_http_response
raise InvalidMessage("did not receive a valid HTTP response") from exc
websockets.exceptions.InvalidMessage: did not receive a valid HTTP response
```
- nnictl stdout and stderr:
```
[2023-03-21 09:47:28] Creating experiment, Experiment ID: adapter_plate_square_TPE_quniform
[2023-03-21 09:47:28] Starting web server...
[2023-03-21 09:47:29] WARNING: Timeout, retry...
[2023-03-21 09:47:30] Setting up...
[2023-03-21 09:47:30] Web portal URLs: http://127.0.0.1:58000 http://10.62.137.83:58000 http://198.18.0.1:58000
node:events:504
throw er; // Unhandled 'error' event
^
Error: tuner_command_channel: Tuner loses responsive
at WebSocketChannelImpl.heartbeat (/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:119:30)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7)
Emitted 'error' event at:
at WebSocketChannelImpl.handleError (/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:135:22)
at WebSocketChannelImpl.heartbeat (/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:119:18)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7)
Thrown at:
at heartbeat (/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:119:30)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7)
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/selector_events.py", line 862, in _read_ready__data_received
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/__main__.py", line 85, in <module>
main()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/__main__.py", line 61, in main
dispatcher.run()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/msg_dispatcher_base.py", line 69, in run
command, data = self._channel._receive()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/channel.py", line 94, in _receive
command = self._retry_receive()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/channel.py", line 104, in _retry_receive
self._channel.connect()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 62, in connect
self._ws = _wait(_connect_async(self._url))
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 111, in _wait
return future.result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/runtime/tuner_command_channel/websocket.py", line 125, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
return fut.result()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/websockets/legacy/client.py", line 144, in read_http_response
raise InvalidMessage("did not receive a valid HTTP response") from exc
websockets.exceptions.InvalidMessage: did not receive a valid HTTP response
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connectionpool.py", line 398, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connection.py", line 239, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/http/client.py", line 1282, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/http/client.py", line 1328, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/http/client.py", line 1277, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/http/client.py", line 1037, in _send_output
self.send(msg)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/http/client.py", line 975, in send
self.connect()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f947595e2c0>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=58000): Max retries exceeded with url: /api/v1/nni/check-status (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f947595e2c0>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/niu/code/halcon/paramsearchhalcon/python/NNI/star_TPE/main.py", line 64, in <module>
experiment.run(58000)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/experiment/experiment.py", line 183, in run
self._wait_completion()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/experiment/experiment.py", line 163, in _wait_completion
status = self.get_status()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/experiment/experiment.py", line 283, in get_status
resp = rest.get(self.port, '/check-status', self.url_prefix)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/experiment/rest.py", line 43, in get
return request('get', port, api, prefix=prefix)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/experiment/rest.py", line 31, in request
resp = requests.request(method, url, timeout=timeout)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/requests/adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=58000): Max retries exceeded with url: /api/v1/nni/check-status (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f947595e2c0>: Failed to establish a new connection: [Errno 111] Connection refused'))
[2023-03-21 09:53:09] Stopping experiment, please wait...
[2023-03-21 09:53:09] ERROR: HTTPConnectionPool(host='localhost', port=58000): Max retries exceeded with url: /api/v1/nni/experiment (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9519c34460>: Failed to establish a new connection: [Errno 111] Connection refused'))
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connectionpool.py", line 398, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connection.py", line 239, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/http/client.py", line 1282, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/http/client.py", line 1328, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/http/client.py", line 1277, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/http/client.py", line 1037, in _send_output
self.send(msg)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/http/client.py", line 975, in send
self.connect()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f9519c34460>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=58000): Max retries exceeded with url: /api/v1/nni/experiment (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9519c34460>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/experiment/experiment.py", line 143, in _stop_impl
rest.delete(self.port, '/experiment', self.url_prefix)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/experiment/rest.py", line 52, in delete
request('delete', port, api, prefix=prefix)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/nni/experiment/rest.py", line 31, in request
resp = requests.request(method, url, timeout=timeout)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/home/niu/miniconda3/envs/halcon/lib/python3.10/site-packages/requests/adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=58000): Max retries exceeded with url: /api/v1/nni/experiment (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9519c34460>: Failed to establish a new connection: [Errno 111] Connection refused'))
[2023-03-21 09:53:09] WARNING: Cannot gracefully stop experiment, killing NNI process...
[2023-03-21 09:53:09] Experiment stopped
``` | open | 2023-03-21T02:03:31Z | 2023-04-10T02:54:12Z | https://github.com/microsoft/nni/issues/5464 | [] | beta1scat | 7 |
yvann-ba/Robby-chatbot | streamlit | 74 | Wrong Analysis via Robby-chatbot | I added a csv file which has this dataset
```
https://www.kaggle.com/datasets/ncsaayali/cereals-data
```
and I asked a basic question, "What is the average calorie for the whole dataset?"
It gave me a wrong answer, considering only 4 cereals rather than the whole dataset. I asked it another question, "How many total cereals are there?", and it answered "There are 12 cereals", but there are 78 cereals in total.
How is this bot working for you all? | open | 2024-06-17T06:53:19Z | 2024-06-17T06:53:19Z | https://github.com/yvann-ba/Robby-chatbot/issues/74 | [] | ojasmagarwal | 0 |
koxudaxi/datamodel-code-generator | fastapi | 1,536 | AttributeError: 'bool' object has no attribute 'get' when generating models from OpenAPI3 json schema | **Describe the bug**
AttributeError: 'bool' object has no attribute 'get' when generating models from OpenAPI3 json schema
**To Reproduce**
Schema: https://github.com/OAI/OpenAPI-Specification/blob/main/schemas/v3.1/schema.yaml
Used commandline:
```
$ git clone https://github.com/OAI/OpenAPI-Specification/
$ cd OpenAPI-Specification/schemas/v3.1
$ datamodel-codegen --input schema.yaml --output oas31models.py
```
Expected: Generated models, no errors
Got:
```
The input file type was determined to be: jsonschema
This can be specificied explicitly with the `--input-file-type` option.
/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/pydantic/main.py:309: UserWarning: Pydantic serializer warnings:
Expected `Union[list[definition-ref], definition-ref, bool]` but got `JsonSchemaObject` - serialized value may not be as expected
return self.__pydantic_serializer__.to_python(
Traceback (most recent call last):
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/datamodel_code_generator/__main__.py", line 766, in main
generate(
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/datamodel_code_generator/__init__.py", line 433, in generate
results = parser.parse()
^^^^^^^^^^^^^^
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/datamodel_code_generator/parser/base.py", line 1022, in parse
self.parse_raw()
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/datamodel_code_generator/parser/jsonschema.py", line 1606, in parse_raw
self._parse_file(self.raw_obj, obj_name, path_parts)
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/datamodel_code_generator/parser/jsonschema.py", line 1702, in _parse_file
model_name, JsonSchemaObject.parse_obj(models), path
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/typing_extensions.py", line 2562, in wrapper
return __arg(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/pydantic/main.py", line 976, in parse_obj
return cls.model_validate(obj)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/pydantic/main.py", line 504, in model_validate
return cls.__pydantic_validator__.validate_python(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/datamodel_code_generator/parser/jsonschema.py", line 275, in __init__
super().__init__(**data)
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/pydantic/main.py", line 165, in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
File "/home/alex/work/OpenAPI-Specification/schemas/v3.1/.venv/lib/python3.11/site-packages/datamodel_code_generator/parser/jsonschema.py", line 199, in validate_exclusive_maximum_and_exclusive_minimum
exclusive_maximum: Union[float, bool, None] = values.get('exclusiveMaximum')
^^^^^^^^^^
AttributeError: 'bool' object has no attribute 'get'
```
**Version:**
- OS: Linux
- Python version: 3.11.3
- datamodel-code-generator version: 22.4 (tried master branch too)
- pydantic version: 2.3.0
**Additional context**
This also fails:
```
$ datamodel-code-generator --url https://raw.githubusercontent.com/oasis-tcs/sarif-spec/main/Schemata/sarif-schema-2.1.0.json --output model.py
```
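For what it's worth, my guess (not verified) is that the failing `values.get('exclusiveMaximum')` call is reached when a subschema position holds a plain boolean, which JSON Schema allows (e.g. `additionalProperties: false`), so the validator receives a `bool` instead of a dict. A minimal, untested sketch of a schema that I would expect to exercise the same code path (hypothetical, not one of the schemas above):

```python
# Hypothetical minimal reproduction (untested): a property whose subschema is
# the boolean `true`, which is valid JSON Schema but is not a dict.
import json
import subprocess
import tempfile

schema = {
    "type": "object",
    "properties": {
        "anything": True,  # boolean subschema -- suspected trigger
    },
}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(schema, f)
    path = f.name

subprocess.run(
    ["datamodel-codegen", "--input", path, "--input-file-type", "jsonschema", "--output", "out.py"],
    check=False,
)
```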
Looks like #1442, but it's closed and does not have pydantic warning. | closed | 2023-09-06T10:36:28Z | 2023-11-16T02:47:57Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1536 | [
"bug"
] | Darion | 8 |
pydata/xarray | numpy | 9,291 | Segmentation fault similar to issue 8410 | ### What is your issue?
This is my crash report, and I think it's similar to #8410:
```
Fatal Python error: Segmentation fault
Thread 0x00007f4ff8daf700 (most recent call first):
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/concurrent/futures/thread.py", line 81 in _worker
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 982 in run
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 1045 in _bootstrap_inner
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 1002 in _bootstrap
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/_pydev_bundle/pydev_monkey.py", line 817 in __call__
Thread 0x00007f505a78f700 (most recent call first):
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/selectors.py", line 468 in select
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py", line 263 in _run_once
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py", line 218 in run_forever
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 982 in run
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 1045 in _bootstrap_inner
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 1002 in _bootstrap
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/_pydev_bundle/pydev_monkey.py", line 817 in __call__
Thread 0x00007f5157fff700 (most recent call first):
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 331 in wait
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 629 in wait
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/pydevd.py", line 157 in _on_run
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/_pydevd_bundle/pydevd_comm.py", line 219 in run
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 1045 in _bootstrap_inner
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 1002 in _bootstrap
Thread 0x00007f515c818700 (most recent call first):
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/_pydevd_bundle/pydevd_comm.py", line 293 in _on_run
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/_pydevd_bundle/pydevd_comm.py", line 219 in run
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 1045 in _bootstrap_inner
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 1002 in _bootstrap
Thread 0x00007f515d019700 (most recent call first):
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 331 in wait
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/queue.py", line 180 in get
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/_pydevd_bundle/pydevd_comm.py", line 370 in _on_run
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/_pydevd_bundle/pydevd_comm.py", line 219 in run
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 1045 in _bootstrap_inner
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/threading.py", line 1002 in _bootstrap
Current thread 0x00007f516fb65740 (most recent call first):
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/xarray/backends/file_manager.py", line 217 in _acquire_with_cache_info
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/xarray/backends/file_manager.py", line 199 in acquire_context
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/contextlib.py", line 137 in __enter__
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/xarray/backends/netCDF4_.py", line 411 in _acquire
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/xarray/backends/netCDF4_.py", line 417 in ds
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/xarray/backends/netCDF4_.py", line 355 in __init__
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/xarray/backends/netCDF4_.py", line 408 in open
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/xarray/backends/netCDF4_.py", line 645 in open_dataset
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/xarray/backends/api.py", line 571 in open_dataset
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/hydrodatasource/reader/access_fs.py", line 92 in read_valid_data
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/hydrodatasource/reader/access_fs.py", line 20 in spec_path
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/hydrodatasource/reader/data_source.py", line 438 in read_MP
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_scalers.py", line 293 in mean_prcp
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_scalers.py", line 414 in get_data_obs
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_scalers.py", line 487 in load_data
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_scalers.py", line 104 in __init__
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_sets.py", line 279 in _normalize
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_sets.py", line 620 in _normalize
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_sets.py", line 261 in _load_data
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_sets.py", line 119 in __init__
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_sets.py", line 613 in __init__
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_sets.py", line 678 in __init__
File "/home/wangyang1/torchhydro/torchhydro/datasets/data_sets.py", line 762 in __init__
File "/home/wangyang1/torchhydro/torchhydro/trainers/deep_hydro.py", line 227 in make_dataset
File "/home/wangyang1/torchhydro/torchhydro/trainers/deep_hydro.py", line 152 in __init__
File "/home/wangyang1/torchhydro/torchhydro/trainers/deep_hydro.py", line 782 in __init__
File "/home/wangyang1/torchhydro/torchhydro/trainers/trainer.py", line 80 in _get_deep_hydro
File "/home/wangyang1/torchhydro/torchhydro/trainers/trainer.py", line 63 in train_and_evaluate
File "/home/wangyang1/torchhydro/experiments/train_with_era5land_gnn.py", line 54 in test_run_model
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/python.py", line 159 in pytest_pyfunc_call
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/python.py", line 1627 in runtest
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/runner.py", line 174 in pytest_runtest_call
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/runner.py", line 242 in <lambda>
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/runner.py", line 341 in from_call
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/runner.py", line 241 in call_and_report
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/runner.py", line 132 in runtestprotocol
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/runner.py", line 113 in pytest_runtest_protocol
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/main.py", line 362 in pytest_runtestloop
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/main.py", line 337 in _main
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/main.py", line 283 in wrap_session
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/main.py", line 330 in pytest_cmdline_main
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/jiaxuwu/.conda/envs/forest-minio/lib/python3.11/site-packages/_pytest/config/__init__.py", line 175 in main
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pycharm/_jb_pytest_runner.py", line 75 in <module>
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18 in execfile
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/pydevd.py", line 1546 in _exec
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/pydevd.py", line 1539 in run
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/pydevd.py", line 2229 in main
File "/home/wangyang1/.local/share/JetBrains/IntelliJIdea2024.1/python/helpers/pydev/pydevd.py", line 2247 in <module>
Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, matplotlib._c_internal_utils, PIL._imaging, matplotlib._path, kiwisolver._cext, _pydevd_bundle.pydevd_cython, _pydevd_bundle_ext.pydevd_cython, pyarrow.lib, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pyarrow._compute, pandas._libs.ops, bottleneck.move, bottleneck.nonreduce, bottleneck.nonreduce_axis, bottleneck.reduce, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, yaml._yaml, _brotli, charset_normalizer.md, cytoolz.utils, cytoolz.itertoolz, cytoolz.functoolz, cytoolz.dicttoolz, cytoolz.recipes, ujson, multidict._multidict, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, _cffi_backend, frozenlist._frozenlist, sklearn.__check_build._check_build, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack, scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, lz4._version, lz4.frame._frame, psutil._psutil_linux, psutil._psutil_posix, scipy.spatial._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._cdflib, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, 
scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.special.cython_special, scipy.stats._stats, scipy.stats.beta_ufunc, scipy.stats._boost.beta_ufunc, scipy.stats.binom_ufunc, scipy.stats._boost.binom_ufunc, scipy.stats.nbinom_ufunc, scipy.stats._boost.nbinom_ufunc, scipy.stats.hypergeom_ufunc, scipy.stats._boost.hypergeom_ufunc, scipy.stats.ncf_ufunc, scipy.stats._boost.ncf_ufunc, scipy.stats.ncx2_ufunc, scipy.stats._boost.ncx2_ufunc, scipy.stats.nct_ufunc, scipy.stats._boost.nct_ufunc, scipy.stats.skewnorm_ufunc, scipy.stats._boost.skewnorm_ufunc, scipy.stats.invgauss_ufunc, scipy.stats._boost.invgauss_ufunc, scipy.interpolate._fitpack, scipy.interpolate.dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy._lib._uarray._uarray, scipy.stats._ansari_swilk_statistics, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.stats._unuran.unuran_wrapper, sklearn.utils._isfinite, sklearn.utils.murmurhash, sklearn.utils._openmp_helpers, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.utils.sparsefuncs_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.preprocessing._target_encoder_fast, sklearn.metrics._dist_metrics, sklearn.metrics._pairwise_distances_reduction._datasets_pair, sklearn.utils._cython_blas, sklearn.metrics._pairwise_distances_reduction._base, sklearn.metrics._pairwise_distances_reduction._middle_term_computer, sklearn.utils._heap, sklearn.utils._sorting, sklearn.metrics._pairwise_distances_reduction._argkmin, sklearn.metrics._pairwise_distances_reduction._argkmin_classmode, sklearn.utils._vector_sentinel, sklearn.metrics._pairwise_distances_reduction._radius_neighbors, sklearn.metrics._pairwise_distances_reduction._radius_neighbors_classmode, sklearn.metrics._pairwise_fast, sklearn.utils._random, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, dgl._ffi._cy3.core, pyarrow._parquet, pyarrow._fs, pyarrow._azurefs, pyarrow._hdfs, pyarrow._gcsfs, pyarrow._s3fs, scipy.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, shapely.lib, shapely._geos, shapely._geometry_helpers, pyproj._compat, pyproj._datadir, pyproj._network, pyproj._geod, pyproj.list, pyproj._crs, pyproj.database, pyproj._transformer, pyproj._sync, igraph._igraph, matplotlib._image, cftime._cftime, netCDF4._netCDF4, markupsafe._speedups, cupy_backends.cuda._softlink, cupy_backends.cuda.api._runtime_enum, cupy_backends.cuda.api.runtime, cupy._util, cupy.cuda.device, fastrlock.rlock, cupy.cuda.memory_hook, cupy_backends.cuda.stream, cupy.cuda.graph, cupy.cuda.stream, cupy_backends.cuda.api._driver_enum, cupy_backends.cuda.api.driver, cupy.cuda.memory, cupy._core.internal, cupy._core._carray, cupy.cuda.texture, cupy.cuda.function, cupy_backends.cuda.libs.nvrtc, cupy.cuda.pinned_memory, cupy.cuda.common, cupy.cuda.cub, cupy_backends.cuda.libs.nvtx, cupy.cuda.thrust, 
cupy._core._dtype, cupy._core._scalar, cupy._core._accelerator, cupy._core._memory_range, cupy._core._fusion_thread_local, cupy._core._kernel, cupy._core._routines_manipulation, cupy._core._routines_binary, cupy._core._optimize_config, cupy._core._cub_reduction, cupy._core._reduction, cupy._core._routines_math, cupy._core._routines_indexing, cupy._core._routines_linalg, cupy._core._routines_logic, cupy._core._routines_sorting, cupy._core._routines_statistics, cupy._core.dlpack, cupy._core.flags, cupy._core.core, cupy._core._fusion_variable, cupy._core._fusion_trace, cupy._core._fusion_kernel, cupy._core.new_fusion, cupy._core.fusion, cupy._core.raw, cupy.fft._cache, cupy.fft._callback, cupy.random._bit_generator, cupy.lib._polynomial, scipy.fftpack.convolve, psycopg2._psycopg, numcodecs.compat_ext, numcodecs.blosc, numcodecs.zstd, numcodecs.lz4, numcodecs._shuffle, msgpack._cmsgpack, numcodecs.jenkins, numcodecs.vlen, numcodecs.fletcher32, h5py._errors, h5py.defs, h5py._objects, h5py.h5, h5py.utils, h5py.h5t, h5py.h5s, h5py.h5ac, h5py.h5p, h5py.h5r, h5py._proxy, h5py._conv, h5py.h5z, h5py.h5a, h5py.h5d, h5py.h5ds, h5py.h5g, h5py.h5i, h5py.h5f, h5py.h5fd, h5py.h5pl, h5py.h5o, h5py.h5l, h5py._selector, google._upb._message, scipy.cluster._vq, scipy.cluster._hierarchy, scipy.cluster._optimal_leaf_ordering, numba.core.typeconv._typeconv, numba._helperlib, numba._dynfunc, numba._dispatcher, numba.core.runtime._nrt_python, numba.np.ufunc._internal, numba.experimental.jitclass._box, sklearn.utils._fast_dict, sklearn.cluster._hierarchical_fast, sklearn.cluster._k_means_common, sklearn.cluster._k_means_elkan, sklearn.cluster._k_means_lloyd, sklearn.cluster._k_means_minibatch, sklearn.neighbors._partition_nodes, sklearn.neighbors._ball_tree, sklearn.neighbors._kd_tree, sklearn.utils.arrayfuncs, sklearn.utils._seq_dataset, sklearn.linear_model._cd_fast, sklearn._loss._loss, sklearn.svm._liblinear, sklearn.svm._libsvm, sklearn.svm._libsvm_sparse, sklearn.utils._weight_vector, sklearn.linear_model._sgd_fast, sklearn.linear_model._sag_fast, sklearn.decomposition._online_lda_fast, sklearn.decomposition._cdnmf_fast, sklearn.cluster._dbscan_inner, sklearn.cluster._hdbscan._tree, sklearn.cluster._hdbscan._linkage, sklearn.cluster._hdbscan._reachability, sklearn._isotonic, sklearn.tree._utils, sklearn.tree._tree, sklearn.tree._splitter, sklearn.tree._criterion, sklearn.neighbors._quad_tree, sklearn.manifold._barnes_hut_tsne, sklearn.manifold._utils, sklearn.ensemble._gradient_boosting, sklearn.ensemble._hist_gradient_boosting.common, sklearn.ensemble._hist_gradient_boosting._gradient_boosting, sklearn.ensemble._hist_gradient_boosting._binning, sklearn.ensemble._hist_gradient_boosting._bitset, sklearn.ensemble._hist_gradient_boosting.histogram, sklearn.ensemble._hist_gradient_boosting._predictor, sklearn.ensemble._hist_gradient_boosting.splitting, sklearn.ensemble._hist_gradient_boosting.utils, shap._cext, shap._cext_gpu, sklearn.datasets._svmlight_format_fast, sklearn.feature_extraction._hashing_fast, rasterio._version, rasterio._err, rasterio._filepath, rasterio._env, rasterio._transform, rasterio._base, rasterio.crs, rasterio._features, rasterio._warp, rasterio._io, tornado.speedups, lz4.block._block (total: 403)
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
```
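The only mitigation I can think of so far (an untested sketch, not a confirmed fix) is to serialize the `open_dataset` calls coming from worker threads, since the crash happens inside the file manager's cache acquisition and the underlying HDF5/netCDF-C stack is not reliably thread-safe:

```python
import threading

import xarray as xr

_open_lock = threading.Lock()  # plain Python lock, nothing xarray-specific

def open_dataset_serialized(path, **kwargs):
    # Untested workaround sketch: let only one thread at a time open a netCDF
    # file, so concurrent opens cannot race inside the C libraries.
    with _open_lock:
        return xr.open_dataset(path, **kwargs)
```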
I have downgraded `netCDF4` to `1.6.0` but the problem still occurs.
Do you have any other solutions, rather than converting netcdf files to other formats? | closed | 2024-07-30T12:29:30Z | 2024-07-30T16:49:21Z | https://github.com/pydata/xarray/issues/9291 | [
"needs mcve",
"needs triage"
] | forestbat | 1 |
sherlock-project/sherlock | python | 1,456 | The following sites are reporting false positives | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [x] I'm reporting a website that is returning **false positive** results
- [x] I've checked for similar site support requests including closed ones
- [x] I've checked for pull requests attempting to fix this false positive
- [ ] I'm only reporting **one** site (create a separate issue for each site)
## Description
<!--
Provide the username that is causing Sherlock to return a false positive, along with any other information that might help us fix this false positive.
-->
The following sites are reporting false positives:
https://gurushots.com
https://robertsspaceindustries.com
https://forums.whonix.org | closed | 2022-09-08T00:30:41Z | 2022-09-24T09:37:33Z | https://github.com/sherlock-project/sherlock/issues/1456 | [
"false positive"
] | JassonCordones | 2 |
ray-project/ray | machine-learning | 50,976 | [Ray Core] Ray task error stack trace is incomplete | ### What happened + What you expected to happen
If the error message is too long, it seems that it cannot be fully displayed. Is there a length limit somewhere? Is there an environment variable that can be configured?

### Versions / Dependencies
Ray v2.38.0
### Reproduction script
Calling a C++ binary program with many parameters and then reporting an error internally
### Issue Severity
Medium: It is a significant difficulty but I can work around it. | closed | 2025-02-28T03:56:51Z | 2025-03-03T03:58:10Z | https://github.com/ray-project/ray/issues/50976 | [
"bug",
"dashboard",
"observability"
] | Moonquakes | 1 |
pyqtgraph/pyqtgraph | numpy | 3048 | PlotItem with a custom ViewBox | I'm creating a custom ViewBox class (subclassing `pg.ViewBox`) and using it for `pg.PlotItem`. Reviewing the code, it appears that the viewbox's parent must be set manually, unlike with the default viewbox. In other words, the following assertion fails:
```python
view_box = MyViewBox()
plot_item = pg.PlotItem(viewBox=view_box)
assert plot_item==view_box.parent() # fails
```
While I can simply call `view_box.setParent(plot_item)` after the `plot_item` is instantiated, it seems logical to do this internally. That is, modify [Lines 129-131](https://github.com/pyqtgraph/pyqtgraph/blob/7b1510afc7509ed04e08425e630b58b41b6bef8f/pyqtgraph/graphicsItems/PlotItem/PlotItem.py#L129) to:
```python
if viewBox is None:
    viewBox = ViewBox(parent=self, enableMenu=enableMenu)
else:
    viewBox.setParent(self)
self.vb = viewBox
```
Is there a rationale against it? (I'm fairly new to the world of Qt.)
If this is a viable change, I can post a PR for this. Thanks.
Edit:
Never mind. The `view_box.parent()` is `None` even with the default code path. | closed | 2024-06-05T15:08:30Z | 2024-06-05T15:45:11Z | https://github.com/pyqtgraph/pyqtgraph/issues/3048 | [] | tikuma-lsuhsc | 4 |
torchbox/wagtail-grapple | graphql | 369 | Add preserveSvg argument to ImageObjectType.srcSet | The `preserveSvg` argument, introduced in https://github.com/torchbox/wagtail-grapple/commit/802428fd0e0e56be484a1f279917739a83cc435e on the `rendition` field, would also be super useful on the `srcSet` field, to avoid such errors when requesting `srcSet(sizes: [360, 1024], format: "webp")` on an SVG image:
> 'SvgImage' object has no attribute 'save_as_webp'
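A sketch of the query shape I have in mind; `preserveSvg` on `srcSet` is the proposed argument and does not exist yet, the names below simply mirror the existing `rendition` behaviour:

```python
# Hypothetical query -- `preserveSvg: true` on srcSet is the requested feature,
# mirroring the argument that already exists on `rendition`.
PROPOSED_QUERY = """
{
  images {
    srcSet(sizes: [360, 1024], format: "webp", preserveSvg: true)
  }
}
"""
```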
Ref: #359 | closed | 2023-09-26T12:01:24Z | 2023-09-27T08:35:52Z | https://github.com/torchbox/wagtail-grapple/issues/369 | [] | mgax | 0 |
thp/urlwatch | automation | 241 | "unsupported operand type(s) for +=" with connection retry code | I'm having a problem after upgrading from 2.9 to 2.10 with the new connection retry code. One of the URLs I'm checking (http://wndw.net/book.html) is currently faulty and the connection hangs. 2.9 was able to cope with this but with 2.10 this now results in urlwatch exiting with an error as below.
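From a quick look at handler.py (a guess on my part, not verified), the retry counter restored for an old cache entry appears to come back as `None`, so the very first `self.tries += 1` blows up. A defensive guard along these lines is what I have in mind, shown only as a self-contained sketch rather than the actual urlwatch code:

```python
def bump_tries(tries):
    # Hypothetical guard, not the real urlwatch code: treat a missing retry
    # counter (None restored from an old cache entry) as zero before incrementing.
    return (tries or 0) + 1

assert bump_tries(None) == 1
assert bump_tries(2) == 3
```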
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 601, in urlopen
chunked=chunked)
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
six.raise_from(e, None)
File "<string>", line 2, in raise_from
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 383, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.6/http/client.py", line 1331, in getresponse
response.begin()
File "/usr/local/lib/python3.6/http/client.py", line 297, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.6/http/client.py", line 258, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/local/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
ConnectionResetError: [Errno 54] Connection reset by peer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/requests/adapters.py", line 440, in send
timeout=timeout
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/local/lib/python3.6/site-packages/urllib3/util/retry.py", line 357, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.6/site-packages/urllib3/packages/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 601, in urlopen
chunked=chunked)
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
six.raise_from(e, None)
File "<string>", line 2, in raise_from
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 383, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.6/http/client.py", line 1331, in getresponse
response.begin()
File "/usr/local/lib/python3.6/http/client.py", line 297, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.6/http/client.py", line 258, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/local/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(54, 'Connection reset by peer'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urlwatch/handler.py", line 65, in process
data = self.job.retrieve(self)
File "/usr/local/lib/python3.6/site-packages/urlwatch/jobs.py", line 234, in retrieve
proxies=proxies)
File "/usr/local/lib/python3.6/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/adapters.py", line 490, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(54, 'Connection reset by peer'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/urlwatch", line 111, in <module>
urlwatch_command.run()
File "/usr/local/lib/python3.6/site-packages/urlwatch/command.py", line 204, in run
self.urlwatcher.run_jobs()
File "/usr/local/lib/python3.6/site-packages/urlwatch/main.py", line 93, in run_jobs
run_jobs(self)
File "/usr/local/lib/python3.6/site-packages/urlwatch/worker.py", line 60, in run_jobs
(JobState(cache_storage, job) for job in jobs)):
File "/usr/local/lib/python3.6/site-packages/urlwatch/worker.py", line 49, in run_parallel
raise exception
File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.6/site-packages/urlwatch/worker.py", line 59, in <lambda>
for job_state in run_parallel(lambda job_state: job_state.process(),
File "/usr/local/lib/python3.6/site-packages/urlwatch/handler.py", line 93, in process
self.tries += 1
TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int'
``` | closed | 2018-05-19T16:23:12Z | 2018-05-19T16:39:35Z | https://github.com/thp/urlwatch/issues/241 | [] | sthen | 0 |
tableau/server-client-python | rest-api | 1,213 | Invalid version: 'Unknown' | There are two Tableau sites / servers I am trying to use TSC to connect to:
- One server works fine; the version returned by the code below is 3.19; Tableau version 2023.1.0 (20231.23.0324.0029) 64-bit Linux
- The other server returns a Server Version of 'Unknown' and is unable to proceed; Tableau version 2022.3.3 (20223.23.0112.0407) 64-bit Windows
The username, password and site are correct.
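One workaround I am considering (an untested sketch; the version string below is just a guess) is to skip the server-version handshake and pin the REST API version manually instead of passing `use_server_version=True`:

```python
import tableauserverclient as TSC

tsc_server = "https://my-tableau-server.example.com"  # placeholder value

# Untested workaround sketch: pin a REST API version instead of asking the
# server for it; "3.15" is only a guessed value, not a recommendation.
server = TSC.Server(tsc_server)  # note: no use_server_version=True
server.version = "3.15"
```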
Code:
```
# Details for Server Sign-in. The variables for tsc_username, tsc_password and tsc_site are defined earlier in the script
tableau_auth = TSC.TableauAuth(tsc_username, tsc_password, site_id=tsc_site)
server = TSC.Server(tsc_server, use_server_version=True)
print("3. Version: ",server.version)
# Attempt to sign into the Tableau Site
print("4. Attempting to Sign-In to Tableau Site")
try:
    server.auth.sign_in(tableau_auth)
    can_sign_in = 1
    print("4a. Attempt to sign in was successful")
except Exception as error:
    can_sign_in = 0
    print("4b. Attempt to sign in was not successful. Error: " + str(error))
# Generate a list of the workbooks if it is possible to sign in
if can_sign_in == 1:
    with server.auth.sign_in(tableau_auth):
        print("5a. Siigned In")
        all_workbooks_items, pagination_item = server.workbooks.get()
        print("5b. Workbook list prepared")
        workbook_names = [workbook.name for workbook in all_workbooks_items]
        workbook_ids = [workbook.id for workbook in all_workbooks_items]
        print('5c. Workbooks identified for download attempt: ' + str(workbook_names))
else:
    print('Cannot sign into server, no files can be obtained')
```
Output:
```
3. Version: Unknown
4. Attempting to Sign-In to Tableau Site
4b. Attempt to sign in was not successful. Error: Invalid version: 'Unknown'
Cannot sign into server, no files can be obtained
``` | closed | 2023-03-30T12:06:52Z | 2024-01-11T19:52:33Z | https://github.com/tableau/server-client-python/issues/1213 | [] | Soltis12 | 2 |
home-assistant/core | python | 141,217 | Unexpected error in Tado integration | ### The problem
Tado integration is throwing an 'Unexpected error' message when reconfiguring.
I already tried re-authenticating, no changes. I also tried reconfiguring, same issue.
Credentials are working in my.tado.com
### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
core-2025.3.3
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Tado
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/tado/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
2025-03-23 16:16:11.402 ERROR (MainThread) [homeassistant.components.tado.config_flow] Unexpected exception
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 131, in async_step_reconfigure
await validate_input(self.hass, user_input)
File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 52, in validate_input
tado = await hass.async_add_executor_job(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Tado, data[CONF_USERNAME], data[CONF_PASSWORD]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.13/site-packages/PyTado/interface/interface.py", line 46, in __init__
self._http = Http(
~~~~^
username=username,
^^^^^^^^^^^^^^^^^^
...<2 lines>...
debug=debug,
^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 153, in __init__
self._id, self._token_refresh = self._login()
~~~~~~~~~~~^^
File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 333, in _login
raise TadoException(
f"Login failed for unknown reason with status code {response.status_code}"
)
PyTado.exceptions.TadoException: Login failed for unknown reason with status code 403
```
### Additional information
_No response_ | closed | 2025-03-23T15:17:15Z | 2025-03-23T15:40:57Z | https://github.com/home-assistant/core/issues/141217 | [
"integration: tado"
] | bkortleven | 6 |
plotly/dash | data-science | 3,207 | Editshape - overwriting the behavior of the editable properly in shape definition | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Context**
In DCC Graph if `'edits': {'shapePosition':True}` is defined - it overwrites the editable property of the shapes when defining the shapes. Is that the expected behavior?
The shapes are defined as following (I was hoping to have two shapes as non-moveable / editable and two shapes to be moveable):
```python
if command_issued is not None:
fig.add_shape(dict(type='line', x0=command_issued, x1=command_issued, y0=0, y1=1, yref='paper', xref='x', line_color="blue", line_width=1.5, line_dash="dash", editable=True, opacity=0.75,
layer="between",
label=dict(text=f"Command Issue Time", textangle=0, xanchor="left", )))
if limit_reached is not None:
fig.add_shape(dict(type='line', x0=limit_reached, x1=limit_reached, y0=0, y1=1, yref='paper', xref='x', line_color="red", line_width=1.5, line_dash="dash", editable=True, opacity=0.75,
layer="between",
label=dict(text=f"Power Limit Reach Time", textangle=0, xanchor="left", )))
fig.add_shape(dict(type='line', x0=0, x1=1, y0=active_power_limit / 100, y1=active_power_limit / 100, yref='y', xref='paper',
line_color="green", line_width=1.0, line_dash="dash", editable=False, opacity=0.75,
layer="between",
label=dict(text=f"Active Power Limit ({active_power_limit:0.2f})%", textangle=0, )))
fig.add_shape(type="rect",editable=False,
x0=0, y0=active_power_limit / 100 - 0.05, x1=1, y1=active_power_limit / 100 + 0.05,xref='paper',
line=dict(
color="yellow",
width=1,
),
fillcolor="yellow",opacity=0.2,
)
```
- replace the result of `pip list | grep dash` below
```
dash 2.18
```
**Expected behavior**
Expected behavior was if editable property are defined that should be respected and editshapes should only allow the user to move whatever shapes the developer allowed to move while defining the shape.
| closed | 2025-03-11T05:13:02Z | 2025-03-11T05:15:39Z | https://github.com/plotly/dash/issues/3207 | [] | sssaha | 1 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 248 | Question about data distribution and switching datasets | Thanks for your patient answers, Faster R-CNN is finally running 😁
Now I have a new question: with the original VOC dataset, why did each epoch only train on 5823 samples?
I have since switched to a new dataset with 5000 images and their corresponding 5000 xml files,
and used split_data.py to split them into a train.txt with 2500 entries and a val.txt with 2500 entries.
Why does each epoch only use 313 samples when it runs?
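(Just a guess on my part: if the number printed is the count of batches per epoch rather than the count of images, then with a batch size of 8 the numbers would match, since 2500 / 8 ≈ 313. I have not confirmed this.)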

| closed | 2021-05-08T12:36:28Z | 2021-05-09T03:23:19Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/248 | [] | zmz125 | 1 |
LibrePhotos/librephotos | django | 755 | LibrePhotos keeps restarting and cannot access pages | # 🐛 Bug Report
* [X] 📁 I've Included a ZIP file containing my librephotos `log` files
* [X] ❌ I have looked for similar issues (including closed ones)
* [X] 🎬 (If applicable) I've provided pictures or links to videos that clearly demonstrate the issue
## 📝 Description of issue:
My setup seems to keep restarting (I cannot access pages most of the time).
I get this in the logs; I'm not sure whether it is an error or not (it is labeled as a warning, but who knows):
## 🔁 How can we reproduce it:
## Please provide additional information:
- 💻 Operating system: Ubuntu
- ⚙ Architecture (x86 or ARM): x86
- 🔢 Librephotos version: Docker latest version
- 📸 Librephotos installation method (Docker, Kubernetes, .deb, etc.):
* 🐋 If Docker or Kubernetes, provide docker-compose image tag:
reallibrephotos/librephotos:dev
- 📁 How is your picture library mounted (Local file system (Type), NFS, SMB, etc.):
local filesystem (ext4)
- ☁ If you are virtualizing librephotos, Virtualization platform (Proxmox, Xen, HyperV, etc.):
-
[logs.txt](https://github.com/LibrePhotos/librephotos/files/10725565/logs.txt)
| closed | 2023-02-13T19:23:57Z | 2023-03-31T09:59:22Z | https://github.com/LibrePhotos/librephotos/issues/755 | [
"bug",
"backend"
] | savvasdalkitsis | 5 |
axnsan12/drf-yasg | django | 247 | Inline css and javascript for swaggerUI and redoc | What do you think about having the css and javascript rendered inline in the swagger-ui and redoc views?
I currently have a use case where I use yasg on a microservice on Kubernetes, but I don't want to add nginx or CloudFront to serve the static files.
streamlit/streamlit | deep-learning | 10,092 | Since 1.41.0, generate_chart() can also return an alt.LineChart, which can break external code using this utility function | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
Not really a bug to be reported, more of a discussion:
Before 1.41.0, when we wanted to craft our own Altair chart, a convenient way was to first call the very useful _generate_chart()_ function and then adjust the returned _alt.Chart_ to our needs. This was easy because the function always returned an _alt.Chart_.
Since 1.41.0, and especially since [#9674](https://github.com/torchbox/wagtail-grapple/pull/9674), this function can now also return an _alt.LayerChart_ when given a ChartType.LINE as input.
This change also affected `_altair_chart()`, which now accepts either an _alt.Chart_ or an _alt.LayerChart_.
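To illustrate the kind of breakage, here is a minimal sketch of the isinstance handling that downstream code now needs (my own example, not Streamlit code):

```python
import altair as alt

def base_chart(chart):
    # Sketch of downstream handling: before 1.41 the object handed back for a
    # line chart was always an alt.Chart; since 1.41 it may be an
    # alt.LayerChart, so code that assumed alt.Chart needs an extra branch.
    if isinstance(chart, alt.LayerChart):
        return chart.layer[0]  # first sub-chart of the layered chart
    return chart
```

Nothing dramatic, but it is the sort of type check that was not needed before.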
Do you think it would be more consistent to keep a fixed, generic return type for this utility function?
The [#9674](https://github.com/streamlit/streamlit/pull/9674) PR was very useful, and more improvements to Altair charts might come in the future, not always related only to line charts.
Thank you.
### Reproducible Code Example
_No response_
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.40.0 <-> 1.41.0
- Python version: 3.11
- Operating System: Win10
- Browser: Firefox
### Additional Information
_No response_ | closed | 2024-12-30T00:41:20Z | 2024-12-30T01:59:03Z | https://github.com/streamlit/streamlit/issues/10092 | [
"type:bug",
"status:needs-triage"
] | ybou10 | 1 |
ansible/ansible | python | 84,232 | Create stable branches for default container repos | ### Summary
This should be done immediately following code freeze.
### Issue Type
Feature Idea
### Component Name
`ansible-test` | open | 2024-11-01T18:59:04Z | 2024-11-01T19:07:00Z | https://github.com/ansible/ansible/issues/84232 | [
"feature",
"ansible-test"
] | mattclay | 1 |
jupyter/nbgrader | jupyter | 1668 | Demo not starting | I've tried both the remote server and the Docker image route (in both cases, the root OS is Ubuntu 22.04) and get the same error:
```
Running setup.py develop for nbgrader
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> [77 lines of output]
running develop
/usr/lib/python3/dist-packages/setuptools/command/easy_install.py:158: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
/usr/lib/python3/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
/usr/lib/python3/dist-packages/pkg_resources/__init__.py:116: PkgResourcesDeprecationWarning: 1.1build1 is an invalid version and will not be supported in a future release
warnings.warn(
running egg_info
writing nbgrader.egg-info/PKG-INFO
writing dependency_links to nbgrader.egg-info/dependency_links.txt
writing entry points to nbgrader.egg-info/entry_points.txt
writing requirements to nbgrader.egg-info/requires.txt
writing top-level names to nbgrader.egg-info/top_level.txt
reading manifest file 'nbgrader.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no directories found matching 'nbgrader/labextension'
no previously-included directories found matching '**/node_modules'
no previously-included directories found matching 'lib'
no previously-included directories found matching 'nbgrader/docs/build'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '.ipynb_checkpoints' found anywhere in distribution
warning: no previously-included files matching '__pycache__' found anywhere in distribution
warning: no previously-included files matching '.git' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*~' found anywhere in distribution
no previously-included directories found matching 'demos'
no previously-included directories found matching 'tools'
no previously-included directories found matching 'paper'
no previously-included directories found matching 'binder'
adding license file 'LICENSE'
writing manifest file 'nbgrader.egg-info/SOURCES.txt'
running build_ext
Creating /usr/local/lib/python3.10/dist-packages/nbgrader.egg-link (link to .)
Adding nbgrader 0.8.0 to easy-install.pth file
Installing nbgrader script to /usr/local/bin
Installed /srv/nbgrader/nbgrader
running post_develop
yarn not found, ignoring yarn.lock file
yarn install v1.21.1
[1/4] Resolving packages...
[2/4] Fetching packages...
warning @blueprintjs/core@3.54.0: Invalid bin entry for "upgrade-blueprint-2.0.0-rename" (in "@blueprintjs/core").
warning @blueprintjs/core@3.54.0: Invalid bin entry for "upgrade-blueprint-3.0.0-rename" (in "@blueprintjs/core").
error @playwright/test@1.22.2: The engine "node" is incompatible with this module. Expected version ">=14". Got "12.22.9"
error Found incompatible module.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/srv/nbgrader/nbgrader/setup.py", line 62, in <module>
setup(**setup_args)
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-gpe5ljz7/overlay/local/lib/python3.10/dist-packages/jupyter_packaging/setupbase.py", line 151, in run
self.run_command(post_build.__name__)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-gpe5ljz7/overlay/local/lib/python3.10/dist-packages/jupyter_packaging/setupbase.py", line 127, in run
func()
File "/tmp/pip-build-env-gpe5ljz7/overlay/local/lib/python3.10/dist-packages/jupyter_packaging/setupbase.py", line 231, in builder
run(npm_cmd + ["install"], cwd=node_package)
File "/tmp/pip-build-env-gpe5ljz7/overlay/local/lib/python3.10/dist-packages/jupyter_packaging/setupbase.py", line 297, in run
return subprocess.check_call(cmd, **kwargs)
File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/jlpm', 'install']' returned non-zero exit status 1.
[end of output]
```
..... and I pick out
error @playwright/test@1.22.2: The engine "node" is incompatible with this module. Expected version ">=14". Got "12.22.9"
as the primary issue.
I've tried manually updating `nodejs`, and reverting to `Ubuntu 20.04` - neither _magically_ fixed it.
(related to #1664 - checking the action of the `feedback` link)
| closed | 2022-09-22T09:16:45Z | 2022-12-30T16:20:46Z | https://github.com/jupyter/nbgrader/issues/1668 | [] | perllaghu | 2 |
ExpDev07/coronavirus-tracker-api | rest-api | 20 | Integrity checks | The JHU CSSE screw it up again so I guess we need to implement some integrity check against broken input. | open | 2020-03-08T01:10:14Z | 2020-03-08T02:31:11Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/20 | [
"enhancement",
"feedback"
] | Bost | 6 |
iperov/DeepFaceLab | machine-learning | 5,497 | gtx1080TI vs rtx3080 | Hi,
I've switched computers from a desktop with:
win10
gtx1080ti gpu
amd fx-8379 cpu
16gb memory
to:
win11
rtx3080 gpu
i9-12900kf cpu
32gb memory
I am trying to run a model with these settings:
================== Model Summary ===================
== ==
== Model name: DF-UD384_SAEHD ==
== ==
== Current iteration: 561403 ==
== ==
==---------------- Model Options -----------------==
== ==
== resolution: 384 ==
== face_type: wf ==
== models_opt_on_gpu: True ==
== archi: df-ud ==
== ae_dims: 352 ==
== e_dims: 88 ==
== d_dims: 88 ==
== d_mask_dims: 16 ==
== masked_training: True ==
== eyes_mouth_prio: True ==
== uniform_yaw: False ==
== adabelief: True ==
== lr_dropout: y ==
== random_warp: False ==
== true_face_power: 0.2 ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== ct_mode: none ==
== clipgrad: False ==
== pretrain: False ==
== autobackup_hour: 0 ==
== write_preview_history: False ==
== target_iter: 0 ==
== random_src_flip: False ==
== random_dst_flip: True ==
== batch_size: 4 ==
== gan_power: 0.0 ==
== gan_patch_size: 48 ==
== gan_dims: 16 ==
== blur_out_mask: False ==
== random_hsv_power: 0.0 ==
== ==
==------------------ Running On ------------------==
== ==
== Device index: 0 ==
== Name: NVIDIA GeForce RTX 3080 ==
== VRAM: 7.27GB ==
== ==
====================================================
**It has been working fine on the GTX 1080 Ti for over 500k iterations, but it won't run on the new rig (RTX 3080).**
I am getting the following errors:
Error: 2 root error(s) found.
(0) Resource exhausted: failed to allocate memory
[[node mul_129 (defined at D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:64) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[[concat_4/concat/_547]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
(1) Resource exhausted: failed to allocate memory
[[node mul_129 (defined at D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:64) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node mul_129:
src_dst_opt/vs_inter/dense2/weight_0/read (defined at D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:38)
Input Source operations connected to node mul_129:
src_dst_opt/vs_inter/dense2/weight_0/read (defined at D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:38)
Original stack trace for 'mul_129':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
self.on_initialize()
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 564, in on_initialize
src_dst_loss_gv_op = self.src_dst_opt.get_update_op (nn.average_gv_list (gpu_G_loss_gvs))
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py", line 64, in get_update_op
v_t = self.beta_2*vs + (1.0-self.beta_2) * tf.square(g-m_t)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1076, in _run_op
return tensor_oper(a.value(), *args, **kwargs)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1400, in r_binary_op_wrapper
return func(x, y, name=name)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1710, in _mul_dispatch
return multiply(x, y, name=name)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 530, in multiply
return gen_math_ops.mul(x, y, name)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6245, in mul
"Mul", x=x, y=y, name=name)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
op_def=op_def)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
self._traceback = tf_stack.extract_stack_for_node(self._c_op)
Traceback (most recent call last):
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
return fn(*args)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
target_list, run_metadata)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: failed to allocate memory
[[{{node mul_129}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[[concat_4/concat/_547]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
(1) Resource exhausted: failed to allocate memory
[[{{node mul_129}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 129, in trainerThread
iter, iter_time = model.train_one_iter()
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 474, in train_one_iter
losses = self.onTrainOneIter()
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
self.target_dstm_em:target_dstm_em,
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
run_metadata_ptr)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
run_metadata)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
raise type(e)(node_def, op, message) # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: failed to allocate memory
[[node mul_129 (defined at D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:64) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[[concat_4/concat/_547]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
(1) Resource exhausted: failed to allocate memory
[[node mul_129 (defined at D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:64) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node mul_129:
src_dst_opt/vs_inter/dense2/weight_0/read (defined at D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:38)
Input Source operations connected to node mul_129:
src_dst_opt/vs_inter/dense2/weight_0/read (defined at D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:38)
Original stack trace for 'mul_129':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
self.on_initialize()
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 564, in on_initialize
src_dst_loss_gv_op = self.src_dst_opt.get_update_op (nn.average_gv_list (gpu_G_loss_gvs))
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py", line 64, in get_update_op
v_t = self.beta_2*vs + (1.0-self.beta_2) * tf.square(g-m_t)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1076, in _run_op
return tensor_oper(a.value(), *args, **kwargs)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1400, in r_binary_op_wrapper
return func(x, y, name=name)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1710, in _mul_dispatch
return multiply(x, y, name=name)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 530, in multiply
return gen_math_ops.mul(x, y, name)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6245, in mul
"Mul", x=x, y=y, name=name)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
op_def=op_def)
File "D:\New folder\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
self._traceback = tf_stack.extract_stack_for_node(self._c_op)
Any suggestions?
The paging file is enabled at 500 GB, and hardware-accelerated GPU scheduling is enabled. | open | 2022-03-19T10:49:20Z | 2023-06-08T23:18:46Z | https://github.com/iperov/DeepFaceLab/issues/5497 | [] | djchairz | 2 |
sigmavirus24/github3.py | rest-api | 762 | Create Issue Integration Test | Delete me after | closed | 2018-01-01T22:22:47Z | 2018-01-04T16:07:42Z | https://github.com/sigmavirus24/github3.py/issues/762 | [] | sigmavirus24 | 0 |
aeon-toolkit/aeon | scikit-learn | 1,902 | [ENH] Implement Disjoint-CNN deep learning for classification and regression and adding mutlvariate specificity tag | ### Describe the feature or idea you want to propose
Having specific multivariate deep learning models is a good thing. I would like to have Disjoint-CNN, Monash's multivariate CNN model [1], in aeon; its implementation is already quite clear and easy, and a TensorFlow version is available [here](https://github.com/Navidfoumani/Disjoint-CNN).
[1] Foumani, Seyed Navid Mohammadi, Chang Wei Tan, and Mahsa Salehi. "Disjoint-cnn for multivariate time series classification." 2021 International Conference on Data Mining Workshops (ICDMW). IEEE, 2021.
### Describe your proposed solution
- The `DisjointCNNNetwork` class should be added to the networks module, parametrized by the number of layers, the number of filters, and the kernel size per layer. HINT: look at how FCN, ResNet, and all the other CNN networks are already implemented. (A rough sketch of the disjoint convolution block is given after this list.)
- Add the `DisjointCNNClassifier` and `DisjointCNNRegressor` classes to the deep classification/regression modules
- Add a tag for the networks module and the deep classification and regression modules called "multivariate_univariate_duality" or something like that:
- 1. if this flag is True, it means the model works fine with both univariate and multivariate data
- 2. if it's False, it means the model was proposed for multivariate data; it can technically work on univariate data, but that doesn't make sense. Case example: DisjointCNN
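A very rough sketch of the disjoint temporal/spatial convolution idea from [1], just to illustrate the block the network class would wrap. This is not aeon's API; the layer choices, argument names and defaults below are my assumptions:

```python
# minimal illustration of one "disjoint" block; not the aeon implementation
import tensorflow as tf
from tensorflow.keras import layers


def disjoint_block(input_shape, n_filters=64, kernel_size=8):
    # input_shape = (n_timepoints, n_channels); a trailing axis lets Conv2D
    # treat time and channels as two separate spatial dimensions
    inp = layers.Input(shape=input_shape)
    x = layers.Reshape((input_shape[0], input_shape[1], 1))(inp)
    # temporal convolution: the kernel only spans the time axis
    x = layers.Conv2D(n_filters, (kernel_size, 1), padding="same", activation="relu")(x)
    # spatial convolution: the kernel only spans the channel axis
    x = layers.Conv2D(n_filters, (1, input_shape[1]), padding="valid", activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return tf.keras.Model(inputs=inp, outputs=x)
```

The paper stacks several such blocks with normalization and different activations; the real class would expose those choices through the parameters listed above.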
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
i already have a code ready for it, adding this issue to remember the tagging thing, assigning myself | closed | 2024-08-04T11:38:29Z | 2024-11-11T18:27:36Z | https://github.com/aeon-toolkit/aeon/issues/1902 | [
"enhancement",
"classification",
"regression",
"deep learning",
"networks"
] | hadifawaz1999 | 0 |
encode/httpx | asyncio | 2,207 | httpx got an error, but requests works | httpx code:
```python
import httpx
from collections import OrderedDict
url = "http://ip:port"
payload = OrderedDict([("key", "foo.txt"),
("acl", "public-read"),
("Content-Type", "text/plain"),
('file', ('foo.txt', 'bar'))])
r = httpx.post(url, files=payload)
```
requests code:
```python
import requests
from collections import OrderedDict
url = "http://ip:port"
payload = OrderedDict([("key", "foo.txt"),
("acl", "public-read"),
("Content-Type", "text/plain"),
('file', ('foo.txt', 'bar'))])
r = requests.post(url, files=payload)
```
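For comparison, a minimal sketch of the same multipart request with the plain form fields passed via `data=` and only the file via `files=` (same placeholder URL and field names as above). This is included only to show the shape of the call, not as a confirmed fix:

```python
import httpx

url = "http://ip:port"
data = {"key": "foo.txt", "acl": "public-read", "Content-Type": "text/plain"}
files = {"file": ("foo.txt", "bar")}
r = httpx.post(url, data=data, files=files)
```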
| closed | 2022-05-06T09:19:25Z | 2022-09-09T15:47:41Z | https://github.com/encode/httpx/issues/2207 | [] | kisscelia | 2 |
aimhubio/aim | tensorflow | 3,180 | Export svg and other formtas in parallel coordinates plot of PARAMS | ## Proposed refactoring or deprecation
Could you provide a button to export the parallel coordinates chart in the PARAMS section?
### Motivation
It is currently impossible to export the parallel coordinates chart in the PARAMS section, yet it is a handy tool for supporting articles and documents related to hyperparameter tuning while training models or comparing algorithms.
### Pitch
Please provide a button for exporting the parallel coordinates chart in the PARAMS section, as well as all the other chart export formats, such as SVG, PNG, etc.
### Additional context
| open | 2024-07-06T19:49:50Z | 2024-07-15T14:59:16Z | https://github.com/aimhubio/aim/issues/3180 | [
"type / enhancement",
"type / code-health"
] | mmrMontes | 1 |
xinntao/Real-ESRGAN | pytorch | 177 | Help regarding contribution | Hi @xinntao, I need some help implementing the last 2 TODO features, i.e. controllable restoration strength and other scales (greater than 4). How can this be implemented? I have been given a task by my school/university to select a research paper and make a major/minor improvement to it. It would be a great help if you could tell me how to implement these; it would save a lot of my time, because figuring it out myself would take a long time, and I don't have much time. | open | 2021-12-07T09:20:15Z | 2021-12-14T10:39:27Z | https://github.com/xinntao/Real-ESRGAN/issues/177 | [] | ArishSultan | 1 |
piskvorky/gensim | machine-learning | 2,763 | FastText OOV word embedding are calculated incorrectly when passing `use_norm=True` | #### Problem description
FastText OOV word embedding are calculated incorrectly when passing `use_norm=True` (particularly, when looking for most similar words). The library first normalizes n-gram vectors, then averages them. But it should average them first; otherwise, results are inconsistent.
What happens: cosine similarities used for neighbor retrieval are different from similarities calculated directly from word vectors.
Why it happens:
* usually when calculating vectors for OOV words fasttext calculates average of n-gram vectors
* but if we pass `use_norm=True`, then fasttext calculates average of *L2-normalized* n-gram vectors ([code](https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/models/keyedvectors.py#L2090)). And it is wrong!
* when we lookup for most similar words, we use just this option, `use_norm=True` ([code](https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/models/keyedvectors.py#L831)), how unfortunate!
* why averaging normalized vectors is wrong: because it was never done when model was trained, and is normally never done when the model is applied, so such vectors are most probably meaningless.
* how to do it right: *first* average n-gram vectors, and *then* normalize them.
#### Call to action:
Rewrite the `word_vec` method of `FastTextKeyedVectors` to apply averaging and normalization in the right order.
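A minimal sketch of the intended order of operations (not the actual gensim code; `ngram_vectors` stands for the n-gram rows already looked up for the OOV word):

```python
import numpy as np

def oov_vector(ngram_vectors, use_norm=False):
    # first average the raw n-gram vectors ...
    vec = np.asarray(ngram_vectors).mean(axis=0)
    if use_norm:
        # ... and only then L2-normalize the averaged result
        vec = vec / np.linalg.norm(vec)
    return vec
```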
#### Steps/code/corpus to reproduce
```
word = 'some_oov_word'
pairs = model.most_similar(word)
top_neighbor, top_simil = pairs[0]
print(top_simil)
print(model.cosine_similarities(model[word], model[top_neighbor].reshape(1, -1))[0])
```
these two prints are expected to produce identical numbers (similarity between the oov words and its closest neighbor), but in fact the numbers are different.
This [notebook](https://gist.github.com/avidale/c6b1d13b32a36f19750cd01148560561) reproduces the problem with a particular model for Russian language, but it is relevant for any language.
#### Versions
The output of
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
is
```
Linux-4.14.137+-x86_64-with-Ubuntu-18.04-bionic
Python 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0]
NumPy 1.17.5
SciPy 1.4.1
gensim 3.8.1
FAST_VERSION 1
```
| closed | 2020-03-03T18:33:54Z | 2020-03-21T04:34:41Z | https://github.com/piskvorky/gensim/issues/2763 | [] | avidale | 1 |
home-assistant/core | python | 141,128 | "Error while executing the action switch/turn_on . SmartThingsCommandError(ConflictError, invalid device state) | ### The problem
So, every time I try to turn my TV on or off using the SmartThings integration, I get the following error: "Error while executing the action switch/turn_on . SmartThingsCommandError(ConflictError, invalid device state)". It looks to be related to #140903 (which was fixed but generated another problem, #141127).

### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
SmartThings
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-22T16:38:22Z | 2025-03-22T17:07:24Z | https://github.com/home-assistant/core/issues/141128 | [
"integration: smartthings"
] | C0dezin | 6 |
youfou/wxpy | api | 358 | A small new feature | I wonder whether a related "currently typing" indicator feature could be added. | open | 2018-12-20T07:26:33Z | 2019-04-09T01:41:54Z | https://github.com/youfou/wxpy/issues/358 | [] | 666677778888 | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 355 | How to run the app | I cannot run the app. How do I run it? | closed | 2020-06-04T09:33:18Z | 2020-07-04T22:57:20Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/355 | [] | XxBlackBirdxX | 4 |
dynaconf/dynaconf | django | 1,138 | [RFC] Add description text to the fields | There will be 2 ways to add a description to a field on the schema
first: using `Doc()` from PEP 727 https://github.com/tiangolo/fastapi/blob/typing-doc/typing_doc.md
```py
field: Annotated[str, Doc("This field is important")]
```
The above adds a Docstring to the schema that can be extracted by Dynaconf to generate template files and documentation.
---
second: Docstrings, this method is not standardized but has an advantage of being recognized by LSPs
```py
field: str
"""This field is important"""
```
To extract this one and turn into a `Doc` inside the annotated type the following snippet can be used:
```py
import inspect
import re
class Database:
    host: str
    """this is a hostname"""
    port: int
    """this is port"""


def extract_docstrings(cls):
    source = inspect.getsource(cls)
    pattern = re.compile(r"(\w+):\s*\w+\n\s*\"\"\"(.*?)\"\"\"", re.DOTALL)
    docstrings = dict(pattern.findall(source))
    return docstrings


docstrings = extract_docstrings(Database)
for attr, doc in docstrings.items():
    print(f"{attr}: {doc}")
# Output should be:
# host: this is a hostname
# port: this is port
```
| open | 2024-07-06T20:31:56Z | 2024-07-16T14:58:09Z | https://github.com/dynaconf/dynaconf/issues/1138 | [
"Not a Bug",
"RFC",
"typed_dynaconf"
] | rochacbruno | 1 |
indico/indico | flask | 6,039 | Update the system Check-in App with a new redirect_uri | The Check-in `OAuthApplication` in Indico needs to be updated with a new redirect-uri for the new PWA app once we decide on what domain to use for it. I'm just putting this here so we don't forget to do it before releasing the new app.
Todo:
- Remove `redirect_uris` from `__enforced_data__`
- Add the new redirect to `__default_data__` (this only affects new installations)
- Add an alembic revision to change the redirect uri on existing installations (a rough sketch follows below)
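A rough illustration of what that data migration could look like; the table name, column and lookup below are placeholders, not Indico's real schema:

```python
# purely illustrative alembic revision body; names are placeholders
from alembic import op


def upgrade():
    op.execute(
        "UPDATE oauth_applications "
        "SET redirect_uris = '<new-checkin-pwa-redirect-uri>' "
        "WHERE system_app_type = 'checkin'"
    )


def downgrade():
    pass
```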
| closed | 2023-11-17T09:49:55Z | 2023-11-30T23:35:34Z | https://github.com/indico/indico/issues/6039 | [
"enhancement"
] | tomasr8 | 2 |
nteract/papermill | jupyter | 806 | Python parameter with `None | ` union type is logged as unknown | ## 🐛 Bug
```
# template.ipynb parameters block
foo: None|str = None
```
```
$ papermill -p foo BAR ...
Passed unknown parameter: foo
```
Expected: this warning not to show up, foo should be recognized as a valid parameter.
Also note:
* Despite the warning, the above example still properly injects the parameter value
* `Optional[str]` does not suffer from this issue, just type union style
* `str|None` (change the type order) produces the same warning log
papermill version == 2.6.0 | open | 2024-09-05T23:42:49Z | 2024-09-05T23:43:20Z | https://github.com/nteract/papermill/issues/806 | [
"bug",
"help wanted"
] | calbach | 0 |
strawberry-graphql/strawberry | asyncio | 3,141 | Input with optional field | <!-- Provide a general summary of the bug in the title above. -->
## Describe the Bug
I'm trying to make an input filter option in Strawberry.
```
@strawberry.input
class FilterOptions:
    is_: Optional[bool] = strawberry.field(default=None, name="is")
```
As "is" is a reserved keyword in Python I can't use it as a field directly, so I alias it with `strawberry.field(default=None, name="is")`
However, if I do the above I get the following error:
```
TypeError: issubclass() arg 1 must be a class
pydantic/validators.py:751: TypeError
....
RuntimeError: error checking inheritance of <strawberry.type.StrawberryOptional object at 0x7f3503564dd0> (type: StrawberryOptional)
```
But If I make the field non-optional like below, it works:
```
@strawberry.input
class FilterOptions:
    is_: bool = strawberry.field(default=None, name="is")
```
## System Information
- Operating system:
- Ubuntu 22.04
- Strawberry version (if applicable):
- 0.209.6
## Additional Context
| open | 2023-10-09T11:19:52Z | 2025-03-20T15:56:24Z | https://github.com/strawberry-graphql/strawberry/issues/3141 | [
"bug"
] | ocni-dtu | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 923 | Bad images in training data | Hello,
I have a quick question about the training data. Some of my training data (approximately 1%) are defective and very different from the others. Do I need to clean them, or is it fine for training?
Thank you! | open | 2020-02-18T21:18:40Z | 2020-02-25T09:21:35Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/923 | [] | zzhan127 | 1 |
koaning/scikit-lego | scikit-learn | 47 | Bug: Sphinx not installed as dependency for development | - create `dev` dependencies in `./setup.` (see the sketch after this list)
- `pip install sphinx`
- `pip install sphinx_rtd_theme`
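A sketch of what the `dev` extra could look like; the exact file and the existing setup arguments in this repo are assumptions:

```python
# setup.py (sketch) -- only the relevant extras_require part is shown
from setuptools import setup, find_packages

setup(
    name="scikit-lego",
    packages=find_packages(),
    extras_require={
        "dev": ["sphinx", "sphinx_rtd_theme"],
    },
)
```

Contributors could then install the docs toolchain with `pip install -e ".[dev]"`.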
| closed | 2019-03-20T10:57:04Z | 2019-03-20T14:35:51Z | https://github.com/koaning/scikit-lego/issues/47 | [] | sandervandorsten | 1 |
MilesCranmer/PySR | scikit-learn | 70 | PySR could not start julia. Make sure julia is installed and on your $PATH. | Hello Miles! I am trying to apply your PySR to a biological dataset and hope to find some interesting results (something like a compact ODE/SDE). But I am kind of new to Julia, and when I try to run the example, this error pops up: "PySR could not start Julia. Make sure Julia is installed and on your $PATH". I looked for a solution (add the Julia path to the current workspace?) but I still can't solve this problem; would you mind suggesting a solution? Thanks in advance.
**I am trying to run on Mac, and the Julia version is 1.6 ('/Applications/Julia-1.6.app/Contents/Resources/julia/bin/julia').**
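This is the kind of PATH tweak I mean (just a guess on my side, using the path above; I don't know if this is the right fix):

```python
# guess: make the Julia bin directory visible before importing PySR
import os

os.environ["PATH"] += os.pathsep + "/Applications/Julia-1.6.app/Contents/Resources/julia/bin"

import pysr  # imported only after the PATH change
```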
Also, I am curious whether PySR will be robust when the data is noisy.
Thank you very much!
| closed | 2021-08-13T10:17:13Z | 2021-08-13T16:31:22Z | https://github.com/MilesCranmer/PySR/issues/70 | [
"bug"
] | Ting-TingGao | 3 |
jazzband/django-oauth-toolkit | django | 657 | Use of Content-Type "application/x-www-form-urlencoded" not enforced | The [RFC 6749](https://tools.ietf.org/html/rfc6749) seems to suggest that the Content-Type `application/x-www-form-urlencoded` should always be used in POST requests.
In a more specific way, the [RFC6750](https://tools.ietf.org/html/rfc6750#section-2.2) states that:
> When sending the access token in the HTTP request entity-body, the
> client adds the access token to the request-body using the
> "access_token" parameter. The client MUST NOT use this method unless
> all of the following conditions are met:
>
> o The HTTP request entity-header includes the "Content-Type" header
> field set to "application/x-www-form-urlencoded".
> ...
After a quick look around, many other projects seem to enforce this content type with the exception of [auth0](https://auth0.com/docs/api/authentication#authorization-code).
At the moment, django-oauth-toolkit allows other content types and even has a [specific class](https://github.com/jazzband/django-oauth-toolkit/blob/master/oauth2_provider/oauth2_backends.py#L171) for parsing JSON requests, which was introduced in [PR 234](https://github.com/jazzband/django-oauth-toolkit/pull/234).
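For illustration only, roughly what an opt-in strict backend could look like by subclassing the class linked above (the exact hook and its signature are assumptions on my part, not a tested patch):

```python
# illustrative sketch, not a tested patch
from django.core.exceptions import SuspiciousOperation
from oauth2_provider.oauth2_backends import OAuthLibCore

FORM_CONTENT_TYPE = "application/x-www-form-urlencoded"


class FormURLEncodedOnlyCore(OAuthLibCore):
    def extract_params(self, request):
        # reject POST bodies that are not urlencoded, per RFC 6749 / RFC 6750
        if request.method == "POST" and request.content_type != FORM_CONTENT_TYPE:
            raise SuspiciousOperation("Content-Type must be %s" % FORM_CONTENT_TYPE)
        return super().extract_params(request)
```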
It would probably be a good idea to enforce the use of `application/x-www-form-urlencoded` to align with the RFCs but as it requires changes that could break many existing projects relying on DOT, it would be worth discussing the issue here first. | open | 2018-10-26T14:47:29Z | 2024-06-03T20:53:08Z | https://github.com/jazzband/django-oauth-toolkit/issues/657 | [] | marcofucci | 4 |
lepture/authlib | flask | 27 | Deprecate client_model | There is no need to pass the Client model class into servers. Here is the upgrade guide.
## Flask OAuth 1 Server
Flask OAuth 1 server has been redesigned a lot. It's better to read the documentation again.
## Flask OAuth 2 Server
Pass a `query_client` function to AuthorizationServer instead of `client_model`:
```python
from authlib.flask.oauth2 import AuthorizationServer
from your_project.models import Client
def query_client(client_id):
    return Client.query.filter_by(client_id=client_id).first()
server = AuthorizationServer(app, query_client)
# or lazily
server = AuthorizationServer()
server.init_app(app, query_client)
```
There is a helper function to create `query_client`:
```python
from authlib.flask.oauth2.sqla import create_query_client_func
query_client = create_query_client_func(db.session, Client)
``` | closed | 2018-02-11T03:29:39Z | 2018-02-11T12:28:52Z | https://github.com/lepture/authlib/issues/27 | [
"documentation"
] | lepture | 0 |
google-research/bert | tensorflow | 423 | No Hub in tf HUB. | I don't see any tensorflow hub in https://tfhub.dev/ for BERT.
Thanks | closed | 2019-02-08T16:33:50Z | 2019-02-08T16:51:39Z | https://github.com/google-research/bert/issues/423 | [] | simonefrancia | 2 |
littlecodersh/ItChat | api | 488 | retcode:"1102",selector:"0" | The program runs fine when executed locally (Linux), but when it is deployed to a server (Linux), it shows "retcode:"1102",selector:"0"" right after scanning the QR code to log in and then exits automatically.
The original issue has already been closed.
| closed | 2017-08-20T02:32:40Z | 2017-09-20T02:08:28Z | https://github.com/littlecodersh/ItChat/issues/488 | [
"question"
] | XuDaShu | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,172 | Is this lib capable of TTS? | Hi, I think this is the only lib out here that can synthesize HQ voice. I was wondering if it can also generate TTS. | open | 2023-03-10T18:15:56Z | 2023-03-10T18:15:56Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1172 | [] | destlaver | 0 |
sammchardy/python-binance | api | 944 | Websocket Disconnecting in docker | **Describe the bug**
I'm running Django with Docker and using a futures account websocket in it. The websocket simply doesn't work most of the time, meaning I start the websocket and everything seems fine, but I receive no messages. I have to restart it again and again until it suddenly starts working (if I restart it again, it will not work). Also, from time to time my websocket throws an exception and disconnects. I don't get any exception if I run the app with Python 3.9 on my Ubuntu machine without Docker.
**Environment:**
- Python 3.9-alpine
- docker
- python-binance version 1.0.12
**Logs or Additional context**
here's the exception
```
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.9/site-packages/binance/threaded_stream.py", line 52, in start_listener
web_1 | callback(msg)
web_1 | File "/usr/local/lib/python3.9/site-packages/binance/streams.py", line 216, in __aexit__
web_1 | await super().__aexit__(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.9/site-packages/binance/streams.py", line 71, in __aexit__
web_1 | await self._conn.__aexit__(exc_type, exc_val, exc_tb)
web_1 | File "/usr/local/lib/python3.9/site-packages/websockets/legacy/client.py", line 612, in __aexit__
web_1 | await self.protocol.close()
web_1 | AttributeError: 'Connect' object has no attribute 'protocol'
```
| open | 2021-06-23T14:02:56Z | 2021-07-17T12:07:28Z | https://github.com/sammchardy/python-binance/issues/944 | [] | reza-yb | 1 |
pydantic/pydantic-settings | pydantic | 390 | `TypeError: issubclass() arg 1 must be a class` when upgrading to 2.5 | Hello,
Ever since 2.5 was released, all of our CI/CD fails on Python 3.9 and 3.10 with the following error:
```
../../virtualenvs/ape_310/lib/python3.10/site-packages/pydantic_settings/sources.py:771: in _field_is_complex
if self.field_is_complex(field):
../../virtualenvs/ape_310/lib/python3.10/site-packages/pydantic_settings/sources.py:276: in field_is_complex
return _annotation_is_complex(field.annotation, field.metadata)
../../virtualenvs/ape_310/lib/python3.10/site-packages/pydantic_settings/sources.py:2118: in _annotation_is_complex
if isinstance(annotation, type) and issubclass(annotation, RootModel):
../../.pyenv/versions/3.10.0/lib/python3.10/abc.py:123: in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
```
We are using a config setup like the following (simple reproduction):
```python
from pydantic_settings import BaseSettings
class BaseConfig(BaseSettings):
    pass

class Exclusion(BaseConfig):
    name: str = "*"

class Wrapper(BaseConfig):
    exclude: list[Exclusion] = []

class MyConfig(BaseConfig):
    coverage: Wrapper = Wrapper()
cfg = MyConfig()
```
^ When I run the above script, I basically get the same error as ours. I tried to make it as simple as possible. | closed | 2024-09-10T18:00:33Z | 2024-09-11T16:07:34Z | https://github.com/pydantic/pydantic-settings/issues/390 | [
"bug"
] | antazoey | 15 |
microsoft/unilm | nlp | 1,593 | TROCR model weight file cannot be downloaded. | The TROCR model weight file cannot be downloaded. Thank you very much if you can provide it.
| open | 2024-07-04T08:14:51Z | 2024-07-04T08:14:51Z | https://github.com/microsoft/unilm/issues/1593 | [] | fuzheng1209 | 0 |
pyjanitor-devs/pyjanitor | pandas | 1,008 | [BUG] Unexpected mutation of original df by `sort_column_value_order` | # Brief Description
The `sort_column_value_order` function adds a new column `cond_order` to the original dataframe (see [here](https://github.com/pyjanitor-devs/pyjanitor/blob/9d3653f959f11150bcc9aca87476efc72affc60a/janitor/functions/sort_column_value_order.py#L55)) for implementation purposes.
But this mutates the original dataframe in an unexpected way -- the end user shouldn't be seeing this new column, since `cond_order` is only used in the internal implementation of the sorting function.
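A sketch of the kind of fix I have in mind -- operate on a copy so the helper column never appears on the caller's frame (this is just the idea, not the actual pyjanitor signature):

```python
# sketch of the idea only; the real function has more parameters
def sort_column_value_order(df, column, column_value_order):
    out = df.copy()  # never touch the caller's frame
    out["cond_order"] = out[column].map(column_value_order)
    return out.sort_values("cond_order").drop(columns="cond_order")
```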
# Minimally Reproducible Code
```python
import pandas as pd
import janitor  # noqa: F401  (registers the DataFrame method)

df = pd.DataFrame({
    "a1": [4, 2, 9, 12],
    "a2": ["x", "y", "z", "y"],
})
df.sort_column_value_order("a2", {"x": 3, "y": 2, "z": 1});
df
# original df now has an additional column...
   a1 a2  cond_order
0   4  x           3
1   2  y           2
2   9  z           1
3  12  y           2
```
| closed | 2022-02-05T03:17:18Z | 2022-02-06T16:19:21Z | https://github.com/pyjanitor-devs/pyjanitor/issues/1008 | [] | thatlittleboy | 1 |
marshmallow-code/flask-marshmallow | sqlalchemy | 16 | Compatibility with marshmallow 2.0.0 | closed | 2015-04-27T12:35:45Z | 2015-04-28T01:52:52Z | https://github.com/marshmallow-code/flask-marshmallow/issues/16 | [] | sloria | 0 |
|
ml-tooling/opyrator | streamlit | 42 | How to add text input box to sidebar? |
Hi, I am so happy to have found such a useful framework.
Right now the text input box is always in the center, but I want to add a text input box to the sidebar, like in the picture below (the picture is from one of the examples in the README).
<img width="779" alt="스크린샷 2021-08-26 23 36 45" src="https://user-images.githubusercontent.com/40135976/130982944-262f5107-0c58-4efd-a3ed-2dbaa4701e50.png">
| closed | 2021-08-26T14:38:50Z | 2021-08-26T15:05:50Z | https://github.com/ml-tooling/opyrator/issues/42 | [
"question"
] | 589hero | 1 |
akfamily/akshare | data-science | 5,213 | AKShare interface problem report | Are all the interfaces down? Today every interface raises the following error:
ConnectionError: HTTPConnectionPool(host='80.push2.eastmoney.com', port=80): Max retries exceeded with url: /api/qt/clist/get?pn=1&pz=50000&po=1&np=1&ut=bd1d9ddb04089700cf9c27f6f7426281&fltt=2&invt=2&fid=f3&fs=m%3A1+t%3A2%2Cm%3A1+t%3A23&fields=f12&_=1623833739532 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000002080C5C3C70>: Failed to establish a new connection: [WinError 10051] 向一个无法连接的网络尝试了一个套接字操作。'))
| closed | 2024-09-30T12:16:44Z | 2024-09-30T14:25:18Z | https://github.com/akfamily/akshare/issues/5213 | [
"bug"
] | nbnc003 | 2 |
mljar/mercury | jupyter | 438 | Option to hide specific cells/output | Hello,
I am quite new to Mercury, but while taking a first look at it and trying to convert my very long notebooks into something shareable, I noticed that there is no way to hide specific cells from the resulting web app.
I mean, usually, in my notebooks I have some intermediate outputs between the ideal input and the final output.
Is there a way to hide some cells that I do not want to be rendered by mercury?
Like nbconvert does, can we assign a custom tag to each of the cells we want to hide and then instruct Mercury to skip rendering those specific ones?
SOURCE: https://stackoverflow.com/questions/49907455/hide-code-when-exporting-jupyter-notebook-to-html
| closed | 2024-03-30T10:43:15Z | 2024-04-02T07:15:53Z | https://github.com/mljar/mercury/issues/438 | [] | informatica92 | 1 |
pennersr/django-allauth | django | 3,133 | CSS styles not being applied | Hello, all!
I'm trying to load /accounts/email/ and /accounts/password/change/ , but when I try that, the styles on form are not loading properly.
There are no 404 messages in the browser developer tools; I installed everything as prescribed and I don't know what's happening.
How to solve that problem?
Here are 2 screenshots showing the problem:
https://pasteboard.co/UPbFvT8qF5aG.png
https://pasteboard.co/K49rmUg1K5TO.png
Thanks! | closed | 2022-07-29T14:50:03Z | 2022-11-30T20:13:08Z | https://github.com/pennersr/django-allauth/issues/3133 | [] | lucasbracher | 1 |
d2l-ai/d2l-en | pytorch | 1,789 | Missing word in §2.1.1, ¶1 | In the first paragraph of Section 2.1.1 **Getting Started**, we have the following sentence:
> The following sections will revisit this material in the context of practical examples and it will sink.
That should be changed to:
> The following sections will revisit this material in the context of practical examples and it will **sink in**. | closed | 2021-06-12T22:39:39Z | 2021-06-13T21:58:33Z | https://github.com/d2l-ai/d2l-en/issues/1789 | [] | dowobeha | 0 |
sigmavirus24/github3.py | rest-api | 428 | Incorrect documentation for Events.org | https://github3py.readthedocs.org/en/master/events.html#github3.events.Event.org
| closed | 2015-08-06T20:44:42Z | 2018-03-22T02:31:02Z | https://github.com/sigmavirus24/github3.py/issues/428 | [
"help wanted"
] | sigmavirus24 | 3 |
capitalone/DataProfiler | pandas | 461 | Ability to reuse the same `dp.Profiler` object for different data | **Is your feature request related to a problem? Please describe.**
We are evaluating DataProfiler as a possible way to label data in our pipeline.
We need to be able to profile many small samples of data at a high frequency. As things stand now, it appears that we need to create a new `dp.Profiler` object for each such sample. That creation takes several seconds (apparently due to TensorFlow loading) and is therefore not scalable.
At the same time, the `update_profile` method only *adds* data to the data previously submitted to the Profiler. So if we use the same Profiler object with the `update_profile` method the data inside of it keeps growing.
What we would need is a `replace_data` functionality: basically, make the Profiler forget the data it was given previously and instead receive new data.
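To make the request concrete, this is roughly the usage we are after. `replace_data` is the hypothetical method being requested here (it does not exist today), and `first_sample`/`next_sample` stand for our small data batches:

```python
import dataprofiler as dp

profiler = dp.Profiler(first_sample)   # slow initialization: done only once
report = profiler.report()

# desired (hypothetical) API for every subsequent sample:
profiler.replace_data(next_sample)     # forget the old data, keep the loaded model
report = profiler.report()
```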
**Describe the outcome you'd like:**
Ability to reuse the same `dp.Profiler` object on different data samples and avoid the time-costly initialization.
**Additional context:**
| open | 2022-04-28T23:53:53Z | 2022-05-03T14:43:10Z | https://github.com/capitalone/DataProfiler/issues/461 | [
"New Feature"
] | sergeypine | 3 |
fastapiutils/fastapi-utils | fastapi | 286 | [QUESTION] make repeat_tasks execute on a single worker (in case of multiple workers) | **Description**
reopening this https://github.com/dmontagu/fastapi-utils/issues/230
| closed | 2024-03-07T07:46:39Z | 2024-03-07T08:07:36Z | https://github.com/fastapiutils/fastapi-utils/issues/286 | [
"question"
] | TheCodeYoda | 0 |
plotly/dash | jupyter | 2,858 | [BUG] Fix overlay_style in dcc.Loading | dash>= 2.17.0
The `overlay_style` prop in `dcc.Loading` should apply only to the background and not the spinner component. You can see it in the docs - here is the example:
This could be tagged as a "Good First Issue". If someone doesn't get to it first, I think I can fix it :slightly_smiling_face:
```python
import time
from dash import Dash, Input, Output, callback, html, dcc, no_update
import dash_bootstrap_components as dbc
app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
app.layout = dbc.Container(
    [
        dbc.Button("Start", id="loading-overlay-button", n_clicks=0),
        dcc.Loading(
            [dbc.Alert("My Data", id="loading-overlay-output", className="h4 p-4 mt-3")],
            overlay_style={"visibility":"visible", "filter": "blur(2px)"},
            type="circle",
        ),
    ]
)


@callback(
    Output("loading-overlay-output", "children"),
    Input("loading-overlay-button", "n_clicks"),
)
def load_output(n):
    if n:
        time.sleep(1)
        return f"Data updated {n} times."
    return no_update


if __name__ == "__main__":
    app.run(debug=True)
``` | closed | 2024-05-13T17:15:04Z | 2024-05-15T18:17:08Z | https://github.com/plotly/dash/issues/2858 | [
"bug",
"sev-2"
] | AnnMarieW | 0 |
axnsan12/drf-yasg | rest-api | 305 | Inspect both serializer and the view response | Hej, thanks for maintaining this lib!
I have the following problem: my view returns a Django `Response` object with a JSON content type and a custom structure. The response contains mandatory `description` and `content` keys that are present in many views in my app. When generating the schema, only the serializer is inspected, yet I'd like to have all the fields from the `Response`. Here's my serializer and view (simplified for the sake of the example):
```python
from rest_framework import serializers
from rest_framework.generics import GenericAPIView
from rest_framework.response import Response


class UserSerializer(serializers.Serializer):
    field1 = serializers.CharField()
    field2 = serializers.CharField()
    # (...)


class UserView(GenericAPIView):
    serializer_class = UserSerializer

    def post(self, request, *args, **kwargs):
        serializer = self.serializer_class(
            data=request.data, context={'user': request.user}
        )
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(
            {
                'description': 'User created',
                'content': {
                    'data': serializer.data
                }
            },
            status=201,
        )
```
Here's what I get:
```json
{"field1": "string", "field2": "string"}
```
What I'd like to have:
```json
{
"description": "User created",
"content":
{
"data":
{
"field1": "string",
"field2": "string"
}
}
}
```
I tried playing with the `swagger_auto_schema` and the `responses` argument, but to be honest I'd like to avoid hardcoding any structure there, because it defeats the purpose of having automatically generated docs.
I also considered using nested serializers, but again, this is something that is complicating my code and would force me to introduce nested serializers in every view. Not DRY.
Do you have any idea how to solve this problem in a smart way without resorting to defining the schema in two places?
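For completeness, this is the nested-serializer variant I was referring to above -- an envelope built once from the real serializer and used only for schema generation, so the runtime code stays untouched (sketch only; the helper and its names are mine):

```python
# sketch of the envelope idea, used only for docs generation
from rest_framework import serializers


def make_envelope(serializer_class):
    class ContentSerializer(serializers.Serializer):
        data = serializer_class()

    class EnvelopeSerializer(serializers.Serializer):
        description = serializers.CharField()
        content = ContentSerializer()

    return EnvelopeSerializer


# usage on the view:
# @swagger_auto_schema(responses={201: make_envelope(UserSerializer)})
# def post(self, request, *args, **kwargs): ...
```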
| closed | 2019-02-04T13:15:53Z | 2019-02-05T00:25:27Z | https://github.com/axnsan12/drf-yasg/issues/305 | [] | sireliah | 3 |
slackapi/bolt-python | fastapi | 1,222 | Respond and/or cancel request on global middleware | I have a service using FastAPI and Slack Bolt. It has two global middlewares: one gathers info about the user who interacted with the bot (such as their Slack profile and other info from external APIs), and a second global middleware checks whether the user is in a list of allowed users, matching against that external info.
My problem is that my second middleware is something like this:
```
async def slackGlobalAccessMiddleware(client, context, body, say, event, action, logger, next, ack):
if event:
kind = "event"
subtype = event['type']
user = event['user']
access = myacceses.get(f"{kind}:{subtype}")
if access and hasAccess(context['login'], context['groups42'], user, access):
await publishView(client, genericBlocks.accessDeniedMessage, event['user'], event['type'])
return await next()
logger.warning("USER HAS NO ACCESS")
return
#tried also the options below
#return await say("toto")
#return BoltResponse(status=401, body={"error":"Access Denied"})
```
My problem with this code is that when a user has no access, it returns an error because the middleware did not execute `next()`, and then Slack retries the request with an http_timeout retry_reason.
That is one side of it. On the other side, my first middleware can sometimes take several seconds to fetch the external info, which results in another retry from Slack, even though the whole process eventually completes successfully.
I tried to use BoltResponse and/or ack in several parts of the code, but I suppose I'm missing something.
I did some prior development in an old service we had with the old slack_sdk, and there I was able to solve this problem by sending a fast first answer to Slack and then processing the request. However, I thought that was the exact purpose of the ack function, but I'm probably mistaken. Is there a way to answer Slack gracefully and then process the request?
And what would be the correct way to cancel the processing of a request in case any local custom check fails in a middleware, like an access check?
Thank you so much
#### The `slack_bolt` version
slack_bolt==1.21.2
slack_sdk==3.33.3
#### Python runtime version
Python 3.10.12
#### OS info
SMP Debian 4.19.316-1 (2024-06-25) | closed | 2024-12-14T04:34:51Z | 2025-01-10T01:54:00Z | https://github.com/slackapi/bolt-python/issues/1222 | [
"question"
] | oktorok | 3 |
adap/flower | scikit-learn | 4,425 | Update Exchange Time Metric | ### What is your question?
I'm looking to track parameter exchange time on a Flower (flwr) system that is not simulated: Does the framework track communication time during model parameter exchange by default? If yes, which class/metrics can I access this from? If not, which specific methods should I track to measure the parameter exchange interval? | closed | 2024-11-04T16:21:38Z | 2024-11-13T04:16:55Z | https://github.com/adap/flower/issues/4425 | [] | hayekr | 1 |
slackapi/bolt-python | fastapi | 610 | Shortcut modal does not close after submit | Hi,
I developed a shortcut, but after submitting via the submit button, the modal is not closed.
After the submit, several validations are carried out and a message is sent to the user, but I want the modal to be closed and those processes to run in the background, as they consume a lot of time and there is no need for the user to wait.
In the documentation it asks to be done this way:
```python
# Handle a view_submission request
@app.view("view_1")
def handle_submission(ack, body, client, view, logger):
# Assume there's an input block with `input_c` as the block_id and `dreamy_input`
hopes_and_dreams = view["state"]["values"]["input_c"]["dreamy_input"]
user = body["user"]["id"]
# Validate the inputs
errors = {}
if hopes_and_dreams is not None and len(hopes_and_dreams) <= 5:
errors["input_c"] = "The value must be longer than 5 characters"
if len(errors) > 0:
ack(response_action="errors", errors=errors)
return
# Acknowledge the view_submission request and close the modal
ack()
# Do whatever you want with the input data - here we're saving it to a DB
# then sending the user a verification of their submission
# Message to send user
msg = ""
try:
# Save to DB
msg = f"Your submission of {hopes_and_dreams} was successful"
except Exception as e:
# Handle error
msg = "There was an error with your submission"
# Message the user
try:
client.chat_postMessage(channel=user, text=msg)
    except Exception as e:
logger.exception(f"Failed to post a message {e}")
```
I did it this way:
```python
@app.view("aws_access_request")
def handle_view_events(ack, body, client, view):
# Assume there's an input block with `input_c` as the block_id and `dreamy_input`
awsname = view["state"]["values"]["input_aws_select"]["aws_access_select"]["selected_option"]["value"]
awsrole = view["state"]["values"]["input_aws_role_select"]["aws_access_role_select"]["selected_option"]["value"]
managermail = view["state"]["values"]["input_aws_access_manager"]["aws_access_manager"]["value"]
squad = view["state"]["values"]["input_aws_access_squad"]["aws_access_squad"]["value"]
reason = view["state"]["values"]["input_aws_access_reason"]["aws_access_reason"]["value"]
senderid = body["user"]["id"]
senderobj = client.users_profile_get(user=senderid)
sendermail = senderobj["profile"]["email"]
sendermention = f"<@{senderid}>"
sendername = senderobj["profile"]["display_name"]
# Validate inputs
errors = {}
mincaracter = 25
inputs = {...}
if reason is not None and len(reason) <= mincaracter:
errors["input_aws_access_reason"] = f"The value must be longer than {mincaracter} characters"
elif awsname == "nothing":
errors["input_aws_select"] = "Please select valid option"
if len(errors) > 0:
ack(response_action="errors", errors=errors)
return
# Acknowledge the view_submission request and close the modal
ack()
checkinputname = Ot.check_inputs(inputs, INPUT_AWS_NAME)
checkinputmanager = Ot.check_inputs(inputs, INPUT_MANAGER_MAIL)
    # If the user did not fill in the Access/Manager correctly, do not send the request
if not checkinputname or not checkinputmanager:...
else...
```
After ack() ("Acknowledge the view_submission request and close the modal"), it is not closing the modal and continues running the code below.
The problem is that the code below ack() takes about 10 seconds to run, so this error appears:

As I said, I want the form to close immediately after ack() and not wait for all the remaining code to finish.
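To be explicit, this is the shape I'm aiming for, based on the docs above: quick validation, `ack()` as early as possible, and only then the slow part (a sketch; `validate` and `process_request` are hypothetical helpers that collapse my real steps):

```python
# sketch of what I'm aiming for (slow steps collapsed into helper functions)
from threading import Thread

@app.view("aws_access_request")
def handle_view_events(ack, body, client, view):
    errors = validate(view)           # quick, in-memory checks only
    if errors:
        ack(response_action="errors", errors=errors)
        return
    ack()                             # respond to Slack right away -> modal closes
    # everything slow (profile lookups, notifications) happens afterwards,
    # here handed off to a background thread
    Thread(target=process_request, args=(client, body, view), daemon=True).start()
```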
| closed | 2022-03-03T16:04:39Z | 2022-03-03T22:43:07Z | https://github.com/slackapi/bolt-python/issues/610 | [
"question",
"need info"
] | BSouzaDock | 13 |
OpenInterpreter/open-interpreter | python | 1,588 | Security & Performance Issue: Unlimited Code Output in LLM Context | ## Issue Description
When executing code that produces large amounts of output (e.g., directory listings, file contents, system information), all output is sent to the LLM in its entirety before being truncated in the final response. This raises both security and performance concerns:
1. **Security Risk**:
- Sensitive information in large outputs (logs, system info, file contents) is sent to the LLM
- Even if truncated in the final response, the LLM has already processed the complete output
- This could lead to unintended data exposure
2. **Performance Impact**:
- Unnecessary token consumption when sending large outputs to the LLM
- Increased API costs
- Potential context window overflow
## Example
```python
# Simple code that generates large output
import os
for root, dirs, files in os.walk("/"):
    print(f"Directory: {root}")
    for file in files:
        print(f" File: {file}")
```
Current behavior:
1. Code executes and generates complete output
2. Complete output is sent to LLM
3. LLM processes all output
4. Response is truncated for display
## Proposed Solution
Add output limiting at the source (code execution) level:
1. Add a configurable `max_output_lines` or `max_output_bytes` parameter
2. Implement truncation during code execution, before sending to LLM
3. Add clear indicators when output is truncated
This aligns with the project's philosophy of simplicity and security while maintaining core functionality.
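A minimal sketch of the truncation step (the parameter name `max_output_lines` is the one proposed above; where exactly it would hook into the execution path is left open):

```python
def truncate_output(output: str, max_output_lines: int = 200) -> str:
    lines = output.splitlines()
    if len(lines) <= max_output_lines:
        return output
    omitted = len(lines) - max_output_lines
    return "\n".join(lines[:max_output_lines]) + (
        f"\n[... output truncated: {omitted} more lines not sent to the LLM ...]"
    )
```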
## Questions
1. Would this feature align with the project's scope?
2. Should this be configurable per execution or as a global setting?
3. What would be a reasonable default limit?
## Additional Context
This issue was discovered while building a service using Open Interpreter's API. The complete output being sent to the LLM was noticed through debug logs and token usage metrics.
### Describe the solution you'd like
(Same as the Proposed Solution above.)
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | open | 2025-02-10T22:14:17Z | 2025-02-14T13:20:51Z | https://github.com/OpenInterpreter/open-interpreter/issues/1588 | [] | ovenzeze | 1 |
huggingface/transformers | machine-learning | 36,876 | <spam> | ### Model description
<!-- Failed to upload "Screen_Recording_20250320_044456_Chrome.mp4" -->
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | closed | 2025-03-21T08:28:30Z | 2025-03-21T11:50:47Z | https://github.com/huggingface/transformers/issues/36876 | [
"New model"
] | tjsexfunmoney664 | 0 |
scikit-optimize/scikit-optimize | scikit-learn | 476 | BayesSearchCV only searches through first parameter space | I tried running a couple of toy examples, including the advanced example from the documentation, and `BayesSearchCV` seems to only search over the first parameter space in the list of tuples that I pass to `search_spaces`.
[Here](https://github.com/savvastj/ml_playground/blob/master/try_skopt_bayescv.ipynb) is a notebook where I try out `RandomForestRegressor`, `Ridge`, and `Lasso`, but `cv_results` only returns information regarding the Random Forest's parameter space. (Another strange behavior is that even though I set 50 evaluations for Random Forest, 56 were returned.)
And [here](https://github.com/savvastj/ml_playground/blob/master/adv_bayescv_skopt_example.ipynb) is a notebook containing the advanced example from the documentation. The `cv_results` only contain evaluations for `LinearSVC`, but nothing for the `DecisionTreeClassifier` or `SVC`. | closed | 2017-08-12T01:26:50Z | 2017-08-26T20:38:23Z | https://github.com/scikit-optimize/scikit-optimize/issues/476 | [] | savvastj | 4 |
microsoft/qlib | machine-learning | 1,087 | bad magic number for central directory | when I run the code :
from qlib.tests.data import GetData
GetData().qlib_data(exists_skip=True)
it resulted in a `BadZipFile` error: Bad magic number for central directory
my platform is : win10 + anaconda + vs code | closed | 2022-05-03T10:38:15Z | 2023-10-23T13:01:04Z | https://github.com/microsoft/qlib/issues/1087 | [
"bug"
] | lycanthropes | 1 |
paperless-ngx/paperless-ngx | django | 8,595 | [BUG] Setting certain documents as linked document causes internal server error when field set to invalid value | ### Description
Setting certain documents as a linked document (custom field) seems to cause an internal server error.
The same pdf files don't cause the issue on another instance of paperless-ngx, which leads me to believe it has something to do with certain custom fields or the UI, rather than the file itself?
### Steps to reproduce
1. Add custom field of type "Document link" to a document
2. Add certain document in that field.
3. Click save
### Webserver logs
```bash
paperless_webserver_1 | [2025-01-03 13:15:40,409] [ERROR] [django.request] Internal Server Error: /api/documents/80/
paperless_webserver_1 | Traceback (most recent call last):
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 518, in thread_handler
paperless_webserver_1 | raise exc_info[1]
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/django/core/handlers/exception.py", line 42, in inner
paperless_webserver_1 | response = await get_response(request)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 518, in thread_handler
paperless_webserver_1 | raise exc_info[1]
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
paperless_webserver_1 | response = await wrapped_callback(
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 468, in __call__
paperless_webserver_1 | ret = await asyncio.shield(exec_coro)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/asgiref/current_thread_executor.py", line 40, in run
paperless_webserver_1 | result = self.fn(*self.args, **self.kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 522, in thread_handler
paperless_webserver_1 | return func(*args, **kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/django/views/decorators/csrf.py", line 65, in _view_wrapper
paperless_webserver_1 | return view_func(request, *args, **kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/viewsets.py", line 124, in view
paperless_webserver_1 | return self.dispatch(request, *args, **kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 509, in dispatch
paperless_webserver_1 | response = self.handle_exception(exc)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 469, in handle_exception
paperless_webserver_1 | self.raise_uncaught_exception(exc)
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
paperless_webserver_1 | raise exc
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 506, in dispatch
paperless_webserver_1 | response = handler(request, *args, **kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/src/paperless/src/documents/views.py", line 393, in update
paperless_webserver_1 | response = super().update(request, *args, **kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/mixins.py", line 68, in update
paperless_webserver_1 | self.perform_update(serializer)
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/mixins.py", line 78, in perform_update
paperless_webserver_1 | serializer.save()
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/drf_writable_nested/mixins.py", line 233, in save
paperless_webserver_1 | return super(BaseNestedModelSerializer, self).save(**kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/serializers.py", line 203, in save
paperless_webserver_1 | self.instance = self.update(self.instance, validated_data)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/src/paperless/src/documents/serialisers.py", line 881, in update
paperless_webserver_1 | super().update(instance, validated_data)
paperless_webserver_1 | File "/usr/src/paperless/src/documents/serialisers.py", line 332, in update
paperless_webserver_1 | return super().update(instance, validated_data)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/drf_writable_nested/mixins.py", line 290, in update
paperless_webserver_1 | self.update_or_create_reverse_relations(instance, reverse_relations)
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/drf_writable_nested/mixins.py", line 188, in update_or_create_reverse_relations
paperless_webserver_1 | related_instance = serializer.save(**save_kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/serializers.py", line 208, in save
paperless_webserver_1 | self.instance = self.create(validated_data)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/src/paperless/src/documents/serialisers.py", line 600, in create
paperless_webserver_1 | self.reflect_doclinks(document, custom_field, validated_data["value"])
paperless_webserver_1 | File "/usr/src/paperless/src/documents/serialisers.py", line 716, in reflect_doclinks
paperless_webserver_1 | elif document.id not in target_doc_field_instance.value:
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | TypeError: 'in <string>' requires string as left operand, not int
paperless_webserver_1 | [2025-01-03 13:15:43,015] [WARNING] [django.request] Bad Request: /api/documents/80/
```
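The final frame of the traceback suggests the target document's doclink field value arrived as a string rather than a list of IDs. A minimal illustration of the failing comparison (hypothetical values):
```python
# Hypothetical values reproducing the TypeError from the traceback above.
document_id = 80
target_value = "[79]"  # doclink value stored/parsed as a string instead of a list of ints
document_id not in target_value  # TypeError: 'in <string>' requires string as left operand, not int
```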
### Browser logs
```bash
Error popup in the webui:
{"headers":{"normalizedNames":{},"lazyUpdate":null},"status":500,"statusText":"Internal Server Error","url":"http://192.168.1.2:8000/api/documents/80/","ok":false,"name":"HttpErrorResponse","message":"Http failure response for http://192.168.1.2:8000/api/documents/80/: 500 Internal Server Error","error":"\n<!doctype html>\n<html lang=\"en\">\n<head>\n <title>Server Error (500)</title>\n</head>\n<body>\n <h1>Server Error (500)</h1><p></p>\n</body>\n</html>\n"}
```
### Paperless-ngx version
2.13.5
### Host OS
Debian 12
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.13.5",
"server_os": "Linux-6.1.0-25-amd64-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 104348344320,
"available": 35895164928
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0011_remove_mailrule_assign_tag_squashed_0024_alter_mailrule_name_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2025-01-03T13:44:34.479716+01:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2025-01-03T12:05:00.048963Z",
"classifier_error": null
}
}
```
### Browser
Chrome, Safari
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2025-01-03T13:03:27Z | 2025-02-04T03:05:14Z | https://github.com/paperless-ngx/paperless-ngx/issues/8595 | [
"bug",
"backend"
] | dwapps | 8 |
yeongpin/cursor-free-vip | automation | 211 | [Bug]: Unable to get the Cursor path, causing version detection to fail | ### Pre-submission checklist
- [x] I understand that Issues are for reporting and resolving problems, not a venting comment section, and I will provide as much information as possible to help resolve the problem.
- [x] I have checked the pinned issues and searched the existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and did not find a similar problem.
- [x] I have written a short and clear title so that developers can quickly identify the general problem when scanning the issue list, rather than something like "a suggestion" or "it's stuck".
### Platform
Windows x64
### Version
1.7.06
### Error description
The latest version (1.7.06) of CursorFreeVip cannot find the Cursor path, while version 1.7.02 works fine.
### Relevant log output
```shell
堆栈跟踪: Traceback (most recent call last):
File "reset_machine_manual.py", line 183, in check_cursor_version
pkg_path, _ = get_cursor_paths(translator)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "reset_machine_manual.py", line 61, in get_cursor_paths
raise OSError(translator.get('reset.path_not_found', path=base_path) if translator else f"找不到 Cursor 路徑: {base_path}")
OSError: reset.path_not_found
ℹ️ Cursor版本 < 0.45.0,跳过getMachineId修补
```
### Additional information
_No response_ | closed | 2025-03-12T13:41:22Z | 2025-03-13T03:49:49Z | https://github.com/yeongpin/cursor-free-vip/issues/211 | [
"bug"
] | ZGerXu | 1 |
sinaptik-ai/pandas-ai | data-science | 1,554 | API_KEY Issue | ### System Info
Version: 2.4.2 , Atlas OS (Win 11 pro) , Python 3.12.2
### 🐛 Describe the bug
Code:
```python
## As in the Example
import pandasai as pai

pai.api_key.set("api_key")

file_df = pai.read_csv("student_clustering.csv")  # correct path
response = file_df.chat("What is the correlation between iq and cgpa")
print(response)
```
Error:
```
AttributeError: module 'pandasai' has no attribute 'api_key'
```
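For comparison, a minimal sketch of the kind of usage that works on the 2.x series, since the module-level `api_key` helper appears to belong to the newer 3.x-style API (the OpenAI backend and the key below are assumptions):
```python
# Hedged sketch for pandasai 2.x, which has no module-level `api_key`.
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI

llm = OpenAI(api_token="YOUR_OPENAI_API_KEY")  # placeholder credential
df = pd.read_csv("student_clustering.csv")
sdf = SmartDataframe(df, config={"llm": llm})

print(sdf.chat("What is the correlation between iq and cgpa"))
```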
| closed | 2025-01-28T16:17:45Z | 2025-01-28T22:44:42Z | https://github.com/sinaptik-ai/pandas-ai/issues/1554 | [] | Haseebasif7 | 1 |
keras-team/autokeras | tensorflow | 1,541 | Died - after 7.5 hours | ### Bug Description
Died after 7.5 hours. 72 successful trials. Then this...
```
NotImplementedError: Cannot convert a symbolic Tensor (random_rotation/rotation_matrix/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
```
### Bug Reproduction
Code for reproducing the bug:
```python
reg = ak.ImageRegressor(
    overwrite=True,
    max_trials=150,   # 100 is default - first 15 take roughly an hour
    metrics=['mae'],  # mean average error is more useful, though as a loss need mse
    loss='mse',
    project_name="GMA pallet pose estimation",
    directory="autoK",
    seed=42,
)

logdir = "aklogs"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir, histogram_freq=1)
removeFilesFromFolder("./aklogs")  # clean up for new tensorboard

# Feed the image regressor with training data.
reg.fit(x_train, y_train,
        validation_data=(x_val, y_val),
        callbacks=[tensorboard_callback])
```
Data used by the code:
X = 250M of images, Y = 3 floats per image
### Expected Behavior
<!---
If not so obvious to see the bug from the running results,
please briefly describe the expected behavior.
-->
### Setup Details
Include the details about the versions of:
- OS type and version: ubuntu 20.04
- Python: 3.9.2
- autokeras: <!--- e.g. 0.4.0, 1.0.2, master-->
- keras-tuner: whatever the current install is
- scikit-learn:
- numpy:
- pandas: not used
- tensorflow: 2.4.1
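Given the versions above, one commonly reported culprit for this exact NotImplementedError is NumPy 1.20+ installed alongside TensorFlow 2.4.x; this is an assumption based on the error message, not something verified on this machine. A quick check:
```python
# Print the installed versions to check the suspected NumPy/TensorFlow mismatch.
import numpy as np
import tensorflow as tf

print(np.__version__, tf.__version__)
# If NumPy is >= 1.20 next to TF 2.4.x, pinning numpy below 1.20 is the workaround
# usually reported for the "Cannot convert a symbolic Tensor" error.
```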
### Additional context
<!---
If applicable, add any other context about the problem.
-->
Just wondering if it makes sense to keep trying this out... My loss wasn't getting better for over 4 hours.
Thanks!
p
| closed | 2021-03-31T18:45:23Z | 2021-06-10T04:15:34Z | https://github.com/keras-team/autokeras/issues/1541 | [
"bug report",
"wontfix"
] | pgaston | 1 |
Lightning-AI/pytorch-lightning | machine-learning | 19,905 | AttributeError: type object 'Trainer' has no attribute 'add_argparse_args' | ### Bug description
When trying to run
```
parser = argparse.ArgumentParser()
parser = Trainer.add_argparse_args(parser)
args = parser.parse_args([])
```
the following error message appears:
```
AttributeError: type object 'Trainer' has no attribute 'add_argparse_args'
```
### What version are you seeing the problem on?
v2.0, v2.1, v2.2
### How to reproduce the bug
```python
import argparse
import pytorch_lightning
from pytorch_lightning.trainer import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint, Callback, LearningRateMonitor
parser = argparse.ArgumentParser()
parser.add_argument(
"--gpus",
type=int,
default=1,
help="num of gpus to use",
)
parser = Trainer.add_argparse_args(parser)
args = parser.parse_args()
```
```
AttributeError: type object 'Trainer' has no attribute 'add_argparse_args'
```
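For context, `Trainer.add_argparse_args` was part of the 1.x argparse utilities and is no longer available in the 2.x series. A rough sketch of a plain-argparse replacement (illustrative only; `LightningCLI` is the other documented route):
```python
# Sketch of a 2.x-compatible approach: declare the Trainer flags you need yourself
# and pass them to the Trainer explicitly.
import argparse
from pytorch_lightning import Trainer

parser = argparse.ArgumentParser()
parser.add_argument("--accelerator", type=str, default="gpu")
parser.add_argument("--devices", type=int, default=1)
parser.add_argument("--max_epochs", type=int, default=10)
args = parser.parse_args()

trainer = Trainer(
    accelerator=args.accelerator,
    devices=args.devices,
    max_epochs=args.max_epochs,
)
```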
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow): Trainer
#- PyTorch Lightning Version (e.g., 1.5.0): 2.0.0
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0): 2.0.1
#- Python version (e.g., 3.9): 3.8.8
#- OS (e.g., Linux): Linux
#- CUDA/cuDNN version: 12.0
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source): conda install pytorch-lightning=2.0.0 -c conda-forge
#- Running environment of LightningApp (e.g. local, cloud):
### More info
When I first encountered the error, I thought it might be a version conflict between internal packages, so I reinstalled 'pytorch_lightning' from scratch, referring to this link: https://lightning.ai/docs/pytorch/stable/versioning.html.
I finished reinstalling, but that didn't solve the problem. What's the problem?
(At first I installed version 2.2, then switched to 2.0. For testing I did the same thing with version 2.1.) | closed | 2024-05-24T08:20:23Z | 2024-06-06T18:57:59Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19905 | [
"question",
"working as intended",
"ver: 2.0.x",
"ver: 2.1.x"
] | Park-yebin | 1 |
pydata/pandas-datareader | pandas | 374 | What are the limitations on ticker names? | When trying the following tickers from Yahoo, only ^OMX is found, although they are all available to view on Yahoo Finance:
```["XDWD.ST",'^OMX','^HOXFLATGBG']```
Why aren't they all downloaded with pandas-datareader? | closed | 2017-07-25T13:44:30Z | 2018-01-24T22:25:04Z | https://github.com/pydata/pandas-datareader/issues/374 | [] | favetelinguis | 2 |
nerfstudio-project/nerfstudio | computer-vision | 3,484 | What is the coordinate convention for the saved dataparser_transforms.json ? | It is not clear if the `dataparser_transforms.json` "transform" matrix is in OpenCV or OpenGL convention, if the data pre-processing, say, was done using COLMAP (which would be using OpenCV).
This makes it hard to apply this transformation correctly when trying to convert camera poses from the input coordinate system (i.e., whatever COLMAP gives) into the coordinate system used by the trained NeRF. | open | 2024-10-16T17:59:10Z | 2024-10-23T15:27:53Z | https://github.com/nerfstudio-project/nerfstudio/issues/3484 | [] | AruniRC | 5 |
art049/odmantic | pydantic | 233 | Suggested Documentation Fix | # Feature request
### Context
Currently, the documentation suggests setting the engine globally at the module level, and I suggest that this be removed. Mongo is designed to scale horizontally; in fact, all NoSQL solutions should scale horizontally. Therefore, you can have many databases with the same data, just organized by some other key, like year, month, week, etc. If we create the engine at the global level, we lose the ability to control this properly.
### Solution
I suggest that this be removed from the documentation. If this were SQL, we would have a different situation: SQL is vertically scalable, so a global call is possible. But for Mongo, the solution that I am using looks like this:
```python
from typing import Any, Iterator

from bson import ObjectId
from fastapi import APIRouter
from motor.motor_asyncio import AsyncIOMotorClient
from odmantic import AIOEngine, Model
from pydantic import BaseModel

# Assumed setup so the snippet is self-contained (not part of the original excerpt).
client = AsyncIOMotorClient("mongodb://localhost:27017")
router = APIRouter()

class PyObjectId(ObjectId):
@classmethod
def __get_validators__(cls) -> Iterator:
yield cls.validate
@classmethod
def validate(cls, v: Any) -> ObjectId:
if not ObjectId.is_valid(v):
raise ValueError("Invalid objectid")
return ObjectId(v)
@classmethod
def __modify_schema__(cls, field_schema: Any) -> None:
field_schema.update(type="string")
class Entries(Model):
rank: int
username: str
points: float
contest_id: str
class CreateEntriesModel(BaseModel):
rank: int
username: str
points: float
contest_id: str
season: int
class Config:
allow_population_by_field_name = True
arbitrary_types_allowed = True
json_encoders = {ObjectId: str}
schema_extra = {
"example": {
"rank": "1",
"username": "minervaengineering",
"points": "199.15",
"contest_id": "123456789",
"season": "2022"
}
}
@router.post("/", response_description="Add new entry", response_model=Entries)
async def create_entry(entry: CreateEntriesModel):
engine = AIOEngine(motor_client=client, database=f"ContestEntries_{entry.season}")
entry = Entries(
rank=entry.rank,
username=entry.username,
points=entry.points,
contest_id=entry.contest_id
)
await engine.save(entry)
return entry
```
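A hypothetical companion read endpoint (not part of the original snippet) showing the same per-request engine selection for queries:
```python
# Assumes Python 3.9+ for the list[...] annotation; the route shape is illustrative.
@router.get("/{season}/{contest_id}", response_description="List entries", response_model=list[Entries])
async def list_entries(season: int, contest_id: str):
    engine = AIOEngine(motor_client=client, database=f"ContestEntries_{season}")
    return await engine.find(Entries, Entries.contest_id == contest_id)
```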
#### Alternative solutions
I attempted to make the global call work, but it's a sketchy solution, and really unstable. This is a basic setup I decided to rainman on for the last 3 hours. It's now past 2am here, and I quite literally don't know why I felt the need to do this. You did a great job on this package, and I'd love for this to be truly scalable, and for others not to run into a brick wall because of the documentation.
### Additional context
That's all! Good work!
| open | 2022-07-12T07:06:11Z | 2022-07-12T15:52:48Z | https://github.com/art049/odmantic/issues/233 | [
"enhancement"
] | johnson2427 | 1 |
Layout-Parser/layout-parser | computer-vision | 52 | Error in Instantiating lp.GCVAgent |

| closed | 2021-08-02T05:15:17Z | 2021-08-04T05:12:17Z | https://github.com/Layout-Parser/layout-parser/issues/52 | [
"bug"
] | shumchiken | 3 |
aminalaee/sqladmin | fastapi | 201 | nullable=False Boolean field cannot be set to False in UI | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
if you have in model field like
is_deleted = Column(Boolean, nullable=False, server_default=text("False"))
in UI impossible to leave it unchecked (unchecked mean == False)
<img width="337" alt="Screen Shot 2022-06-21 at 9 54 38 PM" src="https://user-images.githubusercontent.com/15959809/174877013-cc8fea48-ba2a-4906-8d02-b0761dc03684.png">
when I try to click to 'save' it is always return me to this field & allow save only if I check this field as True
### Steps to reproduce the bug
_No response_
### Expected behavior
Boolean fields can be both checked (True) and unchecked (False); False != None for a boolean.
### Actual behavior
_No response_
### Debugging material
_No response_
### Environment
sqladmin==0.1.9
### Additional context
_No response_ | closed | 2022-06-21T18:58:33Z | 2022-06-21T19:15:05Z | https://github.com/aminalaee/sqladmin/issues/201 | [] | xnuinside | 1 |
public-apis/public-apis | api | 3,800 | A | closed | 2024-03-13T19:33:11Z | 2024-03-13T19:40:06Z | https://github.com/public-apis/public-apis/issues/3800 | [] | maru9990 | 0 |
|
PaddlePaddle/models | computer-vision | 5,360 | paddle.utils.run_check() reports an error | My CUDA version is 10.1, cuDNN is 7.6.5, and the installed Paddle version is 2.1.3.
When using paddle.utils.run_check() to verify whether the installation succeeded, an error is reported. I don't know where the problem is; please take a look.
The full error output is as follows:
```
Running verify PaddlePaddle program ...
W1023 17:00:07.024209 6741 device_context.cc:404] Please NOTE: device: 0, GPU Compute Capability: 6.1, Driver API Version: 10.1, Runtime API Version: 10.1
W1023 17:00:07.032667 6741 device_context.cc:422] device: 0, cuDNN Version: 7.6.
PaddlePaddle works well on 1 GPU.
W1023 17:00:11.081976 6741 parallel_executor.cc:601] Cannot enable P2P access from 0 to 4
W1023 17:00:11.082031 6741 parallel_executor.cc:601] Cannot enable P2P access from 0 to 5
W1023 17:00:11.082046 6741 parallel_executor.cc:601] Cannot enable P2P access from 0 to 6
W1023 17:00:11.082060 6741 parallel_executor.cc:601] Cannot enable P2P access from 0 to 7
W1023 17:00:12.972504 6741 parallel_executor.cc:601] Cannot enable P2P access from 1 to 4
W1023 17:00:12.972553 6741 parallel_executor.cc:601] Cannot enable P2P access from 1 to 5
W1023 17:00:12.972563 6741 parallel_executor.cc:601] Cannot enable P2P access from 1 to 6
W1023 17:00:12.972571 6741 parallel_executor.cc:601] Cannot enable P2P access from 1 to 7
W1023 17:00:14.508332 6741 parallel_executor.cc:601] Cannot enable P2P access from 2 to 4
W1023 17:00:14.508384 6741 parallel_executor.cc:601] Cannot enable P2P access from 2 to 5
W1023 17:00:14.508422 6741 parallel_executor.cc:601] Cannot enable P2P access from 2 to 6
W1023 17:00:14.508442 6741 parallel_executor.cc:601] Cannot enable P2P access from 2 to 7
W1023 17:00:16.406250 6741 parallel_executor.cc:601] Cannot enable P2P access from 3 to 4
W1023 17:00:16.406316 6741 parallel_executor.cc:601] Cannot enable P2P access from 3 to 5
W1023 17:00:16.406330 6741 parallel_executor.cc:601] Cannot enable P2P access from 3 to 6
W1023 17:00:16.406342 6741 parallel_executor.cc:601] Cannot enable P2P access from 3 to 7
W1023 17:00:16.406355 6741 parallel_executor.cc:601] Cannot enable P2P access from 4 to 0
W1023 17:00:16.406368 6741 parallel_executor.cc:601] Cannot enable P2P access from 4 to 1
W1023 17:00:16.406388 6741 parallel_executor.cc:601] Cannot enable P2P access from 4 to 2
W1023 17:00:16.406400 6741 parallel_executor.cc:601] Cannot enable P2P access from 4 to 3
W1023 17:00:18.666070 6741 parallel_executor.cc:601] Cannot enable P2P access from 5 to 0
W1023 17:00:18.666117 6741 parallel_executor.cc:601] Cannot enable P2P access from 5 to 1
W1023 17:00:18.666129 6741 parallel_executor.cc:601] Cannot enable P2P access from 5 to 2
W1023 17:00:18.666141 6741 parallel_executor.cc:601] Cannot enable P2P access from 5 to 3
W1023 17:00:20.616923 6741 parallel_executor.cc:601] Cannot enable P2P access from 6 to 0
W1023 17:00:20.616972 6741 parallel_executor.cc:601] Cannot enable P2P access from 6 to 1
W1023 17:00:20.616983 6741 parallel_executor.cc:601] Cannot enable P2P access from 6 to 2
W1023 17:00:20.616993 6741 parallel_executor.cc:601] Cannot enable P2P access from 6 to 3
W1023 17:00:22.165275 6741 parallel_executor.cc:601] Cannot enable P2P access from 7 to 0
W1023 17:00:22.165328 6741 parallel_executor.cc:601] Cannot enable P2P access from 7 to 1
W1023 17:00:22.165338 6741 parallel_executor.cc:601] Cannot enable P2P access from 7 to 2
W1023 17:00:22.165349 6741 parallel_executor.cc:601] Cannot enable P2P access from 7 to 3
C++ Traceback (most recent call last):
0 paddle::framework::SignalHandle(char const*, int)
1 paddle::platform::GetCurrentTraceBackString[abi:cxx11]()
Error Message Summary:
FatalError: `Segmentation fault` is detected by the operating system.
[TimeInfo: *** Aborted at 1634979634 (unix time) try "date -d @1634979634" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x0) received by PID 6741 (TID 0x7f117c717700) from PID 0 ***]
Segmentation fault (core dumped)
```
| open | 2021-10-23T09:36:43Z | 2024-02-26T05:08:36Z | https://github.com/PaddlePaddle/models/issues/5360 | [] | 128ve900 | 1 |
pennersr/django-allauth | django | 3,567 | Typo in locales | Extra ")" symbol in the translation:
https://github.com/pennersr/django-allauth/blame/f356138d8903d55755a5157e2992d5b40ba93205/allauth/locale/sk/LC_MESSAGES/django.po#L718
```
#: templates/account/messages/password_set.txt:2
msgid "Password successfully set."
msgstr ")Nastavenie hesla bolo úspešné."
``` | closed | 2023-12-14T20:56:17Z | 2023-12-14T21:31:46Z | https://github.com/pennersr/django-allauth/issues/3567 | [] | eriktelepovsky | 1 |
computationalmodelling/nbval | pytest | 91 | Disable colors for junit reports | We use pytest and nbval to generate junit-xml files. Unfortunately there is no option to disable the coloring of output messages. Here is an example junit xml file:
```
<testsuite errors="0" failures="1" name="pytest" skips="0" tests="3" time="3.044">
<testcase classname="notebooks.tests.test_error.ipynb" file="notebooks/tests/test_error.ipynb" line="0" name="Cell 0" time="1.4594926834106445">
<failure message="#x1B[91mNotebook cell execution failed#x1B[0m #x1B[94mCell 0: Cell execution caused an exception Input: #x1B[0mfoo = "Hase" assert foo == "Igel" #x1B[94mTraceback:#x1B[0m #x1B[0;31m---------------------------------------------------------------------------#x1B[0m #x1B[0;31mAssertionError#x1B[0m Traceback (most recent call last) #x1B[0;32m<ipython-input-1-24658342da6f>#x1B[0m in #x1B[0;36m<module>#x1B[0;34m()#x1B[0m #x1B[1;32m 1#x1B[0m #x1B[0mfoo#x1B[0m #x1B[0;34m=#x1B[0m #x1B[0;34m"Hase"#x1B[0m#x1B[0;34m#x1B[0m#x1B[0m #x1B[1;32m 2#x1B[0m #x1B[0;34m#x1B[0m#x1B[0m #x1B[0;32m----> 3#x1B[0;31m #x1B[0;32massert#x1B[0m #x1B[0mfoo#x1B[0m #x1B[0;34m==#x1B[0m #x1B[0;34m"Igel"#x1B[0m#x1B[0;34m#x1B[0m#x1B[0m #x1B[0m #x1B[0;31mAssertionError#x1B[0m: ">
#x1B[91mNotebook cell execution failed#x1B[0m #x1B[94mCell 0: Cell execution caused an exception Input: #x1B[0mfoo = "Hase" assert foo == "Igel" #x1B[94mTraceback:#x1B[0m #x1B[0;31m---------------------------------------------------------------------------#x1B[0m #x1B[0;31mAssertionError#x1B[0m Traceback (most recent call last) #x1B[0;32m<ipython-input-1-24658342da6f>#x1B[0m in #x1B[0;36m<module>#x1B[0;34m()#x1B[0m #x1B[1;32m 1#x1B[0m #x1B[0mfoo#x1B[0m #x1B[0;34m=#x1B[0m #x1B[0;34m"Hase"#x1B[0m#x1B[0;34m#x1B[0m#x1B[0m #x1B[1;32m 2#x1B[0m #x1B[0;34m#x1B[0m#x1B[0m #x1B[0;32m----> 3#x1B[0;31m #x1B[0;32massert#x1B[0m #x1B[0mfoo#x1B[0m #x1B[0;34m==#x1B[0m #x1B[0;34m"Igel"#x1B[0m#x1B[0;34m#x1B[0m#x1B[0m #x1B[0m #x1B[0;31mAssertionError#x1B[0m:
</failure>
</testcase>
<testcase classname="notebooks.tests.test_error.ipynb" file="notebooks/tests/test_error.ipynb" line="0" name="Cell 1" time="1.2278406620025635"/>
<testcase classname="notebooks.tests.test_error.ipynb" file="notebooks/tests/test_error.ipynb" line="0" name="Cell 2" time="0.34316015243530273"/>
</testsuite>
```
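As a stopgap (not an existing nbval or pytest option), the escape sequences can be stripped from the generated file after the run; a rough post-processing sketch, assuming the report is written to `junit.xml`:
```python
# Strip ANSI escape sequences from the junit xml; handles both the raw ESC byte
# and the escaped #x1B form seen in the sample above.
import re

ANSI_RE = re.compile(r"(?:\x1b|&?#x1B;?)\[[0-9;]*m")

with open("junit.xml", encoding="utf-8") as fh:
    cleaned = ANSI_RE.sub("", fh.read())

with open("junit.xml", "w", encoding="utf-8") as fh:
    fh.write(cleaned)
```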
| closed | 2018-01-30T15:35:52Z | 2018-02-10T15:58:38Z | https://github.com/computationalmodelling/nbval/issues/91 | [] | gartentrio | 7 |